
Nuclear Physics A560 (1993) 71-84 North-Holland

Bayesian statistics and experiments on stochastic variables *

V.E. Bunakov a,1, H.L. Harney a and A. Richter b

a Max-Planck-Institut für Kernphysik, W-6900 Heidelberg, Germany
b Institut für Kernphysik der Technischen Hochschule Darmstadt, W-6100 Darmstadt, Germany

Received 30 December 1992

* Dedicated to Hans A. Weidenmüller on the occasion of his 60th birthday.
1 On leave of absence from the Leningrad Nuclear Physics Institute, 188350 Gatchina, St. Petersburg District, Russia.
Correspondence to: Dr. H.L. Harney, Max-Planck-Institut für Kernphysik, Postfach 103980, D-6900 Heidelberg 1, Germany.

Abstract:

If an observable x does not, or only barely, overcome the error inherent in an experiment designed to measure x, then the extraction of information, e.g. of an upper limit on x, is plagued with conceptual difficulties. They are discussed in the present note. It is concluded that the bayesian concept offers a satisfactory answer. It is applied to the reanalysis of tests of time-reversal symmetry breaking via detailed balance.

1. Introduction

In Hans Weidenmüller's work, stochastic observables frequently occur and have been thoroughly studied. Nuclear resonance reactions and fluctuating cross sections 1) together with their application to violations of basic symmetries provide impressive examples. As an acknowledgement to him, and stimulated by this part of his work, we will discuss the extraction of information on stochastic observables from experimental data.

Stochastic observables that are sensitive to the violation of basic symmetries and that have been studied experimentally are e.g.: isospin-forbidden cross sections 2); parity-breaking matrix elements connecting pairs of states of opposite parity in a highly excited nucleus 3-6); the time-reversal symmetry-breaking scattering amplitude in a compound-nucleus reaction 7,8). In all cases a system is formed, the highly excited nucleus, which behaves chaotically. This entails (i) a surprising enhancement of the observable sensitive to symmetry breaking 9-15) and (ii) a stochastic behaviour of the observable 2,16-18). The first point makes chaotic systems interesting for experiments on symmetry breaking; the second point entails interesting questions as to their interpretation. These questions are the subject of the present work.

Usually there is not so much interest in particular values of a stochastic observable (as the matrix element connecting a specific pair of states or the


time-reversal breaking amplitude in a given reaction at a given energy) simply because these values are random numbers. Of interest is a quantity v that characterizes some aspect of the distribution of the observable, usually its root mean square value. In the present note we want to point out that the assignment of error limits to v out of the results of an experiment can be done in different ways. This is due to different possible concepts of confidence, since an error limit is a statement of the type: v is smaller than u with confidence K. Different concepts of confidence in turn result from different possible interpretations of probability.

In sect. 2, we shall review the concept of Bayes. The Reverend Thomas Bayes 19) (1702-1761) seems to have been the first to write down a clearly "subjective" interpretation of probability as being a representation of one's knowledge on some variable. We shall see that his approach has some very appealing features. It is, however, little known to physicists. We shall confront it with the "objective" or "frequency" interpretation of probability, cf. chapter 7 of ref. 20). Although both approaches yield the same results in a class of simple cases, they differ when conclusions on the distribution of stochastic variables are drawn. This is the subject of sect. 3. Philip W. Anderson stated in a recent essay in "Physics Today" 21) that bayesian statistics "are the correct way to do inductive reasoning from necessarily imperfect data". We shall describe the philosophical and mathematical difference to the frequency approach and point out the gratifying consequences of Bayes' theorem. There will also be, in our conclusions, a note on the open ends of the bayesian approach. We agree with Anderson in that bayesian statistics lead to more cautious statements than the frequency approach.

Sect. 4 contains an evaluation of the upper limit on time-reversal symmetry violation from a set of detailed balance experiments 7,8) by help of the bayesian approach. These experiments have been interpreted several times 7,8,22,23). The interpretations differ by the observable which is constructed from the originally measured cross sections, but they all follow the frequency approach. We offer another analysis here because we want to exhibit, by way of an experimental example, the difference between the bayesian and the frequency approach. Although the precision of the approximate procedure used in ref. 23) to bypass the Ericson correlations between the experimental points remains an open question, we strictly use, in sect. 4, the reasoning and the input data of ref. 23) except for the bayesian argument.

2. The bayesian argument

Consider an observable that has the true value x. A measurement of the observable in general leads to a result A ≠ x because of the experimental errors. To the variable A, one assigns a probability distribution p(A|x), meaning: the probability to find A if x is given. We require

∫ dA p(A|x) = 1.    (2.1)

According to the frequency interpretation, p(A|x) is the frequency distribution of the outcome A of repeated experiments, while the parameter x represents a fact and therefore cannot be a statistical variable. According to the subjective interpretation of probability, p(A|x) represents one's imperfect knowledge on the outcome of the experiment. Within this interpretation there is no conceptual difference between A and x, and one can define a distribution p(x|A) that quantifies where one expects x given the result A of the experiment. This is done as follows: Let us postulate the existence of an a priori distribution p(A) [p(x)] stating where one expects to find A [x] before the experiment is carried out. Then p(A|x)p(x) is the probability to find the value x and the value A. Since this statement is symmetric in A and x, it follows that

p(x|A) p(A) = p(A|x) p(x).    (2.2)

This equation is called Bayes' theorem. Note that the integral over p(A|x)p(x) with respect to x equals the unconditional probability to find A:

p(A) = ∫ dx p(A|x) p(x).    (2.3)

Hence, one obtains

p(x|A) = p(A|x) p(x) / ∫ dx' p(A|x') p(x'),    (2.4)

and the conditional probability of x is normalized. This last equation expresses the change of knowledge on x due to a measurement: if originally any possible statement on x was represented by the a priori distribution p(x), after the experiment it is given by the distribution (2.4). Note that this procedure can be iterated. Suppose that an experiment has yielded the result A_1 and, hence, p(x|A_1) from eq. (2.4). Suppose that a later experiment yields A_2. The information from both experiments can be combined by using p(x|A_1) as the a priori distribution of x in the construction of p(x|A_2 A_1), yielding

p(x|A_2 A_1) = p(A_2|x) p(x|A_1) / ∫ dx' p(A_2|x') p(x'|A_1).    (2.5)
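As an aside for the reader who wants to experiment with eqs. (2.3)-(2.5), they can be evaluated on a discrete grid in a few lines. The following sketch is ours and not part of the original analysis; the gaussian error model anticipates eq. (3.1) below, and all numbers are purely illustrative.

```python
import numpy as np

# Grid over the parameter x and a gaussian error model p(A|x) with
# standard error eps; this anticipates eq. (3.1), all numbers invented.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
eps = 1.0

def likelihood(A):
    """p(A|x) as a function of x for a fixed measured value A."""
    return np.exp(-(A - x) ** 2 / (2 * eps ** 2)) / np.sqrt(2 * np.pi * eps ** 2)

prior = np.ones_like(x)            # flat a priori distribution p(x) = const

# Eq. (2.4): posterior after a first result A1; the denominator is the
# unconditional probability (2.3), here a simple Riemann sum.
A1 = 1.3
post = likelihood(A1) * prior
post /= post.sum() * dx

# Eq. (2.5): the posterior from A1 serves as the a priori distribution
# for a second result A2, so the update can be iterated.
A2 = 0.7
post = likelihood(A2) * post
post /= post.sum() * dx

# Upper limit u^B in the sense of eq. (2.6) below: the smallest grid point
# whose cumulative probability reaches the confidence K.
K = 0.9
uB = x[np.searchsorted(np.cumsum(post) * dx, K)]
print(f"90% bayesian upper limit after two measurements: {uB:.2f}")
```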


Fig. 1. The definition of u^F. Choosing x = u^F fulfils eq. (2.7) and the shaded area equals K. If x < u^F, the integral of eq. (2.7) is < K; if x > u^F, the integral is > K. Note that p(A|x) is normalized whatever value x has.

This exhibits the subjective character of bayesian probability: p(x|A) depends on all previous knowledge that one includes into the a priori distribution.

The upper limit u^B (the superscript B stands for the bayesian approach) of x for a given confidence K is naturally defined by the equation

∫_{-∞}^{u^B} dx p(x|A_1) = K,    (2.6)

stating that the probability to have x ≤ u^B, given the result A_1 of the experiment, is equal to K. Within the frequency interpretation there is no way to define a confidence interval for x, since there is no probability distribution for x. A way to bypass this difficulty and to define an upper limit u^F (the superscript F stands for the frequency approach) is to require that u^F be the value of the parameter x which ensures that A is larger than the actual outcome A_1 with the confidence K, i.e.

∫_{A_1}^{∞} dA p(A|u^F) = K.    (2.7)

Fig. 1 illustrates that, for reasonable distributions p, the integral is a monotonically increasing function of u^F. Hence, eq. (2.7) means: if x were larger than u^F, one should have found, with confidence K, a result A larger than the actual one A_1. This statement uses a confidence interval of A and relates it in a reasonable although contorted way to a value of x. Thus one bypasses a probabilistic statement on x. However, it is not clear whether in the phrase "if x were larger than ..." there is already an allusion to a probabilistic interpretation of x, because it allows x to assume one value or another and not just the factual one. Anyway, eq. (2.7) has been used for the extraction of error limits [see e.g. refs. 7,8,23)]. The consequences of eqs. (2.6) and (2.7) will be illustrated by way of examples in the next section.

3. Discussion of the bayesian approach

There is a class of cases where the different definitions of the last section yield the same upper limit u. Suppose that the probability p(A|x) to find A given x is a function of A − x, as is e.g. the case for a gaussian

p(A|x) = p(A − x) = (2πε²)^{-1/2} exp[−(A − x)²/(2ε²)]    (3.1)

with mean x and variance ε² (ε is the standard error). Then one easily rewrites eq. (2.7) in the form

∫_{-∞}^{u^F} dy p(A_1 − y) = K.    (3.2)

Suppose further that a priori nothing is known on x, which, in the bayesian spirit, means that before the experiment every value of x is equally probable, or p(x) = const. Eq. (2.4) then yields p(x|A) = p(A − x) and eq. (2.6) becomes

∫_{-∞}^{u^B} dx p(A_1 − x) = K,    (3.3)

hence u^F = u^B in this case.

A slight variation of the case leads to different results from the two approaches, and the bayesian one turns out to be superior: Suppose one knows from the outset that x is positive. The experimental procedure shall still be such that it can produce negative A. This can happen e.g. if A is a number of counts left after subtraction of background. The distribution p(A|x) shall again be given by eq. (3.1). There is no way to incorporate the a priori knowledge into the frequency approach. Therefore the definition (2.7) or (3.2) of u^F can produce a manifestly wrong result if the experiment is marginal in the sense that the true value x is of about the same size as the error ε. Suppose x = ε/2. The experimental result A_1 = −1.5ε, two standard deviations below the true value, is conceivable. From this result one would of course conclude that one has done a null experiment, i.e. that x is too small to give a signature reliably distinguishable from the error. However, a null experiment should at least give an upper limit on x. Requiring 90% confidence one deduces from eq. (3.2) u^F = −0.22ε, in contradiction to x > 0.
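A minimal numerical check of this example (ours, not from the original text): with the gaussian (3.1), the frequency limit of eq. (3.2) is just a shifted gaussian quantile and can come out negative, while the bayesian limit obtained with the prior Θ(x), anticipating the next paragraph, is positive by construction.

```python
import numpy as np
from scipy.stats import norm

eps = 1.0          # standard error of the experiment
A1 = -1.5 * eps    # marginal result below zero
K = 0.90           # required confidence

# Frequency limit, eq. (3.2): u^F = A1 + z_K * eps with the gaussian
# quantile z_K = norm.ppf(K).
uF = A1 + norm.ppf(K) * eps
print(f"u^F = {uF:+.2f} eps")      # -0.22 eps, contradicting x > 0

# Bayesian limit with the prior p(x) = Theta(x): the posterior (2.4) is
# the gaussian p(A1 - x) truncated to x >= 0 and renormalized.
x = np.linspace(0.0, 10.0 * eps, 20001)
dx = x[1] - x[0]
post = norm.pdf(A1, loc=x, scale=eps)
post /= post.sum() * dx
uB = x[np.searchsorted(np.cumsum(post) * dx, K)]
print(f"u^B = {uB:+.2f} eps")      # positive by construction
```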


The bayesian approach to the same problem gives a reasonable result. One would then set the a priori distribution p(x) equal to the step function Θ(x) and obtain from eq. (2.4)

p(x|A_1) = Θ(x) p(A_1 − x) / ∫_0^∞ dx' p(A_1 − x').    (3.4)

This posterior vanishes for negative x, so that the upper limit u^B defined by eq. (2.6) is positive for every confidence K, as it must be.

Let us now turn to the distribution of a stochastic variable. Suppose that the true values x_1 ... x_N realized in N independent measurements have a gaussian distribution with zero mean and variance v²,

w(x|v) = (2πv²)^{-1/2} exp(−x²/(2v²)),    (3.5)

so that

w(x_1 ... x_N|v) = ∏_{k=1}^{N} w(x_k|v),    (3.6)

and that the measurements are statistically independent,

p(A_1 ... A_N | x_1 ... x_N) = ∏_{k=1}^{N} p(A_k − x_k),    (3.7)

and therefore the probability to find (A_1 ... A_N) given v is

p(A_1 ... A_N | v) = ∫ dx_1 ... dx_N p(A_1 ... A_N | x_1 ... x_N) w(x_1 ... x_N | v)
                   = [2π(ε² + v²)]^{-N/2} exp(−Σ_k A_k² / (2(ε² + v²))).    (3.8)


If one supposes that a priori nothing is known about v and puts p(v) = const., then an obvious generalization of eq. (2.4) yields

p(v|A_1 ... A_N) = norm^{-1} p(A_1 ... A_N | v)    (3.9)

with

norm = ∫_0^∞ dv [2π(ε² + v²)]^{-N/2} exp(−Σ_k A_k² / (2(ε² + v²))).    (3.10)

The function (3.9) has exactly one maximum, and the most probable value of v is

v_max = [Σ_k A_k²/N − ε²]^{1/2}   if this is real,
v_max = 0                         otherwise.    (3.11)

Hence, the position of the maximum of p(v|A_1 ... A_N) is a very reasonable estimate of the variance of x. It also yields the criterion for a null experiment: if the maximum is at v_max = 0, the variance of x cannot be distinguished from zero.

For large v, the distribution (3.9) behaves as

p(v|A_1 ... A_N) ≈ const · v^{-N},    (3.12)

and therefore something dramatic happens if N = 1: then the integral of eq. (3.10) diverges and the distribution p(v|A_1) of eq. (3.9) cannot be normalized. In this case, the confidence K of eq. (2.6) and, hence, the upper limit u^B of v are undefined: the bayesian approach says that from a single sampling of a stochastic variable x nothing can be learned about its variance v. One does, however, learn about the particular value x, since one can assign confidence limits to it using eqs. (3.1) and (3.3). These results are intuitively plausible: from a single sample of a stochastic variable one cannot infer anything about its fluctuations.

The frequency approach leads to a qualitatively different conclusion in the case N = 1. The distribution p(A_1|v) is given by eq. (3.8). We can insert it into eq. (2.7) and deduce the upper limit u^F of v. Hence, within this framework one does obtain limits on the variance v from a single sampling of x.

In the next section, we present the bayesian analysis of several detailed balance experiments that yield an upper limit on the time-reversal symmetry-breaking amplitude. The analysis in terms of the frequency approach has been given in ref. 23). We thus demonstrate the bayesian analysis by way of an example and exhibit the difference of the results. From the present section we expect that the bayesian approach yields the more conservative upper limits.
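A sketch of eqs. (3.8)-(3.11), ours and with invented data, may make these statements concrete: it evaluates the posterior of v on a grid, reproduces the most probable value (3.11), and shows that for N = 1 the normalization integral (3.10) keeps growing with the length of the grid, i.e. the 1/v tail is not normalizable.

```python
import numpy as np

def posterior_v(A, eps, v):
    """Eq. (3.8) as a function of v, i.e. the unnormalized posterior (3.9)."""
    A = np.asarray(A, dtype=float)
    N = A.size
    s2 = eps ** 2 + v ** 2
    return (2.0 * np.pi * s2) ** (-N / 2) * np.exp(-np.sum(A ** 2) / (2.0 * s2))

eps = 1.0
v = np.linspace(0.0, 100.0, 400001)
dv = v[1] - v[0]

for A in ([2.3], [2.3, -0.4, 1.7, -2.9]):          # N = 1 versus N = 4
    p = posterior_v(A, eps, v)
    norm = p.sum() * dv                             # integral (3.10) on the grid
    arg = np.mean(np.square(A)) - eps ** 2          # eq. (3.11)
    v_max = np.sqrt(arg) if arg > 0 else 0.0
    print(f"N={len(A)}: v_max={v_max:.2f} (grid: {v[np.argmax(p)]:.2f}), "
          f"norm on [0,100]={norm:.3f}")
# For N = 1 the 1/v tail of eq. (3.12) makes 'norm' diverge logarithmically
# as the grid is extended: no confidence K and no upper limit u^B exist.
```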


4. The detailed balance experiments

In refs. 7,8), excitation functions σ_ab and σ_ba of the reverse reactions

²⁴Mg + α ⇌ ²⁷Al + p    (4.1)

have been measured in order to test time-reversal symmetry breaking (TRSB). The correlation between forward and backward cross sections is expressed by the observable *

Δ = 1 − ⟨σ̃_ab σ̃_ba⟩ / [⟨σ̃_ab²⟩ ⟨σ̃_ba²⟩]^{1/2},    (4.2)

which has been constructed by weighting the experimental cross section σ_cd(E_k) at the energy E_k,

σ̃_cd(E_k) = σ_cd(E_k) G_k,    (4.3)

with the inverse of the error ε_k at E_k,

G_k = ε_k^{-1}.    (4.4)

The angular brackets ⟨ ⟩ denote the average over the experimental points k = 1 ... M. Note that Δ = 0 if the two excitation functions are equal to each other, which is the case if there are no errors and no TRSB. If one expresses σ̃_ab(E_k), σ̃_ba(E_k) by their average

s_k = ½[σ̃_ab(E_k) + σ̃_ba(E_k)]    (4.5)

and their difference

d_k = σ̃_ab(E_k) − σ̃_ba(E_k),    (4.6)

one finds, up to second order in the differences,

Δ = [2(s|s)]^{-1} [Σ_k d_k² − (s|s)^{-1} (Σ_k s_k d_k)²],    (4.7)

where (s|s) = Σ_k s_k², cf. eq. (4.12) below. Disregarding the second term in the square brackets, Δ is proportional to the sum of the squared differences. One can show that the d_k have gaussian distributions [cf. ref. 23)]. If they all have the same variance, then Δ must have a χ² distribution, see the discussion below. The extra term in the square brackets is due to the fact that Δ is independent of the overall normalizations of the functions σ̃_ab and σ̃_ba.

* Note that there is a misprint in eq. (2.1) of ref. 23).
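To make the construction (4.2)-(4.7) concrete, here is a small sketch, ours, with invented excitation functions in place of the measured ones of refs. 7,8): it weights the cross sections according to eqs. (4.3) and (4.4), forms the s_k and d_k, and evaluates Δ both from the definition (4.2) and from the second-order expression (4.7).

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented excitation functions at M energies; sigma_ba equals sigma_ab
# up to the errors eps, i.e. a situation without TRSB.
M = 50
sigma_ab = 3.0 + rng.normal(0.0, 0.1, M)
eps = 0.1 * np.ones(M)
sigma_ba = sigma_ab + rng.normal(0.0, eps)

G = 1.0 / eps                      # weights, eq. (4.4)
t_ab = sigma_ab * G                # weighted cross sections, eq. (4.3)
t_ba = sigma_ba * G

# Definition (4.2); angular brackets = average over the M points.
Delta_def = 1.0 - np.mean(t_ab * t_ba) / np.sqrt(
    np.mean(t_ab ** 2) * np.mean(t_ba ** 2))

# Second-order form (4.7) with s_k (eq. (4.5)), d_k (eq. (4.6)) and
# (s|s) = sum_k s_k^2 as in eq. (4.12).
s = 0.5 * (t_ab + t_ba)
d = t_ab - t_ba
ss = np.sum(s ** 2)
Delta_2nd = (np.sum(d ** 2) - np.sum(s * d) ** 2 / ss) / (2.0 * ss)

print(f"Delta from (4.2): {Delta_def:.3e}, from (4.7): {Delta_2nd:.3e}")
# Both agree to second order in d_k and are unchanged under a common
# rescaling of t_ab and t_ba, as stated in the text.
```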


These overall normalizations had to be removed from the observable because the precision of the experiments is very high only with respect to the relative cross sections.

The observable Δ is a stochastic variable because the TRSB amplitude fluctuates at random as a function of energy. More precisely, if one expresses the cross sections by the TRS conserving amplitude f and the TRSB amplitude f′,

σ_ab = |f + f′|²,    (4.8)

σ_ba = |f − f′|²,    (4.9)

then both f and f′ fluctuate randomly and independently 16). We introduce the parameter

ξ² = |f′|² / |f|²    (4.10)

characterizing the relative strength of the TRSB and TRS conserving amplitudes; here |f′|² and |f|² denote ensemble averages. The quantity ξ, or rather an upper limit to it, shall be extracted from the data. In the spirit of the frequency or "objective" approach its value is a fact given by nature. It was treated as such in ref. 23), where the distribution p of Δ given ξ was derived as

p(Δ|ξ) = (2π)^{-1} ∫ dx e^{ixΔ} ∏_k (1 + ix (s|s)^{-1} [1 + 8ξ²|f|² q_k])^{-1/2}.    (4.11)

Here, M is the number of statistically independent points at which the cross sections have been taken and

(s|s) = Σ_{k=1}^{M} s_k².    (4.12)

The q_k are eigenvalues of the matrix G^{1/2} S^{1/2} P S^{1/2} G^{1/2}. Here G is an M-dimensional diagonal matrix with, in the diagonal, the entries G_1 ... G_M. The corresponding definition applies to S. The matrix P is the projection operator

P = 1 − |s⟩⟨s| / (s|s)    (4.13)

constructed by help of the column vector |s⟩ which has the entries s_1 ... s_M. The projection P forces one of the eigenvalues of G^{1/2} S^{1/2} P S^{1/2} G^{1/2} to be zero. The remaining eigenvalues q_k appear in eq. (4.11).


The integral representation (4.11) looks rather complicated. In order to get some understanding of that function, consider the typical case [cf. eq. (7.9) of ref. 23)] where the experimental errors ε_k are proportional to the square roots of the cross sections,

ε_k = c s_k^{1/2}.    (4.14)

In this case the M − 1 non-vanishing eigenvalues q_k are all equal and eq. (4.11) yields the χ² distribution with M − 1 degrees of freedom

p(Δ|ξ) = [Γ((M−1)/2) β_ξ]^{-1} (Δ/β_ξ)^{(M−3)/2} exp(−Δ/β_ξ)    (4.15)

with

β_ξ = (s|s)^{-1} [1 + 8ξ²|f|² q].    (4.16)

In order to simplify the picture even more, consider the case when all d_k of eq. (4.6) are uncorrelated random numbers and eq. (4.7) can be approximated by

Δ ≈ [2(s|s)]^{-1} Σ_{k=1}^{M} d_k².    (4.17)

Inserting

β_ξ = 2(1 + 8ξ²|f|² q)    (4.18)

into eq. (4.15), one recognizes that p(Δ|ξ) is the same as the distribution (3.8). The corresponding bayesian probability p(ξ|Δ) would then be given by eqs. (3.9) and (3.10). Since in the TRSB experiments d_k ≈ ε_k, the maximum of p(ξ|Δ) is at ξ = 0, see eq. (3.11). Its asymptotic behavior for large ξ is

p(ξ|Δ) ≈ const · ξ^{-M+1},    (4.19)

cf. eq. (3.12). Therefore p(ξ|Δ) cannot be normalized for M ≤ 2. Hence, one needs M ≥ 3 in order to obtain information on ξ. The reason is, crudely speaking, that one experimental point is used to get rid of the overall normalizations of the cross sections in the definition (4.2), and one remaining experimental point does not give sufficient information on ξ, see the discussion following eq. (3.12). The case M = 2 is of course suited to check the presence of a TRSB amplitude f′(E) at E_1 and E_2.
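Under the simplifications (4.14) and (4.17), the extraction of upper bounds can be sketched in a few lines. The following is our own illustration with invented stand-ins for the input of table 1, not the actual analysis of this section: it builds the posterior of ξ from the χ² form (4.15) with the prior Θ(ξ), anticipating eq. (4.20) below, and reads off ξ_0^B at several confidence levels.

```python
import numpy as np
from math import gamma

def p_delta_given_xi(Delta, xi, M, ss, f2, q):
    """Chi^2 form (4.15) with beta_xi = (1 + 8 xi^2 |f|^2 q)/(s|s), eq. (4.16)."""
    beta = (1.0 + 8.0 * xi ** 2 * f2 * q) / ss
    n = M - 1                                  # degrees of freedom
    z = Delta / beta
    return z ** ((n - 2) / 2) * np.exp(-z) / (gamma(n / 2) * beta)

# Invented stand-ins for the experimental input (cf. table 1).
M, ss, f2, q = 5, 1.0e6, 3.0, 148.0
Delta = 3.4e-6

# Posterior with prior Theta(xi): eq. (4.15) restricted to xi >= 0 and
# normalized; the grid must reach far out because of the xi^{-M+1} tail.
xi = np.linspace(0.0, 1.0, 500001)
dxi = xi[1] - xi[0]
post = p_delta_given_xi(Delta, xi, M, ss, f2, q)
post /= post.sum() * dxi                       # possible only for M >= 3
cdf = np.cumsum(post) * dxi

for K in (0.80, 0.95, 0.99):
    print(f"K = {K:.0%}: xi_0^B = {xi[np.searchsorted(cdf, K)]:.2e}")
# The slowly decaying tail (4.19) makes xi_0(99)/xi_0(80) much larger than
# the gaussian value of about 2 when M is small.
```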


TABLE 1

Input data 23) for the present analysis of the detailed balance experiments. The lines k = 1-5 refer to the experiment by Blanke et al. 8) and the lines k = 6-8 to von Witsch et al. 7). For the combination of both experiments, k = 1-5, 7, 8 have been used with |f|² = 3.0 mb/sr and Δ = 3.96×10⁻⁶.

 k   E_k (MeV)   s_k (mb/sr)   ε_k (10⁻² mb/sr)   G_k ((mb/sr)⁻¹)   q_k   |f|² (mb/sr)   Δ
 1    10.210        13.94           2.52               533           148
 2    10.240        11.52           2.29               503           148
 3    10.380         3.20           1.21               264           148    3.0         3.37×10⁻⁶
 4    10.410         3.70           1.30               285           148
 5    10.440         0.20           0.302               66.2         148
 6    13.440        23.8            4.36               546           112
 7    13.735         0.0967         0.0834             116           373    3.0         2.77×10⁻⁶
 8    14.590         2.38           1.01               236           153

This clearly shows that in measuring stochastic variables it is advantageous to increase the number M of uncorrelated experimental points rather than their individual accuracy ε_k^{-1}.

Since the power law (4.19) yields, especially for small M, a behavior of p(ξ|Δ) very different from that of a gaussian, one must be careful in interpreting the confidence level for the upper bounds. If for instance an upper bound ξ_0(80) for 80% confidence is given, one intuitively expects the upper bound ξ_0(99) at the 99% confidence level to be approximately twice as large. This "intuition" is an inference from gaussian statistics. Since the distribution function for the stochastic variable ξ is non-gaussian, the ratio ξ_0(99)/ξ_0(80) may be much larger if M is small. On the other hand, p(ξ|Δ) may be approximated by a gaussian if M is large, as the central-limit theorem predicts.

Having got some insight into the properties of the distributions p(Δ|ξ) and p(ξ|Δ) in a situation which is simplified according to eqs. (4.14) and (4.17), we now proceed with the use of eq. (4.11). The input data furnished by the experiments 7,8) are given in table 1 of ref. 23), where their extraction from the original cross sections is discussed in more detail. They are reproduced here in table 1. The entries k = 1-5 come from the experiment by Blanke et al. 8), the entries k = 6-8 are from the experiment by von Witsch et al. 7). The last column of the table gives the observable Δ as measured in either experiment. It is the quantity defined in eq. (7.7) of ref. 23). So far we have only retraced the procedure of ref. 23).

TABLE 2

Results of the present analysis of the detailed balance experiments (see text).

 Experiment                      Confidence K    ξ_0^F         ξ_0^B
 von Witsch et al. 7), M = 3         80%         1.68×10⁻³     8.80×10⁻³
                                     95%         3.98×10⁻³     4.42×10⁻²
                                     99%         5.57×10⁻³     1.82×10⁻¹
 Blanke et al. 8), M = 5             80%         1.92×10⁻³     2.38×10⁻³
                                     95%         3.32×10⁻³     4.41×10⁻³
                                     99%         5.0 ×10⁻³     8.00×10⁻³
 Combination, M = 7                  80%         1.25×10⁻³     1.33×10⁻³
                                     95%         2.15×10⁻³     2.24×10⁻³
                                     99%         3.0 ×10⁻³     3.50×10⁻³

Now, following Bayes, we postulate the distribution of ξ given Δ to be

p(ξ|Δ) = Θ(ξ) p(Δ|ξ) / ∫_0^∞ dξ' p(Δ|ξ'),    (4.20)

see eq. (2.4). It is assumed that one knows a priori that ξ is non-negative, i.e. p(ξ) = Θ(ξ). In ref. 23) the function p(Δ|ξ) was generated from the experimental data (see table 1) and used in eq. (2.7) to obtain the upper bound ξ_0, which we call ξ_0^F in the present context. Now the same function is used in order to define the bayesian probability via eq. (4.20). Inserting p(ξ|Δ) into eq. (2.6), the upper bound ξ_0^B is obtained. The results are given in table 2 and fig. 2. The top three lines of table 2 contain the results from the experiment of ref. 7); here M = 3. The next three lines refer to ref. 8); here M = 5. The last three lines result from the combination of both experiments; here M = 7. Fig. 2 illustrates that p(ξ|Δ) peaks at ξ = 0 and that it falls off much more slowly for the case M = 3 than for the case M = 7, see eq. (4.19).

Fig. 2. The probability densities p(ξ|Δ) for the experiment by von Witsch et al. 7) and for the combination of it with the experiment by Blanke et al. 8). The abscissa gives ξ in units of 10⁻³.


One clearly recognizes the features of the bayesian approach that have been anticipated: (i) It yields the more conservative upper limits. (ii) The distribution p(ξ|Δ) tails off very slowly for small M. (iii) The ratio of the upper bounds at 99% and at 80% confidence is ξ_0^B(99)/ξ_0^B(80) ≈ 20 for M = 3. For M = 7 this factor goes down to 2.3. If ξ had a normal distribution this factor would be 2.02.
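The gaussian reference value quoted here can be checked in one line (our check): for a gaussian restricted to ξ ≥ 0, the upper bound at confidence K is the quantile norm.ppf((1 + K)/2) of the standard normal distribution.

```python
from scipy.stats import norm
# Ratio of the 99% and 80% upper bounds for a half-gaussian posterior:
print(norm.ppf((1 + 0.99) / 2) / norm.ppf((1 + 0.80) / 2))   # about 2.0
```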

5. Summary

We have demonstrated that the bayesian approach is very attractive for the interpretation of "marginal" experiments. This is especially true if stochastic observables are considered. Its first advantage over the frequency approach is a conceptual one: it introduces a probability density for the quantity x to be measured and makes use of the natural confidence level definition (2.6) instead of the contorted argument behind the definition (2.7). It further allows one to define the conditions under which the prescriptions of the frequency approach can be used.

Here, we note that there is a connection between the bayesian approach and the maximum likelihood method (MLM). The functions p(A|x) of eqs. (2.2), (3.8) and (4.11) are often called "likelihood functions" [see sect. 5.3 of ref. 20)], and the MLM says that the best estimate of x is the value which maximizes p(A|x) considered as a function of x. Accepting x as a probabilistic variable is already in the spirit of Bayes. Moreover, one sees from eqs. (3.9)-(3.11) that this is exactly the maximum of the bayesian function p(x|A). As to the definition of confidence, the MLM prescriptions are based on the assumption that close to its maximum the function p(x|A) may be approximated by a gaussian. It can be shown from eqs. (3.9) or (4.20) that this is reasonable only for a large set (M ≫ 1) of independent experimental measurements. The Monte Carlo simulations of ref. 24) indicate by way of an example that for marginal experiments eq. (2.6) yields the correct confidence interval.

Besides all the satisfactory features of the bayesian approach, one open issue must be mentioned: the ansatz for the a priori distributions p(A) and p(x) in eq. (2.2) is not unique. If nothing is known about x, then p(x) = const seems natural. However, one could also introduce a constant probability density of, say, x². This would entail that p(x) were proportional to x. It seems to us that the problem to define the basic probability measure is present wherever one deals with continuous random variables, and that it is only made explicit in the framework of the bayesian approach.

Therefore the conclusions from the present comparison of the frequency and the bayesian approaches are: (i) For a small number M of observations one should use the bayesian function p(x|A) for the confidence estimates. (ii) The probability density p(x|A) for small M is essentially non-gaussian and the convergence of the confidence K towards 100% is slow. Therefore quoting only one upper limit for one confidence level is questionable. (iii) The convergence of K rapidly improves with increasing M, see table 2 and fig. 2. Therefore, choosing between the


possibilities of increasing the individual accuracy ε_k^{-1} of each point or the number M of observations, the latter choice is recommended.

The authors acknowledge a substantial contribution by Dr. E.D. Davis who brought the matter of the bayesian approach to their attention. They thank A. Hüpper who kindly rewrote his numerical code for the present purposes. V.E. Bunakov acknowledges the support and hospitality of the Max-Planck-Institut für Kernphysik. A. Richter acknowledges partial support by the German Federal Minister for Research and Technology (BMFT) under contract number 06DA6411. All of us have enjoyed many years of inspiring collaboration with Hans A. Weidenmüller.

References

1) J.J.M. Verbaarschot, H.A. Weidenmüller and M.R. Zirnbauer, Phys. Reports 129 (1985) 367
2) H.L. Harney, A. Richter and H.A. Weidenmüller, Rev. Mod. Phys. 58 (1986) 607
3) V.P. Alfimenkov, S.B. Borzakov, Vo Van Tkhuan, Yu.D. Mareev, L.B. Pikel'ner, D. Rubin, A.S. Khrykin and E.I. Sharapov, Pis'ma Zh. Eksp. Teor. Fiz. 34 (1981) 308 [JETP Lett. 34 (1981) 295]
4) V.P. Alfimenkov, Usp. Fiz. Nauk 144 (1984) 361 [Sov. Phys. Usp. 27 (1984) 797]
5) G. Bohm, A. Dewald, H. Paetz gen. Schieck, G. Rauprich, R. Reckenfelderbäumer, L. Sydow, R. Wirowski and P. von Brentano, Phys. Lett. B220 (1989) 27
6) J.D. Bowman, C.D. Bowman, J.E. Bush, P.P.J. Delheij, C.M. Frankle, C.R. Gould, D.G. Haase, J. Knudson, G.E. Mitchell, S. Penttilä, H. Postma, N.R. Roberson, S.J. Seestrom, J.J. Szymanski, V.W. Yuan and X. Zhu, Phys. Rev. Lett. 65 (1990) 1192;
J.D. Bowman, C.D. Bowman, J.E. Bush, P.P.J. Delheij, C.R. Gould, D.G. Haase, J. Knudson, G.E. Mitchell, S. Penttilä, H. Postma, N.R. Roberson, S.J. Seestrom, J.J. Szymanski, S.H. Yoo, V.W. Yuan and X. Zhu, Phys. Rev. Lett. 67 (1991) 564
7) W. von Witsch, A. Richter and P. von Brentano, Phys. Lett. 22 (1966) 631; Phys. Rev. Lett. 19 (1967) 524; Phys. Rev. 169 (1968) 923
8) E. Blanke, H. Driller, W. Glöckle, H. Genz, A. Richter and G. Schrieder, Phys. Rev. Lett. 51 (1983) 355
9) T.E.O. Ericson, Phys. Lett. 23 (1966) 97
10) C. Mahaux and H.A. Weidenmüller, Phys. Lett. 23 (1966) 100
11) O.P. Sushkov and V.V. Flambaum, Usp. Fiz. Nauk 136 (1982) 3 [Sov. Phys. Usp. 25 (1982) 1]
12) V.E. Bunakov and V.P. Gudkov, Nucl. Phys. A401 (1983) 93
13) V.E. Bunakov, Phys. Rev. Lett. 60 (1988) 2250
14) V.E. Bunakov, Phys. Lett. B207 (1988) 233
15) V.E. Bunakov and H.A. Weidenmüller, Phys. Rev. C39 (1989) 70
16) D. Boosé, H.L. Harney and H.A. Weidenmüller, Phys. Rev. Lett. 56 (1986) 2012; Z. Phys. A325 (1986) 363
17) E.D. Davis, Z. Phys. A340 (1991) 159
18) V.E. Bunakov, E.D. Davis and H.A. Weidenmüller, Phys. Rev. C42 (1990) 1718
19) T. Bayes, "Essay towards solving a problem in the doctrine of chances", 1763, published posthumously in the Phil. Trans. Royal Soc. London according to Encyclopaedia Britannica, Micropaedia Vol. I, 1979;
P.M. Lee, Bayesian statistics (Oxford Univ. Press, Oxford, 1989)
20) R.J. Barlow, Statistics (Wiley, New York, 1989)
21) P.W. Anderson, Physics Today, Jan. 1992, p. 9
22) G. Klein and H. Paetz gen. Schieck, Nucl. Phys. A219 (1974) 422
23) H.L. Harney, A. Hüpper and A. Richter, Nucl. Phys. A518 (1990) 35
24) O. Helene, Nucl. Instr. Meth. 228 (1984) 120