
URL ftp://iserv.iki.kfki.hu/pub/papers/jtoth.noise.ps.Z WWW http://iserv.iki.kfki.hu/quantum.html

The effect of control field and measurement imprecision on laboratory feedback control of quantum systems

Gábor J. Tóth and András Lőrincz

Department of Photophysics, Institute of Isotopes, The Hungarian Academy of Sciences, Budapest, Konkoly-Thege út 29-33, P.O. Box 77, Hungary H-1525
[email protected]

[email protected]

Herschel Rabitz

Department of Chemistry, Princeton University, Princeton, New Jersey 08544

February 26, 1995

Abstract

Recent theoretical studies suggest the feasibility of laboratory adaptive feedback optimal control (AFOC) of chemical processes with the use of ultrashort laser pulses. The feedback process is introduced to provide robustness to laboratory error and electric field design uncertainties. Adaptive approaches for laboratory applications have been suggested. To foster laboratory implementation a number of questions still need to be answered. The paper addresses the problem of AFOC in the presence of laboratory errors and uncertainties both in adjusting the control parameters and in measuring the results. Through simulations we show that there exist optimization methods which are robust against systematic errors, and certain optimization methods are also fairly robust against uncertainties. These results suggest that the effect of inevitable laboratory errors can be overcome. Furthermore, the results suggest that noise is not a major problem and optimization in the laboratory is feasible.

J. Chem. Phys. 101 (5), 3715–3722.


1 Introduction

The possibility of using external laser fields to control specific chemical reactions or to excite molecules into prescribed states has received considerable attention in recent years. Brumer and Shapiro developed a method based on the interference between continuous wave lasers [1, 2]. The scheme developed by Tannor, Rice and Kosloff uses two ultrashort pulses to break a prescribed chemical bond [3, 4]. Peirce, Dahleh and Rabitz developed a way of applying the methods of optimal control theory (OCT) to quantum chemical problems which has become the standard formulation of the problem in recent years [5]. A similar formulation was developed by Kosloff et al. [6]. An important step toward laboratory implementation was taken by Judson and Rabitz [7]. They developed a method which, in its extreme form, did not use any a priori information about the system being controlled; rather, it used an adaptive feedback method, based on a genetic algorithm (GA), to let the laser learn to achieve control of the system. Another adaptive formulation suggests simulated annealing (SA) for laboratory implementations [8]. Earlier OCT methods (i) assumed exact knowledge of the system to be controlled, (ii) used that knowledge to design an appropriate laser pulse and (iii) assumed it would be possible to realize that pulse with great precision in the laboratory. The adaptive feedback method requires (i) a laser which can be controlled to produce suitable pulses, (ii) a measurement system capable of observing the success of control and (iii) an adaptive algorithm which can find the optimal control settings based on the measured success of previous pulses. In some cases the best approach may be the application of OCT to first provide a trial input pulse for laboratory feedback. It should also be understood that the feedback referred to here will not be in real time in the laboratory, due to the ultrafast nature of molecular dynamics.
The feedback is achieved through a sequence of controlled pump-and-probe measurements. Adaptive schemes are highly suitable for laboratory implementation for several reasons. They do not presuppose a detailed knowledge of the quantum system under control, e.g., the exact Hamiltonian of the system. In addition, there is no need to solve the time dependent Schrödinger equation. This can speed up the optimization considerably since computations take a long time even for medium sized molecules. The solution of the time dependent Schrödinger equation for a 1 ps interval for a medium weight three atom molecule is at the edge of present day computer power. This is to be compared to the 5 Hz–1 kHz repetition rate of high power lasers and state of the art adjustment possibilities [9, 10] that allow at least 10–100 Hz laboratory iteration frequency. All laboratory equipment can drift with time. A further advantage of using adaptive algorithms is that they can overcome this problem as they monitor changes in performance and can constantly correct the control. One of the questions to be answered before going into the laboratory is the effect of imprecision both in setting the control parameters and in measuring the success of the control. It has already been shown that the presence of noise does not make the optimization impossible [11]. In the present paper we investigate the question in more detail. We will examine the effect of random and systematic errors both in setting the control parameters and in measuring the results for feedback. Recent studies have shown that the use of global search techniques is often desirable in quantum mechanical optimal control problems [8, 12]. We will simulate the use of two such methods here: the GA and the SA methods. In this paper we will show that a wide range of systematic errors in the laser control and measuring devices does not seriously change the performance of AFOC. We will also show that with a proper choice of the adaptive system a high level of noise can also be tolerated. These results suggest that there is wide latitude for laboratory controls and observational methods, which still should give excellent feedback control. The complexity of the molecule will play a role here, and due caution is still called for. The results reported here were achieved by simulating the control of selective excitation of a four level model system. On a different mathematical problem similar results were found, and this latter problem is discussed briefly.

2 Simulation environment

The goal of this study is to show the effect of control and measurement imprecision on the success of laboratory AFOC of molecular events. The significance of these effects, to some degree, depends on the problem, the optimization method used and the kind(s) of imprecision present. It is not possible to run a test which is entirely general; therefore we chose optimization methods which seemed promising in other studies, and a representative problem with characteristic imprecision types. At this stage of development simplicity is important to build a base of understanding.

2.1 The problem

In the present work we examine a simple four level quantum mechanical system containing a ground state level (level 1), a close lying excited level (level 2) and two energetically degenerate upper levels (levels 3 and 4); this is the same model as used in [11]. The transition dipole moment had the same value between each pair of states (μ), except for pair 1–4, for which the moment was of opposite sign (−μ). The goal is to drive the system from level 1 at t = 0 into level 4 by t = T. We therefore assumed the population of level 4 at time T was measured, and this population was also chosen as the objective functional to be maximized. The system was driven by three different constant amplitude laser pulses (0 ≤ t ≤ T), matching the frequencies of the three transitions:

    E(t) = Σ_{i=1}^{3} A_i cos(ω_i t + φ_i),    (1)

where i = 1, 2 and 3 correspond to transitions 1 ↔ 2, 1 ↔ 3 (1 ↔ 4) and 2 ↔ 3 (2 ↔ 4), respectively, and where ω_i is the frequency, A_i is the amplitude and φ_i is the phase of the corresponding pulse. For convenience we define six new independent parameters: u_i (i = 1, 2, 3) corresponding to the amplitudes and v_i (i = 1, 2, 3) corresponding to the phases of the three pulses,

    A_i = A_max u_i,    φ_i = 2π v_i.    (2)

All six control parameters span the range [0, 1]. The values of the different variables are shown in Table 1. The simulations were performed by solving the time dependent Schrödinger equation within the rotating wave approximation. Another problem we treated is the maximization of a mathematical function:
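A minimal numerical sketch can illustrate how the six controls of Eq. (2) map to a final population. The resonant rotating-wave couplings, phase conventions and the single matrix exponential below are our assumptions; the paper only states that the Schrödinger equation was solved within the rotating wave approximation. The parameter values follow Table 1.

```python
import numpy as np

# Sketch of the four-level control problem: controls (u, v) in [0,1]^6 set the
# amplitudes and phases of Eq. (2); an assumed resonant rotating-wave
# Hamiltonian is then propagated over the pulse duration T.

MU = 1.0        # transition dipole moment (a.u.), Table 1
A_MAX = 5.0e-7  # maximum field amplitude (a.u.)
T = 4.13e7      # pulse duration (a.u.)

def population_level4(u, v):
    """Final population of level 4 for controls u_i, v_i in [0, 1]."""
    A = A_MAX * np.asarray(u, dtype=float)           # A_i = A_max u_i
    phi = 2.0 * np.pi * np.asarray(v, dtype=float)   # phi_i = 2 pi v_i
    g = 0.5 * MU * A * np.exp(1j * phi)              # half Rabi rates (assumed form)

    H = np.zeros((4, 4), dtype=complex)
    H[0, 1] = g[0]    # field 1 couples 1 <-> 2
    H[0, 2] = g[1]    # field 2 couples 1 <-> 3 ...
    H[0, 3] = -g[1]   # ... and 1 <-> 4, with the opposite-sign dipole (-mu)
    H[1, 2] = g[2]    # field 3 couples 2 <-> 3 ...
    H[1, 3] = g[2]    # ... and 2 <-> 4
    H = H + H.conj().T                               # Hermitian RWA Hamiltonian

    # psi(T) = exp(-i H T) psi(0), via eigendecomposition of the Hermitian H
    w, V = np.linalg.eigh(H)
    psi0 = np.array([1.0, 0.0, 0.0, 0.0], dtype=complex)
    psi_T = V @ (np.exp(-1j * w * T) * (V.conj().T @ psi0))
    return float(np.abs(psi_T[3]) ** 2)
```

Since the propagation is unitary, the four populations always sum to one; an optimizer then searches the six-dimensional unit cube for controls maximizing this quantity.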

    f(x) = Σ_{i=1}^{6} sin(x_i − i/7) / (x_i − i/7).    (3)

Each component of x = (x_1, …, x_6) was in the interval [0, 10]. The reason for the choice of this problem was that it is known to have a large number (10^6) of local maxima and only one global maximum, in contrast to the other example with only a few local maxima.
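As a quick sanity check, Eq. (3) can be evaluated directly; handling the removable singularity via `np.sinc` is an implementation convenience, not something prescribed by the paper.

```python
import numpy as np

# The six-dimensional test function of Eq. (3): a separable sum of shifted
# sin(z)/z terms on [0, 10]^6 with many local maxima.

def f(x):
    x = np.asarray(x, dtype=float)
    i = np.arange(1, 7)            # component index i = 1, ..., 6
    z = x - i / 7.0
    # sin(z)/z, with the removable singularity at z = 0 handled by np.sinc,
    # since np.sinc(t) = sin(pi t) / (pi t)
    return float(np.sum(np.sinc(z / np.pi)))
```

Each term attains its maximum value 1 at x_i = i/7, so the global maximum is f = 6 at x = (1/7, …, 6/7), which lies inside the search box.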

ω_1 = 1000 cm^−1,  ω_2 = 21000 cm^−1,  ω_3 = 20000 cm^−1,  μ = 1 a.u.,  A_max = 5.0 × 10^−7 a.u.,  T = 4.13 × 10^7 a.u.

Table 1: Values of parameters characterizing the quantum system.

The results obtained are similar to those related to the former problem, thus only one example will be given from these runs.

2.2 The types of imprecision

In the laboratory environment there are several sources of error. The control device (laser) can never be set with perfect precision, nor can the results be measured perfectly. In the following we will call these errors input error and output error, respectively. The error can be characterized as random or systematic. Using these criteria we can therefore consider four basic kinds of errors. The errors can be treated as transformations in the present context. The input error is a transformation applied to the control parameters before simulating the experiment, while the output error is applied to the measured values before they are presented to the optimization algorithm. In the following we describe the specific kinds of errors employed. Systematic errors can be of very different types and are difficult to characterize in general. In this paper we deal with two common kinds: the error caused by saturation and the error introduced by finite resolution. Saturation occurs in virtually all measurements, and rounding is characteristic of digital measuring devices. They can be characterized, respectively, by the following transformations on the variable x:

The saturation transformation T_1(x; α) maps the unit interval one-to-one onto itself; its graph is shown in Fig. 1 for several values of the parameter α (α = 0 gives the identity, and increasing α gives increasingly severe saturation). The rounding transformation T_2(x; α) rounds x to the nearest integer multiple of the resolution α. The random measurement error is modeled by the noise transformation

    T_3(x; σ) = B(x + ξ),

where ξ is a Gaussian random variable with probability density

    p(ξ = x) = (1 / (√(2π) σ)) e^(−x² / 2σ²),    (7)

and B is defined as

    B(y) = { 0 if y < 0;  1 if y > 1;  y otherwise }.    (8)

Figure 1: The graph of the saturation transformation T_1.
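A minimal sketch of the finite-resolution and noise transformations. The round-to-nearest-multiple form of T_2 is our reading of "retaining 3 digits" for α = 0.001; B and the Gaussian ξ follow Eqs. (7) and (8).

```python
import numpy as np

def T2(x, alpha):
    """Finite-resolution (rounding) error: x rounded to the nearest
    multiple of the resolution alpha (assumed form)."""
    return alpha * np.floor(x / alpha + 0.5)

def B(y):
    """Clipping transformation of Eq. (8): 0 below the unit interval,
    1 above it, identity inside."""
    return np.clip(y, 0.0, 1.0)

def T3(x, sigma, rng=None):
    """Additive Gaussian noise of standard deviation sigma, clipped
    back to [0, 1] by B."""
    rng = np.random.default_rng() if rng is None else rng
    return B(x + rng.normal(0.0, sigma))
```

Both transformations keep their result inside the unit interval, matching the convention stated in the text.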

The above definitions of the transformations assume that the variable x to be transformed is in the interval [0, 1], and the result is in the same interval. The transformations are applied to the input variables u_i, v_i introduced above, and to the output result of the quantum controlled system. All of the relevant variables are defined to be in the unit interval.

2.3 Optimization methods

Optimization methods are often categorized as 'local' and 'global', depending on whether they tend to search for the closest, and often local, optimum, or for the global one. However, even global methods may not find the true global optimum in a finite number of iterations. Most of the traditional methods (e.g., the various types of gradient methods) are local, while some non-traditional ones, among others the GA and the SA, are global. The local search techniques are considerably faster, but experience shows that they may be inadequate for some quantum mechanical control problems, which may have a large number of poor quality local optima (see, e.g., [8, 12]). In this study we used the GA and SA global methods. The GA is a biologically inspired heuristic [13]. It is appealing because it employs very simple operations, yet tends to find the global solution. The GA maintains a number of possible solutions, called 'individuals' in this context, which are collectively called the 'population'. A run of the algorithm proceeds as follows. At the beginning the individuals are chosen randomly. Then these individuals, the members of the first generation, are evaluated. After the testing is complete each member of the population is allowed to have 'offspring', which are generated from it by so-called genetic operators. Traditionally two genetic operators are used: one, called crossover, which tries to combine good partial solutions contained in different individuals, and one, called mutation, which is practically a random search operation. The number of offspring an individual can have is proportional to its 'fitness', based on its testing results. The SA algorithm is physically inspired [14]. It can be regarded as a generalization of hill climbing algorithms (although in the literature SA is more typically introduced for minimization). The algorithm is usually started from a randomly chosen point.
The SA algorithm then proposes a random step, which is accepted with probability

    min{ e^(−β ΔE), 1 },    (9)

where ΔE is the difference between the values of the objective functional at the present and the proposed point, being negative if the proposed point is better than the current one, and β is a positive constant, called the 'inverse temperature', which is increased during the process. In our work we used the conventional GA described by Goldberg in [13], with the coding of the continuous variables as fixed point numbers, using the basic genetic operators (mutation, crossover), regular selection mechanism and linear scaling. For SA we used an implementation based on the simplex algorithm [14]. All optimization methods have shortcomings. A well known problem with the GA is that it is good at finding the neighborhood of the optimal solution even in traditionally difficult problems, but it is not necessarily good at finding the exact optimum. Moreover, the quality of the best solution the GA finds has a significant variance. These problems can be overcome by augmenting the GA with a traditional local search method which is started from the best point the GA found.
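The acceptance rule of Eq. (9) can be exercised in a minimal one-dimensional maximization loop; the proposal distribution and the linear cooling schedule are illustrative choices, not the simplex-based implementation of [14].

```python
import math
import random

def accept(delta_e, beta, rng):
    """Eq. (9): accept with probability min(exp(-beta * dE), 1), where
    dE = f(current) - f(proposed) is negative when the proposal is better."""
    return rng.random() < min(math.exp(-beta * delta_e), 1.0)

def anneal(f, x0, steps=2000, step=0.1, rng=None):
    """Toy SA maximizing f on [0, 1] with an increasing inverse temperature."""
    rng = rng or random.Random(0)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for k in range(steps):
        beta = 1.0 + k / 50.0                       # illustrative cooling schedule
        y = min(1.0, max(0.0, x + rng.uniform(-step, step)))
        fy = f(y)
        if accept(fx - fy, beta, rng):              # dE < 0 means y is better
            x, fx = y, fy
        if fx > best_f:
            best_x, best_f = x, fx
    return best_x, best_f
```

With f(x) = −(x − 0.7)², the walk accepts every uphill step, accepts downhill steps ever more rarely as β grows, and concentrates near the maximum at x = 0.7.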

2.4 Testing methods

Both optimization procedures, the GA and the SA, have some parameters which control their behavior. Fine tuning of these parameters can improve the results; however, the improvements are typically not significant. Furthermore, the focus of this study is to show the trends of the changes caused by the different errors, which do not depend strongly on the absolute quality of the obtained solutions. Therefore we used the same parameter values in each run. The results are depicted by giving the graphs of the objective, the probability of being in state 4 at t = T, as a function of trial number. The 'measured objective' is the result of testing the control parameters (u_i and v_i values) prescribed by the optimization algorithm and applying all the input and output transformations in use. The value of the 'true objective' is derived by testing the same control parameters but without applying either the input or the output transformations T_1 and T_2, and without applying T_3 (adding noise) to the output. The reason for using this kind of objective is twofold. Eliminating the output transformation is necessary to compare results obtained by using different output transformations. The elimination of noise makes results for different noise amplitudes comparable. Similarly, one can assume that there is a possibility in the laboratory of decreasing the noise, and the elimination of noise gives


predictions for this case. It should be noted that it is always the measured objective that is used to drive the optimization; the true objective is used only for analyzing the results. Both optimization methods have considerable variance. In the laboratory setting many runs would ideally be performed to define the control statistics. In the simulations all runs were repeated 10 times, and the figures show the averages of the ten independent runs. In spite of the averaging, the results have considerable variance. There are two basic sources of error: initialization and the stochastic nature of the algorithms. To decrease the variance and to make the effect of the individual transformations more pronounced, the algorithms were identically initialized for the different transformation parameter values. A high variance of the performance of the optimal solution found by the GA can be observed: the curves corresponding to the transformation free cases in the different figures are noticeably different. In most cases the figures show the performance of the best of the generation for GA and the best of 100 trials for SA. In one case, where due to the presence of noise the 'best' was not well defined, we gave the average values instead. The SA implementation was based on the simplex algorithm. In n dimensions the simplex algorithm maintains n + 1 points. At each step it tries to improve the worst of the n + 1 points. For this reason it is possible that a graph depicting the statistics of the tested points shows decreasing quality while in practice there is no degradation; it is just that the algorithm could not further improve the best point. In each case where the graphs showed degrading quality we checked whether it was caused by the above problem or by real degradation. Cases of real degradation are discussed in the text.
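The basic simplex move the text describes (reflect the worst of the n + 1 points through the centroid of the rest) can be sketched as follows; the full method of [14] adds expansion and contraction steps and, for SA, thermal fluctuations in the accept/reject decision.

```python
import numpy as np

def reflect_worst(points, values):
    """One Nelder-Mead-style reflection for a maximization problem.

    points: (n + 1, n) array of simplex vertices; values: objective at each.
    Returns the index of the worst vertex and its reflected trial point."""
    worst = int(np.argmin(values))                 # lowest objective = worst
    others = np.delete(points, worst, axis=0)
    centroid = others.mean(axis=0)
    trial = centroid + (centroid - points[worst])  # reflection coefficient 1
    return worst, trial
```

Because only the worst vertex is ever replaced, the best vertex can stagnate while the tested points still fluctuate, which is exactly the apparent-degradation artifact discussed above.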

It should be noted that in the transformation free case (α = 0) the SA clearly outperforms the GA. This reflects the fact that the GA is good at finding the neighborhood of the optimum but not very good at finding the exact optimum, as mentioned previously.

3 Results

The AFOC of the described system was simulated by employing different transformations (corresponding to different kinds of error on the input and output) with different transformation parameters. The different transformations were employed separately, to allow the separation of their effects. In each case we describe the similarities and differences between the two optimization procedures.

Figure 2: The achieved probability of being in the target state averaged over 10 runs and its standard deviation for the transformation free case. For an analysis of the SA and GA performances see the text in Sec. 3.1.

3.1 Performance without errors

To make the figures clear the variances were usually not plotted. Figure 2 shows the average along with its standard error. Thus the plots indicate the inherent statistics of the GA and SA algorithms for finding solutions. Several comments are warranted on these plots. First, the GA is not trapped in a local solution, and is slowly converging with the help of mutation. As already mentioned, the slow convergence can be overcome by utilizing a local search at the end. Finally, at this point a preliminary conclusion would appear to be that the SA has outperformed the GA on this problem. However, the results below, including realistic laboratory errors, will alter this impression.

3.2 Systematic error in the input side

An important difference between control methods utilizing a priori knowledge of the system being controlled and those not using such information is in their reaction to systematic error in the input side. Methods which design the necessary laser pulse in advance may fail if the control device has even the simplest systematic error, since the laser shot delivered will not be the one designed. An adaptive system working directly in the laboratory will run the optimization with all the errors present. Therefore if the input transformation (error) is homeomorphic (one to one) an adaptive system can easily find the optimum. This does not mean that the performance of an adaptive system may not be changed: the transformation can enlarge or diminish the region where the optimal solution is located, thereby making it easier or more difficult to find. As expected, saturation, which is a homeomorphic transformation, has little effect on the performance of the adaptive systems (see Fig. 3). Rounding, on the other hand, is not homeomorphic. The worse the resolution, the worse the performance we can expect. This trend can be seen in Fig. 4. The actual degree of degradation is, however, not worth quoting here, as the precision a measurement requires may differ by orders of magnitude.

Figure 3: Change in performance caused by input side saturation (measured objective; α = 0, 1, 2, 3, 5).

Figure 4: Change in performance caused by input side rounding (measured objective; α = 0, 10^-4, 10^-3, 10^-2, 10^-1).
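The homeomorphic-input argument can be checked numerically: a one-to-one distortion of the control only reparameterizes the search space, so a search through the distortion still reaches the same physical optimum. The smooth monotone map g below is an arbitrary illustration, not the paper's T_1.

```python
import numpy as np

def g(u):
    """An arbitrary one-to-one map of [0, 1] onto itself (illustrative)."""
    return u * u * (3.0 - 2.0 * u)   # smoothstep: strictly monotone on [0, 1]

def objective(x):
    """Toy 'true' objective with its optimum at x = 0.6."""
    return -(x - 0.6) ** 2

u = np.linspace(0.0, 1.0, 100001)
best_u = u[np.argmax(objective(g(u)))]   # optimize through the distortion
# g(best_u) recovers the undistorted optimum x = 0.6
```

The optimizer settles on a different control value than it would without the distortion, but the physical field it produces, g(best_u), is the same optimum.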

Figure 5: Change in performance caused by output side saturation (true objective; α = 0, 1, 2, 3, 5).

Figure 6: Change in performance caused by output side rounding (true objective; α = 0, 10^-4, 10^-3, 10^-2, 10^-1, 5·10^-1).

3.3 Systematic error in the output side

The results above show that systematic input error does not seriously degrade the performance if it is homeomorphic. The same result is expected if the output transformation is monotone. Since the GA and SA methods do not use any information from the measured performance other than its magnitude, the performance should not be seriously changed as long as a higher measured performance corresponds to a better solution. Figures 5 and 6 show the results of the simulated experiments. The performance is somewhat degraded since both saturation and rounding lower the selectivity, i.e., solutions of different real performance are mapped to the same measured performance (rounding), or to similar ones (saturation). If the transformation is not monotone the algorithm may not find the optimal solution, as the maximum of the transformed function may be at other values of the control parameters than the maximum of the untransformed one. When designing the experiment and the objective function one has to be aware of this limitation, i.e., one has to have at least a rough estimate of the types and magnitudes of the systematic errors which can be present.

3.4 Noise in the output side

While the above results could be anticipated to some degree, the effect of noise has more subtleties. Random error in the output side does not alter the performance directly, yet it can mislead the optimization methods. Indeed, the SA's performance is significantly lowered. However, the GA's performance is practically unchanged even for extremely high values of noise (see Fig. 7). This difference between the GA and the SA can be explained by the integrative nature of the GA: the GA never makes sharp decisions, rather it gives preferences to certain solutions. The long decision process means that the error is practically averaged out. The SA, like most conventional methods, makes decisions at every step (after every test) and can therefore be easily misled by an erroneous measurement. This effect is well pronounced in the figure. The fact that the GA can tolerate a high level of noise has important practical implications. If a higher level of noise can be allowed one can usually use fewer measurements, since noise averaging is time consuming. Supposing that the population probe noise has a Poisson distribution (typical for high-performance lasers), a decrease of the noise content by an order of magnitude requires an increase of two orders of magnitude in integration time. 1/f noise is even worse in this respect. In the same way the need for signal averaging is eased. Noise resistance can thus be very useful because optimization is time consuming, especially if one is to find the global optimum. It should be noted that the SA's performance actually degrades after a time. This is because of the presence of noise: a solution can be measured better or worse than it really is. If a solution is measured much better than it is, then, for high inverse temperature (β) values, the algorithm will not leave that point unless an even larger error occurs.
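The integration-time argument can be illustrated with Poisson statistics: for shot-noise-limited counting the relative fluctuation scales as 1/√(mean counts), so a tenfold noise reduction costs a hundredfold in integration time. The count rate below is an arbitrary illustrative value.

```python
import numpy as np

rng = np.random.default_rng(0)
RATE = 100.0   # mean counts per unit integration time (illustrative)

def relative_noise(integration_time, trials=4000):
    """Sampled relative standard deviation of Poisson counts."""
    counts = rng.poisson(RATE * integration_time, size=trials)
    return counts.std() / counts.mean()

r_short = relative_noise(1.0)    # mean 100 counts: relative noise near 0.10
r_long = relative_noise(100.0)   # mean 10000 counts: relative noise near 0.01
```

The hundredfold longer integration buys only a tenfold drop in relative noise, which is why an optimizer that tolerates noisy objective values can iterate much faster.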


Figure 7: Change in performance caused by output side noise (true objective; α = 0.0, 0.1, 0.2, 0.3, 0.4).

Figure 8: Change in performance caused by input side noise (measured objective, averages; α = 0.00, 0.01, 0.02, 0.03, 0.04).

3.5 Noise in the input side

If the input is noisy the average performance of the control is lowered, regardless of the optimization technique used. This is because one cannot set the control parameters to their best values, only approximately. Therefore the performance can be severely degraded for high noise levels, as can be seen in Fig. 8. However, it turns out that the true objective value (see Sec. 2.4) is significantly higher than the measured one (Fig. 9). This reflects the fact that, in spite of the noise, the algorithms can find a near optimal performance. This is especially true for the GA. The GA's tolerance to input side noise is also very important. In cases that involve mechanical adjustments, precise setting of the control parameters takes time, and higher precision takes longer to achieve. The tolerance to noise in this case allows the use of faster adjustments, further reducing the time needed for optimization.

Figure 9: Change in performance caused by input side noise (true objective; α = 0.00, 0.01, 0.02, 0.03, 0.04).

3.6 Combined distortion and noise

To see the combined effect of the noise and transformations the optimization was performed with all the transformations and noises present. The transformation parameter values chosen are somewhat arbitrary: the input was transformed by T_1 with α = 2 (i.e., a severe saturation distortion, see Fig. 1), the output by T_2 with α = 0.001 (i.e., retaining 3 digits), and the noise in both cases was σ = 0.05 (i.e., 5% standard deviation).

Figure 10 shows the performance for the quantum system. The drop in the SA performance after about 1000 trials occurs for the reasons explained for Fig. 7. Thus the SA achieved reasonable results faster than the GA; however, the GA ultimately converged to better performance with additional trials. Caution is always called for in such comparisons as they depend on how the optimization algorithms are tuned and the problem studied. In particular, the number of local maxima can significantly impact the AFOC. Quantum control problems are known to exhibit multiple solutions [15, 16], and the numerical investigation in the present problem revealed multiple maxima of different quality. A demanding test in the presence of many local maxima is given by the maximization of the model test function f(x) in Eq. 3, with its 10^6 local maxima. The result of this testing is shown in Fig. 11. It is evident in this case that the GA has outperformed the SA. Ultimately, the level of complexity of the quantum system will affect the performance of AFOC, and this point deserves further study.

Figure 10: The performance under the combined effects of transformations and noise for the quantum system: the average over 10 runs and its standard deviation.

Figure 11: The performance under the combined effects of transformations and noise for the maximization of Eq. 3: the average over 10 runs and its standard deviation.


4 Conclusions

In this paper we studied the effect of laboratory imprecision on AFOC of quantum systems. We argued that adaptive systems are better suited to tolerate errors and showed that a homeomorphic transformation of the control parameters and a monotone transformation of the measured objective do not seriously change the quality of the optimization. The results showed that the SA clearly outperformed the GA if there was no noise present. This weakness of the GA can be circumvented by augmenting it with a local search method. It was also shown that with a proper choice of the optimization process a high level of noise can be tolerated, especially in the measurement of the objective. We showed that the GA outperforms the SA in seriously noisy environments and argued that in such environments the GA can outperform most conventional algorithms as well. This was explained by the integrative nature of the GA: the GA does not make sharp decisions about possible solutions; rather, it expresses preferences. In this elongated decision making process effective signal averaging takes place. High noise resistance is of great practical importance. The possibility of allowing high noise enables one to set the controls faster. In the same way, requirements for signal averaging are eased. It is our opinion that both SA and GA, or possibly other learning algorithms, will make laboratory implementation of quantum control feasible.

Acknowledgement

This work was partially supported by OTKA Grant #1890/1991 and by the US-Hungarian Joint Fund (Grant JFNo.168/91b). G.J.T. would like to thank the Soros Foundation for the support he received while working at Princeton. H.R. acknowledges support from the Office of Naval Research and the Army Research Office.

References

[1] M. Shapiro and P. Brumer, J. Chem. Phys. 84, 4103 (1986).
[2] P. Brumer and M. Shapiro, Chem. Phys. Lett. 126, 541 (1986).


[3] D. J. Tannor and S. A. Rice, J. Chem. Phys. 83, 5013 (1985).
[4] D. J. Tannor, R. Kosloff, and S. A. Rice, J. Chem. Phys. 85, 5805 (1986).
[5] A. P. Peirce, M. A. Dahleh, and H. Rabitz, Phys. Rev. A 37, 4950 (1988).
[6] R. Kosloff, S. A. Rice, P. Gaspard, S. Tersigni, and D. J. Tannor, Chem. Phys. 139, 201 (1989).
[7] R. S. Judson and H. Rabitz, Phys. Rev. Lett. 68, 1500 (1992).
[8] B. Amstrup, J. D. Doll, R. A. Sauerbrey, G. Szabó, and A. Lőrincz, Phys. Rev. A 48, 3830 (1993).
[9] A. M. Weiner, D. E. Leaird, J. S. Patel, and J. R. Wullert II, IEEE J. Quant. Elect. 28, 908 (1992).
[10] A. M. Weiner, D. E. Leaird, D. H. Reitze, and E. G. Paek, IEEE J. Quant. Elect. 28, 2251 (1992).
[11] P. Gross, D. Neuhauser, and H. Rabitz, J. Chem. Phys. 98, 4557 (1993).
[12] T. Szakács, J. Somlói, and A. Lőrincz, Chem. Phys. 172, 1 (1993).
[13] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, Reading, MA, 1989.
[14] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes in C: The Art of Scientific Computing, Cambridge University Press, Cambridge, U.K., second edition, 1992.
[15] M. Demiralp and H. Rabitz, Phys. Rev. A 47, 809 (1993).
[16] M. Demiralp and H. Rabitz, Phys. Rev. A 47, 831 (1993).

