
Nuclear Data Sheets 123 (2015) 57–61 www.elsevier.com/locate/nds

Subspace-based Inverse Uncertainty Quantification for Nuclear Data Assessment

B.A. Khuwaileh¹,* and H.S. Abdel-Khalik¹

¹Department of Nuclear Engineering, North Carolina State University, Raleigh, NC, USA
(Received 12 May 2014; revised received 14 July 2014; accepted 19 August 2014)

Safety analysis and design optimization depend on the accurate prediction of various reactor attributes. Predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. An inverse problem can be defined and solved to assess the sources of uncertainty, and experimental effort can subsequently be directed to reduce the uncertainty of these sources. In this work a subspace-based algorithm for inverse sensitivity/uncertainty quantification (IS/UQ) has been developed to enable analysts to account for all sources of nuclear data uncertainty in support of target accuracy assessment-type analyses. An approximate analytical solution of the optimization problem is used to guide the search for the dominant uncertainty subspace. By limiting the search to a subspace, the degrees of freedom available to the optimization search are significantly reduced. A quarter PWR fuel assembly is modeled, and the multiplication factor and the fission reaction rate are used as the reactor attributes whose uncertainties are to be reduced. Numerical experiments are used to demonstrate the computational efficiency of the proposed algorithm. Our ongoing work focuses on extending the proposed algorithm to account for various forms of feedback, e.g., thermal-hydraulic and depletion effects.

I. INTRODUCTION

Nuclear reactor design and safety calculations require rigorous calculation of several important reactor attributes, such as the multiplication factor and the reactor power and temperature distributions. Therefore, new developments in uncertainty reduction are paramount to improving the competitiveness of nuclear energy against other energy sources, making it an economic and safe energy alternative. Nuclear data are considered a major contributor to the uncertainties in the calculated reactor attributes. Therefore, it is natural to seek algorithms that identify the key nuclear data whose reduced uncertainties would have the highest impact on the uncertainties of the reactor attributes of interest. Nuclear data measurement experiments could then be established to reduce the uncertainty of the identified nuclear data. Given the cost of such experiments, which varies noticeably from one isotope-, reaction- and energy-specific cross section to another, one must take into account both the cost of the experiment and the potential benefit of the uncertainty reduction for the attributes of interest. This is possible via a constrained optimization problem that minimizes a cost function, representing the cost of the experiments, while being constrained by the reduced uncertainty sought for the attribute(s) of interest. This problem was tackled under the name "nuclear data target accuracy assessment," initially developed by Usachev in the 1970s [1]. We refer to it as the inverse sensitivity/uncertainty quantification (IS/UQ) problem. The IS/UQ problem has been applied to current and future reactors [2-7]. These studies considered different integral quantities such as the multiplication factor, reactivity coefficients and various important reaction rates. Based on target uncertainties for the attributes, as defined by design/economic considerations, these studies have shown that the current nuclear data evaluations need further improvement; see Ref. [2] for an example of a comprehensive study. Ideally, all parameters that might contribute significantly to the overall uncertainty of the attribute of interest must be included in the IS/UQ analysis, for example: nuclear fuel and structural material cross sections, fission product concentrations, temperature and power distributions, and any other potentially important source of uncertainty. Hence the number of parameters can grow very large, which increases the computational cost of the IS/UQ analysis. Such a challenge is usually addressed by eliminating parameters that do not contribute significantly to the overall uncertainty [2]. The sensitivity of each response is calculated for a reference-case composition; then the influential parameters are selected based on their contribution to the overall uncertainty. However, the sensitivity profile may not remain constant over a range of inputs around the reference case, and hence the contributions may change as the input parameters change. This means that eliminating parameters that do not contribute significantly to the uncertainty relies on the assumption that the uncertainty contribution of each source is constant. This assumption cannot always be asserted or guaranteed. Therefore, many sources of uncertainty must be included in the IS/UQ analysis.

Past work has been primarily limited to integral benchmark critical experiments, exhibiting no feedback or depletion effects, and applied to relatively small problems. We seek in this work to generalize the application of the IS/UQ algorithm to core-wide models simulating hot reactor conditions. This goal implies that one has to solve an optimization problem with a very large number of nuclear data parameters, which is currently impossible with a brute-force application of any given optimization technique. Therefore, in support of our overarching goal, we propose in this manuscript an algorithm that renders practical the solution of a high dimensional IS/UQ optimization problem. The proposed algorithm replaces the full dimensional space of all possible solutions (i.e., the search domain) with a smaller, lower dimensional subspace (i.e., a target subspace) tailored to the model's physics. A low fidelity solution of the optimization problem is used to build the target subspace. The proposed algorithm reduces the computational cost of the targeted nuclear data assessment, allowing the designer to study a wider range of uncertainty sources and to include finer energy groups. The algorithm is applied to a PWR assembly model for which target accuracies of the multiplication factor and the fission reaction rate have been defined, and the performance of the proposed algorithm is investigated and compared against the conventional IS/UQ approach. In this work, the differential cross section data and covariance data of the constituent isotopes are those distributed with SCALE-6.1 [8].

* Corresponding author: [email protected]
http://dx.doi.org/10.1016/j.nds.2014.12.010; 0090-3752/© 2014 Elsevier Inc. All rights reserved.
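The quantity constrained throughout this analysis is the response variance obtained from first-order uncertainty propagation, var(R) = S C Sᵀ, the "sandwich rule" that reappears as the constraint in Eq. (3). A minimal sketch with made-up numbers (not evaluated nuclear data):

```python
import numpy as np

# First-order "sandwich" propagation of nuclear data covariance to a
# response variance: var(R) = S C S^T. S is the sensitivity profile of
# the response with respect to the data; C is the prior covariance.
# All numbers below are illustrative only.
rng = np.random.default_rng(0)

n = 6                                # number of data parameters
S = rng.normal(size=(1, n))          # sensitivity row vector dR/dsigma
A = rng.normal(size=(n, n))
C = A @ A.T                          # symmetric positive semi-definite covariance

var_R = (S @ C @ S.T).item()         # propagated response variance
assert var_R >= 0.0                  # a valid covariance never yields negative variance
```

Because C is positive semi-definite, the propagated variance is nonnegative regardless of the sign pattern of S; the off-diagonal (correlation) terms of C can either inflate or cancel contributions, which is why the paper insists they be retained in the constraint.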

II. ALGORITHM

Reference [1] contains the original details of the formalism of the IS/UQ problem. The IS/UQ problem is a nonlinear optimization problem with inequality constraints. Starting from the prior covariance matrix C of the multi-group cross sections, whose diagonal elements are the variances of the cross sections, the target is to calculate the updated covariance matrix C' by updating its elements as

    C'_ij = x_i C_ij x_j,                                                    (1)

where the x_i are the adjustment parameters to be optimized while minimizing the cost associated with obtaining the reduced covariance data C'. The cost to be minimized can be defined as

    Cost[C'] = Σ_{i=1}^{n} w_i / C'_ii = Σ_{i=1}^{n} w_i / (x_i C_ii x_i),   (2)

where the w_i are user-defined weights corresponding to the cost of measuring the various cross sections in different energy ranges. The constraint that must be satisfied by the optimal solution is

    S C' S^T ≤ var(R),                                                       (3)

where var(R) is a vector of maximum allowed variances in the responses of interest, and S is the application's sensitivity profile with respect to the input at the reference case. Note that the expression in Eq. (3) includes the covariance terms. The covariance terms reflect the correlations between different parameters; hence they may lead to an overall variance reduction. It was shown previously that the correlation terms are important in targeted nuclear data assessment [9]. In order to guide the optimization algorithm to search along certain directions, these directions are constructed from the subspace determined by a low fidelity solution of the problem, which can be obtained by neglecting the covariance (correlation) terms. In this case the problem can be solved by Lagrange multiplier analysis, which yields the uncertainty requirements that meet the specified constraint while minimizing the function defined in Eq. (2) subject to the constraint defined in Eq. (3) [10]. The adjustment parameter x_i indicated in Eq. (1) is obtained from the low fidelity solution and can be shown to have the form

    x_j = (1/σ_j^0) [ w_j (var(R_l))^2 / ( s_j^2 ( Σ_{i=1}^{n} √(w_i) |s_i| )^2 ) ]^{1/4},   (4)

where σ_j^0 is the initial uncertainty in the j-th parameter, s_j is the corresponding sensitivity coefficient of the response with respect to the j-th parameter, and var(R_l) is the tolerance in the variance of the l-th response. Eq. (4) provides a low fidelity solution of the optimization problem. Hence it can be used to construct the basis of a subspace that is highly likely to include the solution of the actual problem. The following steps summarize the process of constructing this subspace:

1. Determine the sources of uncertainty (the input uncertainties to be considered), along with the set of constraints.

2. Perturb the model input and collect the sensitivity samples, then use Eq. (4) to construct the corresponding adjustment parameter samples for the l-th constraint,

    X_l = [Δx_1, ..., Δx_k],  where Δx_i = x_i − I, and I ∈ R^n is the vector with all elements equal to one.

3. Build an orthonormal basis of the subspace spanned by the samples,

    X_l = Q_l R_l.
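The sampling and basis-construction steps above, together with the error bound of Eq. (5), can be sketched as follows. The sensitivity profile, weights and target variance here are hypothetical stand-ins, not evaluated nuclear data; the perturbation range mirrors the ±30% used later in the paper:

```python
import numpy as np

# Sketch of steps 2-4: sample the closed-form low fidelity solution of
# Eq. (4) under perturbed sensitivities, collect Delta x = x - 1, build an
# orthonormal basis by QR (step 3), and evaluate the bound of Eq. (5):
#   eps <= 10*sqrt(2/pi) * max_i ||(I - Q_l Q_l^T) dx_i||_2.
rng = np.random.default_rng(1)
n, k, s = 50, 12, 10              # parameters, basis samples, probe samples

def low_fidelity_x(sigma0, sens, w, var_target):
    """Eq. (4): cheapest uncertainty reduction meeting one variance target."""
    denom = sens**2 * np.sum(np.sqrt(w) * np.abs(sens))**2
    return (1.0 / sigma0) * (w * var_target**2 / denom) ** 0.25

sigma0 = rng.uniform(0.01, 0.05, size=n)   # prior relative uncertainties
w = rng.uniform(0.5, 2.0, size=n)          # cost weights
s_ref = rng.normal(size=n)                 # reference sensitivity profile

# Self-check: with correlations neglected, the adjusted uncertainties
# reproduce the target exactly: sum_j s_j^2 (x_j sigma0_j)^2 = var(R_l).
x = low_fidelity_x(sigma0, s_ref, w, 1e-6)
assert np.isclose(np.sum(s_ref**2 * (x * sigma0)**2), 1e-6)

def sample_dx():
    sens = s_ref * rng.uniform(0.7, 1.3, size=n)   # +-30% input perturbation
    return low_fidelity_x(sigma0, sens, w, 1e-6) - 1.0

X_l = np.column_stack([sample_dx() for _ in range(k)])
Q_l, _ = np.linalg.qr(X_l)                 # step 3: orthonormal basis

probes = np.column_stack([sample_dx() for _ in range(s)])
resid = probes - Q_l @ (Q_l.T @ probes)
eps_upper = 10.0 * np.sqrt(2.0 / np.pi) * np.linalg.norm(resid, axis=0).max()
# In the algorithm, k is grown until eps_upper becomes negligible.
```

The embedded assertion is the analytic identity behind Eq. (4): the weights w_j cancel out of the achieved variance, so any weight choice meets the target exactly when covariance terms are ignored.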

4. Calculate the error upper bound associated with each subspace size using the inequality ([11], [12])

    ε ≤ 10 √(2/π) max_{i=1,...,s} ||(I − Q_l Q_l^T) Δx_i||_2 = ε_upper,      (5)

where s = 10 is a reasonable choice.

5. If the inequality in step 4 is not satisfied, increase the number of samples, k = k + 1, and repeat steps 2-5. If the inequality is satisfied, save Q_l ∈ R^{n×k}.

6. Repeat steps 1-5 for each constraint (assume there are L constraints).

7. Find the union of the resulting subspaces,

    Q = ∪_{l=1}^{L} Q_l.

8. Tune the optimization algorithm to search along the directions represented by the basis (the columns of the matrix Q ∈ R^{n×r}, r ≪ n, where n is the total number of parameters and r is the rank of the basis matrix Q).

The columns of the matrix Q form the basis of the reduced domain. The basis directs the optimization algorithm to search along certain directions; thus, instead of investigating the whole space, only the directions determined by the basis are investigated. This is expressed by writing the solution vector as a linear combination of the basis vectors,

    x = Σ_{i=1}^{n} α_i q_i ≈ Σ_{i=1}^{r} α_i q_i,                           (6)

where the q_i are the basis vectors forming the columns of the matrix Q. Eq. (6) suggests that the adjustment parameter vector can be approximated using only r basis vectors instead of n. The error introduced by this approximation can be upper-bounded using Eq. (5). The premise of this work is that searching along the weight parameters (i.e., α ∈ R^r) instead of searching along the full dimensional solution space (x ∈ R^n) provides an efficient way to find an optimal solution of the problem. In the following sections the term T-subspace refers to the lower dimensional target subspace, while the term SIS/UQ refers to the proposed subspace-based inverse sensitivity/uncertainty quantification algorithm.

III. NUMERICAL RESULTS

In this section, the goal is to demonstrate the proposed algorithm using a practical example. A quarter PWR fuel assembly with MOX fuel is modeled and depleted to 15 GWD/MTU using the TRITON sequence (part of the SCALE-6.1 package) [8]. Target accuracies of the multiplication factor and the fission rate are defined, and the following nuclides are considered as the main sources of uncertainty: 234U, 235U, 236U, 238U, 238Pu, 239Pu, 240Pu, 241Pu, 242Pu, 1H, 16O, 90Zr and 10B, with the following cross sections: fission (σ_f, ID=18), elastic scattering (σ_es, ID=2), inelastic scattering (σ_ins, ID=4), fission spectrum (χ, ID=1018), capture (σ_n,γ, ID=102) and the average total (prompt plus delayed) number of neutrons released per fission event (ν̄, ID=452). The total number of input parameters is 2068; thus the full space consists of 2068 directions. The conventional IS/UQ algorithm was used to solve the optimization problem; then the problem was solved using the SIS/UQ algorithm. In order to choose the important directions in the full dimensional domain, the low fidelity analytic solution was sampled and used to build the T-subspace as indicated in the algorithm outlined in the previous section. The input parameters were perturbed (±30%), and the size of the T-subspace was determined using a previously developed error metric represented by Eq. (5) ([11], [12]). The size of the subspace is increased until it represents the low fidelity solutions with a negligible error upper bound. Fig. 1 shows the absolute error as defined by Eq. (5). Based on this, a subspace of dimension (rank) equal to 60 is a reasonable choice for the target subspace; therefore the SIS/UQ has only 60 directions (degrees of freedom) along which to search for the optimized solution, compared with the IS/UQ, which searches along 2068 directions.

The original uncertainty in k_eff is about 1000 pcm, which is to be reduced to 100 pcm; moreover, the uncertainty in the normalized fission rate in the fuel mixture (FR_fuelmixture), defined by Eq. (7), is 0.7×10^-3 and is to be reduced to 10^-4, while minimizing the financial effort represented by the cost function defined by Eq. (8). In Eq. (7), the fission rate in the cells that belong to the fuel mixture is integrated and normalized with respect to the total flux in all mixtures,

    FR_fuelmixture = ( Σ_{i∈fuel mixture} Σ_g V_i Σ_f^g φ_{i,g} ) / ( Σ_i Σ_g V_i φ_{i,g} ),   (7)

    Δcost = Σ_{i=1}^{n} w_i / C'_ii − Σ_{i=1}^{n} w_i / C_ii.                (8)
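The normalization in Eq. (7) is a volume- and group-integrated ratio. A small synthetic sketch (cell volumes, fluxes and fission cross sections are invented, with shapes chosen only for illustration):

```python
import numpy as np

# Sketch of Eq. (7): integrate V_i * Sigma_f^g * phi_{i,g} over the
# fuel-mixture cells and all energy groups, normalized by the
# volume-integrated flux over all cells. Data are synthetic.
rng = np.random.default_rng(3)
n_cells, n_groups = 8, 4
fuel = np.array([0, 1, 2, 3])                 # indices of cells in the fuel mixture

V = rng.uniform(1.0, 2.0, size=n_cells)       # cell volumes V_i
phi = rng.uniform(0.1, 1.0, size=(n_cells, n_groups))      # scalar flux phi_{i,g}
sigma_f = rng.uniform(0.0, 0.2, size=(n_cells, n_groups))  # macroscopic fission xs

numerator = np.sum(V[fuel, None] * sigma_f[fuel] * phi[fuel])
denominator = np.sum(V[:, None] * phi)
FR = numerator / denominator
assert 0.0 < FR < 1.0   # fission xs < 1 over a subset of cells keeps the ratio below one
```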

Table I summarizes the results and compares the behavior of the proposed algorithm and the conventional one under the same convergence conditions. Since sequential quadratic programming (SQP) is the most reliable


and most common optimization algorithm used in such applications, it is employed in both the IS/UQ and the SIS/UQ via the MATLAB function fmincon. Both algorithms converge to satisfactory target accuracy and minimize the costs to within a 1% difference. However, the SIS/UQ converges faster (493 function evaluations), while the IS/UQ required 3500 function evaluations to fall within the range of the target accuracy. In other words, the introduction of the T-subspace reduces the complexity of the optimization problem. The SIS/UQ algorithm always searches in a smaller space, which gives it an advantage over the IS/UQ. Moreover, the running time per function evaluation is much smaller, because the SIS/UQ searches in a smaller subspace and hence the derivatives, cost and constraints need to be evaluated at fewer points, with fewer search directions per iteration per region. The constraint-violation tolerance and the termination tolerance on the solution were both set to 10^-10. Such tight tolerances are not normally used; however, since the goal of this work is to compare the asymptotic performance of the two algorithms, these harsh convergence conditions were applied. This explains the relatively long CPU running times observed in Table I.
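The reduced parameterization x = Qα of Eq. (6) is what shrinks the search. In the sketch below, a deliberately crude random search stands in for SQP, purely to show that every cost and constraint evaluation now operates on r degrees of freedom rather than n; this is not the paper's fmincon setup, and all data are synthetic:

```python
import numpy as np

# Sketch of the reduced-space search behind SIS/UQ: write x = Q @ alpha
# and let the "optimizer" (here, random search) work on the r subspace
# coordinates instead of the n adjustment parameters.
rng = np.random.default_rng(2)
n, r = 100, 5

Q, _ = np.linalg.qr(rng.normal(size=(n, r)))   # orthonormal T-subspace basis
S = 0.1 * rng.normal(size=n)                   # sensitivities
C = np.diag(rng.uniform(0.5, 2.0, size=n))     # prior (diagonal) covariance
w = np.ones(n)                                 # cost weights
var_target = 0.5 * (S @ C @ S)                 # demand halving the prior variance

def adjusted_cov(x):
    return np.outer(x, x) * C                  # C'_ij = x_i C_ij x_j  (Eq. (1))

def cost(x):
    return np.sum(w / np.diag(adjusted_cov(x)))    # Eq. (2)

best_x, best_cost = None, np.inf
for _ in range(500):
    alpha = rng.uniform(0.1, 1.0, size=r)      # search in r dimensions only
    x = np.abs(Q @ alpha) + 1e-6               # keep adjustment factors positive
    if S @ adjusted_cov(x) @ S <= var_target and cost(x) < best_cost:
        best_x, best_cost = x, cost(x)

assert best_x is not None                      # a feasible point was found
assert S @ adjusted_cov(best_x) @ S <= var_target
```

Swapping the random search for an SQP routine changes only the loop; the key point is that the decision variable is α ∈ R^r, so gradients and constraint evaluations scale with r, not n.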

TABLE I. Summary of the numerical results and comparison of the two algorithms.

Algorithm   CPU time (hours)   Function evaluations   Increase in cost
SIS/UQ      9                  493                    3.7114e+06
IS/UQ       91                 3500                   3.6816e+06

FIG. 1. (Color online) Error metric as defined by Eq. (5). [Figure not reproduced.]

Table II and Table III show the initial and required uncertainties for the 10 most affected nuclei, where σ⁰ denotes the initial relative uncertainty and σ' denotes the required uncertainty. Comparing the solutions from the SIS/UQ and the IS/UQ, the similarity between the two schemes is obvious; however, these tables do not list all the requirements, only the nuclei with the most strict requirements. Furthermore, note that the depleted MOX fuel has a relatively high concentration of 239Pu; therefore, the sensitivities of the responses with respect to this isotope are expected to be higher for MOX fuel than for UO2 fuel. This expectation was verified numerically by calculating the sensitivities for each fuel type separately.

TABLE II. Summary of results. The 10 most affected reactions: IS/UQ algorithm.

Nuclei   Energy [eV]      Reaction ID   σ⁰ %    σ' %
238U     [100-550]        102           2.27    0.0745
239Pu    [0.275-0.325]    452           1.03    0.0564
239Pu    [0.15-0.2]       452           1.03    0.0585
239Pu    [0.07-0.1]       452           1.03    0.0588
239Pu    [0.05-0.07]      452           1.03    0.0663
239Pu    [0.25-0.275]     452           1.03    0.0728
239Pu    [0.225-0.25]     452           1.03    0.0766
239Pu    [0.1-0.15]       452           1.03    0.0766
238U     [30-100]         102           3       0.29
238U     [3-4.8]E+06      4             20.03   2.4388

TABLE III. Summary of results. The 10 most affected reactions: SIS/UQ algorithm.

Nuclei   Energy [eV]      Reaction ID   σ⁰ %    σ' %
238U     [100-550]        102           2.27    0.0745
238U     [550-3000]       102           2.39    0.1315
239Pu    [0.15-0.2]       452           1.03    0.0585
239Pu    [0.07-0.1]       452           1.03    0.0588
238U     [3-4.8]E+06      4             20.03   1.2018
239Pu    [0.05-0.07]      452           1.03    0.0663
239Pu    [0.25-0.275]     452           1.03    0.0728
239Pu    [0.225-0.25]     452           1.03    0.0766
239Pu    [0.1-0.15]       452           1.03    0.0766
238U     [30-100]         102           3       0.29

The prioritization given in Table II and Table III does not reflect the contribution of each parameter (isotope, energy, reaction) to the overall uncertainty in the responses. Therefore, in order to provide a measure of the relative reduction in uncertainty due to an individual parameter, the following metric is introduced. It quantifies the relative uncertainty reduction achieved by adjusting the i-th parameter only,

    I(R : i) = (σ⁰(R) − σ'(R : i)) / σ⁰(R),                                  (9)

where σ⁰(R) is the initial uncertainty in the response R and σ'(R : i) is the reduced uncertainty obtained by adjusting only the i-th parameter (isotope, energy, reaction). Table IV and Table V evaluate the metric defined by Eq. (9) using the solution obtained from each algorithm (IS/UQ and SIS/UQ), listing the 10 parameters that contribute most to the uncertainty reduction.
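The metric of Eq. (9) can be evaluated directly from the sandwich rule: propagate once with the prior covariance, then again with only the i-th variance adjusted. A sketch with illustrative numbers (not the assembly data of Tables IV and V):

```python
import numpy as np

# Sketch of the importance metric of Eq. (9):
#   I(R:i) = (sigma0(R) - sigma'(R:i)) / sigma0(R),
# where sigma'(R:i) is obtained by applying only the i-th optimized
# adjustment factor. All numbers are illustrative.
S = np.array([0.9, 0.4, 0.1])                 # sensitivities
C = np.diag([4.0, 1.0, 0.25])                 # prior covariance (variances)
x_opt = np.array([0.1, 0.5, 0.9])             # optimized adjustment factors

sigma0 = np.sqrt(S @ C @ S)                   # initial response uncertainty

def importance(i):
    x = np.ones_like(x_opt)
    x[i] = x_opt[i]                           # adjust only the i-th parameter
    Cp = np.outer(x, x) * C                   # Eq. (1)
    return (sigma0 - np.sqrt(S @ Cp @ S)) / sigma0

I_vals = [importance(i) for i in range(3)]
assert all(0.0 <= v < 1.0 for v in I_vals)
assert I_vals[0] == max(I_vals)   # the high-sensitivity, strongly adjusted parameter dominates
```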

TABLE IV. Importance measure prioritization (R = k_eff).

Nuclei   Energy [eV]      Reaction ID   I(k_eff : i), IS/UQ   I(k_eff : i), SIS/UQ
239Pu    [0.1-0.15]       452           0.047                 0.041
239Pu    [0.275-0.325]    452           0.046                 0.040
239Pu    [0.15-0.2]       452           0.038                 0.038
239Pu    [0.07-0.1]       452           0.038                 0.036
239Pu    [0.25-0.275]     452           0.037                 0.036
239Pu    [0.05-0.07]      452           0.036                 0.034
239Pu    [0.4-0.625]      452           0.035                 0.031
239Pu    [0.375-0.4]      452           0.022                 0.025
238U     [0.0253-0.03]    4             0.021                 0.023
238U     [550-1000]       102           0.018                 0.022

TABLE V. Importance measure prioritization (R = FR_fuelmixture).

Nuclei   Energy [eV]      Reaction ID   I(FR : i), IS/UQ   I(FR : i), SIS/UQ
239Pu    [0.1-0.15]       452           0.071              0.066
239Pu    [0.275-0.325]    452           0.046              0.045
239Pu    [0.15-0.2]       452           0.030              0.034
1H       [3.0-17.0]E+03   2             0.029              0.033
1H       [5.5-30]E+02     2             0.025              0.032
1H       [1.0-4.0]E+05    2             0.023              0.029
239Pu    [0.375-0.4]      452           0.020              0.022
238U     [0.0253-0.03]    4             0.019              0.022
238U     [550-1000]       102           0.016              0.021

IV. CONCLUSIONS

In this work, an efficient subspace-based algorithm for inverse sensitivity/uncertainty quantification is developed as a step towards an efficient application of the IS/UQ formalism to multi-physics coupled problems. A model-based target subspace (T-subspace) is constructed from a low fidelity solution of the problem and then used to replace the original search domain. The algorithm is applied and tested on a nuclear data target accuracy study in which a MOX-fueled PWR assembly model is considered and target accuracies of the multiplication factor (k_eff) and the fission rate are defined. The required nuclear data cross-section uncertainties were optimized using both the proposed algorithm (SIS/UQ) and the conventional one (IS/UQ), each equipped with SQP. Results revealed that the proposed algorithm converges faster, especially for large-scale problems. Moreover, the proposed algorithm allows the designer to include more sources of uncertainty and still obtain the optimum set of required uncertainties without an increase in the computational burden. In addition, numerical experiments (not reported in this paper) showed that greater computational savings can be achieved for problems of higher complexity and dimensionality. Current work involves improving the choice of the subspace so that further savings are achieved, and extending the existing formalism towards our overarching goal of applying inverse uncertainty quantification to multi-physics problems, such as inverse depletion uncertainty analysis, which is challenging given the uncertainty in the depletion history.

[1] L. Usachev, Y. Bobkov, "Planning an optimum set of microscopic experiments and evaluations to obtain a given accuracy in reactor parameter calculations," Report IAEA INDC(CCP-19U) (1972).
[2] G. Aliberti, G. Palmiotti, M. Salvatores, T. Kim, T. Taiwo, M. Anitescu, I. Kodeli, E. Sartori, J. Bosq, J. Tommasi, "Nuclear data sensitivity, uncertainty and target accuracy assessment for future nuclear systems," Ann. Nucl. Energy 33, 700 (2006).
[3] G. Arbanas, M. Dunn, M. Williams, "Inverse Sensitivity/Uncertainty Methods Development for Nuclear Fuel Cycle Applications," Nucl. Data Sheets 118, 374 (2014).
[4] R. Little, T. Kawano, G. Hale, M. Pigni, M. Herman, P. Obložinský, M. Williams, M. Dunn, G. Arbanas, D. Wiarda, "Low-fidelity Covariance Project," Nucl. Data Sheets 109, 2828 (2008).
[5] M. Salvatores, G. Palmiotti, G. Aliberti, H. Hiruta, R. McKnight, P. Obložinský, W. Yang, "Needs and Issues of Covariance Data Application," Nucl. Data Sheets 109, 2725 (2008).
[6] A. Courcelle, A. Santamarina, F. Bocquet, G. Combes, C. Mounier, G. Willermoz, "JEF-2.2 nuclear data statistical adjustment using post-irradiation experiments," The Physics of Fuel Cycles and Advanced Nuclear Systems: Global Developments, PHYSOR (2004).
[7] M. Williams, B. Rearden, "SCALE-6 Sensitivity/Uncertainty Methods and Covariance Data," Nucl. Data Sheets 109, 2796 (2008).
[8] ORNL, "SCALE: A Comprehensive Modeling and Simulation Suite for Nuclear Safety Analysis and Design," Report ORNL/TM-2005/39, Version 6.1 (2011).
[9] G. Palmiotti et al., "Nuclear Data Target Accuracies for Generation-IV Systems Based on the Use of New Covariance Data," J. Korean Phys. Soc. 59, 1264 (2011).
[10] K. Ito, K. Kunisch, "Lagrange Multiplier Approach to Variational Problems and Applications," SIAM (2008).
[11] J.D. Dixon, "Estimating extremal eigenvalues and condition numbers of matrices," SIAM J. Numer. Anal. 20, 812 (1983).
[12] N. Halko, P.-G. Martinsson, J.A. Tropp, "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions," SIAM Rev. 53, 217 (2011).
