MULTIVARIATE CONTROLLER PERFORMANCE MONITORING: LESSONS FROM AN APPLICATION TO A SNACK FOOD PROCESS
GABE HAARSMA a and MICHAEL NIKOLAOU b,*
a Department of Chemical Engineering, Texas A&M University, College Station, TX 77843-3122, USA
b Department of Chemical Engineering, University of Houston, Houston, TX 77204-4792, USA
Abstract: This paper discusses the application of multivariate controller performance monitoring to an industrial snack food frying process. The predicted performance of a minimum-variance controller is used as a benchmark for the evaluation of controller performance. To use this procedure, one needs an estimate of the interactor matrix characterizing the process delay-time structure and a closed-loop disturbance model estimated from a set of representative data of the controlled output variables under the current feedback scheme. We report practical experiences with the use of this technique.
Key Words: Snack food frying; Multivariate systems; Controller performance monitoring
Submitted to Journal of Process Control (March 2000)
* Corresponding author. E-mail: [email protected], Fax: +1 713 743-4323
1 Introduction
The efficient manufacture of a high quality product is a requirement for survival in today’s competitive marketplace. This can be realized with a well designed and maintained process, controlled by a well designed, tuned and maintained control system. Because of the many assumptions and trade-offs made during controller design, and the many process modifications and unmodeled disturbances, controllers at times do not work as designed. Controller Performance Monitoring (CPM), or Controller Performance Assessment, has been an emerging area in the past decade that provides means of diagnosing control loop performance (see Harris et al. [1] and Qin [2] for reviews).
The work of Harris [5] sparked significant interest in CPM and attracted the attention of several investigators as well as practitioners. In his work, Harris proposed the use of the Minimum Variance (MV) controller as a lower bound to assess the performance of univariate controllers. A key point is that this MV benchmark can be estimated from closed-loop data, without additional experiments; only the time delay of the process is assumed to be known. Harris et al. [15] and Huang et al. [16] extended this work to multivariate controllers. For this, the univariate notion of time delay needs to be generalized to the interactor matrix or time-delay matrix. In general, the interactor matrix cannot be constructed from knowledge of the individual time delays only, and additional process knowledge is required.
This paper discusses our efforts to develop and implement CPM strategies for an industrial snack food frying process. The objective of the paper is to discuss and elucidate issues that arise in the application of CPM based on MV benchmark estimation and the interpretation of controller performance. Issues regarding the stochastic identification of the disturbance model, the effect of deterministic disturbances, and input saturation will be discussed, and possible solutions or alternatives will be presented.
The paper is organized as follows: Section 2 provides an overview of the theory for the univariate MV benchmark, controller performance indices and the extension to the multivariate case. Section 3 provides a description of the process, the controller, and the system identification for controller development used in this application. The estimation of the interactor matrix is shown in section 4. Section 5 describes the stochastic estimation of the closed-loop disturbance model. Section 6 shows the results of the CPM for our application. The last two sections provide a discussion and the conclusions of the paper.
2 Theoretical Background
2.1 Performance analysis using minimum variance control
The principles underlying MV control methods originate from the work of Box and Jenkins [3] and Åström [4]. They showed that, for linear processes that are open-loop stable, the controlled output variable will follow a moving average process of finite order f under MV control, where f is the number of whole periods of true process delay. A test for detecting MV control can be constructed using this principle: if the sample auto-correlations of the process output are zero beyond lag f, then MV control is being achieved. Harris [5] showed that it is also possible to estimate the performance of a MV controller from routine operating data of the controlled variable collected under any existing stable linear controller. This gives a lower bound on achievable performance. Many industrial processes can be adequately modeled by the superposition of a linear plant model and a linear disturbance model
y_t = \frac{B(q^{-1})}{A(q^{-1})} q^{-b} u_t + d_t                (1)
where y_t is the measured output and u_t is the manipulated variable; A(q^{-1}) and B(q^{-1}) are polynomials in the backward shift operator q^{-1}; b is the number of whole process delays, which is one higher than f, the true process delay. The disturbance d_t represents the effect of all unmeasured disturbances acting on the process output. d_t can be modeled as the output of an autoregressive moving average (ARMA) process of the form
d_t = \frac{\theta(q^{-1})}{\phi(q^{-1})} a_t                (2)
where a_t is a sequence of independently and identically distributed random variables; \theta(q^{-1}) and \phi(q^{-1}) are stable polynomials in the backward shift operator q^{-1}. Alternatively, an Auto-Regressive Integrated Moving Average (ARIMA) model can be used to model d_t. Let us assume that the output y_t is regulated around a fixed setpoint y_sp by a linear time-invariant feedback controller G_c, i.e.,
u_t = G_c(q^{-1}) (y_{sp} - y_t)                (3)
It can be readily shown (Harris [5]) that the closed-loop system can be described by
y_t = \underbrace{a_t + \psi_1 a_{t-1} + \cdots + \psi_f a_{t-f}}_{\psi_d} + \underbrace{\psi^*_{f+1} a_{t-f-1} + \psi^*_{f+2} a_{t-f-2} + \cdots}_{\psi_{pc}}                (4)
The polynomial ψd is only a function of the process noise model d_t and the true process delay f. ψd is feedback invariant, a recognition of the fact that a feedback control strategy, linear or nonlinear, cannot return the process output to its setpoint until the process time delay has elapsed. The coefficients of ψd(q^{-1}) can be obtained by solving a Diophantine equation or by long division of the polynomial φ(q^{-1}) into θ(q^{-1}). Under MV control the terms beyond lag f (the ψ* coefficients) vanish, so the output equals the b-step-ahead forecast error of the disturbance. Therefore, the variance of ψd is the output variance achieved by a MV controller. Since ψd is feedback invariant, its variance will remain unchanged in the presence of any feedback control. This allows one to express the variance of a MV controller, without implementing a MV controller or having open-loop process knowledge, as
\sigma^2_{mv} = (1 + \psi_1^2 + \psi_2^2 + \cdots + \psi_f^2)\,\sigma_a^2                (5)
It is convenient to define a normalized controller performance index. Desborough and Harris [6] defined the performance index
\eta_0 = 1 - \frac{\sigma^2_{mv}}{\mathrm{mse}(y_t)} = 1 - \frac{\sigma^2_{mv}}{\sigma_y^2 + \mu_y^2}                (6)
which expresses the fraction of the output mean square error that could be removed by implementing a MV controller. η0 is bounded within [0, 1], where values of η0 close to zero mean better, tighter control. Other authors (Huang et al. [16,22] and Kozub [8]) use the reverse index
\eta_1 = \frac{\sigma^2_{mv}}{\mathrm{mse}(y_t)} = \frac{\sigma^2_{mv}}{\sigma_y^2 + \mu_y^2}                (7)
where η1 is bounded within [0, 1], and high values of η1 mean better, tighter control.
While MV controllers result in minimum variance of a controlled output, they may not perform well with respect to other performance criteria, such as zero offset to step disturbances, and therefore the implementation of such controllers is most frequently not desirable (Harris [5], Desborough and Harris [6], Huang et al. [16], Eriksson and Isaksson [10]). Nevertheless, the minimization of the variance of a controlled process output is usually an important objective (among others) and, consequently, knowing how far an implemented controller performs from the theoretical lower bound (MV benchmark) is useful. Obviously, the same could be claimed about other controller performance criteria, i.e., it is desirable to be able to estimate (preferably with minimal intervention into the process) how well a controller satisfies a certain performance objective (e.g., satisfaction of constraints, closed-loop bandwidth, settling time, etc.). For control loops that indicate poor performance, second-level steps have to be taken, which could include controller re-tuning or re-identification of the process.
Stanfelj et al. [7] extended the MV benchmark to feedforward-feedback systems. The authors also point out an important limitation in diagnosing poor controller performance: poor feedback controller performance can be attributed to either modeling error or poor controller tuning or structure only if measured external perturbations enter the feedback system. Normal operating data from a feedback system without any measured external perturbations cannot provide information for such a diagnosis. Tyler and Morari [42] extended the MV benchmark approach to unstable and non-minimum phase systems. Their method requires knowledge of the location and multiplicity of all unstable poles and non-invertible (unstable) zeros of the underlying process.
A number of applications of the MV benchmark to real industrial processes are reported in the literature. Kozub [8] applied the MV benchmark to distillation columns. Lynch and Dumont [9] used Laguerre series models for the time series analysis and a time-delay estimator; they applied the MV benchmark to a reject refiner and a Kamyr digester in the pulp and paper industry. An application of feedforward and feedback performance monitoring on three distillation columns is given by Vishnubhotla et al. [12]. Plant-wide CPM is reported by Jofriet and Bialkowski [11] and Harris et al. [13]; the MV benchmark is used together with a real-time expert system to continuously collect data and monitor performance of all control loops, and the system is implemented in a newsprint paper mill. Another plant-wide control loop performance assessment, situated in a refinery, is reported by Thornhill et al. [14].
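To make the univariate procedure concrete, the following sketch (ours, in Python with NumPy; the original studies used standard time-series packages) estimates σ²_mv and the indices of Eqns. (5)-(7) from routine closed-loop data: an AR model of user-chosen order is fitted by least squares, its first f+1 impulse response coefficients are obtained by long division, and the indices follow. The function name, the default AR order and the plain least-squares fit are illustrative choices, not the authors' implementation.

```python
import numpy as np

def mv_benchmark_indices(y, f, ar_order=15):
    """Estimate the univariate MV benchmark and the indices of Eqns. (5)-(7).
    y is the controlled output as deviation from setpoint; f is the number of
    whole periods of process delay; ar_order is a user choice (see section 5.2)."""
    y = np.asarray(y, dtype=float)
    mu = y.mean()
    e = y - mu                               # zero-mean series for the time-series fit

    # Least-squares AR fit:  e_t = phi_1 e_{t-1} + ... + phi_p e_{t-p} + a_t
    p = ar_order
    X = np.column_stack([e[p - k: len(e) - k] for k in range(1, p + 1)])
    phi, *_ = np.linalg.lstsq(X, e[p:], rcond=None)
    a = e[p:] - X @ phi                      # residuals = estimated random shocks a_t
    sigma_a2 = a.var()

    # First f+1 impulse response coefficients of 1/(1 - phi_1 q^-1 - ... - phi_p q^-p),
    # obtained by long division (psi_0 = 1); these form the feedback-invariant part.
    psi = np.zeros(f + 1)
    psi[0] = 1.0
    for k in range(1, f + 1):
        psi[k] = sum(phi[j] * psi[k - 1 - j] for j in range(min(k, p)))

    sigma_mv2 = sigma_a2 * np.sum(psi ** 2)  # Eqn. (5)
    mse = np.mean(y ** 2)                    # = sigma_y^2 + mu_y^2
    eta0 = 1.0 - sigma_mv2 / mse             # Eqn. (6)
    eta1 = sigma_mv2 / mse                   # Eqn. (7)
    return sigma_mv2, eta0, eta1
```

For example, mv_benchmark_indices(y, f=5) would return the estimated MV variance and both indices for a loop with a five-sample delay.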
2.2 Extension to the multivariate case
The MV benchmark can be extended from univariate systems to multivariate systems. A normalized performance index can be calculated to characterize the performance of the multivariate control scheme. Additionally, multi-loop performance indices can be calculated for each individual output (Harris et al. [15]; Huang et al. [16]). Consider a multivariate process:
Y_t = T(q^{-1}) U_t + N(q^{-1}) a_t                (8)
where T(q^{-1}) and N(q^{-1}) are proper (causal), rational transfer function matrices and Y_t, U_t and a_t are output, input and noise vectors of appropriate dimensions. In addition, a_t is white noise with E a_t = 0 and var(a_t) = \Sigma_a. While for univariate systems a priori knowledge or estimation of the time delay is required, for multivariate systems this notion has to be generalized to the interactor or time-delay matrix. In the univariate case, the time delay, in terms of the sampling time, equals the number of zero impulse response coefficients preceding the first non-zero impulse response coefficient that has an effect on the output; the delay corresponds to the number of infinite zeros of a discrete-time process. This idea can be generalized for multivariate systems in terms of the impulse response coefficient matrices: the notion of a delay corresponds to the fewest number of impulse response matrices whose linear combination is nonsingular. This means that a set of inputs acting via this linear combination of impulse response matrices can have a desired effect on the output. In mathematical terms, the multivariate delay or interactor matrix (Goodwin and Sin [19]) is defined as follows: for every (n×m) proper transfer function matrix T(q^{-1}), there exist (n×n) polynomial matrices D(q) such that |D(q)| = q^r and
\lim_{q^{-1} \to 0} D(q) T(q^{-1}) = \lim_{q^{-1} \to 0} \tilde{T}(q^{-1}) = K                (9)
where K is a full rank constant matrix; the integer r is defined as the number of infinite zeros of T(q^{-1}); and \tilde{T}(q^{-1}) is the delay-free transfer function matrix of T(q^{-1}), which contains only finite zeros. It is important to note that the interactor matrix is not unique. Harris et al. [15] use a lower triangular interactor matrix as described in Goodwin and Sin [19] and Wolovich and Falb [20]. The interactor matrix is factored from L(q^{-1}), which comes from the right matrix fraction description of T(q^{-1}):
T(q^{-1}) = L(q^{-1}) \left[ R(q^{-1}) \right]^{-1}                (10)
This factorization, in the general case, requires knowledge of L(q^{-1}) and therefore complete knowledge of T(q^{-1}), the open-loop transfer function matrix. Huang et al. [18] use a unitary interactor matrix, which has the additional property
D^T(q^{-1}) D(q) = I                (11)
The unitary interactor matrix is an optimal form of the general interactor matrix for the application of MV control and CPM. The factorization of the unitary interactor matrix requires only the first d impulse response coefficient matrices, where the positive integer d equals the multivariable delay order. The first d impulse response coefficient matrices can be estimated directly under closed-loop conditions with external excitation or through an open-loop identification experiment. Rogoziński et al. [21] presented an algorithm for the calculation of the unitary interactor matrix from the impulse response coefficient matrices. The MV controller is defined as a linear controller that minimizes the output Linear Quadratic (LQ) objective function
J_1 = E\left\{ (Y_t - Y_t^{sp})^T (Y_t - Y_t^{sp}) \right\}                (12)
The MV control law can be designed to make the variance of the output D(q)Y_t or, equivalently, \tilde{Y}_t = q^{-d} D(q) Y_t, minimum, where d is the polynomial order of the interactor matrix D(q). The filter q^{-d} D(q) removes the infinite zeros from the transfer function matrix. If D(q) is a unitary interactor matrix, then the optimal control law that minimizes Eqn. (12) also minimizes the following objective function of the interactor-filtered variable \tilde{Y}_t (Huang et al. [16])
J_2 = E\left\{ (\tilde{Y}_t - \tilde{Y}_t^{sp})^T (\tilde{Y}_t - \tilde{Y}_t^{sp}) \right\}                (13)
and J_1 = J_2. The performance measure of the original variable Y_t can be obtained via the performance measure of the interactor-filtered variable \tilde{Y}_t. The overall multivariate performance index can be calculated as
\eta_1 = \frac{\mathrm{tr}(\Sigma_{mv})}{\mathrm{tr}\left( E[Y_t Y_t^T] \right)}                (14)
Individual multi-loop performance indices, which take multivariable interactions into account, can be calculated as
\left[ \eta_{1,Y_1} \cdots \eta_{1,Y_n} \right] = \frac{\mathrm{diag}(\Sigma_{mv})}{\mathrm{diag}\left( E[Y_t Y_t^T] \right)}                (15)
See Huang et al. [16] for the calculation of \Sigma_{mv}. Few applications of multivariate performance monitoring have been reported in the literature. Huang et al. [16] applied it to a 2×2 industrial absorption process. Huang et al. [22] applied it to a 2×2 paper-machine headbox. Harris et al. [15] used the multivariate MV benchmark on a 2×2 fractionation column and a 3×3 distillation column. Miller and Huang [23] presented a multivariate application on a 3×3 integrated cracking and separation unit. Kadali et al. [24] presented an application of multivariate feedforward-feedback controller performance assessment on an industrial distillation column with nine outputs, six inputs and five measured disturbance variables.
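The following sketch (ours) shows how Eqns. (14) and (15) would be evaluated once the interactor matrix D(q) and an estimate of \Sigma_{mv} (obtained as in Huang et al. [16]) are available: the first function applies the filter q^{-d}D(q) to the output data, the second forms the overall and multi-loop indices. Array shapes and function names are illustrative assumptions.

```python
import numpy as np

def interactor_filter(Y, D_coeffs):
    """Apply the filter q^{-d} D(q) to output data Y of shape (N, n).
    D_coeffs = [D_0, D_1, ..., D_d] are the n x n coefficient matrices of
    D(q) = D_0 q^d + D_1 q^{d-1} + ... + D_d, so that
    Ytilde_t = D_0 Y_t + D_1 Y_{t-1} + ... + D_d Y_{t-d}."""
    Y = np.asarray(Y, dtype=float)
    d = len(D_coeffs) - 1
    N, n = Y.shape
    Yf = np.zeros((N - d, n))
    for i, Di in enumerate(D_coeffs):
        Yf += Y[d - i: N - i] @ np.asarray(Di).T
    return Yf

def multivariate_indices(Y, Sigma_mv):
    """Overall and multi-loop performance indices, Eqns. (14) and (15)."""
    Y = np.asarray(Y, dtype=float)
    S_y = (Y.T @ Y) / len(Y)                    # sample estimate of E[Y_t Y_t^T]
    eta_overall = np.trace(Sigma_mv) / np.trace(S_y)
    eta_loops = np.diag(Sigma_mv) / np.diag(S_y)   # element-by-element ratios
    return eta_overall, eta_loops
```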
2.3 Other Methods
Attaining performance close to MV control can be unrealistic if the dominant time constant of the open-loop system is large when compared to the sampling rate. Such a controller would require very large input signals, which can lead to input saturation (Kendra and Çinar [40] and Huang and Shah [36]). It may also require a controller with high bandwidth, which may result in robustness constraint violations (Tyler and Morari [34]). Isaksson [31] pointed out that the MV benchmark does not take into account limitations due to controller structure (e.g., controller order).
Modifications of the MV benchmark in Eqn. (5) have been proposed. One such modification is the following generalization of the MV benchmark: instead of focusing on MV as in Eqn. (5), one can consider the minimization of the quantity (1 + \psi_1^2 + \psi_2^2 + \cdots + \psi_g^2)\sigma_a^2, with g > f, as the control criterion. Monitoring this statistic might be useful when one is not interested in being close to MV, but all that matters is achieving some settling time specification (Kozub [8], Thornhill et al. [14]). Horch and Isaksson [33] proposed another modification: instead of comparing the actual variance to MV, which corresponds to placing all closed-loop poles at the origin, one can compare the actual variance to the variance that would be obtained by placing all closed-loop poles but one at the origin. The pole not placed at the origin would determine the closed-loop speed of response and bandwidth, according to its location. The choice of the closed-loop pole can be based either on control design guidelines (robustness margins) or on additionally available process knowledge, such as the slowest process time constant.
To assess the achievable performance of PID controllers in the presence of deterministic disturbances such as steps, ramps and exponential rises to new levels, Åström [28] and Åström et al. [29] proposed methods in which closed-loop performance is characterized in terms of bandwidth and dimensionless numbers such as normalized peak error and normalized rise time. In that context, the evaluation of controller performance requires a Laplace model for both the process and the disturbance. The emphasis in these methods is on the use of these measures for design and tuning of PID-type controllers. However, there is a notion of on-line assessment when setpoints are changed or perturbations at the controller output are introduced. More recently, Swanda and Seborg [30] characterized closed-loop performance in terms of dimensionless settling time and the dimensionless integral of the absolute value of the error (IAE). The proposed methodology uses setpoint response data to estimate the dimensionless performance indices. In order to quantify how far a PID controller is from the best achievable performance, performance classes are defined in terms of a bound for each dimensionless performance index.
An approach to measure the performance of PI controllers under stochastic load disturbances is developed by Ko and Edgar [32]. Their method assumes knowledge of the process and knowledge or estimation of the stochastic disturbance model. When the process and disturbances are known or estimated, the variance of the closed loop can be computed for a given controller. Thus the achievable performance using a PI controller can be obtained by numerically minimizing the closed-loop variance over the PI tuning parameters. If the disturbance model is unknown or changing with time, the disturbance model has to be estimated. The authors propose an approximate stochastic disturbance model realization, to estimate the disturbance model without requiring additional experiments. The disturbance model is realized from the feedback-invariant closed-loop Finite Impulse Response (FIR) sequence, such that the FIR sequence of the disturbance model equals the closed-loop FIR sequence to within a certain bound.
Tyler and Morari [34] have proposed a pass/fail likelihood ratio criterion to test whether routine operation data meet performance specifications. Performance specifications are set as constraints on the impulse response coefficients. Many performance criteria can be formed in
this way, such as closed-loop settling time, decay rate, MV, or frequency-domain bounds. Performance evaluation of the control system is a generalized likelihood ratio test between two hypotheses: H0, the closed-loop behavior satisfies the performance objective; H1, the closed-loop behavior violates the performance objective. The approach was shown to be useful on both simulated and industrial data, but it is conceptually and computationally demanding compared to other methods (Kozub [8]).
Hägglund [35] developed a procedure for detecting oscillations in control loops. These oscillations could be caused by anything from higher-than-normal friction in control valves to badly tuned controllers. The oscillation detection works in two steps. First there is a load disturbance detection procedure, which measures the magnitude of the Integrated Absolute Error (IAE) between successive zero crossings of the control error. When the IAE exceeds a certain limit, IAElim, it is likely that a load disturbance has occurred. If the frequency of load disturbance detections becomes high over an extended period of time, it can be concluded that an oscillation is present. In order to find out whether the oscillations are being generated outside or inside the loop, the controller has to be put in manual mode.
Huang and Shah [36] proposed a technique for practical control loop performance assessment relative to a benchmark in terms of user-specified closed-loop dynamics, such as settling time, overshoot, etc. It is of interest to know if the actual closed-loop dynamics are close to the desired dynamics. The actual performance (in the form of impulse response coefficients of the closed-loop transfer function) can be estimated from data using time series analysis or standard identification tools. If the desired closed-loop dynamics are directly specified, then only a priori knowledge of time delays is required. If the desired closed-loop response is specified by some other characteristic, such as settling time, process knowledge is required. A filtered optimal H2 control law with desired closed-loop dynamics has been proposed as a practical benchmark to assess control loop performance. The filter improves robust performance and provides a good compromise between performance and robustness, and the closed-loop dynamics can be adjusted by tuning the filter parameters.
Several authors proposed methods that require closed-loop experiments. Ju and Chiu [37] proposed a monitoring procedure considering the maximum closed-loop log modulus, Lc,max, which is related to the H∞ norm of the complementary sensitivity function. The Lc,max is a measure of the robustness of the system and can thereby indicate the appropriateness of the control design in its current operating condition. The authors propose to evaluate the Lc,max on-line using two to three relay feedback experiments for univariate systems. For an n×n multivariate system the relay feedback experiments have to be repeated 2n-1 times, which will limit the use of this method to small multivariate systems. Wang and Chiu [38] proposed the use of setpoint changes instead of relay experiments. The authors use a Fast Fourier Transform based frequency response identification technique on the error signal and compute the Lc,max from the frequency information.
Kammer et al. [39] developed a model-free approach to linear quadratic (LQ) performance assessment from closed-loop experiments. In order to make the measured signals more informative, exogenous excitation signals are injected into the closed-loop system. Spectrum analysis of the process input and output signals provides the necessary information for verification of LQ optimality. The procedure leads to a truly model-free test of LQ optimality when state feedback is used. When output feedback is used, part of the noise dynamics must be known. The need for some plant knowledge in the output feedback case and the required external excitation will be obstacles to the widespread use of this approach in industrial plants.
Isaksson [31] developed a whole set of performance indices based on the controller structure limitation (PI, PID, Dahlin etc.) and the intended control task (stochastic control, setpoint tracking or disturbance rejection). The measure of the achievable performance for a particular controller type can be obtained as the ratio between the optimal integrated squared error (ISE) within the given class and the optimal ISE for the unrestricted controller structure. Therefore, the indices take into account the intended control task and controller structure constraints. The calculation of the indices requires knowledge of the controller and an estimate of the plant. In most cases one has to use extraneous test signals in order to estimate the plant model.
Kendra and Çinar [40] proposed a system identification method for assessing the performance of multivariable systems, using methods that coincide with classical and modern frequency domain design specifications. These design specifications can include bandwidth and peak magnitude of the sensitivity and complementary sensitivity functions subject to strict robustness requirements. Performance monitoring is achieved by estimating the closed-loop transfer functions of interest, namely the sensitivity and complementary sensitivity functions. The functions are obtained by exciting the reference input with a zero-mean pseudo random binary sequence and developing a closed-loop model. If the design specifications are severely violated,
the controller should be de-tuned or a more accurate process model should be developed. If the observed performance shows no sign of degenerative behavior, it may also be reasonable to relax the robustness constraint in pursuit of increased performance.
Gustafsson and Graebe [41] published a method that uses statistical hypothesis testing to assess whether an observed deviation from nominal performance is due to a disturbance or due to a process change that has deteriorated the closed-loop stability margins. Disturbances are assumed to be bounded deterministic steps, ramps or similar disturbances. The stability margins in the Nyquist plot are defined in terms of a clover-like region. The clover region maps to a linearly bounded region in the closed-loop domain and therefore ensures a feasible evaluation of the test statistic. The algorithm uses an external excitation signal consisting of a sum of sinusoids with different frequencies and amplitudes. The choice of frequencies can be critical for the effectiveness of the algorithm, and picking proper frequencies requires prior process knowledge. Thresholds for the test statistic can be computed from the detection delay and the acceptable performance region. Disturbances larger than assumed will give rise to a false alarm. This can be overcome by increasing the threshold (which increases the time to detect a process change) or by increasing the external excitation signal.
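Of the diagnostics surveyed in this subsection, the load-disturbance detection step of Hägglund's [35] oscillation-detection procedure is simple enough to sketch directly. The fragment below is our illustration, not Hägglund's implementation: it integrates the absolute control error between successive zero crossings and counts the intervals whose IAE exceeds a user-supplied limit IAElim; a persistently high detection rate then points to an oscillating loop.

```python
import numpy as np

def count_load_disturbances(error, dt, iae_lim):
    """Count IAE-based load-disturbance detections in a control-error record:
    integrate |e| between successive zero crossings of the error and flag each
    interval whose IAE exceeds iae_lim."""
    e = np.asarray(error, dtype=float)
    detections = 0
    iae = 0.0
    for k in range(1, len(e)):
        iae += abs(e[k]) * dt
        if e[k - 1] * e[k] < 0:          # zero crossing of the control error
            if iae > iae_lim:
                detections += 1
            iae = 0.0                    # restart the integral at each crossing
    return detections

# If the detection count stays high over an extended period (e.g. many
# detections per hour), an oscillation is likely present in the loop.
```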
3 Process and Controller
Without a basic understanding of the process, it is difficult to interpret the collected performance analysis data. In the effort to understand or diagnose problems in the process, process understanding is paramount (Jofriet and Bialkowski [11]). Therefore, an introduction to the process and control system is given before CPM is applied.
3.1 Snack-food frying process
A process that is fairly common in the snack food industry (Nikolaou [25]) has been selected for application of multivariable CPM. The process is schematically illustrated in Fig. 1. It consists of seven unit operations that transform the raw materials into snack chips. At the heart of the process is an extrusion cooker (that mixes raw ingredients and partially cooks them) followed by an oil fryer. The process has several sources of disturbances:
• Natural variability in the raw materials, due to variety, and growth, transportation, and storage conditions.
• Imperfect mixing in the batch mixer.
• Extrusion cooking operational issues (screw wear & screen pack).
• Fryer heat balance disturbances (oil-level, oil-flow and disturbances in inlet oil temperature control).
• Production rate changes.
In order to maintain constant product quality, important consumer attributes need to be controlled. The most important consumer attributes, namely the moisture content and oil content of the snack chips, are finalized in the fryer (see Fig. 2 for a schematic overview of the frying process). A typical commercial fryer is around 10-12 m long and about 30 cm deep. Hot oil is pumped continuously into one end of the fryer, and the cooled oil is removed from the other end. The cooled oil coming out at the exit of the fryer is then passed into a heat exchanger, where it is heated back to the required temperature and then pumped back into the front end of the fryer. A small oil make-up stream is added to replenish oil exiting the fryer with the product. Frying is a process for cooking foods by immersing them in edible oil. The main physicochemical phenomena that govern immersion frying are:
• heat transfer (from the hot oil to the surface and interior of the chips);
• mass transfer (evaporation of moisture from the food, and transfer of oil into the chips);
• browning of the chips (Maillard reactions between amino acids and sugars).
The temperature of the hot oil is normally in the range of 170-190 °C. The hot oil serves the dual role of (a) heat-transfer medium and (b) source of nutrients and flavor for the product. The quality of the snack chips is characterized by their moisture and oil content. For the product of this study, snack chips should not have more than 2% moisture by weight, to exhibit the correct amount of crispness and proper product shelf life. On the other hand, very low moisture content will result in scorching of the chips, resulting in a “burnt” taste. The final oil content is also very important for consumer acceptability. A very high fat content in the chips may not be very appealing to the consumer, and a very low oil content may not produce the right flavor, texture, or taste in the chips. The overall frying process is characterized by the following salient features:
• There are multiple process inputs and outputs.
• Process outputs are highly correlated.
• There are long (~2 minutes) and multiple delay times.
• Delay times are variable when the residence time in the fryer changes.
• Process dynamics are fairly linear, especially within the working range of the controller.
• Saturation of process inputs occurs when the process is perturbed by large disturbances or changes in operating conditions.
3.2 Model Predictive Control and Process Identification
Due to the multivariate nature of the process and the dual objective of controlling moisture content and oil content, Model Predictive Control (MPC) (Prett and García [26]) was chosen as the control platform. Other MPC benefits are that delay times are handled naturally and process constraints are dealt with in a systematic way during the design and implementation of the controller. After an initial screening and several step tests, the three process inputs depicted in Fig. 1 were chosen as the manipulated variables for a 3×2 MPC.
For the industrial frying process, the original process identification was performed on August 24th, 1994. The MPC based on that model was put into operation one week later. The controller initially performed satisfactorily, but its performance, judging from empirical observations, deteriorated over a period of three years. During that period, several modifications to the initial process had been made, the most important of which were (a) a change in the setpoint of the finished product oil content, resulting in a change in the working range of the controller, and (b) equipment modifications. It was therefore assumed that performance could be improved after a re-identification of the process and re-tuning of the controller. New process identification was performed on August 14th, 1997 and an updated controller was put on-line on August 19th, 1997.
An open-loop Pseudo Random Binary Sequence (PRBS) experiment was conducted during production. The amplitudes of the process inputs were chosen in such a way that oil content and moisture content would stay within the production abort limits. Four hours of detrended PRBS data are presented in Fig. 3. From these data an Auto-Regressive-with-eXogenous-input (ARX) model (Ljung [27]) was built. Results of a cross-validation simulation (infinite step-ahead prediction) can be seen in Fig. 4. The ARX model does a reasonable job of following the dynamic behavior of the data and should be sufficient for control purposes. Figure 5 illustrates the step responses of the initially identified 1994 model and the newly identified 1997 model.
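A minimal sketch of the two identification ingredients mentioned above — a PRBS test signal with amplitude limited so that the outputs stay within the production abort limits, and a least-squares ARX fit — is given below. It is written for a single input-output pair; the actual 3×2 model would be built channel by channel or with a multivariable identification tool (the original work used standard identification software). All names, orders and the switching period are illustrative assumptions.

```python
import numpy as np

def prbs(n_samples, switch_period, amplitude, seed=0):
    """Generate a simple pseudo-random binary test signal of +/- amplitude that
    can switch at most every switch_period samples."""
    rng = np.random.default_rng(seed)
    levels = rng.choice([-1.0, 1.0], size=int(np.ceil(n_samples / switch_period)))
    return amplitude * np.repeat(levels, switch_period)[:n_samples]

def fit_arx(y, u, na, nb, nk):
    """Least-squares fit of a SISO ARX model
    y_t = -a_1 y_{t-1} - ... - a_na y_{t-na}
          + b_1 u_{t-nk} + ... + b_nb u_{t-nk-nb+1} + e_t."""
    start = max(na, nk + nb - 1)
    rows = []
    for t in range(start, len(y)):
        rows.append(np.concatenate([-y[t - na:t][::-1],
                                    u[t - nk - nb + 1:t - nk + 1][::-1]]))
    theta, *_ = np.linalg.lstsq(np.array(rows), y[start:], rcond=None)
    return theta[:na], theta[na:]          # AR coefficients and input coefficients
```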
4 Identification of the interactor matrix
Huang et al. [17,18] describe the following three basic methods of factorization and estimation of the unitary interactor matrix:
(a) Direct estimation of the interactor matrix under closed-loop conditions, which requires an external excitation signal added to the controller setpoints or process inputs. The interactor matrix of the closed-loop transfer function is the same as the interactor matrix of the open-loop transfer function matrix (see Huang et al. [18] for a proof), under the assumption that there are no large disturbances or strong feedback during the identification.
(b) Indirect estimation of the interactor matrix under closed-loop conditions. First the open-loop transfer function matrix is estimated from closed-loop data with external excitation of the closed loop; then the interactor matrix is estimated from this open-loop transfer function (see Huang et al. [17] for details).
(c) Identification of a FIR or parametric model from open-loop input-output data, which then yields the impulse response coefficients of the process model.
Since a parametric model was already developed for the MPC in section 3, we used the third approach. The open-loop impulse response coefficients extracted from the newly identified 1997 ARX model were
\begin{bmatrix} 0 & 0 & -0.0012 \\ 0 & 0 & -0.0008 \end{bmatrix} q^{-10} + \begin{bmatrix} 0 & 0 & -0.0002 \\ 0 & 0 & 0.0003 \end{bmatrix} q^{-11} + \begin{bmatrix} 0 & 0 & 0.0008 \\ 0 & 0 & 0 \end{bmatrix} q^{-12} + \begin{bmatrix} 0 & 0 & 0.0012 \\ 0 & 0 & 0.0020 \end{bmatrix} q^{-13} + \begin{bmatrix} 0 & 0.0008 & 0.0023 \\ 0 & -0.0225 & 0.0118 \end{bmatrix} q^{-14}                (16)
Since the oil content has significantly higher variability than the moisture content, the outputs have to be properly weighted in order to control oil content and moisture content equally well. Instead of using Eqn. (12), the new MV control objective function becomes
J_1 = E\left\{ (Y_t - Y_t^{sp})^T W^T W (Y_t - Y_t^{sp}) \right\}                (17)
In this case the weights are chosen the same as the output weights in the model predictive controller,
W = \begin{bmatrix} 20/3 & 0 \\ 0 & 1 \end{bmatrix}                (18)
Using the impulse response coefficients in Eqn. (16) and the algorithm presented by Rogoziński et al. [21], the weighted unitary interactor matrix is factorized as
\begin{bmatrix} 0 & 0 \\ 0 & 0.979 \end{bmatrix} q^{14} + \begin{bmatrix} 0 & -0.166 \\ -0.099 & -0.004 \end{bmatrix} q^{13} + \begin{bmatrix} 0.017 & -0.055 \\ 0.055 & 0.017 \end{bmatrix} q^{12} + \begin{bmatrix} -0.004 & 0.099 \\ 0.166 & 0 \end{bmatrix} q^{11} + \begin{bmatrix} 0.979 & 0 \\ 0 & 0 \end{bmatrix} q^{10}                (19)
K = \begin{bmatrix} 0 & 0 & -0.0083 \\ 0 & -0.0221 & 0.0108 \end{bmatrix}                (20)
Note that the interactor matrix is independent of the transfer functions between the inlet oil temperature (U1) and the two outputs, because the two process inputs with the shorter delay times, submerger speed (U2) and takeout conveyor speed (U3), can have a desired effect on both outputs (see also the zero entries in the first column of the full rank constant matrix K in Eqn. (20)).
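The statement in section 2.2 that the multivariable delay corresponds to the fewest impulse response matrices whose linear combination is nonsingular can be checked numerically for the coefficients of Eqn. (16). The sketch below is our own heuristic check, not the Rogoziński et al. [21] factorization: it forms random linear combinations of the Markov parameters and reports the first lag at which full row rank is reached.

```python
import numpy as np

# Markov (impulse response) parameters of the 2x3 frying process, Eqn. (16)
G = {
    10: np.array([[0, 0, -0.0012], [0, 0, -0.0008]]),
    11: np.array([[0, 0, -0.0002], [0, 0,  0.0003]]),
    12: np.array([[0, 0,  0.0008], [0, 0,  0.0000]]),
    13: np.array([[0, 0,  0.0012], [0, 0,  0.0020]]),
    14: np.array([[0, 0.0008, 0.0023], [0, -0.0225, 0.0118]]),
}

def delay_order(markov, n_outputs, trials=50, seed=0):
    """Smallest lag k such that some (random) linear combination of the Markov
    parameters up to lag k has full row rank, i.e. the fewest impulse-response
    matrices whose linear combination is nonsingular."""
    rng = np.random.default_rng(seed)
    lags = sorted(markov)
    for k in lags:
        mats = [markov[l] for l in lags if l <= k]
        for _ in range(trials):
            combo = sum(rng.standard_normal() * M for M in mats)
            if np.linalg.matrix_rank(combo) == n_outputs:
                return k
    return None

print(delay_order(G, n_outputs=2))   # -> 14, consistent with the q^14 term in Eqn. (19)
```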
5 Stochastic identification of the disturbance model
There are conflicting opinions in the literature about the impact on CPM of the stochastic identification of a disturbance model through time series analysis. Harris [5], Desborough and Harris [6], Huang et al. [16], and Thornhill et al. [14] state that time series modeling is well known and that selecting the structure and order of the disturbance model has little effect on the accuracy of CPM. On the other hand, other authors (Kozub [8], Lynch and Dumont [9], Eriksson and Isaksson [10]) cite practical cases (e.g., time-varying disturbance models) where univariate time series modeling is not trivial, requires experience, and might result in poor estimation of \sigma^2_{mv} and \eta if a wrong disturbance model structure is selected.
The situation gets worse in the multivariate case. Multi-output models have a much richer internal structure, which has the consequence that their parameterization is nontrivial (Ljung [27]). There is no unique parameter set that corresponds to an input-output relationship, hence there are no unique convergence points of the optimization, and the number of possible model structures is much larger. The Prediction Error Method (PEM) requires nonlinear iterative parameter optimization, which often encounters convergence problems, especially for higher order or overparameterized multivariate systems.
The modeling objective is to create a disturbance model, which has two parts of interest. First, there is the estimation of the model itself. The model is not used in the traditional sense of prediction or simulation; rather, only the first d impulse response coefficients are used to estimate \sigma^2_{mv}. Second, the residuals, or innovation sequence, are of interest. It is well known that identification is also a ‘whitening’ process that is supposed to generate uncorrelated residuals a_t.
5.1 Stochastic identification methods
There are several possible identification techniques for carrying out the multivariate stochastic identification, as discussed next. More details on implementation issues and results can be found in section 6.4.
1. Multivariate Polynomial ARMA PEM
This is the direct extension of the univariate ARMA PEM. The disturbance model employed is
A(q^{-1}) Y_t = C(q^{-1}) a_t                (21)
where A and C must be identified. As mentioned before, model structure selection and the numerical optimization are drawbacks of this method. Another drawback is that the C(q^{-1}) polynomial matrix must remain stable during the numerical optimization. Unstable polynomial matrices can be stabilized with spectral factorization, which is trivial in the univariate case but more computationally intensive in the multivariate case. The above method is the default method used in this work. Comparisons with other methods, discussed below, are shown in section 6.4.
2. State Space ARMA PEM
The disturbance model employed by this method is
x_{t+1} = A x_t + K a_t
Y_t = C x_t + a_t                (22)
where the constant matrices A, C, and K must be identified. In principle, PEMs are easily adapted to work with state space models, a preferred model structure for more complex problems. In practice, this leads to a large number of parameters to identify, making the numerical optimization very difficult (Viberg [44]). A canonical parameterization (Ljung [27]) reduces the number of identified parameters, but finding a reliable canonical parameterization is, to a large extent, an unsolved problem.
3. Subspace Identification
Since the early 90's, a realization-based approach to estimating state-space models, now collectively called subspace identification methods, has become an effective method for multivariable system identification. The method does not require a non-linear iterative search. The computational complexity therefore is modest compared with PEM, particularly when the number of outputs is large. For subspace identification methods, the model structure of Eqn. (22) is used, and the constant matrices A, C, and K must be identified. The essential parts of the numerical calculations consist of a QR-factorization and a Singular Value Decomposition (SVD), which allows robust numerical methods to be used to calculate the estimates. See Van Overschee and De Moor [43] and Viberg [44] for general overviews of subspace identification.
4. Auto Regressive (AR) Models
Univariate Auto Regressive (AR) models have been used by Desborough and Harris [6], Harris et al. [13], and Thornhill et al. [14]. The model to identify is
A(q^{-1}) Y_t = a_t                (23)
The estimation is linear, with no numerical iterations required. Another important advantage of this approach is that confidence intervals can be developed for \eta. The disadvantage is the limited model structure: an AR disturbance model does not have any zeros in the transfer function. High order AR models are needed to approximate systems that exhibit oscillating behavior, which might lead to numerical problems or poor estimation.
5. Laguerre Models
Another method reported in the literature, but not investigated in this work, is the use of Laguerre networks for univariate systems (Lynch and Dumont [9], Eriksson and Isaksson [10]). Laguerre networks generate parsimonious Moving Average (MA) models. The model structure selection is simpler than for ARMA models, but proper time scale(s) for the Laguerre network have to be chosen. The Laguerre network identification, despite sporadic claims to the contrary in the literature, is nonlinear in the parameters when used in the stochastic identification of disturbance models.
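For illustration, the AR option (method 4) is easy to write out: a multivariate AR (vector autoregression) model is a linear least-squares problem, and its residuals serve directly as the estimated random shocks. The sketch below is ours (the paper itself relies on the MATLAB System Identification Toolbox); the impulse response coefficients needed for \sigma^2_{mv} then follow by long division, as in the univariate sketch of section 2.1.

```python
import numpy as np

def fit_var(Y, order):
    """Least-squares fit of a multivariate AR disturbance model, Eqn. (23),
    written as  Y_t = A_1 Y_{t-1} + ... + A_p Y_{t-p} + a_t.
    Y has shape (N, n).  Returns the coefficient matrices, the residuals and
    the residual covariance."""
    Y = np.asarray(Y, dtype=float)
    N, n = Y.shape
    p = order
    X = np.hstack([Y[p - k: N - k] for k in range(1, p + 1)])   # lagged regressors
    B, *_ = np.linalg.lstsq(X, Y[p:], rcond=None)               # shape (n*p, n)
    A = [B[(k - 1) * n: k * n].T for k in range(1, p + 1)]      # A_k, each n x n
    resid = Y[p:] - X @ B                                       # estimated shocks a_t
    return A, resid, np.cov(resid.T, bias=True)
```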
5.2 Model structure selection
Model structure selection is not trivial in the multivariate case. Desborough and Harris [6] proposed to increase the model order until the performance index estimate shows no appreciable change. Thornhill et al. [14] use a fixed 30th order AR model and adjust the sample time such that the closed-loop impulse response is fully captured within 30 samples. Instead of looking at the lack of appreciable change in the performance index or at other model structure criteria, such as Akaike’s Information Criterion, an approach closer to the intended use of the model is to look at the whiteness of the residuals. In order to check the whiteness of the residuals, whiteness tests (Ljung [27]) and portmanteau lack-of-fit tests (Box and Jenkins [3]) have been developed. These tests have been extended to the multivariate case by Hosking [49] and Li and McLeod [50]. There are various equivalent forms of the multivariate portmanteau statistic. A computationally convenient form is given in Hosking [49], relying on the statistic
\tau = l \sum_{r=1}^{S} \mathrm{trace}\left( \hat{C}_r^T \hat{C}_0^{-1} \hat{C}_r \hat{C}_0^{-1} \right) \sim \chi^2_\alpha(n^2 S)                (24)
with \hat{C}_r = l^{-1} \sum_{t=1}^{l} \hat{a}_t \hat{a}_{t-r}^T and \hat{C}_0 = \hat{\Sigma}_a, where n is the number of outputs, l is the sample length, and S is the number of residual auto-covariances used in the portmanteau test. A normal choice for S is 20. Contrary to Box and Jenkins [3], we assume that the null hypothesis H0 is predicated upon a residual mean of zero, which helps to avoid a reduction of the degrees of freedom of the \chi^2 test (Johansson [51]). With the multivariate portmanteau statistic, the following simple procedure is used to select the appropriate model order:
(a) Start with the smallest model possible.
(b) Increase the model order until the residuals are statistically white.
(c) If a maximum model order is reached without any model producing statistically white residuals, select the model with the least colored residuals.
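A direct transcription of the portmanteau test of Eqn. (24) is sketched below (ours, in Python, with SciPy supplying the χ² quantile). The residual mean is taken to be zero under H0, as discussed above, the default S = 20 follows the text, and the degrees of freedom are n²S as in Eqn. (24).

```python
import numpy as np
from scipy.stats import chi2

def portmanteau_white(resid, S=20, alpha=0.05):
    """Multivariate portmanteau lack-of-fit statistic, Eqn. (24) (Hosking [49]).
    resid has shape (l, n).  Returns the statistic, the chi-square threshold,
    and a boolean 'residuals look white'."""
    a = np.asarray(resid, dtype=float)     # residual mean assumed zero under H0
    l, n = a.shape
    C0 = (a.T @ a) / l
    C0_inv = np.linalg.inv(C0)
    tau = 0.0
    for r in range(1, S + 1):
        Cr = (a[r:].T @ a[:-r]) / l        # C_r = l^{-1} sum_t a_t a_{t-r}^T
        tau += np.trace(Cr.T @ C0_inv @ Cr @ C0_inv)
    tau *= l
    threshold = chi2.ppf(1 - alpha, df=n * n * S)
    return tau, threshold, tau < threshold
```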
5.3 Deterministic disturbances
The controller performance analysis in sections 2.1 and 2.2 assumes that the disturbances are of a stochastic nature. This is not always a realistic assumption for our process, which contains a mixture of stochastic and deterministic events. However, the stochastic framework of the controller performance analysis can easily be extended to include randomly occurring deterministic disturbances (Harris et al. [1] and MacGregor et al. [52]). It might be useful to analyze the performance against stochastic and deterministic disturbances separately, or to focus on one kind of disturbance. Harris et al. [1] suggest the use of intervention analysis to separate stochastic and deterministic components. Another approach, proposed in this work, is to try to separate the disturbances with the aid of a de-noising method such as Wiener filtering or wavelet de-noising (Donoho et al. [54]). See section 6.6 for more details and results.
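As one possible realization of the wavelet de-noising idea, the sketch below uses the third-party PyWavelets package with a db4 wavelet and a Donoho-type universal soft threshold; the smooth reconstruction is taken as the deterministic component and the remainder as the stochastic part. The wavelet, decomposition level and threshold rule are all illustrative choices of ours, not those of the paper.

```python
import numpy as np
import pywt   # PyWavelets (third-party package)

def split_deterministic_stochastic(y, wavelet="db4", level=4):
    """Separate a slowly varying deterministic component from the stochastic
    part of a measured output using wavelet de-noising with soft thresholding."""
    y = np.asarray(y, dtype=float)
    coeffs = pywt.wavedec(y, wavelet, level=level)
    # Universal threshold based on the finest-scale detail coefficients
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(y)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    deterministic = pywt.waverec(coeffs, wavelet)[: len(y)]
    stochastic = y - deterministic
    return deterministic, stochastic
```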
6 Results
6.1 User choices
The effect of the data length on the confidence limits of the performance index has been assessed by Desborough and Harris [6], using AR models. Short data segments will widen the confidence limits of the statistical estimates, while long data segments can give misleading results when many different response characteristics are juxtaposed into one long data set (Kozub [8]). Since the overall process studied in this work has many modes of operation and switches frequently between modes, a short data segment of 720 samples, or one hour, is chosen. The sample time (5 s) is equal to the sample time of the model predictive controller.
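The segmentation itself is trivial but worth stating: with a 5 s sample time, one hour corresponds to 720 samples, and each contiguous one-hour block is analyzed separately. A minimal sketch (ours), where each returned block would then be passed through the data pre-treatment of section 6.2 and the disturbance-model identification of section 5:

```python
import numpy as np

SAMPLE_TIME_S = 5          # controller sample time
SAMPLES_PER_HOUR = 720     # one hour of data at 5 s, the segment length used here

def hourly_segments(Y, segment_len=SAMPLES_PER_HOUR):
    """Split a long record of controlled outputs (shape (N, n)) into
    contiguous one-hour segments for separate CPM analysis."""
    Y = np.asarray(Y, dtype=float)
    n_segments = len(Y) // segment_len
    return [Y[k * segment_len:(k + 1) * segment_len] for k in range(n_segments)]
```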
6.2 Data pre-treatment
The data segments should not be corrupted by abnormal process operating conditions. Although the process is continuous, frequent process breaks can occur. When this happens, the infrared sensor that measures moisture and oil content (Fig. 2) starts reading the underlying conveyor instead of the product. The controller jumps to manual, and these periods have to be discarded (Fig. 6). Another issue is the frequent screen pack changes in the extruder. The screen is placed in the extruder to capture foreign objects that may inadvertently enter the process (e.g., small stones in the raw material). Over time the screen can become clogged, which causes increased power requirements for the extruder and overcooking of the product. A screen change causes a short process break that does not put the controller in manual but might cause outliers in the process outputs, due to the infrared sensor seeing holes in the product bed. These outliers are removed and the data are interpolated where possible (see Fig. 7).
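A simple way to implement the outlier treatment just described is a robust (median/MAD-based) detection followed by linear interpolation over the flagged samples, as sketched below; the threshold of five robust standard deviations is an illustrative choice of ours, not the value used in the original work.

```python
import numpy as np

def remove_outliers(y, n_mad=5.0):
    """Replace isolated outliers (e.g. infrared-sensor readings taken through
    holes in the product bed during a screen change) by linear interpolation.
    Points further than n_mad robust standard deviations from the median are
    treated as outliers."""
    y = np.asarray(y, dtype=float).copy()
    med = np.median(y)
    mad = np.median(np.abs(y - med)) / 0.6745 + 1e-12   # robust spread estimate
    bad = np.abs(y - med) > n_mad * mad
    good_idx = np.flatnonzero(~bad)
    if bad.any() and good_idx.size:
        y[bad] = np.interp(np.flatnonzero(bad), good_idx, y[good_idx])
    return y, bad
```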
6.3 Data and performance monitoring
The CPM results for all 406 hours of data are presented in Fig. 8. Values of η1 (Eqn. 14), as well as the multi-loop output performance indices η1,moisture and η1,oil (Eqn. 15), are calculated every hour and shown. All identifications are performed according to the Multivariate Polynomial ARMA PEM approach of section 5.1. Note that the data is not always contiguous due to line breaks. The new controller is installed at hour 274, as marked by the vertical dotted lines. Figure 8 shows a wide variety of controller output responses and a large range of performance indices. Seven interesting areas of data are picked and studied further. Each area has different process features and disturbances, which make it interesting to interpret the performance index results. The seven areas are shown in Fig. 10 through Fig. 17. In all of these figures, hourly values of both the overall performance index, η1, and the two multi-loop output indices η1,moisture and η1,oil are shown. Measured disturbances are added in the graphs where this is illustrative. The data presented is not always continuous, and breaks are indicated with vertical dotted lines.
1. Controller hitting process constraints, hours 3-9
Area 1, presented in Fig. 10, shows rather poor performance, reflected in the low values of all three indices η1, η1,moisture, and η1,oil. The reason is quite obvious, namely that saturating process inputs cause continuous deviations from setpoint. In this case, the saturated inlet oil temperature and takeout conveyor speed make it impossible for the controller to maintain both outputs on their target values. The reason for controller outputs hitting their saturation bounds is that disturbances with an excessively high magnitude (owing to operating conditions in the upstream extruder differing from normal) entered the fryer system. Therefore, a main assumption of the CPM methodology, namely Eqn. (3), no longer applies, because of saturation of the process inputs. Consequently, one should interpret the low values of η1, η1,moisture, and η1,oil very carefully. When controller output saturation occurs, the low values of the performance indices definitely reflect an undesired situation of high offset, but the linear analysis on which the MV benchmark is predicated no longer applies. Improving controller performance would not require a better process model to use in the controller, but rather wider bounds for the process inputs, or restructuring of the controller through the addition of more process inputs, facilities that are usually not easily available. A better remedy would be to try to eliminate or reduce the magnitude of the upstream disturbances that cause saturation of the process inputs.
2. Fryer heat-balance disturbance, hours 42-48
Area 2, presented in Fig. 10, shows the controller reacting to a large but slow ramp disturbance. The oil level in the fryer is slowly declining, which has a major influence on the heat balance in the fryer (because the total heat capacity of the fluid that provides heat to the chips being fried is reduced). The controller mainly compensates by lowering the submerger speed. Since the frequency of the disturbance is low, the controller does not have any problems rejecting it, and the performance indices therefore are good. When the fryer oil level starts to climb back to its setpoint (shortly after 46.5 hours), the rise happens a lot faster than the decline. The frequency of the disturbance is much higher and the controller is not capable of completely rejecting the disturbance. This is reflected in the lower performance index for hour 46-47.
3. Extruder operating issues and controller constraints, hours 84-92
Area 3, presented in Fig. 11, again shows rather poor performance. The controller is frequently saturating the inlet oil temperature and the takeout conveyor speed. The problem can be traced back to operational issues in the upstream extruder. The extruder power requirements are higher than normal and screen changes, represented by the fast dips in the extruder power requirements, are frequent. Notice that moisture and oil content move in the same direction, which is the opposite of what would normally happen, since oil replaces moisture during frying.
4. Good performing period with old controller, hours 172-179
Area 4, presented in Fig. 12, shows excellent performance. The disturbances are few and low in frequency. The extruder power, normally a source of significant disturbances, only shows a slow and small increase over the seven-hour time span. The takeout conveyor speed is the only process input that shows some activity. The controller easily rejects the existing disturbances. Even though the old controller, which is assumed to have plant-model mismatch, is still operating, the performance indices are excellent, due to the fact that there are no disturbances acting on the system that might reveal performance degradation.
5. Good performing period with new controller, hours 290-303
Area 5, presented in Fig. 13, again shows excellent performance, with the new controller operating. There are several smaller disturbances in the form of extruder power changes and other unmeasured disturbances that are reflected in changes of the manipulated controller variables (especially the takeout conveyor speed). The controller is capable of quickly rejecting all disturbances and therefore has near perfect performance for this area.
6. Spikes in oil content, hours 304-311
Area 6, presented in Fig. 14, shows significant spikes in the second process output, oil content. The duration of the spikes is very short (only about 2 samples) and the causes of the spikes are unknown and highly suspect. Given that the process dynamics are a lot slower, it is highly likely that the observed spikes are outliers generated by measurement problems. The performance indices are not as high as in area 4 and area 5, but, interestingly enough, for the oil content they remain high. Part of this behavior is due to the fact that the MV estimate increases as a result of the observed spikes. Consequently, when the actual variance is compared to the MV estimate, the comparison is not too unfavorable. The performance indices for the last three hours are somewhat lower since the takeout conveyor speed is partially saturating. The performance indices for the moisture content are also lower, but the reason for this is better illustrated in area 7.
7. Oscillations in moisture and extruder power, hours 323-329
Area 7, presented in Fig. 17, shows large high-frequency oscillations in the first process output, moisture content. This is clearly reflected in the moisture performance indices, which are around 0.5 during the oscillations. Initially it was thought that the controller itself was causing these oscillations. However, a close inspection of the extruder power (Fig. 17) showed oscillations of the same frequency as the moisture content (note the different extruder power response between hours 324-328 and before and after). The extruder is causing high-frequency oscillations that the controller of the downstream fryer is not able to reject, hence the significantly lower performance indices.
6.4 Stochastic Disturbance Modeling aspects
The published multivariate CPM articles ([15], [16] and [22]) suggest several methods for the stochastic disturbance model identification, but offer few details. To study the effect of disturbance modeling on the performance indices, different identification techniques are used. A difficulty in comparing identification techniques on real data is that the true model and model structure are unknown. Cross-validation is also difficult to use, since the disturbances acting on the system, and therefore the disturbance models, are constantly changing. Still, several comments can be made, and the data offer a richness of features and a wide range of disturbance models not found in simulations. The identification techniques used are the following.
1. Multivariate Polynomial ARMA PEM
For the multivariate PEM a criterion needs to be chosen that maps the sequence of prediction errors into a scalar. The choice of the determinant in Eqn. (25) is optimal under weak conditions and useful in practical situations (Söderström and Stoica [45]).
V_N(\theta) = \det\left( \frac{1}{N} \sum_{t=1}^{N} a(t,\theta)\, a^T(t,\theta) \right)                (25)
V_N(\theta) is the scalar-valued loss function that is minimized during the optimization, a(t,\theta) are the prediction errors from the data, and \theta are the estimated parameters of the model.
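The criterion itself is a one-liner; the sketch below (ours) only evaluates Eqn. (25) for a given matrix of prediction errors — the Gauss-Newton / Levenberg-Marquardt search over \theta, described next, is what actually minimizes it.

```python
import numpy as np

def det_criterion(pred_errors):
    """Scalar loss of Eqn. (25): the determinant of the sample covariance of
    the multivariate prediction errors a(t, theta).  pred_errors has shape (N, n)."""
    a = np.asarray(pred_errors, dtype=float)
    return np.linalg.det((a.T @ a) / len(a))
```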
In order to compute the parameter estimate, a nonlinear least-squares problem needs to be solved. This involves the choice of a search direction. The algorithm starts with Gauss-Newton and switches to Levenberg-Marquardt or steepest descent when the Hessian is close to being singular. See Ljung [27] and Söderström and Stoica [45] for more information on how to compute the gradient and the Hessian. For the initial guess of the iterative search a high-order AR model is used (Ljung [27]).
The C(q^{-1}) polynomial matrix in Eqn. (21) must remain stable during the line-search part of the numerical optimization. Unstable polynomial matrices can be stabilized with spectral factorization (Kwakernaak and Sebek [56]).
2. State Space ARMA PEM
The State Space ARMA PEM algorithm from the MATLAB System Identification Toolbox is used. The state-space model is first estimated using the N4SID subspace method and transformed to the canonical form. Then the general PEM routine is used for the estimation. The PEM routine by default uses a robust estimation technique instead of applying a purely quadratic criterion (Ljung [45,27]). The robust estimation produces different disturbance models and incorrect performance indices, so the robust estimation technique needs to be disabled. Without robust estimation, State Space ARMA PEM minimizes the criterion in Eqn. (25).
3. Subspace Identification
Two different kinds of subspace identification algorithms are used: the N4SID algorithm (Van Overschee and De Moor [43]) and the CVA algorithm (Larimore [47]). The algorithms differ mainly in the weighting matrices W1 and W2. These weighting matrices are used to pre- and post-multiply before the SVD, and affect the quality of the estimated state space matrices Â and Ĉ. It is at present not fully understood how to choose them optimally (Ljung [27]). The N4SID algorithm used is the MATLAB System Identification Toolbox implementation. The CVA algorithm used is available from ADAPTX (Larimore [48]). A MATLAB implementation of both algorithms can also be found in Van Overschee and De Moor [43].
4. AR
The MATLAB System Identification Toolbox ARX implementation is used to estimate multivariate AR models (Ljung [45]). The least-squares estimation problem is an overdetermined set of linear equations that is solved using Gaussian elimination.
5. FCOR Huang et al. [16] have developed a filtering and correlation based method, FCOR, to 2 and η . A pre-whitening filter is found to obtain the estimated random estimate σ MV
shocks (aˆt ) . Calculating a cross-correlation function between the delay free output and the random shocks then leads directly to the calculation of η . Calculation of the crosscorrelation function eliminates the need to determine the impulse response coefficients from the estimated closed-loop transfer function. The process of obtaining a prewhitening filter is analogous to estimating a disturbance model, so any of the above methods can be used. An overview of the performance of the five identification methods on all data is given in Table 1. Both the state space ARMA PEM and N4SID identified models that were unstable. These models were manually replaced by a stable model with different model order. The fourth column shows the number of models that were unable to obtain statistically white residuals with the model structure selection method described in section 5.3. The maximum model order was set to 6 for both PEM and subspace identification methods and 15 for the AR identification method. Increasing the maximum order did not yield more models with white residuals. The PEM and AR methods produce better results than the subspace methods. The reason why subspace methods produce fewer models with white residuals is at this time unknown. It is clear that the non-iterative methods, i.e. Subspace and AR, are computationally much faster than the iterative PEM methods. The computational cost for the PEM methods is not prohibitive but it might become a problem for higher order systems. Four hours of data with different features are selected to investigate performance of the different identification methods. The data is presented in Fig. 16 and the performance indices are tabulated in Table 2. The data segment (hour 60) shows a nice response and perfect performance. The different identification methods give very similar impulse responses (Fig. 17) and the same performance indices. For this data segment all identification methods work equally well. The second data segment (hour 12) shows oscillating behavior in the second output, oil content. Both PEM methods show similar impulse responses (illustrated in Fig. 18), but significantly differing from the other three identification methods. Both subspace and AR 25
methods are not able to model the oscillations adequately. This results in incorrectly high performance indices, especially for η1,oil. The third data segment (hour 136) shows a slow recovery from a large disturbance. Again both PEM methods show a similar response in Fig. 19, and differ from the other three identification methods. The performance indices for both PEM methods are significantly lower than the performance indices obtained with the subspace and AR identification methods. The fourth data segment (hour 325) shows high frequency oscillations in the first output, moisture content. This data segment is difficult to model and all identification methods show a different response (illustrated in Fig. 20). The performance indices differ to a large extent among the identification methods. However, the performance indices of both PEM methods and subspace CVA are in closer agreement with what the data segment shows.
Fig. 21 shows a comparison of FCOR with the conventional impulse response coefficients method to estimate η. Both disturbance models are estimated with the polynomial ARMA PEM identification. The estimated random shocks needed for FCOR are found by inverting the estimated ARMA model:

\hat{a}_t = [\hat{C}(q^{-1})]^{-1} \hat{A}(q^{-1}) Y_t \qquad (26)
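The following sketch illustrates the FCOR idea for a single output (the multivariate version used here additionally filters the outputs through the interactor matrix): a long AR model serves as the pre-whitening filter, and the performance index is accumulated from the cross-correlations between the output and the estimated shocks over the first d lags. The series, the delay d and the AR order are illustrative assumptions, not values from this application.

```python
# Minimal univariate sketch of the FCOR idea. Assumptions: y is a 1-D array of one
# closed-loop output, d is the process time delay in samples, and a long AR model
# is used as the pre-whitening filter.
import numpy as np

def fcor_index(y, d, ar_order=15):
    y = np.asarray(y, dtype=float)
    y = y - y.mean()
    N = len(y)
    # Fit an AR(ar_order) pre-whitening filter by least squares.
    X = np.column_stack([y[ar_order - k - 1:N - k - 1] for k in range(ar_order)])
    theta, *_ = np.linalg.lstsq(X, y[ar_order:], rcond=None)
    a = y[ar_order:] - X @ theta            # estimated random shocks (innovations)
    yv = y[ar_order:]                       # output aligned with the shocks
    # eta = sum_{k=0}^{d-1} rho_{ya}(k)^2, with rho the cross-correlation of y_t and a_{t-k}.
    eta = 0.0
    for k in range(d):
        r = np.dot(yv[k:], a[:len(a) - k]) / (len(a) - k)
        eta += r**2 / (yv.var() * a.var())
    return eta

# Example on a synthetic moving-average series with delay d = 3; the theoretical
# index here is (1 + 0.8**2 + 0.5**2) / (1 + 0.8**2 + 0.5**2 + 0.3**2 + 0.2**2) ≈ 0.94.
rng = np.random.default_rng(1)
e = rng.standard_normal(4000)
y = np.convolve(e, [1.0, 0.8, 0.5, 0.3, 0.2], mode='full')[:4000]
print('FCOR performance index:', round(fcor_index(y, d=3), 2))
```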
Both methods show comparable performance indices. The FCOR performance indices have a somewhat higher variance, and several of them are clearly above one. The FCOR method uses the cross-correlation between the delay-free output and the estimated random shocks to compute the impulse response coefficients, instead of taking the impulse response coefficients directly from the disturbance model. This adds additional variance to the estimation of σ²MV and η. The FCOR method, therefore, does not offer an advantage over the conventional method of computing impulse response coefficients.
6.5 The effect of underfitting or overfitting of data
To investigate the effect of selecting incorrect model orders on the performance indices, models with increasing model order are fitted to the data of hours 11 and 12 (the data is illustrated in Fig. 22). The performance indices for both the polynomial ARMA PEM and AR models are
shown in Fig. 23. The selected model orders in Fig. 23 are the model orders picked by the model structure selection in section 5.2. The data segment of hour 11 shows a nice response without major disturbances, which should receive a high performance index. Over-fitting the data does have an effect on the performance index: there is a slow but continuous decline in the performance index when the data is over-fitted. A fixed 30th order AR model, as used in Thornhill et al. [14], would result in a performance estimation error of 8% for hour 11. The polynomial ARMA response of hour 12 clearly shows that underestimating the model order produces larger errors than overestimating the model order, as indicated by Eriksson and Isaksson [10]. Also, overestimating the model order gives a conservative performance index error, while underestimating the model order gives an overestimation of the performance index. Hour 12 is the segment where the second output, oil content, is oscillating. An AR(6) model produces white residuals but is clearly not capable of reproducing the oscillations in the impulse response coefficients (see the impulse response models in Fig. 24). This causes an overestimation of the performance index. An AR(30) model does seem to be able to reproduce the oscillations and has a performance index comparable to the ARMA(2,2) model. However, the disturbance model impulse response is very noisy, a clear indication of over-parameterization.
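In the spirit of Fig. 23, the sketch below recomputes a Harris-type index from the impulse response coefficients of AR disturbance models of increasing order. It is a univariate illustration on synthetic data, not the multivariate computation used for the figures; the delay and the orders swept are arbitrary choices.

```python
# Minimal univariate sketch: estimate the performance index from the impulse
# response of AR disturbance models of increasing order (synthetic data only).
import numpy as np

def ar_fit(y, p):
    """Least-squares AR(p) fit; returns coefficients and innovation variance."""
    X = np.column_stack([y[p - k - 1:len(y) - k - 1] for k in range(p)])
    phi, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    resid = y[p:] - X @ phi
    return phi, resid.var()

def harris_index(y, d, p):
    """eta = (psi_0^2 + ... + psi_{d-1}^2) * sigma_a^2 / sigma_y^2 for an AR(p) model."""
    y = np.asarray(y, float) - np.mean(y)
    phi, sig_a = ar_fit(y, p)
    psi = np.zeros(d)
    psi[0] = 1.0                             # impulse response of 1 / A(q^-1)
    for k in range(1, d):
        psi[k] = sum(phi[j] * psi[k - j - 1] for j in range(min(k, p)))
    return float(np.sum(psi**2) * sig_a / y.var())

rng = np.random.default_rng(2)
e = rng.standard_normal(4000)
y = np.convolve(e, [1.0, 0.8, 0.5, 0.3, 0.2], mode='full')[:4000]
for p in (1, 2, 6, 15, 30):
    print(f'AR({p:2d}) performance index:', round(harris_index(y, d=3, p=p), 2))
```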
6.6 Deterministic disturbances and wavelet de-noising
In Fig. 25, five hours of data are presented with severe unmeasured disturbances acting on the system. Most probably, the overall response is a combination of stochastic and deterministic disturbances. Fig. 26 shows the residuals and residual auto-correlations of the data in hours 152-153. Although the data in hours 152-153 contain severe deterministic disturbances, an ARMA(2,2) model is very well capable of producing residuals that are uncorrelated. The inclusion of randomly occurring deterministic disturbances did not pose any problems for the stochastic identification, and the controller performance analysis is identical to the case with stochastic disturbances only. When it is useful to analyze the performance against stochastic and deterministic disturbances separately, wavelet de-noising might be able to separate the stochastic and deterministic components. The wavelet de-noising objective is to suppress the stochastic noise part of the output (Yt) and to recover the underlying deterministic trend (Yd). The residuals (Yt − Yd) form the stochastic noise part and can be used for the stochastic performance analysis.
The wavelet de-noising procedure proceeds in three steps (Misiti et al. [53]): (a) Decompose: choose a wavelet and a level N, and compute the wavelet decomposition of the signal s at level N. (b) Threshold detail coefficients: for each level 1 to N, select a threshold and apply thresholding to the detail coefficients. (c) Reconstruct: compute the wavelet reconstruction based on the original approximation coefficients of level N and the modified detail coefficients of levels 1 to N. For this to work, it is important that the wavelet thresholding is done with minimal human intervention and without additional assumptions about the stochastic and deterministic signal properties. For instance, the assumption of white noise for the stochastic noise part would be incorrect in most cases. Wavelet threshold selection rules can be found in Donoho et al. [54] and Johnstone and Silverman [55]. For this specific wavelet de-noising application, 7th order Daubechies orthogonal wavelets are used with 7 levels of wavelet decomposition. The following thresholding options are selected (see also Misiti et al. [53]); a minimal sketch of the procedure follows the list:
• Soft thresholding
• HeurSure threshold selection rule, which is a heuristic variant of Stein's Unbiased Risk Estimator
• Basic non-white noise model, with threshold rescaling using a level-dependent estimation of the noise level
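The sketch below uses the PyWavelets package rather than the MATLAB Wavelet Toolbox; the db7 wavelet and 7 decomposition levels follow the text, but a simple level-dependent universal threshold is used as a stand-in for the HeurSure rule, and the signal is a synthetic placeholder.

```python
# Minimal sketch of the three-step de-noising procedure using PyWavelets,
# assuming the db7 wavelet and 7 decomposition levels mentioned in the text.
# A level-dependent universal threshold stands in for the HeurSure rule of the
# MATLAB Wavelet Toolbox; y is a synthetic placeholder for one output record.
import numpy as np
import pywt

def wavelet_denoise(y, wavelet='db7', level=7):
    # (a) Decompose the signal down to the chosen level.
    coeffs = pywt.wavedec(y, wavelet, level=level)
    # (b) Soft-threshold the detail coefficients with a level-dependent noise
    #     estimate (median absolute deviation of each detail band).
    denoised = [coeffs[0]]                       # keep the approximation coefficients
    for d in coeffs[1:]:
        sigma = np.median(np.abs(d)) / 0.6745
        thr = sigma * np.sqrt(2.0 * np.log(len(y)))
        denoised.append(pywt.threshold(d, thr, mode='soft'))
    # (c) Reconstruct the deterministic trend from the modified coefficients.
    yd = pywt.waverec(denoised, wavelet)
    return yd[:len(y)]

rng = np.random.default_rng(3)
t = np.arange(3600)                              # five hours of 5-s samples
y = 0.3 * np.sign(np.sin(2 * np.pi * t / 900)) + 0.1 * rng.standard_normal(3600)
yd = wavelet_denoise(y)                          # estimated deterministic trend Yd
stochastic_part = y - yd                         # residuals for the stochastic analysis
```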
The results of the wavelet de-noising are presented in Fig. 25. The first two graphs show the original data and the wavelet de-noised signal. The performance indices shown in the first two plots (η0) are the original (mixed stochastic and deterministic) performance indices. The last two graphs show the stochastic noise part of both outputs and the stochastic performance indices (ηS). It appears that the wavelet de-noising successfully separates the deterministic disturbances from the stochastic disturbances. This success of course depends on how different the features of the two types of disturbance are. A possible pitfall is the difficulty of distinguishing between low-frequency stochastic disturbances and low-frequency deterministic disturbances. More work is needed on this topic.
6.7 Dealing with constraints
Fig. 27 shows two hours of data where the controller is running into saturation. If process inputs are constrained over the whole time interval, as is the case in Fig. 27, then one potential way of dealing with these constraints is to compute another interactor matrix, in which the constrained process inputs are removed from the impulse response coefficients. This is possible since the unitary interactor matrix can be factored from non-square (fat or flat) systems. The unitary interactor matrix in Eqn. (27) is factored from the system presented in Eqn. (16), removing the constrained takeout conveyor speed:

\begin{bmatrix} 0 & 0 \\ 0.544 & 0.129 \end{bmatrix} q^{24} +
\begin{bmatrix} 0.086 & 0.020 \\ 0.347 & 0.234 \end{bmatrix} q^{23} +
\begin{bmatrix} -0.170 & -0.016 \\ -0.460 & 0.136 \end{bmatrix} q^{22} +
\begin{bmatrix} -0.071 & -0.041 \\ 0.136 & 0.233 \end{bmatrix} q^{21} +
\begin{bmatrix} -0.051 & -0.042 \\ -0.161 & -0.033 \end{bmatrix} q^{20} +
\begin{bmatrix} 0.365 & -0.028 \\ 0.028 & 0.365 \end{bmatrix} q^{19} +
\begin{bmatrix} -0.033 & 0.161 \\ 0.042 & -0.051 \end{bmatrix} q^{18} +
\begin{bmatrix} 0.233 & -0.136 \\ 0.041 & -0.071 \end{bmatrix} q^{17} +
\begin{bmatrix} 0.136 & 0.460 \\ 0.016 & -0.170 \end{bmatrix} q^{16} +
\begin{bmatrix} 0.234 & -0.347 \\ -0.020 & 0.086 \end{bmatrix} q^{15} +
\begin{bmatrix} 0.129 & -0.544 \\ 0 & 0 \end{bmatrix} q^{14} \qquad (27)

K = \begin{bmatrix} 0 & 0.041 \\ -0.006 & 0.001 \end{bmatrix} \qquad (28)
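A quick numerical sanity check of the factored interactor is possible because a unitary interactor must satisfy Σᵢ Dᵢᵀ Dᵢ = I, with all cross-products Σᵢ Dᵢᵀ Dᵢ₊ₖ vanishing. The sketch below applies this check to the coefficient matrices printed in Eqn. (27); small deviations are expected only from the three-decimal rounding of the published coefficients.

```python
# Numerical check of the unitarity property of the factored interactor in Eqn. (27):
# for a unitary interactor D(q) = sum_i D_i q^(24-i), the coefficients must satisfy
# sum_i D_i^T D_i = I and sum_i D_i^T D_{i+k} = 0 for k != 0 (small deviations here
# come only from the three-decimal rounding of the printed coefficients).
import numpy as np

D = [np.array(m) for m in [
    [[ 0.000,  0.000], [ 0.544,  0.129]],   # q^24
    [[ 0.086,  0.020], [ 0.347,  0.234]],   # q^23
    [[-0.170, -0.016], [-0.460,  0.136]],   # q^22
    [[-0.071, -0.041], [ 0.136,  0.233]],   # q^21
    [[-0.051, -0.042], [-0.161, -0.033]],   # q^20
    [[ 0.365, -0.028], [ 0.028,  0.365]],   # q^19
    [[-0.033,  0.161], [ 0.042, -0.051]],   # q^18
    [[ 0.233, -0.136], [ 0.041, -0.071]],   # q^17
    [[ 0.136,  0.460], [ 0.016, -0.170]],   # q^16
    [[ 0.234, -0.347], [-0.020,  0.086]],   # q^15
    [[ 0.129, -0.544], [ 0.000,  0.000]],   # q^14
]]

gram = sum(Di.T @ Di for Di in D)
print('sum_i D_i^T D_i =\n', np.round(gram, 3))          # should be close to the identity

max_cross = max(np.abs(sum(D[i].T @ D[i + k] for i in range(len(D) - k))).max()
                for k in range(1, len(D)))
print('largest cross-term magnitude:', round(max_cross, 3))  # should be close to zero
```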
Note that the order of the interactor matrix is much higher, since the process input with the shortest time delay was removed. The interactor matrix is now a function of the inlet oil temperature, which has a much longer delay. The new performance indices are presented in Fig. 27 as η1,C.
The new performance indices η1,C do not offer any significant performance increase over the original performance indices η1. A closer inspection of the system reveals why. The system with the takeout conveyor speed constrained is still controllable, since the dynamic responses of the inlet oil temperature and submerger speed are different. However, the steady-state gain matrix of the system without the takeout conveyor speed has a condition number of 244. The controller is not capable of maintaining both the moisture content and the oil content on setpoint if there are step disturbances or non-stationary stochastic disturbances while the takeout conveyor is constrained. The new performance indices η1,C will therefore suffer when the system is under the influence of step or non-stationary stochastic disturbances.
7 Discussion
7.1 Deterministic disturbances
No practical problems were observed with the inclusion of randomly occurring deterministic disturbances in the stochastic framework of MV performance monitoring. The analysis is identical to the case with stochastic disturbances only. The possibility of including deterministic disturbances naturally depends on the randomness and magnitude of the deterministic disturbance. Wavelet de-noising proved to be a viable method to separate stochastic and deterministic disturbances with minimal assumptions about the stochastic or deterministic signal properties, making it possible to analyze the performance against stochastic or deterministic disturbances separately.
7.2 Model order selection and stochastic identification
Care should be taken to select proper model orders; the choice does affect the estimated performance indices. In a time-varying closed-loop system such as this application, the proper model order should be determined for each individual data segment, since different disturbances acting on the system will change the model order of the closed-loop system. The automatic model order selection developed in section 5.2 is simple and appears to be very effective. Comparing different stochastic identification techniques, the five methods used obtained essentially equal disturbance models when clean data segments were used. In cases of difficult-to-model data (large disturbances, outliers or oscillations), both ARMA PEM methods clearly give a better estimate of the disturbance model and produce more consistent performance indices. Especially oscillations, which are quite common (see, for instance, data published in Kozub [8], Jofriet and Bialkowski [11], Thornhill et al. [14] and Hägglund [35]), appear to be troublesome for both subspace methods and AR models. To make matters worse, both subspace methods and AR models mostly overestimate the performance indices when they are not capable of fitting an adequate disturbance model.
7.3 Uncertainty in the time delay / interactor matrix
The interactor matrix identified in section 4 is assumed to remain constant for all 406 data segments. This assumption is violated when the submerger speed or takeout conveyor speed changes over a large range, changing the delay time and interactor matrix of the system. It is clear from Fig. 10 through Fig. 17 that the process inputs change significantly, thereby creating uncertainty in the interactor matrix. Miller and Huang [23] presented work on the effect of uncertainty in the interactor matrix. They conclude that uncertainty in the time delay estimate has a significant effect on the MV based performance indices, but the time delay uncertainty they consider is rather large. A more fundamental understanding of how uncertainty in the interactor matrix affects the MV benchmark remains an open area.
7.4 Comparison of controllers
Since a newly identified and re-tuned controller was installed after hour 274 (see Fig. 8), we would expect better control of the process. It is interesting to see whether this is reflected in the performance indices. A comparison of the performance indices of both controllers is shown in Table 3. The new controller clearly outperforms the old controller when looking at all available data. This comparison, however, is unfair, since the old controller has more process input saturation due to disturbances and therefore a lower average performance index. Looking at unconstrained data only, the average performance indices of both controllers are equal. This is still a troublesome comparison, since the performance indices depend on the kind of disturbances acting on the system. The data segments from hour 304 to hour 328 contain high frequency disturbances, not present in the data segments of the old controller, which reduce the performance indices. Removing the high frequency disturbances in hours 304-328 shows a small improvement of the new controller over the old controller. One problem with removing constrained data segments is that partially constrained data sets are also removed, discarding interesting data where significant disturbances have to be rejected. Comparing different controllers remains difficult, despite the MV benchmark, in situations where the disturbance models are constantly changing.
7.5 Controller constraint handling
In many cases, multivariate controllers are used where process constraints are important. The controller performance indices based on the MV index are not directly applicable to processes operating under constraints. This is an important limitation that needs to be addressed, especially in systems that have some form of optimization on top of the control scheme. Section 6.7 offers a simple solution, but it is limited to process inputs that are saturated for the whole data segment. Another approach (Kozub [57]) would be to calculate the minimum variance through on-line constrained optimization at each time step, and to compare that minimum variance to the actual variance. This approach can overcome the limitation posed by assuming that specific inputs are saturated over an entire interval of investigation. The optimization, however, requires accurate open-loop process and disturbance models. Such models may not be available at the desired accuracy, and certainly cannot be estimated in closed loop unless there is some perturbation of the process (e.g. setpoint changes or process input dithering). The definition and estimation of appropriate multivariate performance indices in constrained multivariate control remains largely an open area.
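A toy sketch of this constrained benchmark idea is given below: at each step a bounded least-squares problem minimizes the predicted output deviation subject to input limits, and the variance of the resulting outputs serves as the constrained minimum-variance estimate. The gain matrix, disturbance forecasts and bounds are hypothetical placeholders; as noted above, accurate open-loop process and disturbance models would be required in practice.

```python
# Toy sketch of the constrained minimum-variance idea attributed to Kozub [57]:
# at each step, minimize the predicted output deviation subject to input bounds,
# then compare the resulting variance with the actual one. G, d_forecast and the
# bounds are hypothetical placeholders, not values from this application.
import numpy as np
from scipy.optimize import lsq_linear

G = np.array([[0.05, -0.30],      # hypothetical 2x2 gain: outputs (moisture, oil)
              [0.40,  0.10]])     # versus inputs (inlet oil temperature, submerger speed)
u_lo = np.array([-1.0, -1.0])     # hypothetical input bounds (scaled units)
u_hi = np.array([ 1.0,  1.0])

rng = np.random.default_rng(4)
d_forecast = rng.standard_normal((500, 2)) * 0.5   # placeholder disturbance forecasts

y_constrained = []
for d_hat in d_forecast:
    # min_u || G u + d_hat ||^2   subject to   u_lo <= u <= u_hi
    sol = lsq_linear(G, -d_hat, bounds=(u_lo, u_hi))
    y_constrained.append(G @ sol.x + d_hat)
y_constrained = np.array(y_constrained)

# Per-output "constrained minimum variance" benchmark implied by this toy model.
print('achievable output variances under input constraints:',
      np.round(y_constrained.var(axis=0), 3))
```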
8 Conclusions
Multivariate CPM has been performed on an industrial snack food frying process. A large range of performance indices is reported, due to the different disturbances acting on the system and to process input saturation. The comparison of the observed closed-loop variance to an estimate of the MV achievable under feedback control is relatively simple, yet informative. The MV estimate requires minimal interference with process operations, with the possible exception of the identification of the interactor matrix in the multivariate case. The method also has its limitations: the performance assessment is based on the disturbances acting on the system in the data segment at hand, which might not be the same as the disturbances the end-user is most interested in. Furthermore, the diagnosis of root causes is impossible without the introduction of measured or injected external perturbations.
References
1. T.J. Harris, C.T. Seppala, L.D. Desborough, A review of performance monitoring and assessment techniques for univariate and multivariate control systems, Journal of Process Control 9 (1) (1999) 1-17.
2. S.J. Qin, Control performance monitoring -- a review and assessment, Computers and Chemical Engineering 23 (2) (1998) 173-186.
3. G.E.P. Box, G.M. Jenkins, Time series analysis, forecasting and control. Holden-Day, San Francisco CA, 1970.
4. K.J. Åström, Introduction to stochastic control theory, Academic Press, New York, 1970.
5. T.J. Harris, Assessment of control performance, The Canadian Journal of Chemical Engineering 67 (5) (1989) 856-861.
6. L. Desborough, T.J. Harris, Performance assessment measures for univariate feedback control, The Canadian Journal of Chemical Engineering 70 (6) (1992) 1186-1197.
7. N. Stanfelj, T.E. Marlin, J.F. MacGregor, Monitoring and diagnosing process control performance: The single-loop case, Industrial & Engineering Chemistry Research 32 (2) (1993) 301-314.
8. D.J. Kozub, Controller performance monitoring and diagnosis: Experiences and challenges, in: Proceedings 5th International Conference on Chemical Process Control, Lake Tahoe CA, Jan 7-12 1996, AIChE Symposium Series, issue 316, 1997, pp. 83-96.
9. C.B. Lynch, G.A. Dumont, Control loop performance monitoring, IEEE Transactions on Control Systems Technology 4 (2) (1996) 185-192.
10. P.-G. Eriksson, A.J. Isaksson, Some aspects of control loop performance monitoring, in: Proceedings of the Third IEEE Conference on Control Applications, Glasgow UK, 1994, pp. 1029-1034.
11. P.J. Jofriet, W.L. Bialkowski, Process knowledge: The key to on-line monitoring of process variability and control loop performance, in: Conference Proceedings of the Control Systems Conference, TAPPI Press, Norcross GA USA, 1996, pp. 187-193.
12. A. Vishnubhotla, S.L. Shah, B.A. Huang, Feedback and feedforward performance analysis of the Shell industrial closed loop data set, in: IFAC Symposium Advanced Control of Chemical Processes, Banff Can., Pergamon Press, Oxford, 1997, pp. 313-318.
13. T.J. Harris, C.T. Seppala, P.J. Jofriet, B.W. Surgenor, Plant-wide feedback control performance assessment using an expert-system framework, Control Engineering Practice 4 (9) (1996) 1297-1303.
14. N.F. Thornhill, M. Oettinger, P. Fedenczuk, Refinery-wide control loop performance assessment, Journal of Process Control 9 (2) (1999) 109-124.
15. T.J. Harris, F. Boudreau, J.F. MacGregor, Performance assessment of multivariable feedback controllers, Automatica 32 (11) (1996) 1505-1518.
16. B. Huang, S.L. Shah, E.K. Kwok, Good, bad or optimal? Performance assessment of multivariable processes, Automatica 33 (6) (1997) 1175-1183.
17. B. Huang, S.L. Shah, H. Fujii, Identification of the time delay/interactor matrix from MIMO systems using closed-loop data, in: Proceedings of IFAC 13th Triennial World Congress, San Francisco CA, 1996, pp. 355-360.
18. B. Huang, S.L. Shah, H. Fujii, The unitary interactor matrix and its estimation using closed-loop data, Journal of Process Control 7 (3) (1997) 195-207.
19. G.C. Goodwin, K.S. Sin, Adaptive filtering prediction and control. Prentice-Hall, Englewood Cliffs NJ, 1984.
20. W.A. Wolovich, P.L. Falb, Invariants and canonical forms under dynamic compensation, SIAM Journal on Control and Optimization 14 (6) (1976) 996-1008.
21. M.W. Rogoziński, A.P. Papliński, M.J. Gibbard, An algorithm for the calculation of a nilpotent interactor matrix for linear multivariable systems, IEEE Transactions on Automatic Control 32 (3) (1987) 234-237.
22. B. Huang, S.L. Shah, E.K. Kwok, J. Zurcher, Performance assessment of multivariate control loops on a paper-machine headbox, The Canadian Journal of Chemical Engineering 75 (1) (1997) 134-142.
23. R. Miller, B. Huang, Perspectives on multivariate feedforward/feedback controller performance measures for process diagnosis, in: IFAC Symposium Advanced Control of Chemical Processes, Banff Can., Pergamon Press, Oxford, 1997, pp. 493-498.
24. R. Kadali, B. Huang, E.C. Tamayo, A case study on performance analysis and trouble shooting of an industrial model predictive control system, in: Proceedings of the American Control Conference, June 1999, San Diego CA, 1999, pp. 642-646.
25. M. Nikolaou, Computer-aided process engineering in the snack food industry, in: Proceedings 5th International Conference on Chemical Process Control, Lake Tahoe CA, Jan 7-12 1996, AIChE Symposium Series, issue 316, 1997, pp. 61-69.
26. D.M. Prett, C.E. García, Fundamental Process Control. Butterworth-Heinemann, Boston MA, 1988.
27. L. Ljung, System Identification: Theory for the User. Prentice Hall, Upper Saddle River NJ, 1999.
28. K.J. Åström, Assessment of achievable performance of simple feedback loops, International Journal of Adaptive Control and Signal Processing 5 (1) (1991) 3-19.
29. K.J. Åström, C.C. Hang, P. Persson, W.K. Ho, Towards intelligent PID control, Automatica 28 (1) (1992) 1-9.
30. A.P. Swanda, D.E. Seborg, Controller performance assessment based on setpoint response data, in: Proceedings of the American Control Conference, June 1999, San Diego CA, 1999, pp. 3863-3867.
31. A.J. Isaksson, PID controller performance assessment, in: Conference Proceedings of the 1996 Control Systems Conference, Halifax Can., 1996, pp. 163-169.
32. B.S. Ko, T.F. Edgar, Assessment of achievable PI control performance for linear processes with dead time, in: Proceedings of the American Control Conference, June 1998, Philadelphia PA, 1998, pp. 1548-1552.
33. A. Horch, A.J. Isaksson, A modified index for control performance assessment, Journal of Process Control 9 (6) (1999) 461-525.
34. M.L. Tyler, M. Morari, Performance monitoring of control systems using likelihood methods, Automatica 32 (8) (1996) 1145-1162.
35. T. Hägglund, A control-loop performance monitor, Control Engineering Practice 3 (11) (1995) 1543-1551.
36. B. Huang, S.L. Shah, Practical issues in multivariable feedback control performance assessment, Journal of Process Control 8 (5/6) (1998) 421-430.
37. J. Ju, M.S. Chiu, An on-line monitoring procedure for 2x2 and 3x3 full multivariable control systems, Chemical Engineering Science 53 (6) (1998) 1277-1293.
38. B. Wang, M.S. Chiu, Online monitoring of controller performance from servo information, in: Proceedings of the 1998 IEEE ISIC/CIRA/ISAS Conference, Gaithersburg MD, 14-17 Sept. 1998, pp. 221-226.
39. L.C. Kammer, R.R. Bitmead, P.L. Bartlett, Optimal controller properties from closed-loop experiments, Automatica 34 (1) (1998) 83-91.
40. S.J. Kendra, A. Çinar, Controller performance assessment by frequency domain techniques, Journal of Process Control 7 (3) (1997) 181-194.
41. F. Gustafsson, S.F. Graebe, Closed-loop performance monitoring in the presence of system changes and disturbances, Automatica 34 (11) (1998) 1311-1326.
42. M.L. Tyler, M. Morari, Performance assessment for unstable and nonminimum-phase systems, in: On-line Fault Detection and Supervision in the Chemical Process Industries, IFAC Workshop, Newcastle-upon-Tyne UK, June 1995, pp. 187-192.
43. P. Van Overschee, B. De Moor, Subspace identification for linear systems, Kluwer Academic Publishers, Boston MA, 1996.
44. M. Viberg, Subspace-based methods for the identification of linear time-invariant systems, Automatica 31 (12) (1995) 1835-1851.
45. T. Söderström, P. Stoica, System Identification, Prentice-Hall International, Hemel Hempstead UK, 1989.
46. L. Ljung, System Identification Toolbox user's guide (R11), The MathWorks Inc., Natick MA, 1995.
47. W.E. Larimore, Canonical variate analysis in identification, filtering, and adaptive control, in: Proceedings of the 29th IEEE Conference on Decision and Control, 1990, pp. 596-604.
48. W.E. Larimore, ADAPTX automated multivariable system identification and time series analysis software user's manual. Adaptics Inc., McLean VA, 1999.
49. J.R.M. Hosking, The multivariate portmanteau statistic, Journal of the American Statistical Association 75 (371) (1980) 602-608.
50. W.K. Li, A.I. McLeod, Distribution of the residual autocorrelations in multivariate ARMA time series models, Journal of the Royal Statistical Society, Series B 43 (2) (1981) 231-239.
51. R. Johansson, System Modeling and Identification. Prentice Hall, Englewood Cliffs NJ, 1993.
52. J.F. MacGregor, T.J. Harris, J.D. Wright, Duality between the control of processes subject to randomly occurring deterministic disturbances and ARIMA stochastic disturbances, Technometrics 26 (4) (1984) 389-397.
53. M. Misiti, Y. Misiti, G. Oppenheim, J.M. Poggi, Wavelet Toolbox user's guide (R11). The MathWorks Inc., Natick MA, 1996.
54. D.L. Donoho, I.M. Johnstone, G. Kerkyacharian, D. Picard, Wavelet shrinkage: asymptopia?, Journal of the Royal Statistical Society, Series B 57 (2) (1995) 301-369.
55. I.M. Johnstone, B.W. Silverman, Wavelet threshold estimators for data with correlated noise, Journal of the Royal Statistical Society, Series B 59 (2) (1997) 319-351.
56. H. Kwakernaak, M. Sebek, The Polynomial Toolbox for MATLAB On-Line Manual, Version 2.0. PolyX, Ltd., Prague, Czech Republic, 1999.
57. D.J. Kozub, Personal communication with the author.
List of Figures
Fig. 1. Schematic overview of the industrial snack food process.
Fig. 2. Schematic overview of an industrial snack food fryer.
Fig. 3. Detrended open-loop PRBS test data (Aug. 1997).
Fig. 4. Cross validation simulation (Aug. 1997).
Fig. 5. Step responses of the initial identified 1994 model and newly identified 1997 model.
Fig. 6. Sensor response to a line break.
Fig. 7. Sensor and extruder response to a screen change.
Fig. 8. CPM results of all available data; the horizontal dotted line at hour 274 marks the transition from the old controller to the new controller.
Fig. 9. Controller performance monitoring results of Area 1.
Fig. 10. Controller performance monitoring results of Area 2.
Fig. 11. Controller performance monitoring results of Area 3.
Fig. 12. Controller performance monitoring results of Area 4.
Fig. 13. Controller performance monitoring results of Area 5.
Fig. 14. Controller performance monitoring results of Area 6.
Fig. 15. Controller performance monitoring results of Area 7.
Fig. 16. Data used for comparison of different disturbance identification methods.
Fig. 17. Different disturbance models for data in hour 60.
Fig. 18. Different disturbance models for data in hour 12.
Fig. 19. Different disturbance models for data in hour 136.
Fig. 20. Different disturbance models for data in hour 325.
Fig. 21. Overall performance indices of FCOR and the conventional impulse response coefficients method.
Fig. 22. Data used to show the effect of model order on the performance indices.
Fig. 23. Effect of model order on the overall performance indices.
Fig. 24. Effect of model and model order on the disturbance model (hour 12).
Fig. 25. CPM in the presence of deterministic disturbances and separation of stochastic and deterministic disturbances with the aid of wavelet de-noising.
Fig. 26. Residuals and residual auto-correlations of the ARMA(2,2) model with 95%-confidence interval limits (dotted lines) for uncorrelated residuals, data from hours 152-153.
Fig. 27. Data from hours 373 and 384, constrained takeout conveyor speed.

List of Tables
Table 1. Performance of identification methods on all 406 hours of data.
Table 2. Performance indices of identification methods on hours 60, 12, 136 and 325.
Table 3. Comparison of overall performance indices between old and new controller.
Figures

Fig. 1. Schematic overview of the industrial snack food process. [Flow sheet: raw materials and other ingredients → batch mixer → hopper → extruder → cutter → fryer → seasoning → packaging → snack chips.]
Fig. 2. Schematic overview of an industrial snack food fryer. [Inputs: U1 inlet oil temperature, U2 submerger speed, U3 takeout conveyor speed; outputs: Y1 moisture content, Y2 oil content; streams: raw slices in, fried chips out, makeup oil in.]
Fig. 3. Detrended open-loop PRBS test data (Aug. 1997). [Panels: moisture content, oil content, inlet oil temperature, submerger speed and takeout speed versus samples (T = 5 s).]
Fig. 4. Cross validation simulation (Aug. 1997). [Panels: moisture content and oil content, actual data versus model simulation (T = 5 s).]
Fig. 5. Step responses of the initial identified 1994 model and newly identified 1997 model. [Panels: moisture content and oil content responses to inlet oil temperature, submerger speed and takeout speed steps.]
Fig. 6. Sensor response to a line break. [Panels: moisture content and oil content, ignored versus used data.]
Fig. 7. Sensor and extruder response to a screen change. [Panels: moisture content and oil content (raw versus interpolated data) and extruder power.]
Fig. 8. CPM results of all available data; the horizontal dotted line at hour 274 marks the transition from the old controller to the new controller. [Panels: moisture content, oil content and the performance indices η1 (overall), η1,moist and η1,oil versus hours.]
Fig. 9. Controller performance monitoring results of Area 1 (hours 3-9). [Panels: moisture content, oil content, inputs (inlet oil temperature, submerger speed, takeout speed) and the per-hour indices η1, η1,moist and η1,oil.]
Fig. 10. Controller performance monitoring results of Area 2 (hours 42-48). [Panels as in Fig. 9, plus disturbances (fryer oil level, fryer delta temperature).]
Fig. 11. Controller performance monitoring results of Area 3 (hours 84-92). [Panels as in Fig. 9, plus extruder power.]
Fig. 12. Controller performance monitoring results of Area 4 (hours 172-179). [Panels as in Fig. 9, plus extruder power.]
Fig. 13. Controller performance monitoring results of Area 5 (hours 290-302). [Panels as in Fig. 9, plus extruder power.]
Fig. 14. Controller performance monitoring results of Area 6 (hours 304-311). [Panels as in Fig. 9, plus extruder power.]
Fig. 15. Controller performance monitoring results of Area 7 (hours 323-329). [Panels as in Fig. 9, plus extruder power.]
Fig. 16. Data used for comparison of different disturbance identification methods. [Panels: moisture content, oil content and inputs for hours 60, 12, 136 and 325.]
Fig. 17. Different disturbance models for data in hour 60. [Impulse response coefficients of Ymoist and Yoil with respect to a1 and a2 for the polynomial PEM, state space PEM, N4SID, CVA and AR models.]
Fig. 18. Different disturbance models for data in hour 12. [Same layout as Fig. 17.]
Fig. 19. Different disturbance models for data in hour 136. [Same layout as Fig. 17.]
Fig. 20. Different disturbance models for data in hour 325. [Same layout as Fig. 17.]
Fig. 21. Overall performance indices of FCOR and the conventional impulse response coefficients method.
Fig. 22. Data used to show the effect of model order on the performance indices. [Panels: moisture content, oil content and inputs for hours 11 and 12.]
Fig. 23. Effect of model order on the overall performance indices. [Left: polynomial ARMA PEM model order; right: AR model order; asterisks mark the selected model orders.]
Fig. 24. Effect of model and model order on the disturbance model (hour 12). [Impulse response coefficients for ARMA(2,2), AR(6) and AR(30) models.]
Fig. 25. CPM in the presence of deterministic disturbances and separation of stochastic and deterministic disturbances with the aid of wavelet de-noising (hours 149-154). [Panels: moisture content and oil content (Yt and de-noised Yd) with the mixed indices, inputs, and the residual (stochastic) parts with the stochastic indices.]
Fig. 26. Residuals and residual auto-correlations of the ARMA(2,2) model with 95%-confidence interval limits (dotted lines) for uncorrelated residuals, data from hours 152-153.
Fig. 27. Data from hours 373 and 384, constrained takeout conveyor speed. [Panels: moisture content, oil content and inputs, with the indices η1 → η1,C.]
Tables

Identification Method | Parameters as function of order | Unstable estimates | Average model order | No white residuals | Average CPU time*
Polynomial ARMA PEM | 8×o | 0 | 1.8 | 1 | 4.0**
State Space ARMA PEM | 8×o | 2 | 1.8 | 3 | 3.4***
Subspace N4SID | o²+4×o | 11 | 3.3 | 67 | 0.10
Subspace CVA | o²+4×o | 0 | 3.4 | 39 | 0.06
AR | 4×o | 0 | 3.5 | 0 | 0.03

Table 1. Performance of identification methods on all 406 hours of data.
* Time is measured on a standard PC. No attempt is made to optimize the methods and the numbers should be taken with a grain of salt.
** The slowest 5% of the identifications take 64% of the total time.
*** The slowest 5% of the identifications take 35% of the total time.
Hour | Index | Polynomial ARMA PEM | State Space ARMA PEM | Subspace N4SID | Subspace CVA | AR
60 | η1 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
60 | η1,moist | 0.99 | 0.98 | 0.99 | 0.99 | 1.00
60 | η1,oil | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
12 | η1 | 0.64 | 0.63 | 0.89 | 0.87 | 0.91
12 | η1,moist | 0.75 | 0.74 | 0.74 | 0.81 | 0.89
12 | η1,oil | 0.61 | 0.60 | 0.92 | 0.89 | 0.91
136 | η1 | 0.85 | 0.83 | 0.92 | 0.98 | 0.99
136 | η1,moist | 0.89 | 0.89 | 0.87 | 0.97 | 0.98
136 | η1,oil | 0.84 | 0.82 | 0.93 | 0.98 | 0.99
325 | η1 | 0.79 | 0.78 | 0.92 | 0.72 | 0.93
325 | η1,moist | 0.53 | 0.53 | 0.99 | 0.58 | 0.84
325 | η1,oil | 0.98 | 0.96 | 0.87 | 0.82 | 0.99

Table 2. Performance indices of identification methods on hours 60, 12, 136 and 325.
η1 | Old Controller (hours 1-274) | New Controller (hours 275-406)
All data | 0.80 | 0.89
Unconstrained data | 0.96 | 0.96
Removing hours 304-328 | 0.96 | 0.98
Removing hours 304-328 | 0.96 | 0.98

Table 3. Comparison of overall performance indices between old and new controller.