
Continuous Biomanufacturing: Innovative Technologies and Methods
Edited by Ganapathy Subramanian

Chapter 1
Continuous Bioprocess Development: Methods for Control and Characterization of the Biological System
Peter Neubauer and M. Nicolas Cruz-Bournazou
Technische Universität Berlin, Department of Biotechnology, Ackerstrasse 76, ACK 24, 13355 Berlin, Germany

1.1 Proposed Advantages of Continuous Bioprocessing

1.1.1 Introduction

The change from batch to continuous processing has driven process intensification in a number of industries, including steel casting, automobile manufacturing, petrochemicals, food, and pharmaceuticals. Aside from a significant increase in volumetric productivity, the advantages include reduced equipment size, steady-state operation, low cycle times, streamlined process flows, and reduced capital cost. In bioengineering, continuous processing is the standard in wastewater treatment, composting, and some bioenergy processes such as biogas and bioethanol fermentations. In contrast, most production processes run as batch-type operations, or more specifically as fed-batch processes, which are the major production technology today. Konstantinov and Cooney define a continuous process as follows: "A unit operation is continuous if it is capable of processing a continuous flow input for prolonged periods of time. A continuous unit operation has minimal internal hold volume. The output can be continuous or discretized in small packets produced in a cyclic manner." [1]. They also differentiate between fully continuous processes, with no or minimal hold volume in the process line, and hybrid processes that contain both batch and continuous unit operations.

The push toward continuous manufacturing technologies was initiated by the process analytical technology (PAT) initiative of the Food and Drug Administration (FDA) in 2002 and the published guidance for PAT in 2004 [2], which initially aimed at a better understanding of the connections between product quality and process conditions. This led to the development of quality by design (QbD), that is, the implementation of process analytical tools over the whole development pipeline, from early product screening through process development at laboratory scale and during scale-up. The need for a better understanding of the impact of process parameters on the critical quality attributes (CQA) of the respective product also increased the interest in the development and implementation of novel sensors and analytical tools. As a consequence, this better understanding of processes resulted in further process intensification and provided the instrumental basis to approach the challenges of continuous operation.

Aside from the FDA initiative, there are several drivers for the increasing interest in continuous processing, not only in the pharmaceutical industry but also in the industrial (white) biotech industry. On the one side, we see an increasing demand, and thus an increasing production scale, for industrial bioproducts (enzymes, small molecules, and the bioenergy market), together with pressure for reduced product costs and increased competition. Considering that production scales are steadily growing and that continuous processing could reduce scale by close to a factor of 10, plant sizes and the efficiency of bioprocesses could be improved significantly. On the other side, newly selected biocatalysts and their implementation in chemical synthesis for integrated chemoenzymatic processes (i.e., processes that combine chemical and enzymatic reactions) have to be competitive with existing chemical processes and need to be integrated into chemical production schemes. Here, continuous processes offer clear advantages.

In biopharma, for recombinant proteins, antibodies, highly complex proteins, recombinant enzymes, and blood factors, the efficiency of the cell factories and production systems has increased dramatically during the last decade. Opportunities for high cell density processes with higher volumetric product yield and quality, as well as the changing intellectual property situation caused by the expiry of many patents for important drugs, with novel commercial opportunities for biosimilars and biobetters, are strong drivers of competition, especially from emerging markets. In parallel, there is an increasing demand for establishing local production sites for defined regional markets rather than relying on single production sites. Strict cost calculations as a developmental driver demand smaller, effective, but also flexible production plants. This directs interest toward evaluating continuous bioprocessing to minimize investments in production facilities, and toward parallelization rather than larger scales. Parallelization would also be an advantage in processes with longer plant cycle times [3], for example, cell culture-based products. A good example of the significant decrease in operational and capital expenses achievable by changing from conventional to continuous bioprocessing, for the production of monoclonal antibodies (mAb) and other non-mAb processes, is given by Walther et al. [4].

However, despite the obvious opportunities of continuous processes, there are many challenges to solve, mainly the demand for fast realization and risk minimization. Currently, it seems easier to transfer a batch process into production than to start a new, longer, and more expensive development of a continuous process, even if the latter is expected to be more efficient. These scenarios show that there is a great need for strategic methods for the development of continuous process strategies, either for new products or to derive a continuous process from an existing batch-type process.
As early-phase product development can practically only be performed in batch mode, a key question in product development is how a batch strategy can be transferred to a continuous operation at large scale.


Specific challenges for continuous operations in the biotech industry, compared to other industries, are (i) the inherent tendency of a natural whole-cell biocatalyst, that is, a prokaryotic or eukaryotic cell, toward steady evolution of its genome, and (ii) the much higher amounts of catalyst that biotechnological processes generally need compared to the chemical industry. This biocatalyst has to be maintained by feeding; the feed is mostly the same substance as the original substrate of the biochemical reaction and thus competes with the yield. As a consequence, diffusion and mass transfer in the reactor not only affect the efficiency of the process but are also critical for long-term operation and for maintaining product quality.

Especially in view of the evolutionary adaptation, it is obvious that, in contrast to batch procedures, continuous operation cannot be set up by trial and error but needs a fundamental understanding of the process in kinetic terms to allow process control. This is a fundamental paradigm change in the bioprocessing industries, where most operations have included only a limited application of mathematical models, and where available sensors and means of process control are traditionally very limited. Although current examples of so-called continuous cell culture processes in the pharma area are only semicontinuous compared to, for example, biogas or wastewater bioprocessing, the extension of cell cultivation processes from days to months, for example, by perfusion technology, goes in the direction of continuous bioprocessing. However, also for these continuous bioprocess operations, a large amount of labor and time is needed to establish a new process as a continuous production system. As in traditional biotech, development is currently based mainly on wet-lab experiments rather than on a systematic developmental approach. This raises the question of whether this is the only way that has proven successful or whether a paradigm change in the application of technologies is needed to advance the field of continuous bioprocessing. In this chapter we discuss the methods and data that are needed to develop a continuous operation, with a focus on modeling approaches.

1.2 Special Challenges for Continuous Bioprocesses

1.2.1 The Biological System in Continuous Biomanufacturing

In process engineering, different reactor designs are typically used depending on the characteristics of the reaction system. It should be noted that continuous stirred tank reactors (CSTR) are known to have some disadvantages compared to plug flow reactors (PFR), which are typically used in the chemical industry, and compared to batch and fed-batch cultures; the most important is a lower product concentration due to constant dilution. In chemical processes, PFRs are typically preferred when high yields are required, and batches are chosen when low volumes of different products are to be produced. In addition, considering process development, continuous processes require higher investments and longer times at small scales. Hence, even in the chemical industry, many processes with characteristics similar to protein synthesis run in PFR or batch mode. Still, there are many process setups that allow a significant reduction in footprint, time, and costs by using continuous processes, and these should be considered in bioengineering.

Now let us focus on the processes that should run continuously in bioengineering. Continuous bioprocesses are characterized by a continuous addition of substrates and simultaneous harvest, such that the bioreactor volume stays approximately constant. If the product is released from the cell and accumulates in the culture broth, recycling by filtration units or centrifuges can be applied, and this separation can be supported by immobilization of the biocatalyst. Alternatively, the bioprocess can be performed as a solid-state fermentation, where the medium runs through a static matrix in which the biocatalyst is immobilized.

The biological system, which is the core component of a biotechnological production process, has many specific features that favor continuous production on the one side, but on the other side cause the specific challenges that have so far restricted continuous processing in the biotech industries. In contrast to chemistry, where continuous operation has become a standard, and where the change from batch to continuous operation has contributed to a significant drop in reactor sizes and investment costs and led to the modularization of production plants, bioprocesses are rather different. To understand the special challenges of continuous bioprocessing, it is important to analyze the differences between chemical and biocatalytic processes, which are different routes to the same outcome: the production of a chemical molecule.

A chemical reaction is characterized by a high yield of product from substrate using small amounts of catalyst (on the order of μg/l or mg/l). In contrast, in a bioprocess the catalyst is the (micro)organism, which has to be produced from the substrate at a very high concentration (in industry mostly 50–100 g/l). This is mostly done in the first phase of a process, the growth phase. The product is not produced until the second phase, the production phase, which may be equal in length to or even shorter than the growth phase. The product yield is low compared to single-reaction chemical processes: the highest concentrations of bioproducts are on the order of 100–200 g/l, or 10–100 g/l for more complex molecules, and for many processes the yield is even lower. Generally, the yield of product from glucose is significantly lower than 0.1 g/g, since most of the glucose goes into the biomass, that is, the production of the biocatalyst. If one assumes that, after biomass production, the cells could be used to produce the product for unlimited time (considering that the substrate required for maintenance is less than 10% of the amount needed for the growth phase), a significant increase of the product yield per substrate would be possible in a bioprocess. Furthermore, if turnaround times are avoided, such as harvesting of the reactor, cleaning, and preparation of the new batch including preparation of the starter cultures, time costs would also be reduced, further increasing process efficiency.

1.2.2 Inherent Changes in the Microbial System – Problem of Evolution

In contrast to continuous processes in chemical engineering, bioprocesses include a perpetual evolution process, that is, a genetic change of the biocatalyst. The rate of mutations has been extensively discussed and investigated, mainly in the context of how a continuous culture can be used for strain evolution. However, it needs to be stated that the problem of evolution is inherent to any bioprocess. A bioprocess always starts with a very low number of cells, which are multiplied during inoculum preparation and scale-up from flasks to the final-size bioreactor. For a large-scale bioreactor running at 10 m³ (which is small in view of most common microbial bioprocesses) and a final cell number of 1.2 × 10¹⁸ for a bacterial process (i.e., 1.2 × 10¹¹ cells/ml),¹ this corresponds to approximately 60 generations from a selected one-cell clone and still approximately 27 generations from an inoculum stock. If the mutation accumulation rates compiled in Ref. [5], namely 1 × 10⁻³ nucleotide substitutions per genome per generation for Escherichia coli [6] and 4 × 10⁻³ nucleotide substitutions per genome per generation for Saccharomyces cerevisiae [7], are applied to this large-scale cultivation case, and a homogeneous frozen stock culture is assumed (which is not the case), there would be at the end of the cultivation a probability of approximately 30 mutations per position of the nucleotide sequence of E. coli, and 0.07 mutations per position for the yeast S. cerevisiae. This is an incredible diversity, which is rarely reached in molecular evolution experiments and is still not exploited. These considerations may even underestimate the possible mutation frequency, as discussed in detail for chemostat environments by Ferenci [8].

Therefore, it is a great challenge in continuous cultivation to direct evolution so as to guarantee continuous production of a constant product with equal quality despite the steady evolution process. While this is largely ignored in industrial batch processes, evolution and selection of the fittest is a critical point in continuous processing. What can be learned from a large number of evolution experiments, especially from the extensive work of Lenski et al. [9], is that characteristic changes occur stepwise, that is, suddenly. Most importantly, fitness gain is steady, with continuing improvements and no upper end even in very long experiments over (tens of) thousands of generations. Natural evolution and steady fitness gain, with selection pressure on important cellular parameters such as affinity constants and maximum specific rates (as described in the model below) and their steady change by mutations [5], are a clear advantage if the continuous culture is aimed at degrading a compound that serves as a key substrate, as in waste degradation. However, the same process is a challenge for control in the production of biomolecules, where it is not possible to set a selection pressure toward the product.

¹ Assuming 1 g/l = 2 × 10¹² cells/l and a final dry cell weight of 60 g/l.
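The generation numbers quoted above follow from simple binary-fission arithmetic, and the mutational load can be estimated in the same back-of-the-envelope manner. The sketch below is illustrative only; the inoculum size and the interpretation of the substitution rates are assumptions made for this calculation, not values given in the chapter.

```python
import math

# Assumptions from the text and its footnote: 10 m^3 reactor, 60 g/l dry
# cell weight, 2e12 cells per gram -> 1.2e11 cells/ml final density.
final_cells = 1.2e11 * 10e6        # 10 m^3 = 1e7 ml -> 1.2e18 cells

# Generations by binary fission: N = N0 * 2**n  ->  n = log2(N / N0)
gen_single_clone = math.log2(final_cells)            # from one cell: ~60
inoculum = 1e10                    # hypothetical stock of ~1e10 cells
gen_inoculum = math.log2(final_cells / inoculum)     # from the stock: ~27

# Order-of-magnitude mutational load: roughly one division per final cell,
# times ~1e-3 substitutions per genome per generation for E. coli [6]
substitution_events = final_cells * 1e-3             # ~1.2e15 events

print(f"generations from a one-cell clone: {gen_single_clone:.0f}")
print(f"generations from the inoculum stock: {gen_inoculum:.0f}")
print(f"expected substitution events over the cultivation: {substitution_events:.1e}")
```

The per-position figures in the text require further assumptions about genome size and lineage structure; the sketch only reproduces the generation counts and the order of magnitude of the mutational diversity.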

1.2.3 Lack of Process Information

Historically, the low reproducibility, observability, controllability, and understanding of biotechnological processes have driven large-scale production toward an approach based on: (i) as fast as possible after inoculation, (ii) as fast as possible after induction, and (iii) as fragmented as possible, to avoid mixing of failed charges with high-purity product. For these reasons, fed-batch technology has become the most widely applied standard upstream production method at large scale. Consequently, all the processes taking place before (pretreatment and medium preparation) and after (downstream operations) were developed to meet the requirements of discrete production plants. Despite this success of the fed-batch technology, it is remarkable that continuous bioprocesses were established early [10], in parallel to the development of the fed-batch technology by Hayduck [11]. Today's advances in technology are paving the way for continuous systems, with improvements in PAT, higher automation of production, advanced control strategies, methods to direct and exploit the naturally occurring evolution process for higher productivity, and, most importantly, regulations that promote continuous processes. This may offer significant advantages by solving important challenges such as higher overall productivity, lower risk of infection, and smaller reaction vessels and plants.

1.2.3.1 Model-Based Process Development and Control for Continuous Processes

In bioprocess engineering, models are used for process design, monitoring, control, and optimization [12–16]. Bioprocess complexity and restrictions have driven design and control to demand accurate and robust models [17], triggering rapid development over the last years [18]. Advanced sensor techniques and fast computer processors enable the creation of very complex models processing enormous amounts of information [15,19]. Models are not only used to describe the behavior of living organisms but are also essential to map complex systems into smaller dimensions, and to obtain indirect measurements and observe nonobservable events when applied as software sensors [20], for example. The new regulations of the PAT initiative of the FDA and EMA show the importance that modeling applied to process monitoring and control is gaining in pharmaceutical and, in general, biotechnological processes [21]. Biological processes are characterized by [22–25]:

• the complexity of the biological processes taking place in the bioreactor
• lack of reproducibility
• monitoring of unstable intracellular compounds at very low concentrations
• extremely fast and sensitive reactions to environmental changes (offline measurements are inaccurate)
• mutations
• insufficient sensor technology (expensive and inaccurate sensors, highly invasive, difficult to calibrate)
• large time delays and low frequency of observations

1.2.3.2 Engineering Approach to Complex Systems

In chemical engineering, the implementation of methods to deal with large complex systems has a long history. Engineers have developed methods like hierarchical modeling, model reusability, and model inheritance. An extensive discussion of these methods and their application to the simulation of chemical plants is presented by Barton [12]. In biological systems, the modularization of separate instances of the system is not always possible. In traditional process engineering, a pump can be modeled in modular form, added to the flow sheet of the plant, and reused as many times as needed [26]. In contrast, biological systems tend to show different behavior under in vitro conditions compared to their in vivo state [27]. Still, some approaches attempt to analyze and model biological systems with methods taken from engineering [28,29]. Kitano [30,31] emphasizes that the only possibility to understand living organisms is to consider the system as a whole. Identifying genes and proteins is only the first step; real understanding can only be achieved by uncovering the structure and dynamics of the system. Kitano states the following four key properties [30]:

• System structure: system structure identification refers to understanding both the topological relationship of the network components and the parameters of each relation.
• System dynamics: system behavior analysis suggests the application of standardized techniques such as sensitivity, stability, stiffness, and bifurcation analysis.
• The control method: system control is concerned with establishing methods to control the state of biological systems.
• The design method: system design is the effort to establish new technologies to design biological systems aimed at specific goals, for example, organ cloning techniques.

For these reasons, two things are necessary in order to control a biological system and comply with the strict food and pharmaceutical regulations: to observe, or at least deduce, the state of critical quantities, and to predict to some extent the behavior of the system. The first issue is tackled using state and parameter estimation methods [32–36] in an effort to infer the conditions of the process from the information that is available. Second, the evolution of the system over time, as well as its response to control actions, characterized by nonlinear dynamics, can be predicted using mathematical models.

1.2.4 Limited Control Strategies

1.2.4.1 Traditional Control Strategies for Continuous Cultures

As the chemostat works at a preset dilution rate without any feedback control, it cannot stabilize a process that runs close to the maximum specific growth rate. Although the chemostat is the most used continuous culture technique in research, it is rarely applied in industry. In this context, other processes with a feedback control loop have been developed, such as the turbidostat and the pH auxostat.

The turbidostat (see Ref. [37] for an excellent early review) is a continuous process in which the feed rate is controlled by online measurement of the turbidity, mostly in the outlet stream. By maximizing the flow rate at a high biomass concentration, the turbidostat can operate the process at μmax and, at the same time, avoid washout of the biomass. Therefore, it is generally applied in processes where the flow rate should be maximized while maintaining the cells in the system, for example, in processes aiming at the degradation of toxic compounds in wastewater treatment, or directly for the production of biomass (single cell protein) or other growth-related molecules. The turbidostat has also been a powerful tool for the selection of faster growing strains, that is, in natural selection, due to its permanent adaptation of the flow rate [38–40]. The necessity of a continuous turbidity measurement limits the turbidostat to systems with no biofilm formation on the walls.

The pH auxostat (pH stat), which uses the pH as the control variable, is therefore more robust, but also more sophisticated in terms of medium design. In the pH auxostat, the flow rate is set by the pH controller through the feeding of fresh medium to keep the pH constant. Thus, the pH auxostat can be used only for processes where biomass growth is closely correlated with changes in pH [41]. Early pH auxostats [42,43] were typically applied to microbial processes with acidifying products, for example, in dairy fermentations [44] or similar anaerobic processes. Such processes needed special pretreatment of the fed medium, which had to be preadjusted to a certain pH; the biomass concentration in the reactor thus depended on the pH difference between the feed solution and the fermenter broth as well as on the buffer capacity of the medium, and normally high cell densities were not obtained [42]. The theoretical solution of the performance of a pH auxostat with two inlet flows, medium and a pH-controlling agent, by Larsson et al. [45] made the pH auxostat easier to handle and applicable as a tool also for aerobic processes. The kinetic model contains an extra function that calculates the hydrogen ions in the added base, and considers that the added new medium has the same pH as the set point for the pH in the bioreactor. The process is controlled by the ratio of the two inlet streams, which can be easily set. This inlet flow ratio is a parameter that provides a reliable tool for process optimization. This principle was adapted to processes with an ammonia (NH4+) feed, for which the H+ concentration correlates well with the cellular uptake of NH3 and the yield coefficient for hydrogen ions on substrate is constant, as for S. cerevisiae [41].

A third principle for the control of a continuous bioprocess, the nutristat, which aims at maintaining the substrate concentration at a certain level, has found less application. If online methods for determining the substrate are available, the nutristat is well suited to run a process at high growth rates that are still below μmax. This is a clear advantage over the pH auxostat or turbidostat. However, an interesting study by Rice and Hempfling [46] shows that even a pH stat can run stably at different concentrations of the growth-limiting substrate, that is, at different specific growth rates, by variation of the substrate concentration in the feed solution or by changing its buffering capacity. The nutristat is clearly useful for the degradation of waste compounds, as shown, for example, in Refs [47,48].
When glucose is used as substrate, the nutristat can be applied [48]; however, due to the low KS value and the high specific substrate uptake rate, proper control below μmax becomes challenging, especially at high cell densities.

A special solution for culture processes with production of secreted products, such as monoclonal antibodies, at low or even zero growth rates is cell recycling. The theory of continuous cultures with cell recycling was developed and experimentally proven in Ref. [49], based on the idea of continuous culture with cell recycling [50]. While in these processes the theory of the chemostat (see above) remains valid with the cell recycling term (which can range from 0 to 1), most production processes rely on higher growth rates, as the metabolic activities at low growth rates are low and the energy supply goes mainly into maintenance. However, cell recycling (or cell retention, retentiostat, perfusion culture) is a practical method to increase degradation capabilities, for example, in waste treatment, or to maintain organisms for which even μmax is very low. The method of perfusion culture is widely and very successfully applied in mammalian cell culture and similar processes with low growth rates, where, for example, the production of monoclonal antibodies can be stably maintained for long times (see Chapter 7 of this volume). Cell retention in these systems today is mainly achieved by immobilization of the cells, by filtration, for example, alternating tangential flow filtration, or by centrifugation [51,52].
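The feedback principle shared by the turbidostat and the auxostats described above can be made concrete with a short simulation. The sketch below adjusts the dilution rate from a noisy turbidity signal so that the culture settles near μmax; the Monod parameters, the noise level, and the simple PI control law are hypothetical assumptions for illustration, not values from the chapter.

```python
import numpy as np

# Turbidostat-style feedback sketch (all parameters are hypothetical):
# the dilution rate D is adjusted from a noisy turbidity (biomass) signal
# so that the culture is held at a setpoint close to washout, near mu_max.
mu_max, Ks, Y_xs = 0.7, 0.05, 0.5   # 1/h, g/l, g/g (assumed Monod kinetics)
S_in, X_set = 10.0, 4.0             # feed substrate (g/l), biomass setpoint (g/l)

def simulate(hours=48.0, dt=0.01, k_p=0.3, k_i=0.2, seed=1):
    X, S = 1.0, 5.0                 # initial biomass and substrate (g/l)
    integral = 0.1                  # integral part of the controller output
    rng = np.random.default_rng(seed)
    for _ in range(int(hours / dt)):
        mu = mu_max * S / (Ks + S)
        X_meas = X * (1 + 0.01 * rng.standard_normal())  # noisy turbidity signal
        err = X_meas - X_set
        integral += k_i * err * dt
        D = max(0.0, integral + k_p * err)  # PI law: raise D when X > setpoint
        X += (mu - D) * X * dt              # chemostat mass balances (Euler step)
        S += (D * (S_in - S) - mu * X / Y_xs) * dt
        S = max(S, 0.0)
    return X, S, D

X, S, D = simulate()
print(f"after 48 h: X = {X:.2f} g/l, S = {S:.2f} g/l, D = {D:.3f} 1/h "
      f"(mu_max = {mu_max} 1/h)")
```

At the setpoint, the controller drives D toward the prevailing specific growth rate, which for a high biomass setpoint lies close to μmax, reproducing the turbidostat's characteristic operating point.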

1.3 Changes Required to Integrate Continuous Processes in Biotech

1.3.1 A Better Physiological Understanding of the Organisms and Their Responses to the Reactor Environment

1.3.1.1 Model Complexity

The biggest challenge for modeling is to develop a general and systematic approach to find the simplest way to describe complex systems with the strictly required accuracy. Model simplification becomes more important with the increasing complexity of the bioprocesses analyzed in research. The complexity of biological processes makes it very difficult to fully describe cultivations with a computer model. As one example, consider the phenotype of a microbial cell, determined by >30 million macromolecules and >1000 species of small organic molecules, fine-tuned in composition and number to the comprehensive set of its environmental factors [53].

In order to achieve a robust and efficient continuous process, close monitoring of the system and tight control are required. Due to the complexity of microbial behavior, standard feedback control methods are not adequate, and more advanced methods are required. Process control has a long tradition in the development of model-based techniques for monitoring (e.g., soft sensors, observers, moving horizon estimators) and control (e.g., model predictive control, adaptive control). These methods rely on a mathematical model that fulfills the following specifications:

• ability to accurately describe the dynamic behavior of the system
• identifiability
• tractability


This requires a close interaction between control experts and bioengineers in order to develop control systems that assure a safe process and fulfill quality regulations with high reliability.

1.3.1.2 Models

A model is always an imperfect mathematical representation of a physical system. Lack of accurate knowledge of the process to be modeled, insufficient measurement techniques, and extensive computation times hinder an exact representation of the phenomena to be described [54]. Nevertheless, models are widely used in science, and their contribution to a better understanding of engineering processes and their proper design, optimization, and control is beyond question. Computer-aided tools using model-based methods allow optimal design and operation of plants, reducing energy consumption, hazards, and environmental impact, while allowing better monitoring and control [12]. From this it can be deduced that the best model to describe a certain process is not necessarily the most accurate one, but the one that describes only the relevant aspects of the system so as to obtain a good description with minimal effort [55].

Modeling includes a wide range of tools [56], such as principal component analysis (PCA), partial least squares (PLS) [57], nonlinear models like neural networks [58], and multivariate statistics [57]. Roughly speaking, these methods search for correlations in the data in order to reduce the dimension of the data set [59]. More advanced methods of knowledge discovery in data (KDD), like data mining [60], have also been developed for the treatment of large data sets and are applied in bioinformatics. These methods study the data characteristics to find new relations between variables and create black-box models that describe them; by these means it is possible to look through high-dimensional data and detect the most important characteristics of the system [64–66]. Still, generally speaking, first-principles modeling (white box) is the preferred approach to describe a complex system when mechanistic understanding (mass balances, thermodynamics, kinetics, etc.) is at hand [25,61–63]. In contrast to black-box models, mechanistic models are based on physical knowledge of the system to be described. In engineering, for example, rigorous modeling includes mass and energy balances, detailed reaction pathways, and so on.

Models are the core of computer-aided process engineering (CAPE) [67] and computer-aided biology (CAB). The quality of every work on simulation, optimization, design, and model-based control depends on the characteristics of the model. In engineering, models are not only used to describe the behavior of systems but are also essential to map complex systems into smaller dimensions more comprehensible to humans. Finally, they also serve to obtain indirect measurements of states or parameters of interest with software sensors [20], for example. Software sensors substitute measurements that are not possible due to physical limitations with models that predict the behavior of the nonmeasurable variable based on indirect measurements. Whenever a state of the system is to be determined, observers can be applied [35,36], or the Kalman filter [33] with its variations [34]. In systems biology, various methods exist that aim at an adequate description of the dynamics of living organisms by studying their gene regulatory networks [68].
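As a minimal illustration of the dimension-reduction idea behind methods such as PCA mentioned above, the following sketch projects six correlated process signals onto their principal components; the synthetic data and all numbers are hypothetical.

```python
import numpy as np

# Synthetic "process data": 200 observations of 6 correlated signals
# (think OD, base consumption, CO2, DO, feed rate, temperature), driven
# by only two hidden process states -- purely illustrative.
rng = np.random.default_rng(0)
latent = rng.standard_normal((200, 2))            # two hidden states
loadings = rng.standard_normal((2, 6))
data = latent @ loadings + 0.1 * rng.standard_normal((200, 6))

# PCA via SVD on the mean-centered data matrix
centered = data - data.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)                   # variance per component
scores = centered @ Vt.T[:, :2]                   # projection on first 2 PCs

print("variance explained per component:", np.round(explained, 3))
print("score matrix shape:", scores.shape)        # 200 observations x 2 PCs
```

Nearly all variance is captured by the first two components, showing how the apparent six-dimensional data set collapses onto the low-dimensional structure such methods search for.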


Nevertheless, differential equation systems remain the standard modeling method in engineering. Systems of ordinary differential equations (ODE) have been widely applied for the description of gene regulatory networks. Usually, the system comprises rate equations of the form

$$\frac{dx_i}{dt} = f_i(x, u), \qquad (1.1)$$

where x is the vector of concentrations of proteins, mRNAs, or other molecules, u is the vector of inputs, and f_i is a nonlinear function. Time delays can be added if necessary. Typical types of equations used are Monod-type, switching, Heaviside, and logoid functions, among others. An important advantage of nonlinear ODEs is the possibility to describe multiple steady states and oscillations in the system [69]. Besides the requirement of testing the global convergence of the optimal solution, the bottleneck is still the limited state information on the parameter set, which creates identifiability problems. Nevertheless, many successful applications have been published showing the ability of ODEs to describe gene regulatory networks [70]. Although gene regulatory network models are not applicable at industrial scale today, it can be expected that the systematic conversion of complex gene regulatory network models into simple, tractable models will become possible in the near future.

Model complexity, however, is closely related to instability, overparameterization, parameter correlation, and low parameter identifiability [71]. The effort required to develop and fit a model has to be justified by its application. It is useless to apply computational fluid dynamics (CFD) to the simulation of a 1 l reactor knowing that the concentration gradients can be neglected. On the other hand, simulating a reaction in a 10 000 l tank without considering mass transfer limitations may yield results far from reality. Summarizing, the key dynamics of a system need to be identified, isolated, and analyzed before any model is built. Currently, limitations are mainly due to the scarcity of measurement possibilities, but also to the insufficiency of adequate mathematical tools.

1.3.2 Model-Based Process Monitoring

A key task in process control is to monitor the critical states and parameters of the process in order to secure proper operating conditions and the desired quality even under perturbations and model mismatch [72,73]. Unfortunately, bioprocesses are characterized by a low information content caused by low concentrations, complex media, and the lack of noninvasive online sensors that can measure intracellular concentrations [74]. To overcome these problems, model-based methods to infer the conditions of the process can be applied. There is a long list of methods and applications for online state estimation [32], including state observers, the classical Kalman filters [33] with their variations [34], and nonlinear observers [35,36]. Some authors use the expression software sensor or soft sensor [75,76] on account of the fact that a software- or computer-based calculation of a nonmeasurable variable, which is not always a state variable (like the respiration coefficient), provides more information than the initial variable that can be directly measured.
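A minimal software-sensor sketch in this spirit is shown below: a nonmeasurable biomass concentration is inferred from a measurable oxygen uptake rate (OUR) through the assumed relation OUR = qO·X. The constant specific uptake rate, the noise level, and the smoothing filter are illustrative assumptions; a real soft sensor would typically rest on an observer or Kalman filter as cited above.

```python
import numpy as np

# Software-sensor sketch: infer biomass X (not measurable online) from the
# oxygen uptake rate OUR (measurable from a gas balance), assuming a known,
# constant specific uptake rate qO -- all numbers are hypothetical.
q_O = 0.4          # g O2 / (g biomass h), assumed constant

def biomass_softsensor(our_measurements, alpha=0.3):
    """Estimate biomass from noisy OUR values with exponential smoothing."""
    x_hat, estimates = None, []
    for our in our_measurements:
        x_raw = our / q_O                  # static inversion of OUR = qO * X
        x_hat = x_raw if x_hat is None else alpha * x_raw + (1 - alpha) * x_hat
        estimates.append(x_hat)
    return np.array(estimates)

# Synthetic test: exponential growth at mu = 0.2 1/h plus 5% sensor noise
rng = np.random.default_rng(2)
t = np.arange(0, 10, 0.25)
X_true = 1.0 * np.exp(0.2 * t)
our = q_O * X_true * (1 + 0.05 * rng.standard_normal(t.size))
X_est = biomass_softsensor(our)
print(f"final biomass: true {X_true[-1]:.2f} g/l, estimated {X_est[-1]:.2f} g/l")
```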


1.3.3 Implementation of Model Predictive Control

1.3.3.1 Model-Based Control

Advances in computer capacity, sensor technology, and a better understanding of the biological system are giving rise to very successful applications of advanced control strategies in continuous processes [77,78]. Different approaches have been developed and successfully applied to continuous processes. Some representative examples are classical approaches to various control strategies [79–84] and state feedback control strategies [85,86]. Additionally, efforts to exploit the generated data using neural networks [64,87,88], without the need for a thorough understanding of the system, have been demonstrated. In an effort to simplify the control strategy, fuzzy logic controllers [89] have also been used in bioreactors. Furthermore, advanced methods using model predictive controllers [64–66], and even methods based on population models [22], can be found in the literature.

An interesting approach to overcome the limitations of existing models is to perform recursive estimation of their parameters. By these means, models can also be used for processes that change over time. Demonstrations of the potential of these methods include adaptive control techniques [90–92]. Finally, investigations of complex formulations for optimal control show the controllability potential of biological systems if simplified mathematical descriptions succeed in predicting the dynamics of the process [93,94].

In general, the theory of monitoring and control is applied under the assumption that the system to be described is time invariant and properly described by the model. Nevertheless, it is possible to adapt the observer to changing system behavior by estimating the model parameters together with the states, in order to adapt the model to changes in the system. This is called adaptive control [95] and has proven very effective for many applications, including bioreactors [90]. Adaptive control can be used to overcome structural deficiencies of the model as well as uncertainty in the parameters. An important drawback is still the need for more information in order to find accurate estimates of both the parameters and the states. Methods for moving horizon estimation can be used to increase the robustness of the parameter estimates if sufficient computer capacity is at hand [96]. These methods are especially relevant in biotechnology applications, since changes in the system (e.g., between cultivations, mutants, or over time) can be tracked by small variations in the parameters without the need to change the model structure.
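The recursive parameter estimation mentioned above can be sketched with a scalar recursive least squares filter that tracks a drifting specific growth rate from noisy biomass measurements; the forgetting factor and all numbers are illustrative assumptions, not values from the chapter.

```python
import numpy as np

# Recursive least squares (RLS) with a forgetting factor: track a drifting
# specific growth rate mu(t) from noisy biomass measurements.
def rls_mu(times, X_meas, lam=0.95):
    theta, P = 0.3, 1.0        # initial mu guess and its "covariance"
    estimates = []
    for k in range(1, len(times)):
        dt = times[k] - times[k - 1]
        # model: ln X_k - ln X_{k-1} = mu * dt  -> linear regression in mu
        phi, y = dt, np.log(X_meas[k] / X_meas[k - 1])
        K = P * phi / (lam + phi * P * phi)
        theta += K * (y - phi * theta)     # update the mu estimate
        P = (P - K * phi * P) / lam        # forgetting keeps the filter adaptive
        estimates.append(theta)
    return np.array(estimates)

rng = np.random.default_rng(3)
t = np.arange(0, 20, 0.5)
mu_true = np.where(t < 10, 0.30, 0.20)     # growth rate drops at t = 10 h
X = 0.5 * np.exp(np.cumsum(mu_true * np.gradient(t)))
X *= (1 + 0.02 * rng.standard_normal(t.size))
print("estimated mu at t = 20 h:", round(rls_mu(t, X)[-1], 3))   # ~0.20
```

The forgetting factor lam < 1 discounts old data, which is exactly what allows the estimate to follow slow changes in the system without changing the model structure.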

1.4 Role of Iterative Process Development to Push Continuous Processes in Biotech

1.4.1 Methods for Development of Continuous Processes

In general, bioprocess development suffers from significantly longer times and higher costs compared to other industries [97,98]. Additionally, the development of continuous processes follows different strategies than that of batch processes. While it is advantageous to implement the cultivation strategy already at the screening or product development stage, it is difficult to implement continuous strategies in the early phases. Thus, the successful development of continuous bioprocesses depends, to a larger extent than batch bioprocess development, on a comprehensive understanding of the biological system. While batch processes have traditionally been developed by trial and error, this is not possible for continuous processes, so the implementation of robust process control strategies is an important basis.

A key parameter for any bioprocess is the specific product formation rate qp. This rate is closely connected to the specific growth rate, but the correlation differs from product to product. In many cases there is no linear dependency; instead, qp has an optimum below μmax. Thus, to run a process with high productivity, a major task in continuous bioprocess development is to find this correlation between μ and qp. Traditionally, this is done in chemostats, which is a lengthy procedure, as a steady state must be established for each dilution rate, which takes about four to seven reactor volume exchanges. While in scientific investigations with chemostats several steady states are generally established in a series starting from an initial batch process, such consecutive long-term cultivation can lead to the selection of mutants and thus has to be performed with good controls, for example, by returning to the original dilution rate at the end of an experiment.

While systems for the parallel operation of continuous bioreactors would be very interesting for process development, the setup of such systems is a technical challenge. Parallel chemostats can minimize the problem of evolutionary selection by running experiments with different dilution rates in parallel. However, parallel experiments benefit from miniaturization, especially when large liquid volumes are handled, as in the case of chemostats. Although miniaturization has made great progress in discontinuous cultivation technologies, it is difficult to achieve at the milliliter and submilliliter scale when well-defined and controlled dilution rates must be guaranteed over long time intervals.

In the past, a number of approaches have been realized for parallel continuous mini- and microbioreactors. Balagadde et al. [99] developed a microchip-based circulating loop bioreactor with a segment-wise sterilization option to avoid biofilm formation and demonstrated it for cultivations of E. coli. The authors observed oscillations in the cell number; no other online parameters were measured, and the feed control, steady states, and so on were not characterized. The system was developed to investigate evolution, and it would probably not be applicable in its current form for process development. Nanchen et al. [100] used a 10 ml parallel continuous culture system in 17 ml Hungate tubes for 13C labeling and metabolic flux analysis. Aeration at 2 vvm was also used to mix the system, feeding was done by a peristaltic pump, pH was measured in the outlet stream, and DO by a microelectrode. The system was characterized with steady states obtained over five volume changes and compared against stirred tank reactor cultivations. The system is very simple to establish and is also very valuable for parameter estimation in continuous process development. A shaken system that can be easily parallelized was developed by Akgün et al. [101]. The authors developed a continuous bioreactor based on shake flasks with a controlled feed, an overflow outlet channel at the side of the flask, and top-phase aeration. While this system was extensively characterized with respect to filling volume constancy and other technical parameters and applied to a continuous culture of S. cerevisiae, it has apparently not been applied to other studies so far. Regarding real microbioreactors, a modification of a commercial 48-bioreactor system (2mag, Munich, Germany) was recently published by Schmideder et al. [102], showing first promising results for the feasibility of transferring this system to a continuous bioreactor. However, so far only eight of the bioreactors were connected, and the determination of key parameters, such as the KS value, had a relatively high error. While continuous cultivation over seven volume exchanges was possible, the presented data do not allow a deeper evaluation of the quality of the steady states, which has been a problem even in larger continuous cultivations.

A faster and more elegant method for obtaining the necessary cellular parameters and characteristics with lower effort compared to parallel chemostats is the A-stat technology [103]. With this technology, it is possible to scan the whole growth rate space of an organism in a single experiment [104], either by continuously increasing (accelerostat) or decreasing (decelerostat) the dilution rate in a way that keeps the culture always in a quasi steady state. The technology has even wider use, as the concept can also be applied to the continuous change of parameters other than the feed rate of the limiting substrate; therefore, the term changestat was introduced [104]. This technology of the gradiostat was applied by various authors, but only Vilu et al. developed its scientific basis and showed its strength for data collection over the whole growth range (see Chapter 9 of this volume).

1.4.1.1 Alternative: Fed-Batch as a System to Simulate Quasi Steady-State Conditions

The strength of the A-stat technology, namely screening the whole growth space in a single experiment starting from either a high or a low specific growth rate, can in principle also be achieved with fed-batch technology. Here, in a similar way as in the A-stat, one can realize a feeding rate that leads to a continuous gradual change of the specific growth rate of the system. In contrast to a true continuous culture, however, the fed-batch is easier to apply in high-throughput approaches, as standard (micro)reactors can be used. Instead of having an outgoing stream, one can simply accept the typical volume change of a fed-batch, which depends on the substrate concentration in the feed solution and on the feed rate.

Although controlled feeding of miniaturized cultures is still a challenge, first solutions are available that provide an interesting technological basis, such as 48-well microplate-based real feeding systems with integrated bottom channels and a hydrogel filling the capillaries [105], or even the integration of micropumps [106]. Alternative, easier-to-use options are systems with internal release of the substrate, either from silicone, as in the feed-bead technology [107], or the Feed plate® (PS Biotech, Aachen, Germany). While these systems rely simply on the diffusion of the substrate into the medium, the EnBase® technology [108] (for a comprehensive review see Ref. [109]), which is based on the biocatalytic release of glucose from a glucose polymer, allows the feed rate to be varied easily via the amount of added biocatalyst. Thus, with this technology it is easy to screen a wide range of growth rates in microplates or parallel shake flasks [108,110]. Recently, this technology was applied with the aim of finding the optimal specific growth rate for the specific production rate of a secreted heterologous enzyme in the yeast S. cerevisiae. The same optimum was found as with the A-stat technology, but in a much shorter time [111].

As discussed above, the big challenge for the future is to combine model-based and experimental approaches for continuous bioprocess development. As models in this direction must provide knowledge about the system, mechanistic models rather than black-box approaches are needed. Therefore, it is necessary to develop design of experiments (DoE) approaches that allow the fast estimation of the parameters of nonlinear models. Traditionally, in other disciplines, this has been done with online optimal experimental design (OED). In a recent study, we applied this approach to identify the parameters of a dynamic model of E. coli cultivations [112]. The application of the fed-batch approach with model-based DoE and sequential reoptimization of the model by a sliding-window approach succeeded in identifying the model parameters in a single parallel one-day experiment, which shows how process development can benefit from model-based and automation approaches.

1.4.2 Mimicking Industrial Scale Conditions in the Lab: Continuous-Like Experiments

1.4.2.1 A Simple Model for Continuous Processes

The challenges of continuous processes, and their comparison against batch or fed-batch, can be better understood using a simplified description of the dynamic system. The continuous process is one of the three main cultivation technologies, together with batch and fed-batch. Other extensions, such as sequencing batch and fed-batch, also exist but will be covered later. The main difference between these three setups lies in the inflow and outflow streams. We can use a simple generalized dynamic model to describe the reactor, Eqs (1.2)–(1.5). The reader is referred to Refs [25,113] for a more detailed description of the system of equations.

The simplest system of a continuous process is the chemostat, which is characterized by a continuous flow of incoming medium at a rate that makes one nutrient component limiting in the bioreactor. As the rates of inflow and outflow are equal, the volume in a chemostat is constant. Through the limiting component, the specific growth rate can be controlled simply by the pump speed. The experimental system and kinetic basis of the chemostat were developed in the groundbreaking papers by Novick and Szilard [114] and Monod [115], and further theoretically refined by Powell [116]. The chemostat applies the concept of metabolic control through substrate limitation to continuous processing. By this, these authors set the basis for the wider application of the chemostat, providing a detailed procedure for control long before this was done for the fed-batch by Pirt and Kurowski [49].


A number of assumptions are necessary, the most important being: (i) species and conditions that are not described by the model (temperature, pH, trace elements, etc.) are constant, (ii) ideal mixing, and (iii) monoculture. The behavior of the process can be approximately described by the following set of equations:

$$\frac{dS_j}{dt} = \frac{F_{in}}{V}\left(S_{j,in} - S_j\right) - q_{S_j} X, \qquad (1.2)$$

$$\frac{dX}{dt} = \frac{F_{in}}{V}\left(X_{in} - X\right) - \frac{F_{out}}{V}\left(\delta X - X\right) + \mu X, \qquad (1.3)$$

$$\frac{dO_d}{dt} = K_L a \left(O_d^* - O_d\right) - q_O X H, \qquad (1.4)$$

$$\frac{dV}{dt} = F_{in} - F_{out}, \qquad (1.5)$$

with $S_j$ being the concentrations of the $j = 1 \ldots N$ soluble components (substrates or products) in the medium in g/l ($q_{S_j} > 0$ for substrate uptake and $q_{S_j} < 0$ for product secretion), $X$ the cell dry weight of the organisms in g/l, $O_d$ the oxygen dissolved in the medium in % of saturation, $V$ the volume of medium in the reactor in l, and $F$ a flow stream of the reactor in l/h. The subindices $in$ and $out$ represent the inlet and outlet streams, respectively; $q_S$ and $q_O$ are the uptake rates of soluble components and oxygen, respectively, in g/(g h); $K_L a$ is the volumetric oxygen mass transfer coefficient; $O_d^*$ is the saturation concentration of oxygen in the medium in %; $H$ is the Henry-related coefficient; and $\delta$ is the dimensionless cell retention factor realized by membrane or perfusion systems (1 for no retention and 0 for complete recirculation).

This simple model allows us to analyze the basic characteristics and differences of the discontinuous and continuous cultivation processes. In batch, $F_{in} = F_{out} = 0$; in fed-batch, $F_{in} > 0$ and $F_{out} = 0$; and in continuous mode, $F = F_{in} = F_{out} > 0$, with $D = F/V$ being the dilution rate.

1.4.2.2 Continuous-Like Fed-Batch Cultivations

It is worth stressing that, in systems with no recirculation ($\delta = 1$), $F_{out}$ enters only the volume equation (1.5), so that equilibrium in all other states can also be reached in the fed-batch setup, assuming of course an infinitely large vessel. In a continuous process, the growth rate can easily be obtained by solving Eqs (1.2) and (1.3) at steady state, considering that the concentrations of biomass and products in the inlet are zero. Reformulating Eq. (1.3) gives

$$\frac{dX}{dt} = 0 = \left(-\frac{F}{V}\delta + \mu\right) X,$$

and solving it yields

$$\mu = \frac{F}{V}\delta = D\delta. \qquad (1.6)$$

For the biomass, with $S_1$ being the substrate and $q_{S_1} < 0$ here denoting the specific substrate uptake rate,

$$0 = \frac{dS_1}{dt} = \frac{F_{in}}{V}\left(S_{1,in} - S_1\right) + q_{S_1} X = D\left(S_{1,in} - S_1\right) + q_{S_1} X,$$

$$X = -\frac{D\left(S_{1,in} - S_1\right)}{q_{S_1}} \approx -\frac{D\,S_{1,in}}{q_{S_1}}, \quad \text{considering } S_{1,in} - S_1 \approx S_{1,in}. \qquad (1.7)$$

If we now consider $q_{S_1} = -\mu/Y_{X/S}$ and $\mu = D\delta$, we obtain

$$X = \frac{S_{1,in}\, Y_{X/S}}{\delta}, \qquad (1.8)$$

and for the product $S_2$ or $P$, with $q_P > 0$ being the production rate,

$$0 = \frac{dP}{dt} = -\frac{F_{in}}{V} P + q_P X = -DP + q_P X,$$

$$q_P = D\,\frac{P}{X}, \quad \text{or} \quad P = \frac{q_P X}{D}. \qquad (1.9)$$
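A minimal numerical sketch of Eqs (1.2)–(1.5) for a single substrate in chemostat mode ($\delta = 1$, $F_{in} = F_{out}$, constant $V$), which also checks the steady-state relations (1.6) and (1.8), is given below. Monod kinetics and all parameter values are illustrative assumptions, not values from the chapter.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (hypothetical, not from the chapter)
mu_max, Ks, Y_xs = 0.6, 0.1, 0.5      # 1/h, g/l, g/g
S_in, D, delta = 20.0, 0.3, 1.0       # feed (g/l), dilution rate (1/h), retention
kLa, O_sat, q_O, H = 100.0, 100.0, 0.5, 1.0   # gas transfer terms of Eq. (1.4)

def rhs(t, y):
    S, X, Od = y
    mu = mu_max * S / (Ks + S)          # Monod growth kinetics (assumed)
    qS = mu / Y_xs                      # uptake rate, qS > 0 as in Eq. (1.2)
    dS = D * (S_in - S) - qS * X                  # Eq. (1.2) with F_in/V = D
    dX = -D * (delta * X - X) - D * X + mu * X    # Eq. (1.3) with X_in = 0
    dOd = kLa * (O_sat - Od) - q_O * X * H        # Eq. (1.4)
    return [dS, dX, dOd]                # Eq. (1.5): dV/dt = 0 as F_in = F_out

sol = solve_ivp(rhs, (0, 50), [S_in, 0.1, O_sat], method="LSODA", rtol=1e-8)
S, X, Od = sol.y[:, -1]
print(f"steady state: S = {S:.3f} g/l, X = {X:.2f} g/l, DO = {Od:.2f} %")
print(f"Eq. (1.6) check: mu = {mu_max * S / (Ks + S):.3f} vs D*delta = {D * delta}")
print(f"Eq. (1.8) check: X = {X:.2f} vs S_in*Y/delta = {S_in * Y_xs / delta:.2f}")
```

The simulated steady state reproduces $\mu = D\delta$ exactly and $X \approx S_{1,in} Y_{X/S}/\delta$ up to the approximation $S_{1,in} - S_1 \approx S_{1,in}$ used in Eq. (1.7).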

Figure 1.1 depicts the growth rate of an E. coli cultivation for different levels of constant feeding. Even at large feeding differences, the change in biomass concentration drives the process toward a similar growth rate.

[Figure 1.1: simulation of an E. coli model; two panels show cell dry weight X (g/l) and growth rate μ (1/h) over time for several constant feeding profiles. Caption: Effect of constant feeding profiles on biomass and growth rate.]

[Figure 1.2: simulation of an E. coli model; two panels show cell dry weight X (g/l) and growth rate μ (1/h) over time for several exponential feeding profiles. Caption: Effect of exponential feeding profiles on biomass and growth rate in a fed-batch process.]

If we apply an exponential feeding profile, as in Figure 1.2, the dilution rate remains constant. By this, the growth rate, which is directly related to D, can be held constant so as to investigate the effect of continuous culture conditions on the organisms. Such exponential feeding experiments recreate conditions very similar to continuous cultures while drastically reducing material costs (experimental setup and volumes) and experimental time.
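The exponential feed profile used in such experiments can be stated explicitly. The following is the standard fed-batch design relation for substrate-limited growth (a textbook result quoted here for illustration; it is not derived in this chapter). To hold the specific growth rate at a setpoint $\mu_{set}$, the feed is

$$F(t) = \frac{\mu_{set}\, X_0 V_0}{Y_{X/S}\, S_{in}}\, e^{\mu_{set} t},$$

where $X_0$ and $V_0$ are the biomass concentration and volume at the start of feeding. Since $V(t) = V_0 + \int_0^t F\,dt'$ itself grows like $e^{\mu_{set} t}$, the dilution rate $D = F/V$ approaches the constant value $\mu_{set}$, which is the behavior illustrated in Figure 1.2.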

1.4.3 Fast and Parallel Experimental Approaches with High Information Content

1.4.3.1 Computer-Aided Operation of Robotic Facilities

One of the most important differences of continuous processes compared to batch or even fed-batch processes is the level of control sophistication required. The actions that can be taken to operate a batch process are limited, so control can be simple. Continuous processes, by contrast, require a complex control strategy to assure stability and product quality. In other words, the acceptance and profitability of continuous processes strongly depend on progress in bioprocess monitoring, understanding, and control.

But before a model can be used for monitoring, control, and optimization purposes, it has to be built and validated with experimental data. This is not a trivial task since, as mentioned before, the reproducibility and scalability of processes are especially difficult in biological systems. The related experimental efforts are extremely high, so model building is usually left aside, since scale-up and process development are carried out under high time pressure. Nowadays, advances in robotics, miniaturization, and data handling are being used to create high-throughput (HT) facilities able to perform thousands of experiments in parallel automatically. With this, new opportunities arise for better process understanding and model building. Together with scale-down techniques, it is possible to run many experiments at "process-like conditions" in order to rapidly fit model parameters and test the response of the system in a larger operating space.

1.4.3.2 Model Building and Experimental Validation

In biotechnology, model building is necessarily coupled to a reiterative experimental validation. Regardless of its level of complexity, models have to be constantly fitted against real observations to adjust its parameters to changes caused by variations in the environment or in the microorganism itself. On the other hand, such robotic systems require a respective control strategy, posing new challenges to experimental design and control. Process automation from product development to production requires a horizontal transfer of information. In near future, a miniaturized scale down robotic facility will be connected to the plant and run in parallel creating a miniaturized twin of the large-scale process. By this, the mathematical model traditionally used in MPC will be substituted by a more accurate description of the system with a faster response time due to the dimension difference. Regarding the experimental planning for model validation, the efficiency in the design of multiple parallel experiments can be increased by using existing methods for data analysis and design of experiments [117–120] in order to allow an automatic evaluation of experimental results as well as the design and run of following experiments. If we managed to build a proper model to describe a process, we first need to fit the model to real data. This model will contain a set of unknown parameters that can be varied to fit the outputs of the model against observations of the real process. The aim is to find the experimental setups such that the statistical uncertainty of estimates of the unknown model parameters is minimized [121,122]. Nevertheless, there are some problems again related with the complexity and nonlinear dynamics of biological processes. First, the experiments carried out in the screening phase should emulate real process conditions [123] and generate high quality data so that systems beyond simple plates, as are mini-bioreactors [124–129] are needed. But more important is that, even for continuous processes, we have to go beyond “endpoint” or “steady-state” experiments. The dynamics of the process are essential to predict its evolution over time and the proper control strategies and for these we need dynamic experiments or at least different steady states. Because of the size and possibility of modern HT facilities, the number of factors that can be varied is very large including “actions” (pipetting, mixing, incubating, measuring, etc.) and “resources” (1-, 8-, or 96-channel pipette, shaker, photometer, flow cytometer, reaction vessels, plates, etc.). For these reasons, computer aided tools for optimal experimental design (OED) [121,130–132] are needed to maximize the efficiency of automatized laboratories. Achievements in online applications, allowing the use of the data generated to redesign the running experiment also are being developed [130,133–137] and applied in real case studies with bacterial cultivations [138], solving a number of complications as are the complexity of the biological system, the control of the experimental facilities, the low information

Nevertheless, there are again problems related to the complexity and nonlinear dynamics of biological processes. First, the experiments carried out in the screening phase should emulate real process conditions [123] and generate high-quality data, so that systems beyond simple plates, such as mini-bioreactors [124–129], are needed. More importantly, even for continuous processes we have to go beyond "endpoint" or "steady-state" experiments. The dynamics of the process are essential to predict its evolution over time and to derive proper control strategies, and for this we need dynamic experiments or at least several different steady states. Because of the size and possibilities of modern HT facilities, the number of factors that can be varied is very large, including "actions" (pipetting, mixing, incubating, measuring, etc.) and "resources" (1-, 8-, or 96-channel pipettes, shakers, photometers, flow cytometers, reaction vessels, plates, etc.). For these reasons, computer-aided tools for optimal experimental design (OED) [121,130–132] are needed to maximize the efficiency of automated laboratories. Online applications that use the data generated to redesign the running experiment are also being developed [130,133–137] and applied in real case studies with bacterial cultivations [138], solving a number of complications such as the complexity of the biological system, the control of the experimental facilities, the low information content and long delays of the measurements, the scheduling of all actions considering resource availability, and the need for robust and cheap computation of the optimization.

Generally, the main factors that affect the identifiability of a model are (i) the structure of the model, (ii) the quality of the measurements (frequency, accuracy, etc.), and (iii) the design of the experiment (inputs, conditions, etc.). In OED, by realization of the computed experimental conditions, the information content of the measurements is maximized and the parameters are determined most accurately. There are still important challenges that need to be solved before OED methods for optimal design and operation of robotic liquid handling stations can be reliably applied to bioprocess development. Some of the most important are the design of a robust optimization program that can assure convergence to a global solution in limited time, the addition of nonlinear path constraints to define a more accurate search, the computation of the error propagation caused by model uncertainties, and efficient methods for the solution of the scheduling problem considering all resources.
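To make the OED idea concrete, the sketch below ranks a handful of candidate experiments by the D-optimality criterion, that is, by the log-determinant of the Fisher information matrix assembled from finite-difference parameter sensitivities. It reuses the simulate function and the Monod parameter values of the previous sketch; the candidate experiments and the noise level are assumptions for illustration only, not a description of the cited methods' implementations.

```python
# Minimal sketch of model-based optimal experimental design: pick the
# candidate experiment that maximizes the D-criterion (log det of the
# Fisher information matrix) at the current parameter estimates.
import numpy as np

def sensitivities(simulate, params, t_obs, y0, rel_step=1e-4):
    """Finite-difference sensitivities of all model outputs w.r.t. parameters."""
    base = simulate(params, t_obs, y0).ravel()
    S = np.empty((base.size, len(params)))
    for j, p in enumerate(params):
        dp = rel_step * max(abs(p), 1e-8)
        perturbed = list(params)
        perturbed[j] = p + dp
        S[:, j] = (simulate(perturbed, t_obs, y0).ravel() - base) / dp
    return S

def d_criterion(simulate, params, t_obs, y0, sigma=0.05):
    S = sensitivities(simulate, params, t_obs, y0)
    fim = S.T @ S / sigma**2           # Fisher information for i.i.d. Gaussian noise
    return np.linalg.slogdet(fim)[1]   # log-determinant (D-criterion)

# Candidate experiments: initial conditions [X0, S0] and sampling schedules
candidates = [([0.1, 5.0],  np.linspace(0.0, 8.0, 9)),
              ([0.1, 10.0], np.linspace(0.0, 10.0, 11)),
              ([0.1, 20.0], np.linspace(0.0, 14.0, 8))]
theta = [0.5, 0.2, 0.5]  # current estimates of mu_max, Ks, Y_X/S

scores = [d_criterion(simulate, theta, t, y0) for y0, t in candidates]
best_y0, best_t = candidates[int(np.argmax(scores))]
print(f"D-optimal candidate: S0 = {best_y0[1]} g/L with {len(best_t)} samples")
```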

1.5 Conclusions

Continuous bioprocessing, which is standard in some bioprocesses such as wastewater treatment and biogas production, is still in its infancy in pharmaceutical production. While long-term continuous experiments are limited in view of the labor involved, and would also provide only limited knowledge in connection with randomly occurring mutations, deeper knowledge is needed and quality has to be implemented in the process. Therefore, modeling and advanced process control approaches form a solid basis. Especially mechanistic and hybrid models can provide important information on the system and the process. For their application, it has to be considered that, due to variations in the process, the parameters of the model will not be constant over longer time intervals but have to be adjusted regularly and automatically. While such continuous optimization strategies are state of the art in various engineering disciplines, they are new in the area of bioprocessing, where they can provide a significant benefit. In this context, it is an advantage that various complex cellular models already exist; however, they should be adapted by typical methods of model reduction in a way that makes their parameters identifiable. If successful, such approaches can then be a solid basis for continuous bioprocessing.

References

1 Konstantinov, K.B. and Cooney, C.L. (2015) White paper on continuous bioprocessing May 20–21, 2014 continuous manufacturing symposium. J. Pharm. Sci., 104, 813–820.
2 FDA (2004) Guidance for industry PAT – A framework for innovative pharmaceutical development, manufacturing, and quality assurance.
3 Han, C., Nelson, P., and Tsai, A. (2010) Process development's impact on Cost of Goods Manufactured (COGM). BioProcess Int., 8, 48–55.
4 Walther, J., Godawat, R., Hwang, C., Abe, Y., Sinclair, A., and Konstantinov, K. (2015) The business impact of an integrated continuous biomanufacturing platform for recombinant protein production. J. Biotechnol., 213, 3–12.
5 Gresham, D. and Hong, J. (2015) The functional basis of adaptive evolution in chemostats. FEMS Microbiol. Rev., 39, 2–16.
6 Lee, H., Popodi, E., Tang, H.X., and Foster, P.L. (2012) Rate and molecular spectrum of spontaneous mutations in the bacterium Escherichia coli as determined by whole-genome sequencing. Proc. Natl. Acad. Sci. USA, 109, E2774–E2783.
7 Lynch, M., Sung, W., Morris, K., Coffey, N., Landry, C.R., Dopman, E.B., Dickinson, W.J., Okamoto, K., Kulkarni, S., Hartl, D.L., and Thomas, W.K. (2008) A genome-wide view of the spectrum of spontaneous mutations in yeast. Proc. Natl. Acad. Sci. USA, 105, 9272–9277.
8 Ferenci, T. (2008) Bacterial physiology, regulation and mutational adaptation in a chemostat environment. Adv. Microb. Physiol., 53, 169–229.
9 Lenski, R.E., Wiser, M.J., Ribeck, N., Blount, Z.D., Nahum, J.R., Morris, J.J., Zaman, L., Turner, C.B., Wade, B.D., Maddamsetti, R. et al. (2015) Sustained fitness gains and variability in fitness trajectories in the long-term evolution experiment with Escherichia coli. Proc. Biol. Sci., 282, 20152292.
10 Hayduck, F. (1923) Low-alcohol yeast process. US Patent 1449107, Fleischmann Company.
11 Hayduck, F. (1915) Verfahren der Hefefabrikation ohne oder mit nur geringer Alkoholerzeugung. German Patent 300662.
12 Barton, P.I. (1992) The modelling and simulation of combined discrete/continuous processes. Ph.D. dissertation, University of London.
13 Alvarez, M., Stocks, S., and Jørgensen, S. (2009) Bioprocess modelling for learning model predictive control (L-MPC), in Computational Intelligence Techniques for Bioprocess Modelling, Supervision and Control (eds M.C. Nicoletti and L.C. Jain), Springer, Berlin, pp. 237–280.
14 Lübbert, A. and Simutis, R. (1994) Using measurement data in bioprocess modelling and control. Trends Biotechnol., 12, 304–311.
15 Nicoletti, M., Jain, L., and Giordano, R. (2009) Computational intelligence techniques as tools for bioprocess modelling, optimization, supervision and control, in Computational Intelligence Techniques for Bioprocess Modelling, Supervision and Control (eds M.C. Nicoletti and L.C. Jain), Springer, Berlin, pp. 1–23.
16 Tabrizi, H.O., Amoabediny, G., Moshiri, B., Haji Abbas, M.P., Pouran, B., Imenipour, E., Rashedi, H., and Büchs, J. (2011) Novel dynamic model for aerated shaking bioreactors. Biotechnol. Appl. Biochem., 58, 128–137.
17 Chhatre, S., Farid, S.S., Coffman, J., Bird, P., Newcombe, A.R., and Titchener-Hooker, N.J. (2011) How implementation of quality by design and advances in biochemical engineering are enabling efficient bioprocess development and manufacture. J. Chem. Technol. Biotechnol., 86, 1125–1129.
18 Preisig, H.A. (2010) Constructing and maintaining proper process models. Comput. Chem. Eng., 34, 1543–1555.
19 Li, D., Ivanchenko, O., Sindhwani, N., Lueshen, E., and Linninger, A.A. (2010) Optimal catheter placement for chemotherapy. Comp. Aid. Chem. Eng., 28, 223–228.
20 Dochain, D. (2003) State and parameter estimation in chemical and biochemical processes: a tutorial. J. Process Control, 13, 801–818.
21 fda.gov (2011) Guidance for Industry Process Validation: General Principles and Practice.
22 Henson, M.A. (2003) Dynamic modeling and control of yeast cell populations in continuous biochemical reactors. Comput. Chem. Eng., 27, 1185–1199.
23 Lei, F. and Jørgensen, S.B. (2001) Dynamics and nonlinear phenomena in continuous cultivations of Saccharomyces cerevisiae. Technical University of Denmark, Department of Chemical and Biochemical Engineering.
24 Lei, F., Olsson, L., and Jørgensen, S.B. (2004) Dynamic effects related to steady-state multiplicity in continuous Saccharomyces cerevisiae cultivations. Biotechnol. Bioeng., 88, 838–848.
25 Dochain, D. (2008) Bioprocess Control, ISTE, London.
26 Hady, L. and Wozny, G. (2010) Computer-aided web-based application to modular plant design. Comp. Aid. Chem. Eng., 28, 685–690.
27 Bailey, J.E. (1998) Mathematical modeling and analysis in biochemical engineering: past accomplishments and future opportunities. Biotechnol. Progr., 14, 8–20.
28 Kremling, A., Fischer, S., Gadkar, K., Doyle, F.J., Sauter, T., Bullinger, E., Allgöwer, F., and Gilles, E.D. (2004) A benchmark for methods in reverse engineering and model discrimination: problem formulation and solutions. Genome Res., 14, 1773–1785.
29 Csete, M.E. and Doyle, J.C. (2002) Reverse engineering of biological complexity. Science, 295, 1664–1669.
30 Kitano, H. (2002) Systems biology: a brief overview. Science, 295, 1662–1664.
31 Kitano, H. (2002) Computational systems biology. Nature, 420, 206–210.
32 Kravaris, C., Hahn, J., and Chu, Y. (2013) Advances and selected recent developments in state and parameter estimation. Comput. Chem. Eng., 51, 111–123.
33 Welch, G. and Bishop, G. (1995) An introduction to the Kalman filter.
34 Daum, F. (2005) Nonlinear filters: beyond the Kalman filter. IEEE Aerosp. Electron. Syst. Mag., 20, 57–69.
35 Soroush, M. (1997) Nonlinear state-observer design with application to reactors. Chem. Eng. Sci., 52, 387–404.
36 Boulkroune, B., Darouach, M., Zasadzinski, M., Gillé, S., and Fiorelli, D. (2009) A nonlinear observer design for an activated sludge wastewater treatment process. J. Process Control, 19, 1558–1565.
37 Watson, T.G. (1972) Present status and future prospects of the turbidostat. J. Appl. Chem. Biotechnol., 22, 229–243.
38 Evdokimov, E.V. and Pecherkin, M.P. (1992) Directed autoselection of bacteria cultured in a turbidostat. Microbiology, 61, 460–466.
39 Bennett, W.N. (1988) Assessment of selenium toxicity in algae using turbidostat culture. Water Res., 22, 939–942.
40 Aarnio, T.H., Suihko, M.L., and Kauppinen, V.S. (1991) Isolation of acetic acid-tolerant baker's yeast variants in a turbidostat. Appl. Biochem. Biotechnol., 27, 55–63.
41 Pham, H.T.B., Larsson, G., and Enfors, S.O. (1999) Modelling of aerobic growth of Saccharomyces cerevisiae in a pH-auxostat. Bioprocess Eng., 20, 537–544.
42 Martin, G.A. and Hempfling, W.P. (1976) A method for the regulation of microbial population density during continuous culture at high growth rates. Arch. Microbiol., 107, 41–47.
43 Büttner, R., Uebel, B., Genz, I.L., and Köhler, M. (1985) Researches with a substrate-limited pH-auxostat. 1. Bistability of growth under potassium-limited conditions. J. Basic Microbiol., 25, 227–232.
44 Pettersson, H.E. (1975) Growth of a mixed species lactic starter in a continuous "pH-stat" fermentor. Adv. Appl. Microbiol., 29, 437–443.
45 Larsson, G., Enfors, S.O., and Pham, H. (1990) The pH-auxostat as a tool for studying microbial dynamics in continuous fermentation. Biotechnol. Bioeng., 36, 224–232.
46 Rice, C.W. and Hempfling, W.P. (1985) Nutrient-limited continuous culture in the phauxostat. Biotechnol. Bioeng., 27, 187–191.
47 Müller, R.H. and Babel, W. (2004) Delftia acidovorans MC1 resists high herbicide concentrations – a study of nutristat growth on (RS)-2-(2,4-dichlorophenoxy)propionate and 2,4-dichlorophenoxyacetate. Biosci. Biotechnol. Biochem., 68, 622–630.
48 Kleman, G.L., Chalmers, J.J., Luli, G.W., and Strohl, W.R. (1991) Glucose-stat, a glucose-controlled continuous culture. Appl. Environ. Microbiol., 57, 918–923.
49 Pirt, S.J. and Kurowski, W.M. (1970) An extension of the theory of the chemostat with feedback of organisms. Its experimental realization with a yeast culture. J. Gen. Microbiol., 63, 357–366.
50 Herbert, D. (1961) A theoretical analysis of continuous culture systems, in Continuous Culture, vol. 12, Society of Chemistry and Industry, London, pp. 21–53.
51 Zhang, Y., Stobbe, P., Silvander, C.O., and Chotteau, V. (2015) Very high cell density perfusion of CHO cells anchored in a non-woven matrix-based bioreactor. J. Biotechnol., 213, 28–41.
52 Kempken, R., Preissmann, A., and Berthold, W. (1995) Clarification of animal cell cultures on a large scale by continuous centrifugation. J. Ind. Microbiol., 14, 52–57.
53 Neidhardt, F.C., Ingraham, J.L., and Schaechter, M. (1990) Physiology of the Bacterial Cell: A Molecular Approach, Sinauer Associates.
54 Box, G.E.P. (1999) Statistics as a catalyst to learning by scientific method part II – a discussion. J. Qual. Technol., 31, 16–29.
55 Raduly, B., Capodaglio, A.G., and Vaccari, D.A. (2004) Simplification of wastewater treatment plant models using empirical modelling techniques. Young Res., 2004, 51.
56 Nomikos, P. and MacGregor, J.F. (1994) Monitoring batch processes using multiway principal component analysis. AIChE J., 40, 1361–1375.
57 AlGhazzawi, A. and Lennox, B. (2009) Model predictive control monitoring using multivariate statistics. J. Process Control, 19, 314–327.
58 Kern, P., Wolf, C., Bongards, M., Oyetoyan, T.D., and McLoone, S. (2011) Self-organizing map based operating regime estimation for state based control of wastewater treatment plants. 2011 International Conference of IEEE on Soft Computing and Pattern Recognition (SoCPaR), pp. 390–395.
59 Georgescu, R., Berger, C.R., Willett, P., Azam, M., and Ghoshal, S. (2010) Comparison of data reduction techniques based on the performance of SVM-type classifiers. 2010 IEEE Aerospace Conference, pp. 1–9.
60 Witten, I.H. and Frank, E. (2005) Data Mining: Practical Machine Learning Tools and Techniques, 2nd edn, Morgan Kaufmann, San Francisco.
61 Barton, P.I. and Lee, C.K. (2002) Modeling, simulation, sensitivity analysis, and optimization of hybrid systems. ACM Trans. Model. Comput. Simul. (TOMACS), 12, 256–289.
62 Rippin, D.W.T. (1993) Batch process systems engineering: a retrospective and prospective review. Comput. Chem. Eng., 17, 1–13.
63 Lima, F.V. and Rawlings, J.B. (2011) Nonlinear stochastic modeling to improve state estimation in process monitoring and control. AIChE J., 57, 996–1007.
64 Ławryńczuk, M. (2008) Modelling and nonlinear predictive control of a yeast fermentation biochemical reactor using neural networks. Chem. Eng. J., 145, 290–307.
65 Zhu, G.-Y., Zamamiri, A., Henson, M.A., and Hjortsø, M.A. (2000) Model predictive control of continuous yeast bioreactors using cell population balance models. Chem. Eng. Sci., 55, 6155–6167.
66 Ramaswamy, S., Cutright, T., and Qammar, H. (2005) Control of a continuous bioreactor using model predictive control. Process Biochem., 40, 2763–2770.
67 Bayer, B. and Marquardt, W. (2004) Towards integrated information models for data and documents. Comput. Chem. Eng., 28, 1249–1266.
68 Spieth, C., Hassis, N., and Streichert, F. (2006) Comparing Mathematical Models on the Problem of Network Inference, ACM, pp. 279–286.
69 Heinrich, R. and Schuster, S. (1996) The Regulation of Cellular Systems, Springer, US.
70 Vilela, M., Chou, I.C., Vinga, S., Vasconcelos, A., Voit, E., and Almeida, J. (2008) Parameter optimization in S-system models. BMC Syst. Biol., 2, 35.
71 Stewart, W.E., Shon, Y., and Box, G.E.P. (1998) Discrimination and goodness of fit of multiresponse mechanistic models. AIChE J., 44, 1404–1412.
72 Harms, P., Kostov, Y., and Rao, G. (2002) Bioprocess monitoring. Curr. Opin. Biotechnol., 13, 124–127.
73 Schügerl, K. (2001) Progress in monitoring, modeling and control of bioprocesses during the last 20 years. J. Biotechnol., 85, 149–173.
74 Biechele, P., Busse, C., Solle, D., Scheper, T., and Reardon, K. (2015) Sensor systems for bioprocess monitoring. Eng. Life Sci., 15, 469–488.
75 de Assis, A.J. and Maciel Filho, R. (2000) Soft sensors development for on-line bioreactor state estimation. Comput. Chem. Eng., 24, 1099–1103.
76 Sundström, H. and Enfors, S.-O. (2008) Software sensors for fermentation processes. Bioprocess Biosyst. Eng., 31, 145–152.
77 Agrawal, P. and Lim, H.C. (1984) Analyses of Various Control Schemes for Continuous Bioreactors, Springer.
78 Henson, M.A. and Seborg, D.E. (1992) Nonlinear control strategies for continuous fermenters. Chem. Eng. Sci., 47, 821–835.
79 Andersen, M.Y., Pedersen, N.H., Brabrand, H., Hallager, L., and Jørgensen, S.B. (1997) Regulation of a continuous yeast bioreactor near the critical dilution rate using a productostat. J. Biotechnol., 54, 1–14.
80 Hu, X., Li, Z., and Xiang, X. (2013) Feedback control for a turbidostat model with ratio-dependent growth rate. J. Appl. Math. Inform., 31, 385–398.
81 Konstantinov, K.B., Tsai, Y.-S., Moles, D., and Matanguihan, R. (1996) Control of long-term perfusion Chinese hamster ovary cell culture by glucose auxostat. Biotechnol. Progr., 12, 100–109.
82 Sowers, K.R., Nelson, M.J., and Ferry, J.G. (1984) Growth of acetotrophic, methane-producing bacteria in a pH auxostat. Curr. Microbiol., 11, 227–229.
83 Ullman, G., Wallden, M., Marklund, E.G., Mahmutovic, A., Razinkov, I., and Elf, J. (2013) High-throughput gene expression analysis at the level of single proteins using a microfluidic turbidostat and automated cell tracking. Philos. Trans. R. Soc. B, 368, 20120025.
84 Yao, Y., Li, Z., and Liu, Z. (2015) Hopf bifurcation analysis of a turbidostat model with discrete delay. Appl. Math. Comput., 262, 267–281.
85 Olivieri, G., Russo, M.E., Maffettone, P.L., Mancusi, E., Marzocchella, A., and Salatino, P. (2013) Nonlinear analysis of substrate-inhibited continuous cultures operated with feedback control on dissolved oxygen. Ind. Eng. Chem. Res., 52, 13422–13431.
86 Tian, Y., Sun, K.B., Chen, L.S., and Kasperski, A. (2010) Studies on the dynamics of a continuous bioprocess with impulsive state feedback control. Chem. Eng. J., 157, 558–567.
87 Nagy, Z.K. (2007) Model based control of a yeast fermentation bioreactor using optimally designed artificial neural networks. Chem. Eng. J., 127, 95–109.
88 Sankpal, N.V., Cheema, J.J.S., Tambe, S.S., and Kulkarni, B.D. (2001) An artificial intelligence tool for bioprocess monitoring: application to continuous production of gluconic acid by immobilized Aspergillus niger. Biotechnol. Lett., 23, 911–916.
89 Kishimoto, M., Beluso, M., Omasa, T., Katakura, Y., Fukuda, H., and Suga, K.-i. (2002) Construction of a fuzzy control system for a bioreactor using biomass support particles. J. Mol. Catal. B Enzym., 17, 207–213.
90 Bastin, G. (2013) On-Line Estimation and Adaptive Control of Bioreactors, Elsevier.
91 Gresham, D. and Hong, J. (2015) The functional basis of adaptive evolution in chemostats. FEMS Microbiol. Rev., 39, 2–16.
92 Raissi, T., Ramdani, N., and Candau, Y. (2005) Bounded error moving horizon state estimator for non-linear continuous-time systems: application to a bioprocess system. J. Process Control, 15, 537–545.
93 Alvarez-Vázquez, L.J. and Fernández, F.J. (2010) Optimal control of a bioreactor. Appl. Math. Comput., 216, 2559–2575.
94 Dondo, R.G. and Marqués, D. (2001) Optimal control of a batch bioreactor: a study on the use of an imperfect model. Process Biochem., 37, 379–385.
95 Åström, K.J. and Wittenmark, B. (2013) Adaptive Control, Courier Corporation.
96 Zavala, V.M., Laird, C.D., and Biegler, L.T. (2008) A fast moving horizon estimation algorithm based on nonlinear programming sensitivity. J. Process Control, 18, 876–884.
97 Neubauer, P., Cruz, N., Glauche, F., Junne, S., Knepper, A., and Raven, M. (2013) Consistent development of bioprocesses from microliter cultures to the industrial scale. Eng. Life Sci., 13, 224–238.
98 Formenti, L.R., Nørregaard, A., Bolic, A., Hernandez, D.Q., Hagemann, T., Heins, A.L., Larsson, H., Mears, L., Mauricio-Iglesias, M., and Krühne, U. (2014) Challenges in industrial fermentation technology research. Biotechnol. J., 9, 727–738.
99 Balagadde, F.K., You, L., Hansen, C.L., Arnold, F.H., and Quake, S.R. (2005) Long-term monitoring of bacteria undergoing programmed population control in a microchemostat. Science, 309, 137–140.
100 Nanchen, A., Schicker, A., and Sauer, U. (2006) Nonlinear dependency of intracellular fluxes on growth rate in miniaturized continuous cultures of Escherichia coli. Appl. Environ. Microbiol., 72, 1164–1172.
101 Akgun, A., Maier, B., Preis, D., Roth, B., Klingelhofer, R., and Büchs, J. (2004) A novel parallel shaken bioreactor system for continuous operation. Biotechnol. Progr., 20, 1718–1724.
102 Schmideder, A., Severin, T.S., Cremer, J.H., and Weuster-Botz, D. (2015) A novel milliliter-scale chemostat system for parallel cultivation of microorganisms in stirred-tank bioreactors. J. Biotechnol., 210, 19–24.
103 Paalme, T., Elken, R., Kahru, A., Vanatalu, K., and Vilu, R. (1997) The growth rate control in Escherichia coli at near to maximum growth rates: the A-stat approach. Antonie Van Leeuwenhoek, 71, 217–230.
104 Adamberg, K., Valgepea, K., and Vilu, R. (2015) Advanced continuous cultivation methods for systems microbiology. Microbiology, 161, 1707–1719.
105 Wilming, A., Bahr, C., Kamerke, C., and Büchs, J. (2014) Fed-batch operation in special microtiter plates: a new method for screening under production conditions. J. Ind. Microbiol. Biotechnol., 41, 513–525.
106 Funke, M., Buchenauer, A., Schnakenberg, U., Mokwa, W., Diederichs, S., Mertens, A., Müller, C., Kensy, F., and Büchs, J. (2010) Microfluidic BioLector – microfluidic bioprocess control in microtiter plates. Biotechnol. Bioeng., 107, 497–505.
107 Jeude, M., Dittrich, B., Niederschulte, H., Anderlei, T., Knocke, C., Klee, D., and Büchs, J. (2006) Fed-batch mode in shake flasks by slow-release technique. Biotechnol. Bioeng., 95, 433–445.
108 Panula-Perälä, J., Siurkus, J., Vasala, A., Wilmanowski, R., Casteleijn, M.G., and Neubauer, P. (2008) Enzyme controlled glucose auto-delivery for high cell density cultivations in microplates and shake flasks. Microb. Cell Fact., 7, 31.
109 Krause, M., Neubauer, A., and Neubauer, P. (2016) Scalable enzymatic delivery systems for controlled nutrient supply improve recombinant protein production. Microb. Cell Fact., 15, 110.
110 Siurkus, J., Panula-Perälä, J., Horn, U., Kraft, M., Rimseliene, R., and Neubauer, P. (2010) Novel approach of high cell density recombinant bioprocess development: optimisation and scale-up from microliter to pilot scales while maintaining the fed-batch cultivation mode of E. coli cultures. Microb. Cell Fact., 9, 35.
111 Glauche, F., Genth, R., Kiesewetter, G., Glazyrina, J., Bockisch, A., Cuda, F., Goellig, D., Raab, A., Lang, C., and Neubauer, P. (2017) New approach to determine the dependency between the specific rates for growth and product formation in miniaturized parallel robot-based fed-batch cultivations. Eng. Life Sci., in press.
112 Cruz Bournazou, M.N., Barz, T., Nickel, D., Glauche, F., Knepper, A., and Neubauer, P. (2017) Online optimal experimental re-design in robotic parallel fed-batch cultivation facilities for validation of macro-kinetic growth models at the example of E. coli. Biotechnol. Bioeng., 114, 610–619.
113 Enfors, S.O. (2016) Fermentation Process Engineering, p. 99.
114 Novick, A. and Szilard, L. (1950) Description of the chemostat. Science, 112, 715–716.
115 Monod, J.L. (1950) La technique de culture continue. Théorie et applications. Annales de l'Institut Pasteur, 79, 390–410.
116 Powell, E.O. (1965) Theory of the chemostat. Lab. Pract., 14, 1145–1149 passim.
117 Macarrón, R. and Hertzberg, R.P. (2011) Design and implementation of high throughput screening assays. Mol. Biotechnol., 47, 270–285.
118 Tai, M., Ly, A., Leung, I., and Nayar, G. (2015) Efficient high-throughput biological process characterization: definitive screening design with the Ambr250 bioreactor system. Biotechnol. Progr., 31, 1388–1395.
119 Lutz, M.W., Menius, J.A., Choi, T.D., Laskody, R.G., Domanico, P.L., Goetz, A.S., and Saussy, D.L. (1996) Experimental design for high-throughput screening. Drug Discov. Today, 1, 277–286.
120 Haaland, P.D. (1989) Experimental Design in Biotechnology, CRC Press.
121 Körkel, S., Kostina, E., Bock, H.G., and Schlöder, J.P. (2004) Numerical methods for optimal control problems in design of robust optimal experiments for nonlinear dynamic processes. Optim. Method Softw., 19, 327–338.
122 Körkel, S. (2002) Numerische Methoden für Optimale Versuchsplanungsprobleme bei nichtlinearen DAE-Modellen. Ruprecht-Karls-Universität Heidelberg.
123 Neubauer, P. and Junne, S. (2010) Scale-down simulators for metabolic analysis of large-scale bioprocesses. Curr. Opin. Biotechnol., 21, 114–121.
124 Hedrén, M., Ballagi, A., Mörtsell, L., Rajkai, G., Stenmark, P., Sturesson, C., and Nordlund, P. (2006) GRETA, a new multifermenter system for structural genomics and process optimization. Acta Crystallogr. D, 62, 1227–1231.
125 Gill, N.K., Appleton, M., Baganz, F., and Lye, G.J. (2008) Design and characterisation of a miniature stirred bioreactor system for parallel microbial fermentations. Biochem. Eng. J., 39, 164–176.
126 Lamping, S.R., Zhang, H., Allen, B., and Shamlou, P.A. (2003) Design of a prototype miniature bioreactor for high throughput automated bioprocessing. Chem. Eng. Sci., 58, 747–758.
127 Zhang, Z., Perozziello, G., Boccazzi, P., Sinskey, A., Geschke, O., and Jensen, K. (2007) Microbioreactors for bioprocess development. J. Assoc. Lab. Autom., 12, 143–151.
128 Knorr, B., Schlieker, H., Hohmann, H.-P., and Weuster-Botz, D. (2007) Scale-down and parallel operation of the riboflavin production process with Bacillus subtilis. Biochem. Eng. J., 33, 263–274.
129 Huber, R., Ritter, D., Hering, T., Hillmer, A.-K., Kensy, F., Mueller, C., Wang, L., and Büchs, J. (2009) Robo-Lector: a novel platform for automated high-throughput cultivations in microtiter plates with high information content. Microb. Cell Fact., 8, doi: 10.1186/1475-2859-8-42.
130 Barz, T., López Cárdenas, D.C., Arellano-Garcia, H., and Wozny, G. (2013) Experimental evaluation of an approach to online redesign of experiments for parameter determination. AIChE J., 59, 1981–1995.
131 Cruz Bournazou, M.N., Junne, S., Neubauer, P., Barz, T., Arellano-Garcia, H., and Kravaris, C. (2014) An approach to mechanistic event recognition applied on monitoring organic matter depletion in SBRs. AIChE J., 60, 3460–3472.
132 Franceschini, G. and Macchietto, S. (2007) Model-based design of experiments for parameter precision: state of the art. Chem. Eng. Sci., 63, 4846–4872.
133 Galvanin, F., Barolo, M., and Bezzo, F. (2009) Online model-based redesign of experiments for parameter estimation in dynamic systems. Ind. Eng. Chem. Res., 48, 4415–4427.
134 Galvanin, F., Barolo, M., Pannocchia, G., and Bezzo, F. (2012) Online model-based redesign of experiments with erratic models: a disturbance estimation approach. Comput. Chem. Eng., 42, 138–151.
135 Huang, B. (2010) Receding horizon experiment design with application in SOFC parameter estimation. 9th International Symposium on Dynamics and Control of Process Systems (DYCOPS 2010), July 5–7, Leuven, Belgium.
136 Stigter, J., Vries, D., and Keesman, K. (2006) On adaptive optimal input design: a bioreactor case study. AIChE J., 52, 3290–3296.
137 Zhu, Y. and Huang, B. (2011) Constrained receding-horizon experiment design and parameter estimation in the presence of poor initial conditions. AIChE J., 57, 2808–2820.
138 Cruz-Bournazou, M.N., Barz, T., Nickel, D., and Neubauer, P. (2014) Sliding-window optimal experimental re-design in parallel microbioreactors. Chem. Ing. Tech., 86, 1379–1380.

1 Overview: Affirmation of Nanotechnology between 2000 and 2030

Mihail C. Roco 1,2

1 National Science Foundation, Arlington, VA, USA
2 National Nanotechnology Initiative, U.S. National Science and Technology Council, Washington, DC, USA

Nanotechnology Commercialization: Manufacturing Processes and Products, First Edition. Edited by Thomas O. Mensah, Ben Wang, Geoffrey Bothun, Jessica Winter, and Virginia Davis. © 2018 American Institute of Chemical Engineers, Inc. Published 2018 by John Wiley & Sons, Inc.

1.1 Introduction

In the nanoscale domain, nature transitions from the fixed physical behavior of a finite number of atoms to an almost infinite range of physical–chemical–biological behaviors of collections of atoms and molecules. The fundamental properties and functions of all natural and man-made materials are defined, and can be modified efficiently, at that scale. The unifying definition of nanotechnology, based on specific behavior at the nanoscale, and the long-term nanotechnology research and education vision were formulated in 1997–1999, and their implementation began with the National Nanotechnology Initiative (NNI) in 2000. We have estimated that it would take about three decades to advance from a scientific curiosity in 2000 to a science-based general purpose technology with broad societal benefits toward 2030 [1–3] (see www.wtec.org/nano2/). A long-term strategic view is needed because nanotechnology is a foundational general purpose field. Three development stages of nanotechnology, corresponding to the level of complexity of typical outcomes, have been envisioned: passive and active nanostructures in the first stage of development (Nano 1), nanosystems and molecular nanosystems in the second stage (Nano 2), and converging technology platforms and distributed interconnected nanosystems in the last stage (Nano 3).

We use the definition of nanotechnology as set out in Nanotechnology Research Directions [2]. Nanotechnology is the ability to control and restructure matter at the atomic and molecular levels in the range of approximately 1–100 nm, exploiting the distinct properties and phenomena at that scale as compared to those associated with single atoms or bulk behavior. The aim is to create materials, devices, and systems with fundamentally new properties and functions for novel applications by engineering their small structure. This is the ultimate frontier to economically change materials and systems properties, and the most efficient length scale for manufacturing and molecular medicine. The same principles and tools are applicable to different areas of relevance and may help establish a unifying platform for science, engineering, and technology at the nanoscale. The transition from the behavior of single atoms or molecules to the collective behavior of atomic and molecular assemblies is encountered in nature, and nanotechnology exploits this natural threshold. This chapter describes the timeline and affirmation of nanotechnology, its three stages, and key challenges, and discusses the nanotechnology return on investment.

1.2 Nanotechnology – A Foundational Megatrend in Science and Engineering

Nanotechnology is a foundational, general purpose technology for all sectors of the economy dealing with matter and biosystems, as information technology is a general purpose technology for communication and computation. Biotechnology and cognitive technologies are two other foundational technologies growing at the beginning of the twenty-first century (Figure 1.1).

Figure 1.1 Converging foundational technologies and their interdisciplinary and spin-off subfields. Modified from Roco and Bainbridge [4]. (The figure shows the NBIC system – nanotechnology/atoms, biotechnology/genes, information technology/bits, and cognitive sciences/synapses as the twenty-first-century building blocks – together with the spin-off fields of each.)

Table 1.1 shows several category levels of science and technology (S&T) platforms according to their level of generality and societal impact: foundational S&T, topical S&T, application domain, and product/service platform. While only five foundational S&T platforms are most dynamic at this moment (Figure 1.1), the number of topical S&T platforms increases with the number of spin-offs, interplatform combinations, and further recombination growth. Each topical S&T platform has several application domains, which in turn each have a series of products and related services. The importance of foundational platforms – and in particular of their most exploratory component at this moment, nanotechnology – is underlined by the cumulative investment amplification factor of developing the respective S&T platform, which is the product of the foundational platform investment amplification factor with the topical, application-area, and product amplification factors. Nanotechnology continues its exponential growth by vertical science-to-technology transition, by horizontal expansion to areas such as agriculture, textiles, and cement, and by spin-off areas (∼20) such as spintronics and metamaterials, progressively penetrating key economic sectors.

Table 1.1 Proposed classification of science and technology platforms.

I. Foundational S&T platform (system architecture). Examples: nanotechnology (atom architecture), information S&T (bit architecture), modern bio S&T (gene architecture), cognitive S&T (synapse architecture), and artificial intelligence S&T (system design). Typical timescale: 25–50 years. One-step investment amplification factor: k_f(undamental). Cumulative investment amplification factor: k_f·k_t·k_a·k_p. Game changer for: knowledge.

II. Topical S&T platform (hierarchical system from I). Examples – essential: photonics, semiconductors, genomics, biomedicine; contributing: synthetic biology, neuromorphic engineering, proteomics, nanofluidics, metamaterials. Typical timescale: 10–25 years. One-step factor: k_t(opical). Cumulative factor: k_t·k_a·k_p. Game changer for: technology approach.

III. Application field platform (branched, inter- and recombination). Examples: cell phone systems, transportation, medicine, energy conversion and storage, agriculture, space exploration. Typical timescale: 5–10 years. One-step factor: k_a(pplication). Cumulative factor: k_a·k_p. Game changer for: application field.

IV. Product and service platform (spin-off, inter- and recombination). Examples: car components, medical devices, nanocoatings, LEDs, nanolasers. Typical timescale: 3–5 years. One-step factor: k_p(roduct and service). Cumulative factor: k_p. Game changer for: user consumption.

The number of Web of Science publications on 20 new nano-extended terms grew between 1990 and 2014 to the point that these terms now represent over one quarter of the total publications (Figure 1.2). For this reason, it is increasingly difficult to identify the R&D programs around the world supporting nanotechnology, because they are named after an activity that branched out of the foundational field.

Figure 1.2 The number of Web of Science (WoS) publications on 20 new nano-extended terms between 1990 and 2014. (Tracked terms include plasmonics, metamaterials, microfluidics, spintronics, molecular systems, supramolecules, fullerenes, dendrimers, graphene, 2D materials, atomic layer deposition, artificial photosynthesis, cellulose fibers/tubes, optoelectronics, biophotonics, optogenetics, DNA computing/assembling, proteomics, synthetic biology, and mesoscale terms.)

Figure 1.3 International government R&D funding in the interval 2000–2012; after 2013 there is increased use of new terms and platforms (using the NNI definition, 81 countries, MCR direct contacts). (The chart tracks annual funding, in million $/year, for the USA, Japan, W. Europe, and others, from the IWGN seed funding of 1991–1997 through the NNI start in 2000, with industry investment exceeding public investment in 2006, across the first- and second-generation product periods of nano 1 and nano 2.)

Figure 1.3 illustrates international government R&D funding in the interval 2000–2012 [9]. Most of the larger science and technology initiatives have been justified in the United States and abroad mainly by application-related and societal factors – for example, the Manhattan Project during World War II (with centralized, goal-focused, and simultaneous approaches), the Apollo space program (with a centralized, focused goal), and Networking and Information Technology Research and Development (top-down initiated and managed, and established when mass applications justified the return on investment). The initiation of the NNI, in contrast, was motivated primarily by its long-term science and engineering goals and its general purpose technology opportunity, and it has been managed using a bottom-up approach combined with centralized coordination. A few comments underlining this characteristic are as follows:

Charles Vest, President, National Academy of Engineering (PCAST meeting, White House, 2005): "NNI is a new way to run an initiative."

Steve Edwards, "Hall of Fame for Nanoscale Science and Engineering" (Jan. 1, 2006): "…persuading the U.S. government, not to mention the rest of the world, to support nanotechnology. It was a masterful job of engineering the future."

Tim Harper [5], President of the European Nanobusiness Association and of Cientifica Co. (2015): "nanotechnology [is] the first truly global scientific revolution."

Figure 1.4 S-curves for two science and technology megatrends: past and envisioned conceptualization of "Nanomanufacturing" and "Digital Technology" [6]. (The curves plot the level of economic importance and societal impact against earlier, current, and future eras.)

Nanotechnology promises to become a general purpose technology with large-scale applications similar to digital technology. It could eventually match or outstrip the digital revolution in terms of economic importance and societal impact once the methods of investigation and manufacturing are developed and the underlying education and infrastructure are established. During about 2020–2030, nanotechnology could equal and even exceed the digital revolution in terms of technology breakthroughs, investments, and societal importance (Figure 1.4) [6]. The nanotechnology development S-curve shown in Figure 1.4 is supported by the data in Table 1.2, showing an increase of the world annual rate of revenue growth from 25% in 2000–2010 to 44% in 2010–2013.

Table 1.2 Global and US revenues from products incorporating nanotechnology (revenues all in $ billions).

World: 30 (2000); 335 (2010); 514 (2011); 852 (2012); 1190 (2013); 1620 (2014); 3000 (estimation 2020); 30,000 (estimation 2030). Annual rate 2000–2010: 25%; annual rate 2010–2013: 48%.
United States: 13 (2000); 110 (2010); 170 (2011); 213 (2012); 284 (2013); 370 (2014); 750 (estimation 2020); 7,500 (estimation 2030). Annual rate 2000–2010: 24%; annual rate 2010–2013: 38%.
United States/World: 43% (2000); 33% (2010); 33% (2011); 25% (2012); 24% (2013); 23% (2014); 25% (estimation 2020); 25% (estimation 2030). Annual rate 2000–2010: −1%; annual rate 2010–2013: −10%.

Source: Data from Roco et al. [1] for 2000–2010, est. 2020, and est. 2030; and from Lux Research [7] for 2011–2014.

Examples of S&E generic platforms with areas of high impact

Nanotechnology-enabled products by sectors with the most traded nanoproducts in 2014, according to Lux Research [7] and other industry sources:

• Materials and manufacturing: Fiber-reinforced plastics, nanoparticle catalysts, coatings, insulation, filtration, transportation (cars, trucks, trains, planes, and ships), and robotics (actuators and sensors). For example, Exxon-Mobil has multibillion-dollar applications of nanostructured catalysts. TiO2, MWCNTs, and quantum dots are some of the most frequently encountered nanocomponents. Nanoscale coatings, imprinting, and roll-to-roll are the three most common manufacturing processes.

• Electronics and IT: Semiconductors, mobile electronics and displays, packaging, thermal management, batteries, supercapacitors, paint, and integration with nanophotonics.

• Healthcare and life sciences: Diagnostic and monitoring sensors (cancer), cosmetics, food products and packaging, personal care products, sunscreen, surgical tools, implantable medical devices, filtration, treatments (cancer radiation therapies) and medication formulations, contrast agents, quantum dots in lab supplies such as fluorescent antibodies, and drug delivery systems.

• Energy and environment: Fuel cells, hydrolysis, catalysts, solar cells, insulation, filtration, supercapacitors, grid storage, monitoring equipment (sensors), water treatment, and purification.

Nanotechnology development between about 2000 and 2030 is an example of the convergence–divergence process for science and technology megatrends. The convergence process has four phases:

a) First is the "creative phase" (Figure 1.5): the confluence of knowledge of bottom-up and top-down disciplines, of various sectors from materials to medicine, and of various tools and methods of investigation and synthesis has led to an increasing control of matter at the nanoscale.
b) The convergence enables the creation and integration of successive generations of nanotechnology products and productive methods ("integration/fusion phase").
c) At the beginning of the divergent stage, the spiral of innovation ("innovation phase") leads to new products and applications that are estimated to reach $3 trillion in 2020.
d) The spiral of innovation branches out thereafter into new activities, including spin-off disciplines and productive sectors, business models, expertise, and decision-making approaches ("spin-off phase").

An essential element of progress is the emergence of completely new skills in nanotechnology, some of which have resulted from the interface with other fields and others as spin-off elements of nanoscience, such as spintronics, plasmonics, metamaterials, carbon nanoelectronics, DNA nanotechnology, optogenetics, and molecular design to create hierarchical systems. New technology platforms will be created, such as nanosensor systems, components in robotics, nanoelectronics in cars, cyber manufacturing, unmanned vehicles, and light spacecraft. The exponential growth of nanotechnology through discoveries, technological transition, horizontal expansion, and its spin-off areas is expected to continue at high rates through 2030.

Figure 1.5 2000–2030 convergence–divergence cycle for global nanotechnology development. Modified from Roco and Bainbridge [4]. (The cycle runs from the knowledge confluence of disciplines, tools and methods, and sectors, through the creative, integration/fusion, innovation, and spin-off phases, toward new nanosystem architectures, new products and applications estimated at $30T, new expertise (NBIC), and new governance methods.)

The economic estimations currently made by valuing the commercialization of particular products (e.g., single-wall carbon nanotubes or single-sheet graphene) are not the most representative for effective products, because the most valuable developments are the know-how and technological capabilities, which are changing fast toward composite and modular nanosystems. This is particularly true if the material has mostly a "schooling" role ("model nanotechnology" vs. "economic nanotechnology").

1.3 Three Stages for Establishing the New General Purpose Technology

We have estimated that about 30 years are needed for nanotechnology to develop from a scientific curiosity to mass use in the economy, and that this development may be separated into three stages. Each stage is defined by its investigative methods and synthesis/assembling techniques, the level of nanoscale integration and the complexity of the respective products, typical application areas, education needs, and risk governance. These characteristics are documented in four reports (Figure 1.6). A schematic of the three successive stages and the corresponding generations of nanotechnology products is shown in Figure 1.7.

Figure 1.6 Thirty-year vision to establish nanotechnology: changing the research and education focus and priorities in three stages from scientific curiosity to immersion in socioeconomic projects [1–4]. The reports are available on www.wtec.org/nano2/ and www.wtec.org/NBIC2-report/. (The timeline covers the 1999, 2001, 2010, and 2013 reports spanning nano1 (2001–2010), nano2 (2011–2020), and NBIC 1 & 2 (2011–2030); the vision has been used in >80 countries.)

Figure 1.7 Creating a general purpose technology in three stages between 2000 and 2030. Each stage includes two generations of nanotechnology products. Modified from Roco et al. [1]. (The six generations of nanoproducts are: 1. passive nanostructures and 2. active nanostructures in nano1, component basics (∼2001–2010); 3. systems of nanosystems and 4. molecular nanosystems in nano2, NS&E integration for a general purpose technology (∼2011–2020); and 5. NBIC technology platforms and 6. nanosystem convergence networks in nano3, technology divergence (∼2021–2030).)

Each stage differs by the types of new measurement and simulation methods, advanced processing/synthesis methods, integration and complexity levels, use and application, as well as risk governance [1]. The R&D focus in nanotechnology is evolving: from synthesizing components by semiempirical processing and creating a library of nanostructures (such as carbon nanotubes, quantum dots, and sheets of graphene) in 2000–2010, to science-based design and manufacturing of nanoscale devices and systems (such as in biomedicine and nanoelectronics) in 2010–2020, to establishing economic nanotechnology applications in various technology platforms and immersion in socioeconomic projects in 2020–2030.

The planning in the first stage (Nano 1) was focused on the discovery of individual phenomena and the semiempirical synthesis of nanoscale components. For example, quantum-effect repulsion forces between surfaces at small interfaces were identified, and the first quantum device was built. In another example, quantum dots, nanoparticles, nanotubes, and nanocoatings were created from a majority of elements in the periodic table, and their atomic and molecular assembling mechanisms were explored for generalizations. Typical measurements of the atomic structure with femtosecond changes were indirect, using time and volume averaging. The nanocomponents were used to improve the performance of existing products, such as nanoparticle-reinforced polymers. The main penetration of nanotechnology was in advanced materials, nanostructured chemicals, electronics, and pharmaceutics. For example, Moore's law for semiconductors has continued because of nanoscale components added in their fabrication. Education transitioned from fixed properties at the microscale to engineered properties based on the nanoscale understanding of nature and technology. Governance was mostly science-centric, with a focus on the environmental, health, and safety aspects of nanotechnology. National and international organizations have formulated the basics of the nomenclature to be used in scientific publications and for standards classifications. An international, multidisciplinary nanoscale science and technology community has been established.

During the second decade (Nano 2) of nanotechnology development, the focus is on integration at the nanoscale and the science-based creation of devices and systems for fundamentally new products – including self-powered nanodevices, self-assembling systems on multiple scales from the nanoscale, and nano-bio assemblies. Examples include nanofluidic systems, integrated sensorial systems, and nanoelectronics and display systems. Direct measurements and simulations with atomic and femtosecond resolutions have been undertaken for many-atom systems encountered in biological and engineering applications. There is an increased focus on new performance in new domains of application and on innovation methods. Nanotechnology penetration is faster in nanobiotechnology, energy resources, food and agriculture, forestry, and cognitive technologies, as well as in nanoscale simulation-based design methods. In education, we see more attention to cross-disciplinary "T" or "reverse T" learning, including general nanotechnology education in the horizontal component. Societal aspects increasingly concern expanding sustainability, exploiting the potential for increased productivity, and addressing socioeconomic issues with a focus on healthcare. Governance is increasingly user-centric and multiple-player participatory. Global implications are seen on the economy, sustainability, and the balance of forces.

After 2010, there is an increased focus on the integration of nanoscale science and engineering with other knowledge and technology domains and their applications [3], which will continue through the end of the third decade (Nano 3) of nanotechnology development. After about 2020, convergence of nanotechnology with other key technologies will subsequently lead to bifurcation into emerging and integrated technology platforms. NBIC-based measurements will be needed in these new technology platforms. Integration of foundational and general technologies will branch out into new fields of research and production. Education will need to be more focused on unifying concepts and connecting phenomena, processes, and technologies. The role of bottom-up and horizontal interactions will increase in importance as compared to top-down measures in S&T governance. Risk analysis will expand to hybrid bio-nano systems and human–technology coevolution. New competencies, socioeconomic platforms, and production capabilities will take a significant role in the economy. The international community will be more connected through the scientific and technological developments, creating opportunities for new models of collaboration and competition.

New generations of nanotechnology products and productive processes are timed with the introduction of new prototypes of nanotechnology products and with the successive increases in the degree of control, integration, complexity, and risk. Table 1.3 gives the estimated intervals for the introduction of the various generations along the three conceptual stages of nanotechnology development. A dominant trend in the interval 2020–2030 is envisioned to be the immersion of nanotechnology, with other emerging and established technologies, in industry, medicine and services, and in education and training for societal progress, making it, together with information technology, the largest technology driver in most economic sectors.

Priorities of the U.S. NNI at the beginning of Nano 2 [1] were grouped and have been funded under several "Nanotechnology Signature Initiatives (NSI)" since 2011: (i) Nanoelectronics for 2020 and Beyond; (ii) Sustainable Nanomanufacturing; (iii) Nanotechnology for Solar Energy; (iv) Nanotechnology Knowledge Infrastructure; and (v) Nanosensors. In 2016, the NSI on "Nanotechnology for Solar Energy" was completed and replaced by the NSI on "Water Sustainability through Nanotechnology" (www.nano.gov/node/1577). PCAST [8] has recommended a new set of grand challenges for Nano 2 looking 15 years ahead.

Table 1.3 Generations of nanotechnology products and productive processes, and the corresponding intervals for beginning commercial prototypes.

Stage: Nano 1 – Component basics

G1: Passive nanostructures (∼2000–2005). The nanostructures have stable behavior during their use. They typically are used to tailor macroscale properties and functions: a) dispersed nanostructures, such as aerosols, colloids, and quantum dots on surfaces; b) contact nanostructures, such as in nanocomposites, metals, polymers, ceramics, and coatings.

G2: Active nanostructures (∼2005–2010). The nanostructures change their composition and/or behavior during their use. They typically are integrated into microscale devices and systems and used for their biological, mechanical, electronic, magnetic, photonic, and other effects: a) bioactive with health effects, such as targeted drugs, biodevices, and artificial muscles; b) physico-chemically active, such as amplifiers, actuators, adaptive structures, and 3-D transistors.

Stage: Nano 2 – System integration

G3: System of nanosystems (∼2010–2015). Three-dimensional nanosystems frequently incorporated into other systems and using various synthesis and assembling techniques such as bio-assembling, robotics with emerging behavior, and evolutionary approaches. A key challenge is networking at the nanoscale and hierarchical architectures. Research focus will shift toward heterogeneous nanostructures and supramolecular system engineering. This includes directed multiscale self-assembling, artificial tissues and sensorial systems, quantum interactions within nanoscale systems, processing of information using photons or electron spin, and assemblies of nanoscale electromechanical systems (NEMS).

G4: Molecular nanosystems (∼2015–2020). Heterogeneous molecular nanosystems, where each molecule in the nanosystem has a specific structure and plays a different role. Molecules will be used as devices, and fundamentally new functions will emerge from their engineered structures and architectures. Designing new atomic and molecular assemblies is expected to increase in importance, including macromolecules "by design", nanoscale machines, directed and multiscale self-assembling, exploiting quantum control, nanosystem biology for healthcare, and human–machine interfaces at the tissue and nervous-system level. Research will include topics such as atomic manipulation for the design of molecules and supramolecular systems, controlled interaction between light and matter with relevance to energy conversion among others, exploiting quantum control of mechanical–chemical molecular processes, nanosystem biology for healthcare and agricultural systems, and human–machine interfaces at the tissue and nervous-system level.

Stage: Nano 3 – Technology divergence

G5: NBIC integrated technology platforms (∼2020–2025). Converging technology platforms from the nanoscale based on new nanosystem architectures at the confluence with other foundational emerging technologies. This includes converging foundational technology (nano-bio-info-cogno) platforms integrated from the nanoscale.

G6: Nanosystem convergence networks (∼2025–2030). Distributed and interconnected nanosystem networks, across domains and interacting at various levels (foundational, topical, application, products/services), for health, production, infrastructure, and services. This includes networks of foundational technology (nano-bio-info-cogno) platforms and their spin-offs, including for emerging nano-biosystems.


The first has been dedicated to the "Nanotechnology-Inspired Grand Challenge for Future Computing" (http://www.nano.gov/grandchallenges). A focus in 2011–2020 is research on the third generation of nanotechnology products, including nanosystems, self-powered nanodevices, and nano-bio assemblies. There is an increased focus on integrating nanoscale science and engineering with other knowledge and technology domains to create new nanosystem architectures and the corresponding technology platforms by 2030 ("Converging Knowledge, Technology and Society: Beyond Nano-Bio-Info-Cognitive Technologies" [3]).

1.4 Several Challenges for Nanotechnology Development

Several key challenges for nanotechnology development in the next 15 years lie in identifying and realizing:

• Path to new technological platforms. Identifying and supporting technology platforms exploiting the main features at the nanoscale, such as self-assembling, quantum materials and devices, nanofluidics and fluid-based manufacturing, new logic and memory paradigms, nanophotonics, and plasmonics.

• Path to economicity. Increasing productivity and economical production are the main reasons for advancing the tools of nanoscale science and engineering. In the first stage, the challenge was to prove the ability to create and change nanostructures by control, no matter the cost or sustainability. Now, we increasingly aim at minimizing cost and resources. For example, a general approach is nanomodular materials and systems by design, which would allow efficient assembling of nanomodules that individually maintain their nano-specific properties [9]. The purpose is to build economical and versatile products at relatively low temperatures and pressures, as well as to create things that are not possible otherwise, such as nanosensor systems and synthetic organs.

• Path to sustainability. An initial goal of nanotechnology has been manufacturing using less material, energy, and water while reducing waste. One approach was leaving the molecules to do what "they like," such as self-assembling at low temperatures and pressures using noncovalent molecular interactions; this may be associated with the term green nanomanufacturing. Another goal was to develop new methods for efficient energy conversion and storage, economic filtration, and computing, to name several important pathways to a sustainable society.

• Path to creativity, inventions, and innovations in nanotechnology applications. Because nanoscale science and engineering is at the confluence of multiple disciplines and methods, and there are multiple application areas, including by convergence with other technologies, there is a good potential for innovation. For example, the technology and business branching out in the nanotechnology convergence–divergence cycle created multiple opportunities for added value, such as nano-biomedical devices and synthetic biology, nanosensors for medical and industrial use, and nanotechnology for robotics and autonomous vehicles.

• Path to wellness. Nanotechnology enables replacing clinical medicine with individual treatment based on conditioning the cell at the subcellular level, and treating chronic diseases with molecular medicine. Physical and mental wellness is at the core of quality of life.

• Path to an almost zero-power internet of things: the connectivity between efficient logic and memory devices, sensors, robotics, and distributed energy harvesting enabled by nanotechnology.

• Path to a comprehensive, flexible, and connected infrastructure responding to the requirements of the three nanotechnology stages (Figure 1.7). Examples of the connected R&D infrastructure in the United States in 2015 are shown in Table 1.4.

• Path to responsible governance and public acceptance. There is an increased role of ethical, legal, and other social issues (ELSI) and of the effects of emergent nano-biosystems, nano-neurotechnology, and overall converging technologies.

• Nanotechnology growth potential in the future. This depends on setting the proper vision and goals, planning R&D to reach the vision, and assuring R&D funding under public scrutiny and competition with other sectors and other economies.

1.5 About the Return on Investment

Foundational science and technology fields have a larger impact than the topical technologies they generate, but they need more time to be developed and implemented. Nanotechnology is a foundational technology, with implications for knowledge, productivity, health, sustainability, security, and overall wellness, that is establishing its methods, transformative approach, and infrastructure between about 2000 and 2030. Still in its infancy in creating novel nanosystems, nanotechnology in 2015 has already shown its promise to society.

The United States invested about $20 billion in nanotechnology R&D through the NNI in the fiscal years 2001–2015. The cumulative US nanotechnology commitment since 2000 places the NNI second only to the space program in terms of civilian science and technology investment [10]. Overall, it has been estimated that the NNI accounted for about one fourth of global government nanotechnology funding since 2001.






Table 1.4 Examples of main NNI R&D centers, user facilities, and networks sponsored by NSF (name: institution).

NSF – 10 Networks:

• National Nanotechnology Infrastructure Network (NNIN) – 15 nodes (user facilities) (www.nnin.org): Cornell University, main node (under recompetition in 2013–2014)
• National Nanotechnology Coordinated Infrastructure (NNCI), after September 2015 – 16 nodes: Cornell University, main node (NNIN recompetition in 2015)
• Network for Computational Nanotechnology (NCN) (nanoHUB.org): Purdue University, main node
• National Nanomanufacturing Network (NNN) (www.internano.org): University of Massachusetts, Amherst, main node
• Centers for Nanotechnology in Society (CNS) (cns.asu.edu): Arizona State University and University of California, San Diego
• Nanoscale Informal Science Education (NISE) Network (www.nisenet.org): Museum of Science, Boston, main node
• Nanoscale Science and Engineering Centers (NSEC): distributed centers
• Materials Research Science and Engineering Centers (MRSECs): distributed centers
• Nanosystems Engineering Research Centers (NERC): distributed centers
• Centers for the Environmental Implications of Nanotechnology (CEIN) (www.cein.ucla.edu): University of California, Los Angeles, and Duke University
• Center for National Nanotechnology Applications and Career Knowledge (NACK) (nano4me.org): Pennsylvania State University

NSF – Three Science and Technology Centers:

• Center for Energy Efficient Electronics Science (nanoelectronics) (www.e3s-center.org): University of California, Berkeley
• Emergent Behaviors of Integrated Cellular Systems (nanobiotechnology) (http://ebics.net/about): Massachusetts Institute of Technology
• Center for Integrated Quantum Materials (ciqm.harvard.edu): Harvard University






One successful case study at the beginning of the NNI, around 2000, was giant magnetoresistance (GMR), which brought a significant performance improvement to computer hard disks and had an annual economic impact of a few billion dollars.

An estimate of the long-term impact of nanotechnology was presented in September 2000 during the review of the NNI and published in 2001 [4]. The estimate was based on consultation with experts and technical executives in industry in 10 sectors. The level of penetration of nanotechnology was estimated in each sector per group of products and then multiplied by the total production levels. The findings led to the overall conclusion that nanotechnology would bring revenues of about $1 trillion by 2015, increasing at an annual rate of about 25%. In 2004, Lux Research adopted a similar evaluation approach using direct surveys in industrial units. The resulting survey data for past intervals rest on a good international database and were used for comparison. Other topical studies of market size, based on information collected from a single sector and partial databases, have been less useful for comparison.

In 2014, the US annual public R&D investment (from federal, state, and local sources) has been estimated at about $1.8 billion, while the private investment was about $4.0 billion [7]. In 2013, in the United States alone, according to the same industry surveys, there were about $370 billion in revenues from products incorporating nanotechnology as a key functional and competitive component. In several areas, nanotechnology has become a large part of the market. For example, around 60% of semiconductors and over 40% of manufactured catalysts have some form of nanotechnology involved. The technology has also shown a footprint in emerging research, with approximately 70% of energy-related proposals submitted to the National Science Foundation having a basis in nanotechnology. The numbers are significant considering the variety of proposals and ideas. If one nanotechnology worker is needed for each $0.5 million of annual production, then this output would require about 740,000 nanotechnology jobs (Figure 1.8).

Figure 1.8 The flow of nanotechnology R&D investments and outcomes in the United States in 2014: roughly $1.8 billion of NNI and state R&D and $4.0 billion of industry R&D supporting about $370 billion in final products, an estimated $74 billion in taxes (at 20%), and about 740,000 jobs (at an estimated $500,000 per year per job). The corresponding R&D investment was about 10 times smaller 10 years earlier, while the effects are measured now. Based on NSF and Lux Research data. Roco 2016; updated data and Figure 1.4 from Roco et al. [1].

The revenues from products incorporating nanotechnology are about 150 times larger (∼$318 billion/$2.1 billion) than the 2013 public R&D funding that catalyzed such output. The investment amplification factors from foundational nanotechnology to the topical technologies, and then to application technology platforms leading to final product and service delivery (Table 1.1), support such a large return.

Nanotechnology is rapidly growing in a field still in formation, at the beginning of the S curve of development (Figure 1.4). Only relatively simple nanostructures are found in applications, such as nanolayers in the semiconductor industry and coatings, dispersions in the catalyst industry and paints, and nanomodules with surface molecular recognition for targeted medical therapeutics, to name a few. Despite the fact that nanotechnology is still in the formative phase of development, if one considers an average tax of 20% and applies it to the roughly $318 billion US market incorporating nanotechnology in 2013, the result is $64 billion in tax revenue in one year (2013), which exceeds the total 13-year R&D investment of the NNI by a factor of almost four.

The revenues from products incorporating nanotechnology in the world, based on direct industry surveys [7], reached $1014 billion in 2013, with 31.8% of this amount in Europe, 31.3% in the United States, 30.5% in Asia, and 6.4% in the rest of the world. The main sectors identified in the survey are materials and manufacturing with 59%, electronics and information technology with 29%, healthcare and life sciences with 10%, and energy and environment with 2%.

Key long-term qualitative and quantitative targets set in 2000 for this interval have been realized, and some even exceeded, such as in memory devices, nanoelectronics, molecular detection of cancer, and market impact. Several illustrations are as follows:

• In his presentation made on January 21, 2000, at Caltech, President Clinton stated in his announcement of the NNI [11]: “Just imagine materials with 10 times the strength of steel and only a fraction of the weight; shrinking all the information at the Library of Congress into a device the size of a sugar cube; detecting cancerous tumors that are only a few cells in size. Some of these research goals will take 20 or more years to achieve. But that is why – precisely why – . . . there is such a critical role for the federal government.” All these targets are currently more advanced than initially envisioned, with significant progress on structural metals, foams, and composites; addressable nanostructures for memory devices that go down to 12 atoms and even one atom; and methods for targeted detection and treatment of cancer [1].






• The original report with a 10-year vision [2] recommended progress from fundamental concepts to applications in 10 sectors, and establishing a flexible infrastructure and education community. The progress reported after 10 years shows that a multidisciplinary community has been established using a suitable specialized infrastructure and that the progress in various sectors has generally reached the core objectives [1]. The exploratory concepts outlined in the 1999 report are still valid. Nanotechnology as a foundational S&T field promised to branch out into various technology platforms. Large-impact illustrations of the potential of nanotechnology, with multibillion dollar revenues, have been the applications of GMR at the beginning of the NNI in 1998–2000 (about $2 billion in direct production in 2000, R&D led by IBM), targeted drug delivery (PHARMA), semiconductors (SRC), and nanostructured catalysts (Exxon-Mobil) [1]. The promise that nanotechnology would penetrate several key industries has been realized. Catalysis by engineered nanostructured materials impacts 30–40% of the US oil and chemical industries (Chapter 10 in the Nano 2020 report); semiconductors with features under 100 nm constitute over 30% of that market worldwide and 60% of the US market (chapter on the Long View in the Nano 2020 report); and molecular medicine is a growing field, with about 15% of advanced diagnostics and therapeutics already nanoscience based in 2010. These and many other examples show nanotechnology is well on its way to reaching the goal set in 2000 for it to become a “general-purpose technology” with considerable economic impact.

• In 2000, we estimated that nanotechnology funding and production would grow by about 25% annually in the first 10 years. Global nanotechnology development has shown significant and consistent increases, at high annual rates of about 25% from 2000 to 2013, both for the primary workforce (about 600,000 in 2010) and for the market value of final products incorporating nanotechnology (about $300 billion in 2010). The nanotechnology labor force and markets are estimated to continue to double every 3 years, reaching about a $1 trillion market with 2 million jobs around 2015, and a $3 trillion market encompassing 6 million jobs by about 2020. The rate of increase for venture capital investment is about 30%, reaching about $1.3 billion in 2010. As shown earlier, in 2000 it was estimated that products incorporating nanotechnology would bring world revenues of $1 trillion by 2015 [4]. The industry survey of Lux Research [7] has reported that this target was realized in 2013.

The nanotechnology R&D investment has significant returns despite the relatively short term for fundamental discoveries of a foundational field in science and engineering to find their way to applications. Nanotechnology already has a major and lasting impact that promises to be more relevant for healthcare, environment, and manufacturing here on Earth than the Space program.








The R&D challenges for the future in establishing nanotechnology in the economy are the creation of science-based nanosystems, new nanosystem architectures for new technology platforms, the emergence of nano-bio hybrid systems, and convergence with other foundational technologies for societal benefit. Nanotechnology revenues are estimated to reach about 5% of gross domestic product (GDP) in developed economies by 2020 and over 10% of GDP by 2030. Nanoscale science and engineering of the past 10 years is a springboard for future nanotechnology applications and other emerging technologies. We estimate that the introduction of nanotechnology in various economic sectors such as electronics and pharmaceutics will lead to at least a 1% annual increase in productivity during the 2020s, in a similar manner as another general-purpose technology, information technology, did in the 1990s.

R&D investments in various topical S&T platforms have been recognized to have significant long-term returns (with an average investment amplification factor for topical platforms of kt ∼ 3–5 times; Table 1.1), as they support the creation and use of new products, services, and tools with higher efficiency. Nanoscale science and engineering, like information and communication technology, is a foundational, general-purpose technology that supports the topical S&T platforms. Assuming a similar fractal effect in improving basic methods and tools at the foundational and topical levels (kf ∼ kt), the cumulative return on investment may be roughly estimated as (kf × kt) ∼ 10–20 times. Nanotechnology would reach economic, large-scale application, including convergence with other foundational technologies and their spin-off domains, by 2030, at a level of development of infrastructure, tools, and design methods similar to what information technology and the Internet achieved by about 2000.
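The back-of-the-envelope arithmetic behind these estimates can be reproduced in a few lines. The sketch below (Python; every input is one of the rounded figures quoted in this section, hard-coded as an assumption) illustrates the jobs, tax, revenue-to-funding, and amplification calculations. It is an illustration of the estimation logic, not a forecasting model.

    # Back-of-envelope ROI arithmetic using the rounded figures quoted above.
    us_revenue_2013 = 370e9    # $, US final products incorporating nanotechnology
    taxed_revenue = 318e9      # $, base used in the 20% tax illustration
    public_rd_2013 = 2.1e9     # $, public (federal + state) nanotech R&D
    output_per_job = 0.5e6     # $, assumed annual production per nanotech worker

    jobs = us_revenue_2013 / output_per_job              # ~740,000 jobs
    revenue_to_funding = taxed_revenue / public_rd_2013  # ~150x
    tax_take = 0.20 * taxed_revenue                      # ~$64 billion per year

    # Cumulative amplification, assuming similar factors at the foundational
    # and topical levels (kf ~ kt ~ 3-5), giving (kf x kt) of roughly 10-20:
    lo, hi = 3 * 3, 5 * 5

    print(f"jobs ~{jobs:,.0f}, revenue/funding ~{revenue_to_funding:.0f}x, "
          f"tax ~${tax_take / 1e9:.0f}B/yr, amplification ~{lo}-{hi}x")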

1.6 Closing Remarks

Nanotechnology development has become an international scientific and technological endeavor, with focused R&D programs in over 80 countries after the announcement of the NNI in the United States in 2000. The long-term vision and collaboration among the national and international programs are essential factors in this global development. Among the major developments in science and technology at present, nanotechnology positions itself as the most exploratory. With much of the research in many sectors, such as information technology, focused on applications, nanoscale science and engineering has fewer established methodologies, giving it perhaps the greatest scope for discovery, manipulation, and diversification in the coming years. Nanotechnology is still in the formation phase of creating nanosystems by design for fundamentally new products.




As we are at the beginning of the S-curve of nanotechnology development [6], a main challenge is to prepare the knowledge base, manufacturing, people, physical infrastructure, and anticipatory governance for the nanotechnology of tomorrow. In this phase of development, there is a need to invest in dedicated R&D programs on new nanotechnology methods and system architectures suitable for various economic sectors, to adapt education programs and physical infrastructure to nanotechnology's unifying and integrative concepts, and to institutionalize nanotechnology in the respective societal institutions. The global nanotechnology-based labor force and markets are estimated to double about every 3 years, reaching a market of over $3 trillion encompassing 6 million jobs by 2020, and about one order of magnitude more by 2030, when nanotechnology will become a key competency and a significant component of GDP in developed countries.

Acknowledgments



This chapter is based on the author’s experience in the nanotechnology field, as founding chair of the NSET subcommittee coordinating the NNI and as a result of interactions in international nanotechnology policy arenas. The opinions expressed here are those of the author and do not necessarily reflect the position of U.S. National Science and Technology Council (NSTC)’s Subcommittee on Nanoscale Science, Engineering and Technology (NSET) or National Science Foundation (NSF).

References

1. Roco, M.C., Mirkin, C., and Hersam, M.C. (2011) Nanotechnology research directions for societal needs in 2020. Journal of Nanoparticle Research, 13 (3), 897–919. (The full report with the same title was published by Springer, Dordrecht, 2011, 670 pp.) Available at www.wtec.org/NBIC2-Report/.
2. Roco, M.C., Williams, R.S., and Alivisatos, P. (1999) Nanotechnology Research Directions: Vision for the Next Decade. IWGN Workshop Report 1999, National Science and Technology Council, Washington, DC. Also published in 2000 by Springer (formerly Kluwer Academic Publishers), Dordrecht. Available at http://www.wtec.org/loyola/nano/IWGN.Research.Directions/.
3. Roco, M.C., Bainbridge, W.S., Tonn, B., and Whitesides, G. (2013) Converging Knowledge, Technology and Society: Beyond Nano-Bio-Info-Cognitive Technologies, Springer. Available at www.wtec.org/NBIC2-Report/.
4. Roco, M.C. and Bainbridge, W.S. (2003) Converging Technologies for Improving Human Performance: Nanotechnology, Biotechnology, Information Technology and Cognitive Sciences, Springer.
5. Harper, T. (2015) AzoNano Reviews.
6. GAO (Government Accountability Office) (2014) Forum on Nanomanufacturing. Report to Congress, GAO-14-181SP.
7. Lux Research (2016) Revenue from nanotechnology (Figure 11), in Nanotechnology Update: U.S. Leads in Government Spending Amidst Increased Spending Across Asia (eds I. Kendrick, A. Bos, and S. Chen), Lux Research, Inc. report to NNCO and NSF, New York, p. 17.
8. PCAST (2014) Report to the President and Congress on the Fifth Assessment of NNI, White House, Washington, DC. Available at http://www.whitehouse.gov/sites/default/files/microsites/ostp/PCAST/pcast_fifth_nni_review_oct2014_final.pdf.
9. Roco, M.C. (2014) Nanotechnology Tomorrow: To System Integration and Societal Immersion. Keynote presentation, Nano Monterrey, November 10.
10. Lok, C. (2010) Nanotechnology: small wonders. Nature, 467, 18–21.
11. Clinton, W.J. (2000) Remarks at Caltech, January 21. Presidential archives, http://www.gpo.gov/fdsys/pkg/WCPD-2000-01-24/html/WCPD-2000-01-24-Pg122-3.htm.







1 Learning from Experience




An old Jewish philosopher once said, “Ask me any question, and if I know the answer, I will answer it. And, if I don't know the answer, I'll answer it anyway.” Me too. I think I know the answer to all control questions. The only problem is, a lot of my answers are wrong. I've learned to differentiate between wrong and right answers by trial and error. If the panel board operator persistently prefers to run a new control loop that I've designed in manual, if he switches out of auto whenever the flow becomes erratic, then I've designed a control strategy that's wrong. So, that's how I've learned to discriminate between a control loop that works and a control strategy best forgotten. Here's something else I've learned. Direct from Dr. Shinsky, the world's expert on process control:

“Lieberman, if it won't work in manual, it won't work in auto.”

“Most control problems are really process problems.”

I’ve no formal training in process control and instrumentation. All I know is what Dr. Shinsky told me. And 54 years of experience in process plants has taught me that’s all I need to know.

Troubleshooting Process Plant Control: A Practical Guide to Avoiding and Correcting Mistakes, Second Edition. Norman P. Lieberman. © 2017 John Wiley & Sons, Inc. Published 2017 by John Wiley & Sons, Inc.


LEARNING FROM PLANT OPERATORS

My first assignment as a Process Engineer was on No. 12 Pipe Still in Whiting, Indiana. This was a crude distillation unit. My objective was to maximize production of gas oil, as shown in Figure 1-1. The gas oil had a product spec of not more than 500 ppm asphaltenes. The lab required half a day to report sample results. However, every hour or two the outside operator brought in a bottle of gas oil for the panel board operator. The panel operator would adjust the wash oil flow based on the color of the gas oil.

While plant supervision monitored the lab asphaltene sample results, plant operators ignored this analysis. They adjusted the wash oil rate to obtain a clean-looking product. The operators consistently produced a gas oil product with 50–200 ppm asphaltenes. They were using too much wash oil. And the more wash oil used, the lower the gas oil production.

I mixed a few drops of crude tower bottoms into the gas oil to obtain a standard bottle of 500 ppm asphaltene material. I then instructed the panel board operators as follows:

• If the sample from the field is darker than my standard bottle, increase the wash oil valve position by 5%.
• If the sample of gas oil from the field is lighter than my standard, decrease the wash oil valve position by 3%.
• Repeat the above every 30 minutes.

The color of gas oil from a crude distillation unit correlates nicely with asphaltene content. The gas oil, when free of entrained asphaltenes, is a pale yellow. So, it seems that my procedure should have worked. But it didn't. The operators persisted in drawing the sample only every 1–2 hours.

Figure 1-1 Adjusting wash oil based on gas oil color


So, I purchased an online colorimeter. The online colorimeter checked whether the gas oil color was above or below my set point. At an interval of 10 minutes, it would move the wash oil valve position by 1%. This never achieved the desired color exactly, but the gas oil product was mixed in a tank. The main result was that gas oil production was maximized consistent with the 500 ppm asphaltene specification.

One might say that all I did was automate what the operators were already doing manually; that all I accomplished was marginally improving an existing control strategy by automating it. But in 1965 I was very proud of my accomplishments. I had proved, as Dr. Shinsky said, “If it does work on manual, we can automate it.”
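In today's terms, the colorimeter loop amounts to a slow, integral-only controller: every cycle it nudges the wash oil valve a fixed 1% toward the color set point. The following minimal sketch (Python) is illustrative only; the 25 ppm-per-percent process response is an invented number, not plant data.

    # Incremental color-based wash oil control: each cycle, move the valve
    # 1% toward the color set point (expressed here as ppm-equivalent color).
    def control_step(color_ppm, valve_pct, setpoint=500.0, step=1.0):
        if color_ppm > setpoint:
            valve_pct += step    # darker than the standard: more wash oil
        elif color_ppm < setpoint:
            valve_pct -= step    # lighter than the standard: less wash oil
        return min(100.0, max(0.0, valve_pct))   # stay within valve travel

    # Simulated 10-minute cycles: the gas oil starts too dark (600 ppm), and
    # each extra 1% of wash oil is assumed to lighten it by 25 ppm.
    valve, color = 40.0, 600.0
    for cycle in range(6):
        new_valve = control_step(color, valve)
        color -= (new_valve - valve) * 25.0
        valve = new_valve
        print(f"cycle {cycle}: valve {valve:5.1f}%, color {color:5.0f} ppm")

As in the plant, the loop never has to hit the set point exactly; small dither around the target blends out in the product tank.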

LEARNING FROM FIELD OBSERVATIONS

Forty-eight years ago I redesigned the polypropylene plant in El Dorado, Arkansas. I had never paid much attention to control valves. I had never really observed how they operate. But I had my opportunity to do so when the polypropylene plant was restarted.

The problem was that the purchased propylene feed valve was too large for normal service. I had designed this flow for a maximum of 1600 BSD, but the current flow was only 100 BSD. Control valve response is quite nonlinear. Nonlinear means that if the valve is open by 5%, you might get 20% of the flow; if you open the valve from 80 to 100%, the flow goes up by an additional 2%. Nonlinear response also means that you cannot precisely control a flow if the valve is mostly closed. With the flow at only 20% of the design flow, the purchased propylene feed was erratic. This resulted in erratic reactor temperature and erratic viscosity of the polypropylene product.

The plant start-up had proceeded slowly. It was past midnight. The evening was hot, humid, and very dark. I went out to look at the propylene feed control valves. Most of the flow was coming from the refinery's own propylene supply; this valve was half open. But the purchased propylene feed valve was barely open. The valve position indicator, as best I could see with my flashlight, was bumping up and down against the “C” (closed) mark on the valve stem.

The purchased propylene charge pump had a spillback line, as shown in Figure 1-2. I opened the spillback valve. The pump discharge pressure dropped, and the propylene feed valve opened to 30%. The control valve was now operating in its linear range.

Now, when I design a control valve to handle a large reduction in flow, I include an automated spillback valve from pump discharge to suction. The spillback controls the pump discharge pressure to keep the FRC valve between 20 and 80% open (a simple sketch of this logic follows Figure 1-2 below). Whenever I sketch this control loop I recall that dark night in El Dorado. I also recall the value of learning even the most basic control principles by personal field observations.


Figure 1-2 Opening spillback to keep FRC valve in its linear operating range
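Here is a minimal sketch of that design rule, under illustrative assumptions (the 2% trim step and the names are invented here, not taken from any actual design): opening the spillback lowers the pump discharge pressure, which forces the feed control valve further open for the same flow; closing the spillback does the reverse.

    # Trim the pump spillback to hold the FRC valve in its roughly linear
    # 20-80% travel, per the design rule described above.
    def adjust_spillback(frc_valve_pct, spillback_pct, step=2.0):
        if frc_valve_pct < 20.0:
            spillback_pct += step   # valve nearly closed: spill more, so the
                                    # discharge pressure drops and the FRC opens
        elif frc_valve_pct > 80.0:
            spillback_pct -= step   # valve nearly wide open: spill less
        return min(100.0, max(0.0, spillback_pct))

    # That dark night in El Dorado: the feed valve bumping against closed.
    print(adjust_spillback(frc_valve_pct=5.0, spillback_pct=0.0))   # -> 2.0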

LEARNING FROM MISTAKES

Adolf Hitler did not always learn from his mistakes. For example, he once ordered a submarine to attack the Esso Lago Refinery in Aruba. The sub surfaced in the island's harbor and fired at the refinery. But the crew neglected to remove the sea cap from the gun's muzzle. The gun exploded and killed the crew.

I too had my problems in this refinery. The refinery flare was often very large and always erratic. The gas being burned in the flare was plant fuel. The plant fuel was primarily cracked gas from the delayed coker, supplemented (as shown in Fig. 1-3) by vaporized LPG. So much fuel gas was lost by flaring that 90% of Aruba's LPG production had to be diverted to fuel, via the propane vaporizer.

I analyzed the problem based on the dynamics of the system. I modeled the refinery's fuel consumption versus cracked gas production as a function of time. The key problem, based on my computer system dynamic analysis, was the cyclic production of cracked gas from the delayed coker complex. My report to Mr. English, the General Director of the Aruba Refinery, concluded:

1. The LPG vaporizer was responding too slowly to changes in cracked gas production from the delayed coker.
2. The natural log of the system time constants of the coker and vaporizer was out of synchronization.
3. A feed-forward, advanced computer control based on real-time dynamics would have to be developed to bring the coker and vaporizer systems into dynamic real-time equilibrium.
4. A team of outside consultants, experts in this technology, should be contracted to provide this computer technology.

Six months passed. The complex, feed-forward computer system was integrated into the LPG makeup and flaring controls shown in Figure 1-3.


Figure 1-3 Unintentional flaring caused by malfunction of LPG makeup control valve is an example of split-range pressure control

Adolf Hitler would have been more sympathetic than Mr. English. The refinery's flaring continued just as before. Now what?

Distressed, discouraged, and dismayed, I went out to look at the vaporizer. I looked at the vaporizer for many hours. After a while I noticed that the fuel gas system pressure was dropping. This happened every 3 hours and was caused by the cyclic operation of the delayed coker. This was normal. The falling fuel gas pressure caused the instrument air signal to the LPG makeup valve to increase. This was an “Air-to-Open” valve (see Chapter 13), and more air pressure was needed to open the propane flow control valve. This was normal. But the valve position itself did not move. The valve was stuck in a closed position. This was not normal.

You will understand that the operator in the control room was seeing the LPG propane makeup valve opening as the fuel gas pressure dropped. But the panel board operator was not really seeing the valve position; he was only seeing the instrument air signal to the valve.

Suddenly, the valve jerked open. The propane whistled through the valve. The local level indication in the vaporizer surged up, as did the fuel gas pressure. The flare valve opened to relieve the excess plant fuel gas pressure and remained open until the vaporizer liquid level sank back down, which took well over an hour.

This all reminded me of the sticky side door to my garage in New Orleans. I sprayed the control valve stem with WD-40, stroked the valve up and down with air pressure a dozen times, and cleaned the stem until it glistened. The next time the delayed coker cycled, the flow of LPG slowly increased to catch the falling fuel gas pressure, but without overshooting the pressure set point and initiating flaring.


My mistake had been to assume that the field instrumentation and control valves were working properly. I did not take into account the probability of a control valve malfunction. But at least I had learned from my mistake, which is more than you could say for Adolf Hitler.

LEARNING FROM THEORY

Northwestern University has an excellent postgraduate chemical engineering program. I know this because I was ejected from their faculty. I had been hired to present a course to their graduate engineers majoring in process control. My lecture began:

“Ladies and gentlemen, the thing you need to know about control theory is that if you try to get someplace too fast, it's hard to stop. Let's look at Figure 1-4. In particular, let's talk about tuning the reflux drum level control valve. Do I want to keep the level in the drum close to 50%, or doesn't it matter? As long as the level doesn't get high enough to entrain light naphtha into the fuel gas, that's okay. What is not okay is to have an erratic flow feeding the light naphtha debutanizer tower. On the other hand, if the overhead product were flowing into a large feed surge drum, then precise level control of the reflux drum would be acceptable.

In order for the instrument technician to tune the level control valve, you have to show him what you want. To do this, put the level valve on manual. Next, manipulate the light naphtha flow to permit the level swings in the reflux drum you are willing to tolerate. But you will find that there is a problem. If you try to get back to the 50% level set point quickly, you will badly overshoot your level target.

Figure 1-4 Tuning a level control valve depends on what is downstream


If you return slowly to the set point, it's easy to reestablish the 50% level target. However, the level will be off target for a long time. In conclusion, ladies and gentlemen, tuning a control loop is a compromise between the speed at which we wish to return to the set point and our tolerance for overshooting the target. To establish the correct tuning criteria, the control loop is best run on manual for a few hours by the Process Control Engineer. Thank you. Class adjourned for today.”
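The compromise in that lecture can be made concrete with a toy simulation (Python; every number is invented for illustration, and this is not a tuning procedure). A PI controller moves the naphtha outflow to bring an integrating drum level back to the 50% set point after an upset: aggressive settings get back fast but overshoot well below the target, while gentle settings barely overshoot yet remain off target far longer.

    # Toy drum: the level integrates the mismatch between inflow and outflow.
    def simulate(kc, ti, steps=300, dt=0.5):
        level, integ, inflow = 60.0, 0.0, 10.0   # start 10% above set point
        trace = []
        for _ in range(steps):
            error = level - 50.0                  # positive when level is high
            integ += error * dt / ti              # integral ("reset") action
            outflow = inflow + 0.4 * kc * (error + integ)   # PI on the outflow
            level += (inflow - outflow) * dt * 0.5          # drum holdup factor
            trace.append(level)
        return trace

    for kc, ti in [(5.0, 2.0), (0.5, 200.0)]:     # aggressive vs gentle tuning
        t = simulate(kc, ti)
        print(f"Kc={kc}, Ti={ti}: lowest level {min(t):.1f}%, final {t[-1]:.1f}%")

With these invented numbers, the aggressive loop dips a couple of percent below the set point before settling, while the gentle loop undershoots by only a fraction of a percent but is still creeping back when the run ends.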

My students unfortunately adjourned to Dean Gold's office. Dean Gold lectured me about the students' complaints. “Mr. Lieberman, did you think you were teaching a junior high school science class or a postgraduate course in process control?”

And I said, “Oh! Is there a difference?” So that's how I came to be ejected from the faculty of Northwestern University after my first day of teaching.

LEARNING FROM RELATIONSHIPS

My ex-girlfriend used to tell me, “Norm, the reason we get along so well is that I give you a lot of positive feedback.” From this I developed the impression that positive feedback is good. Which is true in a relationship with your girlfriend. But when involved in a relationship with a control loop, we want negative feedback. Control logic fails when in the positive feedback mode of control. For example:

• Distillation—As process engineers and operators we have the expectation that reflux improves fractionation, which is true, up to a point. That point where more reflux hurts fractionation instead of helping is called the “incipient flood point.” Beyond this point, the distillation tower is operating in a positive feedback mode of process control: the tray flooding reduces tray fractionation efficiency, and more reflux simply makes the flooding worse.

• Fired Heaters—Increasing furnace fuel should increase the heater outlet temperature. But if the heat release is limited by combustion air, then increasing the fuel gas will reduce the heater outlet temperature. As the heater outlet temperature drops, the automatic control calls for more fuel gas, which does not burn, and the outlet temperature falls further. The heater automatic temperature control loop is now in the positive feedback mode of control. As long as this control loop is on auto, the problem will feed upon itself.


Figure 1-5 Too much steam flow causes a loss in vacuum



• Vacuum Ejector—Some refineries control vacuum tower pressure by controlling the motive steam flow to the steam ejector. As the steam pressure and flow to the ejector increase, the ejector pulls a better vacuum, as shown in Figure 1-5. But as the steam flow increases, so does the load on the downstream condenser. As the condenser becomes overloaded, the ejector discharge pressure rises. At some point the increased discharge pressure adversely affects the ejector's suction pressure. A further increase in motive steam will then make the vacuum worse, instead of better. As the vacuum gets worse, the control loop calls for more steam. Having now entered the positive feedback mode of control, the problem feeds upon itself.

Many control loops are subject to slipping into a positive feedback loop; a simple simulation of the fired heater case follows below. The only way out of this trap is to switch the controls to manual and slowly climb back out. Once you guess (but there is no way to know for sure) that you are in the safe, negative feedback mode of control, you can then safely switch back to automatic control.
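To see the trap numerically, here is a toy simulation of the air-limited fired heater (Python; every number is invented for illustration, not plant data). Fuel beyond the combustion-air limit does not burn and only cools the firebox, yet the controller on auto answers the falling temperature with still more fuel:

    # Air-limited heater: fuel beyond the air limit is unburned and only
    # quenches the outlet temperature.
    def outlet_temp(fuel, air_limit=100.0):
        burned = min(fuel, air_limit)
        unburned = max(0.0, fuel - air_limit)
        return 400.0 + 3.0 * burned - 2.0 * unburned   # deg F, toy model

    fuel, setpoint = 95.0, 710.0   # note: 710 F is unreachable (maximum is 700 F)
    for minute in range(10):
        temp = outlet_temp(fuel)
        fuel += 0.1 * (setpoint - temp)   # controller on auto asks for more fuel
        print(f"min {minute}: outlet {temp:6.1f} F, fuel {fuel:6.1f}")

Once the fuel rate crosses the air limit, each pass around the loop lowers the temperature and raises the fuel demand; the divergence stops only when someone switches to manual and cuts the fuel.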

NORMAL PURPOSE OF CONTROL LOOPS

Typically, a control loop is tuned to achieve two objectives:

1. To return a variable to its set point as fast as possible.
2. To avoid overshooting the set point.


If a heater outlet set point is at 700°F, and it's currently running at 680°F, the firing rate should increase. However, if the firing rate increases too fast, the heater outlet may jump past the set point to 720°F. Tuning a control loop means adjusting the instrument's “gain and reset” to balance these two contradictory objectives. The balance between gain and reset (i.e., instrument tuning) is not the main subject of this text. Only rarely have I seen a panel board operator complain about this problem.

Another purpose of control is to optimize process variables. This is an advanced control that attempts to optimize certain variables, and it is also not the sort of problem that the panel operator would be concerned about. An example of advanced control would be to optimize the ratios of several pumparounds, versus the top reflux rate, for a refinery crude distillation tower. For the units I work on, such advanced computer control is rarely used, or has been simplified so that it is not much different from ordinary closed-loop control.

MANUAL VERSUS AUTO

In reality, the main complaint about control loops communicated to me by operators is that the controller will not work in the automatic mode of control, and that the operators are forced to run the control loop in manual. This greatly increases and complicates their work. To a large extent, this text examines why control loops are forced to run in the manual mode. A few of the reasons are the following:

1. The control loop is trapped in a “positive feedback loop.” This is often a dangerous situation.
2. There is no direct relationship between the variable being controlled and the response of the control valve. This is typically a design error.
3. The facility that measures the process parameter in the field is not working correctly. This represents the majority of control problems that I have seen.
4. The bypass valve around the control valve is open.
5. The control valve is running too far closed because it is oversized, or badly eroded.
6. The control valve is running too far open because it is too small, or its port size is too small, or an isolation gate valve in the system is partly closed.
7. The control valve's “Hand Jack” has been left engaged. Thus, the control valve cannot be manipulated from the computer console or panel. (The hand jack is a mechanical device used to manually move the control valve in the field.)
8. The control valve is stuck in a fixed position.


9. The air signal connection to the diaphragm that moves the control valve has come loose.
10. The diaphragm is leaking, so sufficient instrument air pressure cannot be applied to the control valve mechanism to force it to move.

PROCESS CONTROL NOMENCLATURE

The reader who is new to process plant vocabulary may wish to briefly skip to the glossary at the end of this book. I have assembled a list of “Process Control Nomenclature Used in Petroleum Refineries and Petrochemical Plants.” As in any other industry, your coworkers will have developed a vocabulary of their own, and will assume you understand the terms they employ. To an extent, in the following chapters of this text, I have also made a similar assumption. A brief review of these terms may make it easier for you to communicate with some of your coworkers.