Prediction of Reactive Multiphase Flows in Chemical Looping Combustion

Schalk Cloete

Note: This version omits the collection of papers (pp. 85 – 464) in order to avoid copyright infringement.

In case both the scientific consensus on climate change and the consensus forecast on global energy consumption prove to be reasonably accurate...

Abstract

The fundamental motivation behind this work is the large mismatch between the current trajectory of the global economy and the recommendations of climate science. Given the economic pressures created by the seemingly permanent quadrupling of the oil price and the large burdens of debt and unfunded liabilities carried by most developed nations, this mismatch is likely to persist for many years into the future. The result is a paradoxical situation where the need for rapid development and deployment of low-carbon energy technologies is greater than ever, but the amount of capital available for such developments remains far from adequate.

In order to most effectively address this unique problem, this thesis is focussed on accelerating the development of one very promising low-carbon technology: Chemical Looping Combustion (CLC). It is postulated that reactive multiphase flow modelling can contribute greatly towards advancing CLC technology to the stage of commercial readiness within the very strict funding limitations faced by CO2 Capture and Storage (CCS) technologies. The primary objective is therefore to advance the current state of the art in reactive multiphase flow modelling to the point where it can make significant positive contributions to the CLC development process.

As a first step, the limits of current state of the art models were determined, finding that even the mature Two Fluid Model (TFM) can already provide industrially interesting simulation results in certain cases featuring dense fluidized beds, larger particle sizes and slower reaction rates. Substantial effort was invested in forming a more fundamental understanding of the grid independence behaviour of TFM simulations, revealing the particle relaxation time as a surprisingly reliable predictor of the cell size required for grid independence.
However, as the particle size was decreased, the fluidization velocity was increased, the bed width was decreased and the reaction rate was increased, the simulation problem rapidly grew in complexity, both due to the fine meshes required and due to uncertainties related to various closure model coefficients. For the wide range of cases that will remain out of reach of the TFM approach for the foreseeable future, four alternative modelling methodologies were investigated: 2D simulations, a Lagrangian parcel-based approach, a filtered Eulerian approach and a phenomenological 1D approach. The bulk of the work was dedicated to the relatively new Lagrangian parcel-based approach, which was thoroughly tested and improved by adding the transport of granular temperature and the influence of the full stress tensor on particle motion. Practical experience was gained with the other approaches as well, ultimately allowing for the tabulation of the pros and cons of these approaches and the formulation of clear recommendations for future work. Experience suggests that no single approach will be generically applicable over all fluidization cases within the foreseeable future. Modellers should therefore respect the different strengths and weaknesses of each approach in order to select the most efficient modelling approach for any given application.

A large amount of effort was also dedicated to model validation, both against published experiments and against experiments carried out within the project. Although a number of unexplained discrepancies remain, comparisons to experiments focussing on hydrodynamics, species transfer and heterogeneous reactions were generally encouraging. Furthermore, experience gained with the operation of the novel reactive unit constructed in this project will be very valuable in directing future reactive validation studies. It was also found that building dedicated experiments was a significantly better investment than the inefficient practice of trying to validate against published experimental data that was not collected for the primary purpose of model validation.

Finally, practical experience was gained with two possible applications of reactive multiphase flow modelling to accelerate the development of CLC: virtual prototyping of new process concepts and process optimization. In both cases, the fundamental advantages of such a simulation-based process design strategy were found to be highly attractive. Virtual prototyping granted complete creative freedom in the design of new reactor concepts, and statistical optimization methods that were previously practically impossible became highly practical.

In summary, it was found that the simulation-based process design of fluidized bed reactors such as those employed in the CLC process is already feasible over a range of flow conditions. A number of alternative modelling approaches which have been further developed and tested in this project will gradually extend the range of model applicability over coming years. These results, together with encouraging comparisons to dedicated validation experiments, clearly indicate that reactive multiphase flow modelling can now start the transition from development to application. It is therefore recommended that industry is gradually engaged through intelligently selected applications where current state of the art models can reliably predict the performance of industrially relevant fluidized bed reactors. Such a conscious shift to model application is vital to accelerate the development of CLC technology so that a commercially viable process can be made available for deployment as soon as the policy environment finally becomes favourable for CCS.

The thesis is presented in three parts:

1. Setting the stage: This introductory section gives background information on the necessity for second generation CO2 capture technology such as CLC and on the role that reactive multiphase flow modelling can play in accelerating the development process.
2. Review of technical work: This is the main section of the thesis and binds together, in a coherent manner, the conclusions drawn from all the papers completed in this project.
3. Collection of papers: Finally, all the technical papers referenced in the review section (published and unpublished) are included as an appendix for ease of reference.


Acknowledgements

First and foremost, I would like to express my sincere gratitude to the Research Council of Norway for funding this work. Given that the original project proposal was submitted in a very competitive call after only about one year of experience in the fields of fluidized bed reactor modelling and Chemical Looping Combustion, it came as a very pleasant surprise when we were given the opportunity to expand our knowledge base through three years of focussed research. Since the start of the project, much technical progress has been made (summarized in this thesis) and continuation of this work has been ensured through the acquisition of two EU-funded projects and another Research Council project. None of this would have been possible without the initial support for this project.

Secondly, we gratefully acknowledge our project partners from the Eindhoven University of Technology: Dr. Fausto Gallucci and Prof. Martin van Sint Annaland. This project would simply not have been possible without their extensive experience in this field and their world-leading experimental facilities.

My sincere gratitude also goes to Dr. Abdelghafour Zaabout, who constructed and operated the experimental units in this project. The number of challenges that Dr. Zaabout had to overcome to produce the results reported in this thesis is truly daunting and he therefore deserves special merit for his contribution to this project. Not many people would have been able to construct and successfully operate a novel high temperature reactor concept while being guided by tedious long-distance communication and making do with only half the time originally envisioned for this task.

Finally, I'd like to extend a big thank you to my supervisors, Prof. Stein Tore Johansen and Dr. Shahriar Amini, for guiding my actions from their uniquely different perspectives.
They certainly are two of the busiest individuals I have ever come across in life and I greatly appreciate every hour of fruitful discussions we have had over the past three years. I am constantly amazed by Prof. Johansen's deep fundamental understanding of multiphase flow modelling and by Dr. Amini's ability to link scientific research to market realities and ensure continuation of the work. A doctoral student simply could not wish for more balanced or better qualified supervision and guidance.


Table of contents

Part 1: Setting the stage .......................................... 1
Chapter 1: The sustainability crisis ............................... 2
Chapter 2: The need for CCS ........................................ 7
Chapter 3: Second generation CO2 capture technologies ............. 10
Chapter 4: The role of simulation-based process design ............ 15
    4.1 The fundamental advantages of simulation-based process design ... 15
    4.2 Formalizing the role of simulation-based process design ... 18
    4.3 Steps in reactor model development and demonstration ...... 20
        4.3.1 Model development ................................... 20
        4.3.2 Model validation .................................... 21
        4.3.3 Multiscale modelling ................................ 22
Chapter 5: Overall technical objectives ........................... 26
    5.1 Performance against original objectives ................... 26
    5.2 Additional objectives added throughout the project ........ 26
Part 2: Review of technical work .................................. 29
Chapter 6: The TFM and the KTGF ................................... 30
    6.1 Grid independence behaviour ............................... 30
    6.2 Sensitivity to closure laws and coefficients .............. 36
    6.3 Summary ................................................... 40
Chapter 7: Simulating larger reactors ............................. 41
    7.1 2D modelling .............................................. 41
    7.2 The DDPM .................................................. 43
    7.3 The fTFM .................................................. 47
    7.4 1D phenomenological modelling ............................. 48
    7.5 Summary ................................................... 49
Chapter 8: Model validation ....................................... 52
    8.1 Cold flow validation ...................................... 52
        8.1.1 Comparisons to published data ....................... 52
        8.1.2 Comparisons to data collected within the project .... 54
    8.2 Reactive model validation ................................. 59
        8.2.1 Comparisons to published data ....................... 59
        8.2.2 Comparisons to data collected within the project .... 62
    8.3 Summary ................................................... 66
Chapter 9: Model application ...................................... 67
    9.1 Virtual prototyping of new reactor concepts ............... 67
    9.2 Process optimization ...................................... 70
    9.3 Summary ................................................... 72
Chapter 10: Conclusions and recommendations ....................... 73
    10.1 Fluidized bed simulations cannot be painted with a common brush ... 73
    10.2 A range of modelling approaches is required to cover all applications ... 74
    10.3 Dedicated validation experiments are of central importance ... 75
    10.4 Simulation-based process design is a potential gamechanger ... 76
Chapter 11: Nomenclature .......................................... 77
    11.1 Abbreviations ............................................ 77
    11.2 Clarification of concepts ................................ 78
Chapter 12: References ............................................ 79
Part 3: Collection of papers ...................................... 85
Chapter 13: Journal papers ........................................ 87
    13.1 Paper 1 .................................................. 87
    13.2 Paper 2 .................................................. 99
    13.3 Paper 3 ................................................. 127
    13.4 Paper 4 ................................................. 143
    13.5 Paper 5 ................................................. 155
    13.6 Paper 6 ................................................. 169
    13.7 Paper 7 ................................................. 183
    13.8 Paper 8 ................................................. 199
Chapter 14: Conference papers .................................... 211
    14.1 Conference 1 ............................................ 211
    14.2 Conference 2 ............................................ 221
    14.3 Conference 3 ............................................ 231
    14.4 Conference 4 ............................................ 241
    14.5 Conference 5 ............................................ 257
Chapter 15: Draft papers ......................................... 267
    15.1 Draft 1 ................................................. 267
    15.2 Draft 2 ................................................. 295
    15.3 Draft 3 ................................................. 315
    15.4 Draft 4 ................................................. 339
    15.5 Draft 5 ................................................. 359
    15.6 Draft 6 ................................................. 377
Chapter 16: Incomplete drafts .................................... 407
    16.1 Appendix 1 .............................................. 407
    16.2 Appendix 2 .............................................. 429
Chapter 17: Supplementary papers ................................. 441
    17.1 Appendix 3 .............................................. 441
    17.2 Appendix 4 .............................................. 455

Part 1: Setting the stage

Before the technical contributions of this project are discussed, this first part of the thesis will set the stage by putting the work in the correct perspective. Five chapters will be covered:

• Chapter 1: The sustainability crisis. This first chapter will give a brief overview of unsustainable trends within our environment, our economy and our society that will have to be very carefully dealt with as we progress through the 21st century.
• Chapter 2: The need for CCS. Subsequently, the potential of CCS in mitigating this sustainability crisis will be discussed.
• Chapter 3: Second generation CO2 capture technologies. This chapter will look at the necessity of the second generation of CO2 capture technologies currently being developed, with a special focus on CLC.
• Chapter 4: The role of simulation-based process design. The need for more cost effective process development and scale-up through simulation tools such as reactive multiphase flow modelling will be explored.
• Chapter 5: Overall technical objectives. Finally, the project objectives will be stated in line with insights from the preceding chapters in order to introduce the technical part of the thesis.


Chapter 1: The sustainability crisis

Since fossil fuels ignited the second industrial revolution, the world has experienced 150 years of unchecked exponential growth, resulting in a staggering 5000% economic expansion over this period (Figure 1). This unique multi-generational historical period has engrained perpetual exponential growth deep into our economy and societal expectations, but this previously unquestioned paradigm is now being challenged by the limitations of our finite planet.

Figure 1: Long-term historical real GDP (trillions of 1990 International Geary-Khamis dollars) and real GDP growth rate (%) [1]. 2008-2012 data from [2].

The seemingly permanent quadrupling of the oil price and the economic effects that this has had on highly indebted oil-importing developed nations has given society the first major real-world example of what eventually happens to an exponentially expanding society within a finite environment. In the USA, emergency measures were taken to rescue the economy after the crash of 2008 in the form of unprecedented monetary stimulus from the Federal Reserve and large-scale government borrowing that pushed public debt from $9 trillion to $17 trillion. Despite these measures, however, unemployment [3] remains stubbornly high while median wealth [4] and income [5] have dropped back to levels last seen in the early 1990s. In the EU, which remains the world's largest economy when viewed as one entity, there has been no recovery whatsoever. GDP has not regained pre-crisis levels [6], unemployment is at all-time highs and bailouts of various sorts have become the norm.

As a direct result of the total dependence of the global economy on perpetual exponential economic growth, many disruptive events similar to the great oil shock of the previous decade can be foreseen in the medium-term future. Since we live on a finite planet, peaks/plateaus in other natural resources and/or a marked increase in extreme weather events will result in further cost-push inflation. Such inflationary pressures combined with the enormous (and growing) burdens of debt and unfunded liabilities carried by developed nations could result in further 2008-like financial shocks. Meanwhile, aging population demographics, structural unemployment problems, increases in inequality and further increases in population will continue to erode societal resilience.

Fossil fuels lie at the foundation of this sustainability dilemma. The abundant energy derived from fossil fuels has sustained exponential growth for such a long time that the Global Footprint Network [7] now estimates that the global population consumed resources and produced wastes in 2008 that would take the planet 1.52 years to replenish and process. Despite this substantial ecological overshoot, however, the debt-based nature of the global economy makes sustained exponential growth absolutely mandatory for economic stability and this is clearly reflected in the modern policy landscape.

The global reliance on fossil fuel-driven exponential growth is therefore far from over. The world still derives fully 87% of its energy from fossil fuels and, as shown in Figure 2, this trend is expected to continue over coming decades. Three mainstream energy predictions from the International Energy Agency (IEA), British Petroleum (BP) and the US Energy Information Administration (EIA) are contrasted with one independent prediction by the Energy Watch Group (EWG).

Figure 2: Energy mix forecasts (primary energy consumption in Gtoe for oil, coal, gas, nuclear and renewables) by the IEA [8], BP [9], the EIA [10] and the EWG [11].

Aside from the much lower renewables contribution reported by the IEA, which arises because the IEA does not convert renewable electricity to primary energy by dividing by the mean efficiency of thermal power plants (normally about 37%), the mainstream consensus is clear: continued growth in all fossil fuel consumption over the coming decades. Exxon Mobil [12], Shell [13] and MIT [14] have produced similar predictions, resulting in an overwhelming mainstream consensus for decades of increased fossil fuel combustion. Despite the increased flurry of attention following the recent oil shock, the theory of peak oil and other resource peaks such as those predicted by the EWG remains a minority viewpoint.
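This primary-energy accounting difference is easy to quantify. The sketch below is the author's illustrative calculation (the 0.5 Gtoe input is an arbitrary example, not a figure from the cited forecasts), assuming the "substitution method" used by agencies that divide renewable electricity by a mean thermal-plant efficiency of 37%:

```python
def substitution_primary_energy(electricity_gtoe, thermal_efficiency=0.37):
    """Convert renewable electricity to its primary-energy equivalent by the
    substitution method: the fossil fuel that would have been burned to
    generate the same electricity in a thermal power plant."""
    return electricity_gtoe / thermal_efficiency

# Illustrative: 0.5 Gtoe of renewable electricity counted directly
# versus converted by the substitution method.
direct = 0.5
substituted = substitution_primary_energy(direct)
print(round(substituted, 2))  # 1.35 Gtoe, i.e. a factor of ~2.7 larger
```

This factor of roughly 2.7 explains why the same renewable generation can look so different across the forecasts compared in Figure 2.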

This projected increase in fossil fuel combustion is calculated to support the projected exponential expansion of the global economy at an annual rate of 3.5% (resulting in a doubling every two decades). Very optimistic reductions in energy intensity (the ratio of energy consumption to GDP) are also assumed. The IEA projects a yearly reduction in energy intensity of 1.8%, which would be a great improvement on the 0.5% reduction achieved over the previous decade.

From a climate viewpoint, the mainstream consensus is that long-term atmospheric CO2 concentrations will greatly exceed the reasonably safe limit of 450 ppm. The CO2 emissions pathways predicted by the organizations mentioned in Figure 2 are given in Figure 3 for some perspective (the 450 ppm scenario was also compiled by the IEA). It is evident that there is a clear disconnect between the energy pathways predicted by all the mainstream energy authorities and the 450 ppm pathway. Not even the fossil fuel peak predicted by the EWG (which could conceivably lead to great economic disruption) results in the necessary reductions in CO2 emissions.

Figure 3: CO2 emissions (Gt/year) as predicted by the IEA [8], BP [9], the EIA [10] and the EWG [11], together with the 450 ppm scenario. The EWG curve was calculated based on supply data in Figure 2.
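The growth arithmetic behind these projections can be verified directly. The following is the author's illustrative check (not taken from the cited reports) of how 3.5% annual GDP growth compounds to a doubling in about two decades, and how the projected 1.8% annual energy-intensity decline only partially offsets it:

```python
import math

def doubling_time(annual_growth_rate):
    """Years for a quantity growing at a fixed annual rate to double."""
    return math.log(2) / math.log(1 + annual_growth_rate)

print(round(doubling_time(0.035), 1))  # 20.1 years at 3.5% growth

# Net energy demand growth: GDP growth compounded with the intensity decline.
gdp_growth, intensity_decline = 0.035, 0.018
net_energy_growth = (1 + gdp_growth) * (1 - intensity_decline) - 1
print(round(100 * net_energy_growth, 2))  # ~1.64% per year: demand still rises
```

Even under the optimistic intensity assumption, energy demand keeps growing at over 1.6% per year, which is consistent with the rising emissions pathways in Figure 3.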

There is good reason for the continued reliance on fossil fuels predicted by all major energy authorities. Fossil energy is highly scalable, it gives a large (but declining) energy return on energy invested (EROEI), it is highly concentrated, it is dispatchable on demand, it is easily transported and, thanks to its natural occurrence in gas, liquid and solid forms, it fits an incredibly diverse set of applications. No alternative energy source can even come close to matching this unique set of advantages.

Many pundits feel that renewable energy should expand greatly over coming decades in order to quickly wean society from fossil fuels. However, renewable energy faces substantial EROEI challenges caused by the diffuse and variable nature of the two most naturally abundant forms: solar and wind. The diffuse nature of solar and wind requires vast areas to be covered with wind turbines and solar panels, thereby demanding great quantities of energy intensive infrastructure. Variability over a wide range of time- and length scales demands vast amounts of energy storage and long-distance electricity transport, requiring additional energy intensive infrastructure and causing losses from energy transformation and transmission. The fact that the bulk of renewable energy will be in the form of electricity will also require a complete revamp of industries such as transportation, steelmaking and cement, leading to further energy costs and inefficiencies.

This is a substantial problem because a complex modern society requires an estimated minimum EROEI of around 10:1 to maintain such a level of societal complexity. This point is illustrated in Figure 4, which depicts the increase in oil EROEI necessary to support a society of increasing complexity. Naturally, as societal complexity increases, an ever-increasing quantity of energy is required for purposes other than harvesting, refining and distributing new energy. Therefore, as a society modernizes, the quality demands on its energy resources continue to increase.

Figure 4: A graphical representation of the increasing EROEI necessary to support a society of increasing complexity [15, 16] (republished with permission).

The EROEI of the most scalable renewable energy resources, wind and solar, has been estimated at an average of 18:1 and 6.8:1 respectively [17]. These figures might be able to sustain a complex society according to Figure 4, but they contain the implicit assumption that intermittent electricity from wind and solar is just as valuable as on-demand energy from oil. One very recent study [18] made a first estimate of the additional energy costs related to battery storage, additional grid infrastructure and installation services which are necessary to increase the reliability of intermittent solar PV. The result was a reduction in the overall EROEI of solar PV to about 2.1:1. Only 4 hours of battery storage was included in this study, and it is clear that much more will be needed to balance out variations from longer weather patterns and seasonal changes.
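The way balancing costs erode EROEI can be made explicit with a simple ratio. The sketch below reproduces the logic in simplified form; the split of energy inputs (1 unit for generation, about 2.2 units for balancing) is the author's illustrative assumption chosen to reproduce the order of magnitude of the cited result, not a figure taken from [18]:

```python
def eroei_with_balancing(energy_out, energy_in_generation, energy_in_balancing):
    """EROEI when the embodied energy of storage, extra grid infrastructure
    and installation is added to the generation-side energy investment."""
    return energy_out / (energy_in_generation + energy_in_balancing)

# Bare-panel EROEI of 6.8:1 means 1 unit invested returns 6.8 units.
bare = eroei_with_balancing(6.8, 1.0, 0.0)
# Adding balancing inputs of roughly 2.2 units drags the ratio down to
# about 2.1:1, the order of magnitude reported for buffered solar PV.
buffered = eroei_with_balancing(6.8, 1.0, 2.2)
print(round(bare, 1), round(buffered, 1))
```

The point of the sketch is that balancing inputs add directly to the denominator, so even modest storage requirements can cut the effective EROEI by a factor of three or more.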


It is therefore clear that renewable energy will require a number of gamechanging technological breakthroughs, both in generation and in balancing, before it can be viewed as a viable alternative to fossil fuels. However, fossil fuels themselves have also experienced significant declines in EROEI in recent decades. Oil is the foremost example, where EROEI has declined from 100:1 in the early 20th century to just over 10:1 today [17]. Figure 5 shows results from a study which modelled the oil price based on the oil EROEI and used the validated model to predict the oil price behaviour based on two different EROEI trends over the past two decades and into the future.

Figure 5: Oil price (2010$/barrel) predictions of an EROEI-based model [19], showing historical oil price data, model validation, and projections with linear and exponential declines in EROEI.

New shale gas and tight oil production in the USA has the potential to halt the steady EROEI decline of oil and gas for a number of years, but significant uncertainties still exist over the longer-term prospects of these new unconventional fossil energy sources. In general, coal is the only fossil fuel that still retains a very high EROEI of around 80:1 [17]. This is at least part of the reason why coal production has increased by 63.7% between 2000 and 2011 while gas production increased by 35.9% and oil production rose only 11.7% [20]. Since coal is the most polluting fossil fuel, this is a grave problem – a problem which arguably can only be solved by CCS.

For the interested reader: The author regularly writes articles on energy and climate issues for the popular moderated discussion forum "The Energy Collective" [21]. These articles contain deeper analysis of the energy and sustainability issues briefly touched upon in this first chapter.


Chapter 2: The need for CCS

In essence, CCS represents the middle ground between the serious risks involved in burning fossil fuels (catastrophic climate change) and the equally serious risks involved in not burning fossil fuels (economic and societal collapse). Risks at both extremes are very real and potentially catastrophic, but both can be minimized by finding a middle ground where the majority of remaining economically viable fossil fuels are burned, only without emitting the primary greenhouse gas: CO2.

When CCS deployment eventually takes off, the primary application will probably be low-carbon baseload power generation from coal (the only fossil fuel that retains a very high EROEI). As outlined above, coal has been rapidly increasing its market share in recent years and this momentum is set to continue. 1199 coal-fired power stations with a total capacity of 1400 GW are currently being proposed worldwide, with 76% of this proposed capacity residing in China and India [22]. For perspective, 1400 GW of coal-fired capacity at a capacity factor of 85% is equivalent to 7933 GW of solar PV at a capacity factor of 15% - 113 times more than the total installed solar PV capacity by the end of 2011 [20]. (Note: The capacity factor is the ratio between the actual generation and the nameplate capacity of an electricity generation facility. Solar and wind typically have low capacity factors simply because the sun does not always shine and the wind does not always blow.)

Thus, while every gigawatt of renewable energy will continue making headlines, coal will quietly continue to expand its role as the primary driver of catch-up growth in the developing world. CCS offers the most viable solution to this grave problem; not only for practical reasons, but also in terms of economics. As illustrated in Figure 6, even CCS with first generation CO2 capture technology is highly competitive against alternative low-carbon electricity sources.
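The capacity-factor comparison above can be checked directly. This short sketch (the author's illustrative calculation using the figures quoted in the text) converts nameplate capacities at different capacity factors into equivalent average generation:

```python
def equivalent_nameplate(base_gw, base_cf, other_cf):
    """Nameplate capacity at capacity factor `other_cf` delivering the same
    average power as `base_gw` of capacity at capacity factor `base_cf`."""
    return base_gw * base_cf / other_cf

# 1400 GW of coal at 85% capacity factor versus solar PV at 15%.
coal_gw, coal_cf, solar_cf = 1400, 0.85, 0.15
solar_equiv = equivalent_nameplate(coal_gw, coal_cf, solar_cf)
print(round(solar_equiv))  # 7933 GW of solar PV nameplate capacity
```

In other words, matching the average output of the proposed coal fleet would require roughly 7933 GW of solar PV nameplate capacity, which is the basis of the 113-fold comparison in the text.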

[Figure: bar chart, y-axis "CO2 abatement cost (2011$/ton)"; original image not reproduced.]

Figure 6: CO2 abatement costs for different technologies [23].

However, these CO2 abatement costs do not tell the whole story. Geothermal and hydro, despite being reliable and dispatchable renewable energy sources, are limited in total capacity and are unlikely to significantly increase their share in the global energy mix. Energy from biomass faces similar limitations in total capacity, has a low EROEI, faces numerous environmental concerns and often competes with food for limited agricultural land. Onshore wind has a reasonable total capacity and EROEI, but faces the problem of variability over a wide range of time- and length scales. The more expensive options of offshore wind and solar have a greater total capacity, but also have a lower EROEI and similar problems with intermittency. Nuclear energy also has a reasonable total capacity and is highly reliable, but it has fallen out of favour after the Fukushima accident.

CCS, on the other hand, maintains all of the advantages of the fossil fuels which have totally transformed the world over the past century, only without the majority of greenhouse gas emissions. It is also capable of reducing emissions which are already locked in by existing fossil fuel infrastructure and can (to a limited extent) act as a carbon sink when applied to biomass combustion. Another very important advantage of CCS is that it is applicable to many industrial processes which consume fossil fuels directly. These processes (e.g., iron and steel, oil refining, cement and chemicals & petrochemicals) are responsible for about a quarter of global CO2 emissions – emissions which cannot be economically abated by any means other than CCS.

As shown in Figure 7, adding CCS with first generation CO2 capture to fossil fuel power plants in Europe will require a CO2 price of roughly €40/ton for coal and a little more than €80/ton for natural gas. From these estimates, adding CCS to coal-fired plants adds about €34/MWh to the electricity price.

[Figure: stacked bar chart, y-axis "Levelized cost of electricity (€/MWh)"; cost components: power plant and CO2 capture, CO2 transport, CO2 storage; plant types: coal, coal + CCS, lignite, lignite + CCS, natural gas, natural gas + CCS; CO2 price levels of €20, €40 and €80/ton indicated.]

Figure 7: A breakdown of the levelized cost of electricity for different plants under different CO2 prices [24]. CO2 capture costs are calculated for currently available first generation technologies.

For some perspective, the EEG component added to the German electricity price to support the Energiewende (the much publicised German transition to renewables) currently stands at €52.8/MWh [25], primarily to support the rollout of wind and solar power which contributed 8.2% and 5% of total German electricity production respectively in 2012 [26]. This 13.2% contribution of intermittent power sources is now approaching the stage where balancing costs start to displace generation costs as the primary expense, implying that this already high renewable energy surcharge can only increase in the future (a further 20% increase is already in the pipeline).

As a result, rising electricity prices have become a central issue in the 2013 German elections, with all major parties promising substantial reforms of the legislative framework driving the Energiewende. Regardless of any reforms, however, Germany has committed itself to high electricity prices for the long haul because the feed-in tariff schemes driving renewable energy deployment are generally paid out over a period of decades. Unless Germany implements retroactive feed-in tariff cuts (which would severely hurt investor confidence and be a major setback for the global renewable energy industry), electricity prices can only go higher over the next decade or two.

However, despite the clear practical and economic disadvantages of renewables against CCS, their very large ideological clean energy appeal is driving an accelerating rollout of wind and solar power around the world. The modular nature of solar and wind also makes these energy sources highly suited to the feed-in tariff policies that have been the primary driver behind the global renewable energy rollout. Solar PV projects can be installed 1 kW at a time while the most economical CCS projects would be installed 1 GW at a time, making policy support for renewables much more convenient than for CCS. It is not inconceivable that CCS projects might win feed-in tariff support in the future, but because CCS lacks the ideological and modular advantages of renewables, this outcome is somewhat unlikely.

CCS will therefore probably only enjoy widespread deployment when a consistent and sufficiently high CO2 price is implemented. The European emissions trading scheme has proven to be totally ineffective in this regard and, due to the ongoing recession, has collapsed to levels of complete insignificance. In addition, the IEA projects CO2 prices to stay at levels below €40/ton for many years into the future, even under the highly unlikely 450 ppm policy scenario (Table 1).
Table 1: Carbon dioxide price in US$ per ton under the New Policies and 450 ppm scenarios [8]. The "shadow price" implies the expectation of a future CO2 price even though no direct tax or trading scheme is implemented.

Scenario       Region                                      2020   2030   2035
New Policies   EU                                            30     40     45
               Australia and New Zealand                     30     40     45
               Korea                                         23     38     45
               China                                         10     24     30
               USA, Canada and Japan (shadow price)          15      -     35
450 ppm        USA and Canada                                20     90    120
               EU                                            45     95    120
               Japan                                         25     90    120
               Korea                                         35     90    120
               Australia and New Zealand                     45     95    120
               China, Russia, Brazil and South Africa        10     65     95

For this reason, there is a great need to further reduce the costs of CCS so that an economically viable commercial technology can be made available as soon as a reliable CO2 price is finally implemented. As shown in Figure 7, CO2 capture is responsible for more than 80% of the total costs related to CCS, implying that this is the area in which the greatest cost reductions are possible. From this viewpoint, CO2 capture technology will have to be improved to the point where CO2 can be captured and compressed at costs below $25/ton. This is where second generation CO2 capture technologies must play a central role.


Chapter 3: Second generation CO2 capture technologies

The primary goal of second generation CO2 capture processes is to reduce the energy penalty (also called the parasitic load) imposed on the system. For first generation technologies applied to coal-fired power production, this penalty amounts to between 14% and 40% of the total plant output, with the lower end of the range applying to pre-combustion capture from expensive IGCC plants [27]. The energy penalty directly reduces the plant output per unit of fuel combusted, thereby increasing the levelized cost of electricity accordingly.

The nature of the energy penalty changes depending on the type of CO2 capture process under consideration. Three classes of CO2 capture are generally defined: post-combustion capture where CO2 is scrubbed from a process outlet stream, pre-combustion capture where hydrogen is produced and CO2 is separated out prior to combustion, and oxy-fuel combustion where pure oxygen is supplied to the combustor. For pre- and post-combustion systems, the energy penalty results mainly from the heat required to regenerate the solvent, while the penalty in oxy-fuel systems arises primarily from the air separation unit.

Second generation post-combustion CO2 capture technologies generally focus on developing better solvents/sorbents which reduce the energy penalty by reducing the amount of heat required in regeneration. This can be achieved either through a lower heat capacity or a better CO2 absorption capacity (requiring a smaller mass of sorbent/solvent). A recent paper has compared the potential of a number of second generation post-combustion capture systems. As shown in Figure 8, the two gas-solid processes, alkali metals and calcium looping, come close to the $25/ton target mentioned above and could therefore conceivably play a major role in the retrofitting of existing power plants or in capturing CO2 from carbon intensive industries such as steelmaking and cement.
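As a rough first-order sketch of how the energy penalty feeds into the levelized cost of electricity (illustrative numbers only, ignoring the capital cost of the capture equipment itself): the same fuel and capital now yield less saleable power, so the cost per MWh scales with the lost output.

```python
def lcoe_with_penalty(lcoe_base, penalty):
    """Scale the levelized cost of electricity (per MWh) when a fraction
    'penalty' of the gross plant output is consumed by CO2 capture."""
    return lcoe_base / (1.0 - penalty)

base = 50.0  # assumed base LCOE in EUR/MWh (illustrative only)
for penalty in (0.14, 0.25, 0.40):  # the range quoted above for first generation capture
    print(f"{penalty:.0%} penalty -> {lcoe_with_penalty(base, penalty):.1f} EUR/MWh")
```

Even the low end of the penalty range raises the energy-related cost per MWh by roughly a sixth, which is why reducing the parasitic load is the central goal of second generation technologies.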

[Figure: bar chart with two series, "Cost of electricity" (increase in levelized cost of electricity, %) and "Cost of CO2" (cost of CO2 avoided, $/ton), for amines, chilled ammonia, alkali metals, membranes and calcium looping.]

Figure 8: Levelized cost of electricity and costs of CO2 abatement from different post-combustion CO2 capture technologies [28].


Pre-combustion capture systems can also decrease costs through advanced sorbents/solvents, but membranes are perhaps the most promising area of focus since hydrogen is highly suited to membrane separation. This is a fairly new field, however, and reliable cost information is not yet available.

Finally, second generation oxy-fuel processes arguably show the most promise for delivering cost effective solutions at projected CO2 prices within reasonable timeframes. In first generation technologies, the bulk of the energy penalty comes from the air separation unit (about 14% of total plant power production). In addition, the CO2 stream typically contains water and must be dried before it can be compressed. The CO2 purification unit together with CO2 compression generally consumes another 9% of the total power output [29]. Little can be done about the power consumption of the CO2 purification and compression units, but the power consumption of the air separation unit can essentially be eliminated by the concept of Chemical Looping Combustion (CLC).

As illustrated in Figure 9, the standard CLC configuration consists of two fluidized bed reactors – an air reactor and a fuel reactor – which circulate an oxygen carrier (typically a metal oxide) in a closed loop. In the air reactor, the oxygen carrier is oxidized by air in a high temperature exothermic environment. The high-volume high-temperature flue gas from this reactor contains only depleted air which can drive a suitable power cycle. Subsequently, the oxidized oxygen carrier is transferred to the fuel reactor where it is contacted with a fuel gas and reduced. The outlet stream from this reactor is a wet CO2 stream which can be used for additional power production and subsequently sent to a CO2 purification unit to be prepared for transport and storage.
The CLC concept is therefore able to achieve virtually 100% CO2 capture (as opposed to about 90% for pre- or post-combustion systems) while eliminating the majority of the energy penalty.

[Figure: schematic of two interconnected reactors; air fed to the air reactor producing depleted air, fuel fed to the fuel reactor producing CO2 & H2O, with MeO circulating from the air reactor to the fuel reactor and Me returning.]

Figure 9: Simple schematic of a typical CLC system.
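For a generic metal Me and its oxide MeO (the specific carrier varies between systems; methane is used here purely as an example fuel), the two half-cycles in Figure 9 can be written as:

```latex
\begin{align*}
\text{Air reactor (exothermic):} \quad & 2\,\mathrm{Me} + \mathrm{O_2} \rightarrow 2\,\mathrm{MeO} \\
\text{Fuel reactor:} \quad & 4\,\mathrm{MeO} + \mathrm{CH_4} \rightarrow 4\,\mathrm{Me} + \mathrm{CO_2} + 2\,\mathrm{H_2O}
\end{align*}
```

The sum of the two reactions is simply conventional methane combustion, which is why the overall heat release of the fuel is preserved while the combustion products are never mixed with air.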

In addition, CLC also enjoys a thermodynamic advantage by avoiding a significant part of the exergy loss associated with fuel combustion. The heat release in the air reactor is significantly higher than that of standard combustion (the balance is consumed by the endothermic fuel reaction) and this increased energy release can be used for higher efficiency power production. In 1987, it was found that a gas-fired CLC system could achieve 50.2% efficiency compared to 45.9% for the combined cycle technology of the time [30]. A Global CCS Institute review recently estimated that CLC could achieve 41.4% efficiency in comparison to 39% for a pulverized coal plant with an ultra-supercritical steam cycle and 31.5% with CO2 capture through standard oxy-firing [29].

These are some very attractive advantages, but CLC is still no magic bullet and faces many challenges. The primary challenge is the selection of a suitable oxygen carrier. Most importantly, this oxygen carrier should be 1) highly reactive and selective towards the desired reactions, 2) mechanically stable over a large number of cycles under fluidization, 3) relatively cheap and 4) non-toxic. Selecting an oxygen carrier which fulfils all of these criteria can be very difficult, and more than 700 candidate materials had been evaluated by 2010 [31]. In most cases, the oxygen carrier is the primary factor influencing the techno-economic feasibility of CLC.

The second major challenge with CLC is the complexity of scaling up the interconnected dual fluidized bed system. It is of central importance that the system is designed and operated in such a way that the oxygen carrier has the correct residence time in each reactor in order to ensure complete reactant conversion while still keeping the solids inventory as low as possible. In addition to carrying the oxygen, the oxygen carrier also carries heat from the exothermic air reactor to the endothermic fuel reactor. Ensuring this mass and heat balance while maximizing reactor performance and minimizing the solids inventory presents a significant scale-up challenge.

Furthermore, the CLC system requires numerous loop seals and cyclones in order to prevent the mixing of N2 and CO2 and to deliver a particle-free stream to downstream gas turbines. All of these units add to the capital expenditures and the complexity of the process. CLC also faces the fundamental limitation that it is only applicable to new power plants and can therefore not be used for retrofitting.
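The oxygen mass balance mentioned above sets a lower bound on the solids circulation rate between the two reactors. A minimal sketch, assuming methane fuel and a NiO/Ni carrier purely for illustration (the carrier choice and the conversion swing are assumptions, not values from the text):

```python
# Oxygen demand: CH4 + 4 MeO -> CO2 + 2 H2O + 4 Me, i.e. 4 mol O per mol CH4.
M_CH4, M_O, M_NiO = 16.0, 16.0, 74.7      # molar masses, g/mol
oxygen_per_kg_fuel = 4.0 * M_O / M_CH4    # kg O needed per kg CH4 (= 4.0)

R0 = M_O / M_NiO    # oxygen transport capacity of NiO (~0.21 kg O per kg carrier)
dX = 0.5            # assumed conversion swing of the carrier between the reactors

# Minimum circulation of oxidized carrier per kg of methane burned:
solids_per_kg_fuel = oxygen_per_kg_fuel / (R0 * dX)
print(f"{solids_per_kg_fuel:.0f} kg carrier per kg CH4")
```

In practice the circulation rate must also satisfy the heat balance (carrying heat from the exothermic air reactor to the endothermic fuel reactor), which often demands an even higher rate than the oxygen balance alone.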
CLC also transfers some of the additional costs from the energy penalty faced by other technologies to the increased capital costs required by the more complex CLC process design. This can make initial financing harder to obtain.

Finally, the attractiveness of the CLC process is also dependent on the chosen fuel. It is most convenient to use a gaseous fuel such as natural gas because this fuel can be introduced directly as the fluidizing gas in the fuel reactor. However, gas-fired CLC competes with highly efficient natural gas combined cycle power plants with post-combustion CO2 capture and will need to operate at high pressure in order to compete [31]. High pressure operation of the complex interconnected flow loop is expected to add significantly to the already high level of complexity of these systems. Even if this is accomplished, however, a recent economic evaluation showed that a gas-fired CLC combined cycle configuration offers only moderate economic improvements compared to a standard natural gas combined cycle plant with first generation post-combustion capture. The study found that CLC can capture CO2 at a 32% lower CO2 avoidance cost than a standard natural gas combined cycle with post-combustion CO2 capture, but the cost of electricity still increased by 24% relative to the plant without CO2 capture [32]. This electricity price increase is also reflected by the European ENCAP project [33]. It is likely that second generation post-combustion technologies will be able to match this performance and therefore emerge as the preferred choice due to their relative simplicity, greater level of maturity and ability to retrofit existing plants.

CLC for coal-fired power production appears to be more promising. It is possible to run CLC systems with syngas instead of natural gas, but this strategy still incurs the energy penalty of air separation in the coal gasification step. In this case, CLC competes with already highly efficient IGCC plants with pre-combustion CO2 capture, where second-generation pre-combustion systems could prove to be more economical.

Ultimately, the only area where CLC appears to have an unassailable advantage over other approaches is coal-fired CLC with internal gasification in the fuel reactor. In this case, coal is fed directly to the fuel reactor where it is gasified and combusted. This internal gasification CLC system would compete with standard pulverized coal plants with post-combustion CO2 capture, where much room for improvement remains. The EU project ENCAP calculated that CO2 capture from such systems could be accomplished at costs as low as €6-13/ton [33]. Such a plant would almost be competitive in an open market and, in an environment with carbon prices rising towards $30/ton, would present a very strong business case.

Despite its great economic potential, however, this configuration also faces a number of important challenges. The primary challenge is that the coal gasifies throughout the reactor, implying that some of the volatiles will be released close to the surface of the fluidized bed and exit the reactor without reacting. In this case, an oxygen polishing step has to be inserted downstream in order to convert the remaining fuel. The air separation unit required by this step will impose an energy penalty on the system and increase the system complexity. In order to address this challenge, chemical looping with oxygen uncoupling (CLOU) systems have been proposed where oxygen is spontaneously released in the fuel reactor to greatly increase reaction rates and limit fuel slippage. However, the oxygen carriers displaying suitable CLOU properties are prohibitively expensive [31]. This is a major problem, because one of the features of coal-fired CLC is a relatively high loss of oxygen carrier material with the removal of ash, making a cheap oxygen carrier indispensable for this particular process.
Despite these challenges, the great promise of CLC drives significant research efforts into developing better oxygen carriers and simpler process concepts. For example, the process complexity of CLC could be greatly reduced by replacing the complex looping cycle with a cluster of packed or fluidized bed reactors which alternately expose the oxygen carrier material to fuel and air. Operation of such a cluster of transient reactors can be designed so as to provide steady depleted air and wet CO2 streams to downstream equipment. This configuration would greatly reduce process complexity and make process scale-up substantially easier. It is currently under evaluation in the European project DemoCLOCK.

Future European research priorities for oxy-fuel CO2 capture are given in Table 2. It is clear that CLC systems feature strongly in European research objectives, with the large scale demonstration of natural gas-fired CLC taking priority despite its fundamental disadvantages against the coal-fired CLC process, which remains at a much earlier stage of development. Shortening the time to market for these crucial technologies must enjoy a high level of priority in the years ahead. One route through which this could be accomplished will be discussed in the next chapter.


Table 2: European oxy-fuel CO2 capture research priorities.

Priority      Description                                              Project type   Time to market
Highest       Optimization and cost reduction of oxy-coal retrofit     Large pilot    Short term
              boilers. Oxy-combustion of biomass.
High          Natural gas CLC.                                         Large pilot    Medium term
High          High efficiency second-generation oxy-fuel               Large pilot    Medium term
              circulating fluidized beds.
High/medium   Oxygen transport membranes for energy-efficient          Small pilot    Long term
              air separation.
Medium        Coal and biomass CLC.                                    Laboratory     Long term
Medium        Oxy-gas turbine technology.                              Laboratory     Long term

Chapter 4: The role of simulation-based process design

Relative to other low-carbon technologies, renewables in particular, CCS is facing some serious funding challenges. This is of serious concern to the development of second generation CO2 capture technologies for two primary reasons: 1) a large number of candidate technologies exist and 2) almost all of these technologies still reside on the wrong side of the investment-risk curve shown in Figure 10. For these reasons, second generation CO2 capture technologies such as CLC face the tough task of successfully screening the wide range of candidate processes and scaling up the most promising candidates on a very tight budget, in time for commercial deployment as soon as the policy environment turns favourable.

Figure 10: Investment-risk curve for several CCS technologies [34] (republished with permission).

It is highly likely that this task will not be feasible through traditional experimental methods. For this reason, the current work is focussed on evaluating, improving and demonstrating simulation-based process design (SBPD) through reactive multiphase flow modelling as an alternative/complementary strategy for process screening and scale-up. A more detailed outline of this philosophy is presented in the following sections.

4.1 The fundamental advantages of simulation-based process design

The primary advantage of simulation-based process design (SBPD) is that it is not subject to any physical limitations. This implies that the modeller can evaluate the performance of any combination of design and operating variables without having to build any reactors, run any experiments, work with any expensive and/or dangerous chemicals, adhere to any stringent safety procedures or face frustrating and expensive delays due to a single faulty component.

This advantage is very large when working with lab-scale experiments, where equipment might be quite expensive and experiments might be time consuming, limited in scope and confined to office hours. When scaling up, however, this advantage becomes enormous. In theory, a modeller could deliver the same knowledge and insight from one month of simulation work as an entire multifaceted team of scientists and engineers can derive from a multi-million dollar four-year demonstration project.

Naturally, it will take many years of model development and demonstration before industry will be able to successfully employ simulations as the primary design tool for full scale industrial processes. However, it is entirely possible that fluidized bed reactor models will become sufficiently mature to start fulfilling this role within the next decade. In the aviation industry (where CFD has a much longer history), Boeing has already employed CFD to greatly accelerate the design process of full sized aircraft (see [35] for a good example). One can argue that the single phase flow over an aircraft body is less complex than the multiphase reacting flow in a fluidized bed reactor, but the principle is the same. Just as supercomputers are gradually replacing the wind tunnel in the aviation industry, they will eventually start replacing the demonstration plant in the process industry. It is not so much a question of "if" as it is a question of "when".

As was demonstrated by Boeing, simulation can greatly reduce the amount of time (and money) between conceptualization and demonstration / test flight. Such a speed-up can greatly benefit the process industry, where conceptualization and commercial demonstration are often separated by decades. Second generation CO2 capture processes have a particular need for such a speed-up in process development. CLC, for example, has been around for decades. Two decades have passed since it was first proposed for the purpose of CO2 capture [36] and one decade since it was successfully demonstrated at the 10 kW scale by Chalmers University of Technology in the European GRACE project [37]. Even after all of this time, CLC has only advanced to the 1-3 MWth scale [38] and still has a long way to climb up the investment-risk curve in Figure 10.
Nonetheless, Figure 11 shows that the leading power equipment company ALSTOM expects to reach commercial scale (>100 MWe) sometime between 2020 and 2025, incidentally with a significant contribution from CFD modelling [39].

[Figure: log-scale chart of plant capacity (kWe) against time period (calendar years), from 2000-2004 to 2020-2025.]

Figure 11: ALSTOM's CLC development outlook [39]. The thermal capacity (kWth) of the two smaller scales was converted to electrical capacity (kWe) by assuming 40% plant efficiency.

It is good to see this industrial acknowledgement of the important role SBPD should play in the scale-up process of second generation CO2 capture technologies. The strict market-based challenges faced by second generation CO2 capture technology development could therefore serve to drive industry to fully explore all the options for cutting costs in the traditional scale-up and demonstration process. Such an environment could be highly favourable for the development of SBPD tools.

In addition to the substantial fundamental advantages offered by SBPD in the scale-up of proven process concepts, it also has great potential to aid in the development of new process concepts that could result in technological breakthroughs. The process of invention has traditionally been one of trial and error. SBPD simply allows for a great speed-up in the rate at which novel ideas can be evaluated, thereby also greatly accelerating the rate at which useful new process concepts are discovered. For CLC, this advantage can lead to new process configurations that reduce complexity (and costs), making scale-up and demonstration much easier and faster.

On the other side of the spectrum, simulation-based process design also has great potential to economically optimize large-scale reactors. Since a highly general model is capable of evaluating a wide parameter space of design and operating variables, it is ideal for optimization studies. If the reactor performance at all of these different design and operating points can be predicted with reasonable accuracy, such a large reactor performance map can greatly improve the process of economic optimization.

Finally, SBPD (reactive multiphase flow modelling in particular) offers rich flow information which can aid greatly in establishing a proper understanding of any given reactor process. Any flow variable can be extracted from any point in time and space in a reactive multiphase flow model. Accomplishing this in experiments is often very expensive or simply impossible. When viewed in combination, these highly attractive advantages offered by SBPD could potentially be a real game-changer in the process industry.
Promising process concepts and any new process ideas can be screened very rapidly, selecting only the best configurations for further development. These promising reactor configurations can then be scaled up much more rapidly and economically by taking larger process scale-up steps and ensuring that each scale-up step is economically optimized. Finally, the process control of the resulting fully optimized plant can be greatly enhanced by the complete understanding granted by the rich stream of simulation data generated in the simulation-based design and scale-up process.

These advantages of SBPD are especially suited to second generation CO2 capture processes, where a wide range of candidate processes are competing for an insufficient funding base. Effective process screening and rapid scale-up and demonstration of the most promising concepts can greatly accelerate the development of second generation CO2 capture processes such as CLC and ensure that these concepts are ready for widespread industrial implementation the moment that the policy environment becomes favourable. As will be discussed in subsequent sections, a lot of work remains to be done before such lofty ideals can be realized, but the sheer magnitude of the potential of SBPD to add value to industry should be sufficient to drive this work forward at the maximum rate possible. The understanding of what needs to be done is quite clear. All that is needed is the funding and the expertise necessary to get the job done.


4.2 Formalizing the role of simulation-based process design

At its root, the process of design, demonstration and ultimate widespread deployment is all about advancing the basic understanding of the particular process under consideration to the point where a commercial reactor can be designed with a sufficiently high degree of confidence. If the scale-up of complex processes is attempted before the necessary understanding has been gained, there is a significant risk of poor process performance or even expensive or dangerous process failures. This is the reason why traditional scale-up and demonstration approaches employ small scale-up steps to gradually build up the required understanding through practical experience at progressively larger scales.

As understanding is gradually gained through experimentation and experience, it must be formally condensed into a useful tool which can be used to guide future decisions. In traditional chemical engineering practice, this tool is usually a model of some kind. The role of the reactor model in a typical design and scale-up methodology for a CO2 capture process is shown in Figure 12.

[Figure: flowchart with boxes "Exploratory Studies", "Catalyst/Solvent Studies", "Kinetics, Selectivities, Thermodynamics", "Data Analysis and Correlation", "Transport Phenomena", "Bench Scale Reactor Studies", "Mockup Studies", "Pilot Plant Studies" and "Engineering Design Studies" feeding into a central "Reactor Model", which informs the "Design of Commercial Reactor".]

Figure 12: Flowchart of the typical chemical engineering approach to the design and scale-up of a CO2 capture process [40].

The flowchart clearly shows the vast amount of work necessary to build the required amount of understanding into the model so that it can safely aid in the design of the commercial reactor. On the left-hand side of the figure, the typical scale-up sequence of initial exploratory studies, bench scale reactors, mockup (cold flow) studies, pilot plant studies and finally the commercial scale plant is shown. As indicated in the figure, the three boxes on the left all feed further understanding into the model. Boxes on the right-hand side have a bi-directional interaction with the model in order to further refine the current level of understanding and thereby improve the design of the reactor and the entire process. The box directly above the reactor model shows how the required kinetic models are derived from exploratory studies and incorporated into the model.

It is clear from Figure 12 that a great amount of time and effort is needed to arrive at a suitable commercial scale design through this traditional approach. If a much more generic model of high fidelity were available, however, this process could be significantly accelerated. Figure 13 illustrates such an improved workflow.

[Figure: simplified flowchart in which "Exploratory Studies" (supported by "Catalyst/Solvent Studies" and "Kinetics, Selectivities, Thermodynamics"), "Bench Scale Reactor Studies", "Pilot Plant Studies" and "Engineering Design Studies" interact with a central "Reactor Model", which directly supports the "Design of Commercial Reactor".]

Figure 13: Example of a design and scale-up process using a generic, high fidelity reactor model.

In this case, the reactor model (equipped with the required kinetic models from exploratory studies with the particular catalyst involved) is used to aid in the reactor design at each particular scale. This will significantly accelerate and economize each scale-up step. In addition, the purpose of the scale-up steps now changes from gathering the understanding required to build a reliable model to confirming model performance and guiding any model improvements that might be necessary. If (and only if) the model shows good accuracy, only one laboratory scale and one pilot scale demonstration should be necessary to build the confidence required to design the full scale plant.

This will yield very large savings in terms of time and money and should result in an optimized commercial scale reactor which avoids the typical large cost overruns associated with first-of-a-kind plants. However, if bench and/or pilot plant studies show that the model cannot yet deliver a good representation of reality, additional experimental studies such as mockup tests will be required to build the required understanding.

Obviously, the central determining factor in this process is the fidelity of the model. A more generic and trustworthy model will require less experimental work and will lead to better commercial scale designs, while a more limited model will require more experimental scale-up steps and might still lead to a sub-optimal commercial design. Developing a trustworthy model and delivering the required proof of model fidelity must therefore be the central objective of SBPD. This is the topic of the next section.

4.3 Steps in reactor model development and demonstration

Broadly speaking, SBPD requires three components in order to live up to the great potential outlined in the previous sections: model development, model validation and multiscale modelling. These topics will be discussed in three subsections below. (Note that the term "model" is used primarily to refer to reactor scale models comprised of a set of sub-scale models and not to the sub-scale models themselves.)

4.3.1 Model development

SBPD requires models which can deliver 1) accurate results 2) over a wide parameter space 3) at industrial scales 4) within reasonably short simulation times. Achieving all four of these requirements presents a substantial challenge because improvements in points 1 and 2 normally come at the expense of points 3 and 4 (and vice versa). Currently, reasonably accurate and generic simulations can be carried out for lab-scale systems using the traditional Eulerian Two Fluid Model (TFM) closed by the Kinetic Theory of Granular Flows (KTGF) [41-43]. If the domain size is increased, however, the small computational cells required by this approach quickly make simulations unaffordable (simply increasing the cell size with the domain size to keep the number of cells constant will greatly reduce simulation accuracy and generality). This issue will be discussed in much greater detail in Section 6.1. Simulating industrial scale process units within reasonable timeframes will therefore require modelling approaches different from the traditional TFM/KTGF approach. The most popular of these is the filtered approach (discussed in Section 7.3) where the mesoscale particle structures requiring the fine grids in the TFM/KTGF approach are modelled instead of being resolved [44, 45]. This allows the use of much larger grid sizes and speeds up simulations by several orders of magnitude.
The underlying physical phenomena being modelled in this approach are quite complex, however, implying that model accuracy and generality will remain inferior to expensive resolved simulations. (Note that the term "resolved" is used to describe direct resolution of mesoscale structures (clusters) and not in the complete DNS sense.) Another promising approach is a Lagrangian parcel based method (Section 7.2) where parcels of particles are tracked through space and the interactions of these particle parcels with the fluid phase and with each other are modelled [46, 47]. This method is known as the Particle in Cell method or the Dense Discrete Phase Model (DDPM). The DDPM approach is capable of resolving the most important particle structures on much coarser grids than the Eulerian TFM approach because it eliminates numerical diffusion in the solids phase, thereby enabling large computational speed-ups. The maximum potential speed-up for this approach is not as large as it is for the filtered approach, but it is capable of simulating certain industrial scale reactor setups without the additional modelling uncertainty associated with the filtered approach. Finally, 1D phenomenological modelling of fluidized bed reactors [48-50] (Section 7.4) has also been a popular option for many years due to its ease of use and very rapid simulation times. This approach requires a very large amount of modelling, however, and this inevitably results in substantial reductions in accuracy and generality.

From a non-technical point of view, the primary difference between these three advanced modelling approaches is the degree to which accuracy and generality are sacrificed for simulation speed. In general, the DDPM approach strives to retain maximum generality, the filtered approach strives for a middle ground and the phenomenological approach strives for maximum simulation speed. These qualitative differences are well known, but the quantitative impact of the different trade-offs between accuracy/generality and simulation speed involved in these approaches remains largely unknown. The DDPM and filtered approaches are also still in the development phase. This topic will resurface in Part 2 of the thesis, but here it should only be mentioned that both the DDPM and filtered approaches will require further model improvements before they can be safely used to simulate full scale fluidized bed reactors. The modelling frameworks required to conduct sufficiently accurate and generic simulations of full scale fluidized bed reactors within reasonable timeframes therefore already exist, but will require further development before they can be confidently applied.
In that sense, these modelling tools are much like CCS technology itself – the only difference is that the development of these modelling approaches is likely to be much cheaper than the gradual CCS learning curve achieved through traditional scale-up and demonstration.

4.3.2 Model validation

Dedicated experimental validation studies are absolutely vital to ensure rapid development and widespread acceptance of SBPD tools. This section will therefore further elaborate on the importance of such experimental campaigns. The majority of validation studies carried out today are performed only on a limited number of cases. This type of validation study is of limited use because it does not properly evaluate model generality. In real applications, models will inevitably be used to describe flow situations far removed from the conditions under which they were validated, thereby implicitly assuming excellent model generality. If, in reality, the model is not sufficiently generic, this approach can result in a substantial degree of error and a loss of confidence in the idea of SBPD. In addition, model validation studies should be carried out in such a way that different model constituents can be evaluated in isolation so as to prevent any complex interaction effects from complicating the comparison between simulation and experiment. In practice, this implies carrying out experiments isolating the phenomena of hydrodynamics, heat transfer, species transfer and reaction kinetics, and making a clean simulation comparison to each set of experiments.

These two criteria require rather extensive experimental campaigns to be carried out, since the ability of the model to predict each set of physical phenomena must be evaluated in substantial detail (numerous local flow measurements throughout the experimental unit) and over a wide range of flow conditions (to properly evaluate model generality). However, if done well, such a set of coordinated experimental campaigns only needs to be carried out once, after which it can be employed in a wide range of model validation studies. If such campaigns are carried out (and carried out correctly), the resulting dataset will greatly accelerate the model development process by clearly identifying any areas where further improvement is necessary. On the other hand, if the model proves to match well with a wide range of experimental measurements over a wide range of flow conditions, this will be very strong proof of model fidelity, making it substantially easier to achieve gradual industry engagement. A thorough and specialized set of experimental campaigns dedicated to model validation is therefore of great importance, both to accelerate model development and to formally demonstrate the trustworthiness of the final product. Before such campaigns are carried out, however, SBPD will remain nothing more than a very promising idea. It is therefore of utmost importance that the experimental work necessary to create a detailed and generic experimental results database is carried out as soon as possible.

4.3.3 Multiscale modelling

The specialized set of experimental validation campaigns outlined above will only be practical on a relatively small scale, implying that it will not be possible to thoroughly evaluate model generality on an industrial scale.
However, if a model has been proven to be highly generic at smaller scales, the process of multiscale modelling can be used to develop, improve and verify larger scale models such as the DDPM, filtered and phenomenological approaches discussed in the previous section. This model-versus-model verification process will be the primary form of validation for large scale modelling approaches, supplemented by occasional comparisons to limited large scale experimental datasets. Of the three approaches listed above, the filtered and phenomenological models can be developed directly through multiscale modelling. In the filtered approach, for example, small scale simulations are run on grids which are sufficiently fine to resolve the mesoscale flow structures. Results from these (very computationally expensive) simulations are then filtered in order to derive models capable of capturing the macroscopic flow behaviour of fluidized bed units on much coarser grids. Figure 14 shows the effects of such filtering. Even though all mesoscale detail is lost with the filtered model, it is shown that the macroscopic bed expansion is predicted correctly on a cell size which is 8 times larger. In a 3D transient simulation, this would result in a factor of 8⁴ = 4096 speed-up in the simulation time. This is an extreme case, but it clearly illustrates the potential simulation speed-ups that can be attained through filtering.
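The 8⁴ = 4096 factor quoted above follows directly from the scaling of cell count and time step with cell size. A minimal sketch of this arithmetic (the function name is illustrative, not from the thesis):

```python
def filtered_speedup(coarsening_factor: float, spatial_dims: int = 3) -> float:
    """Estimated speed-up of a filtered simulation over a resolved one
    when the cell size is increased by `coarsening_factor`.

    The cell count drops by coarsening_factor**spatial_dims and, because
    the stable time step typically scales with the cell size (a CFL-type
    restriction), the number of time steps drops by one further factor.
    """
    return coarsening_factor ** (spatial_dims + 1)

# The case quoted in the text: 8x coarser cells in a 3D transient simulation
print(filtered_speedup(8))  # 8**4 = 4096
```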


Figure 14: Comparison of the filtered (left) and resolved (right) approaches applied to an industrial scale (5 m ID) bubbling fluidized bed with Geldart D particles. The resolved approach used 4 cm cells and the filtered approach used 32 cm cells. The scale on the left shows the solids volume fraction.

The phenomenological approach functions in a somewhat different way. Whereas the filtered approach still solves the full set of 3D conservation equations and only modifies the closure laws used for particle drag, solids stresses and heterogeneous reaction rates, the phenomenological approach only conserves species molar concentrations and energy in the axial direction. Instead of solving the flow of gas and solids in the bed, the phenomenological approach uses closure relations for important characteristics such as the bubble size, the bubble fraction, the bubble rise velocity, the bubble-to-emulsion mass transfer and the emulsion void fraction to determine the bed height and to close the 1D mole balances. Traditionally, these closure relations are derived from experiments, but, in the case of a fully validated generic CFD model, they could be derived from simulations in much greater detail, with much greater accuracy and at much smaller costs. The DDPM, on the other hand, is in fact a resolved approach which is capable of resolving the most important flow structures on substantially coarser meshes than the traditional TFM approach. It can therefore not benefit from any direct multiscale model development. However, since this modelling approach is still at a fairly early stage of development, it can benefit greatly from comparisons against a limited number of expensive, larger scale, resolved simulations carried out with a more thoroughly validated TFM approach. Ultimately, when the DDPM approach is developed to a suitable level of maturity (more detail about this to follow in Section 7.2), it should gradually take over from the TFM as the primary resolved modelling approach.
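To make the phenomenological idea concrete, the sketch below integrates a minimal bubble/emulsion two-phase model of the kind described above: plug flow of gas in the bubble phase, a pseudo-steady emulsion with a first-order heterogeneous reaction, and a single bubble-to-emulsion mass transfer coefficient. All names and parameter values are illustrative assumptions, not the closures used in the thesis:

```python
def two_phase_conversion(k_rxn, k_be, u_b, bed_height, n_steps=1000):
    """Minimal 1D two-phase (bubble/emulsion) fluidized bed reactor sketch.

    k_rxn: first-order reaction rate constant in the emulsion (1/s)
    k_be:  bubble-to-emulsion mass transfer coefficient (1/s)
    u_b:   bubble rise velocity (m/s)
    Returns the reactant conversion at the bed outlet (explicit Euler
    integration of the bubble-phase plug-flow balance)."""
    c_b = 1.0                      # normalized bubble-phase concentration
    dz = bed_height / n_steps
    for _ in range(n_steps):
        # pseudo-steady emulsion balance: k_be*(c_b - c_e) = k_rxn*c_e
        c_e = c_b * k_be / (k_be + k_rxn)
        # bubble-phase plug-flow balance along the bed height
        c_b -= (k_be / u_b) * (c_b - c_e) * dz
    return 1.0 - c_b
```

In a real phenomenological model, k_be and u_b would themselves come from closure relations for bubble size, bubble fraction and rise velocity; this is exactly where a fully validated CFD model could supply more detailed and more accurate inputs.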


This approach of verifying the performance of large scale models against expensive, but thoroughly validated, resolved TFM simulations can also be followed with the filtered and phenomenological approaches. Ultimately, the accuracy of the phenomenological model can also be verified at an industrial scale against filtered simulations, which should be significantly more generic. Due to the large amount of modelling necessary in the phenomenological approach, it will always have limits to its generality, thereby requiring specialized studies to specify safe operating ranges for different models. The full multiscale development pathway discussed in this section is summarized in Figure 15.

[Figure 15 flow diagram elements: DDPM; Accelerate development; Verify generic performance; Filtered; TFM simulation results database; TFM simulations for filtering; Sufficiently mature and thoroughly validated DDPM approach; Eventual replacement; Model development; Thoroughly validated TFM approach; TFM simulations for closure law derivation; Phenomenological; Sufficiently mature and thoroughly validated filtered approach; Industrial scale filtered model results database; Limited amount of large scale experimental data; Phenomenological approach close to maximum potential; Application-specific model development; Phenomenological models with well-defined boundaries to generality]

Figure 15: Example flow diagram of a multiscale model development pathway for fluidized bed reactor simulations. Information flow assumes a TFM/KTGF approach which has been thoroughly validated as outlined in Section 4.3.2 (the central blue box).

The different roles of each of the four modelling approaches in Figure 15 can be summarized as follows:

• TFM: Foundational resolved approach from which other approaches are developed.
• DDPM: Potentially more useful resolved approach which can benefit greatly from comparisons against an extensive TFM model results database and an extensive experimental results database. It should be noted, however, that the Lagrangian particle tracking employed by the DDPM has certain fundamental advantages over the TFM (see Section 7.2) and attempts to match TFM results in these flow situations can be counterproductive.
• Filtered: Industrial scale modelling approach which is derived from resolved TFM simulations and verified against larger scale (and much more expensive) TFM simulations. Should be reasonably generic, but will still have limits.
• Phenomenological: Fast and user-friendly industrial scale modelling approach designed especially for industrial end-use. Will probably have to be custom-made for different applications due to fundamental limits to generality.

As will be further discussed in the technical part of this thesis, one of the primary challenges facing SBPD is to establish best practice guidelines for the optimal usage of these different modelling approaches. It is unlikely that any one of these approaches will emerge as the most suitable choice across the myriad of possible applications within the foreseeable future. A significant part of future research efforts will therefore need to be directed towards fully understanding the behaviour of all of these approaches and building experience in direct model application.


Chapter 5: Overall technical objectives

The objectives of the project have evolved with time as more experience was gained. This final section of the first part of the thesis will therefore briefly discuss these alterations.

5.1 Performance against original objectives

The project objectives as stated in the original project proposal are given below:

Primary: To develop validated reactive multiphase computational fluid dynamic (CFD) models for both the fuel and air reactors of the Chemical Looping Combustion (CLC) process.

Secondary:
1. To gain fundamental understanding of granular viscosity, complex turbulence, inter-particle collisions, wall interactions and reaction kinetics of multiphase flow in CLC reactors.
2. Implementation of kinetics in the CFD model.
3. Validation of the CFD modelling work by experimental work through void fraction profile determination and complete concentration and temperature analysis under reactive conditions.
4. Development of a 3D discrete hard sphere particle model.
5. Upgrading the national knowledge in CLC for CO2 capture.
6. Strengthening the international research collaboration between SINTEF, NTNU and Eindhoven University of Technology, where the latter is at the forefront of research on multiphase flows and novel reactor concepts.

The primary objective was completely met for the fuel reactor of the CLC process, but not completely for the air reactor. Due to factors explained in Section 8.2.2, useful experimental data for the oxidation reaction could not be collected. This project also focussed primarily on bubbling fluidization and, since the air reactor is normally operated as a riser, this objective was not fully addressed. The secondary objectives were also largely met, with the exception of the development of the 3D discrete hard sphere particle model, where the return/cost ratio of the investment was deemed to be unfavourable. Specialized experiments to investigate the capability of the model to correctly predict heat transfer through temperature measurements could also not be completed due to time constraints.

5.2 Additional objectives added throughout the project

In exchange for the omissions in the original objectives, a number of additional objectives were set and accomplished. These are listed below:
1. To build a more complete understanding of the behaviour of the traditional Two Fluid Model under different flow conditions, especially under reactive conditions.
2. To evaluate all modelling options for simulating larger scale reactors and identify relative strengths and weaknesses.
3. To further develop the Lagrangian parcel-based approach for resolved simulations of larger flow domains.
4. To develop best-practice guidelines for experimental campaigns dedicated to reactive multiphase flow model validation.
5. To develop methodologies for effective application of simulation-based process design.
6. To investigate (both numerically and experimentally) new concepts for accelerating the development of CLC technology.

More details on the completion of these objectives are presented in the next part of the thesis.



Part 2: Review of technical work

This second part of the thesis will review the technical work completed during the 3 year PhD period. 8 journal papers (Part 3, Papers 1-8, [51-58]), 5 conference papers (Part 3, Conferences 1-5, [59-63]) and 1 popular science article [64] have been published in this time, with 6 further publications (Part 3, Drafts 1-6, [65-70]) yet to be submitted and 2 works not yet at the point of a complete draft (Part 3, Appendices 1-2, [71, 72]). In addition, 4 papers published during the PhD period from work mostly completed prior to this project [73-76] will also be referenced in the discussion, with the two most important of these being included as appendices for easy reference (Part 3, Appendices 3-4, [73, 74]). Over the following pages, these works will be referenced as in the following example: Paper 1 [51].

The insights gained from the work completed in the above-mentioned papers will be summarized here from the perspective of the big-picture view described in Part 1. Very briefly summarized: CCS has a central role to play in the mitigation of the 21st century sustainability crisis, but first generation CO2 capture technologies will not be economically feasible at projected CO2 prices. For this reason, the development of second generation CO2 capture processes such as CLC is required. However, due to the unique funding challenges faced by this wide range of technologies, an alternative to the time- and capital-intensive traditional experimental scale-up and demonstration process is required. Simulation-based process design through reactive multiphase flow modelling is proposed as such an alternative approach. An important aim of this review is therefore to clearly demonstrate the value that simulation-based process design can already add today and to map out the further developments that are necessary.
The ultimate goal is to bolster the confidence of funding institutions and industry so that progress with this promising approach can be further accelerated and real positive contributions to the development of CLC technology can be made in the near future. The work is presented in four main chapters:
1. Chapter 6: The TFM and the KTGF. The well-developed TFM approach forms the foundation from which reactive multiphase flow modelling will expand in the future and must therefore be thoroughly understood before moving on.
2. Chapter 7: Simulating larger reactors. Since the TFM faces strict limitations with regard to the maximum cell size it can use, this chapter explores the merits of four approaches for larger scale reactor simulations.
3. Chapter 8: Model validation. Validation was a primary focus of this project and various comparisons between model and experiment are presented in this chapter.
4. Chapter 9: Model application. Finally, initial practical experience with simulation-based process design was gained through some simple example applications.

Each chapter is closed with a brief summary and the entire review is closed by a final chapter outlining the overall conclusions and recommendations from the work.


Chapter 6: The TFM and the KTGF

The Two Fluid Model (TFM) closed by the Kinetic Theory of Granular Flows (KTGF) forms the backbone of fluidized bed modelling. Since its development 3 decades ago, it has been gradually improved and extended and is currently primarily employed as the basis from which to develop newer modelling approaches. A full TFM/KTGF equation set can be viewed in Appendix 3 [73]. The TFM closed by the KTGF is stable and reasonably accurate over a wide range of flow conditions, but suffers from one major drawback: the fine grid sizes required for accurate solutions. As a result of the non-linear drag interaction in fluidized bed reactors, unsteady inhomogeneities (particle clusters and bubbles) are formed and must be resolved in order to correctly capture the hydrodynamics and reaction kinetics occurring within the reactor. The grid sizes demanded by these structures typically restrict the TFM to application in lab-scale reactors. If larger grid sizes are used and these structures are not properly resolved, the interphase momentum exchange and the interphase chemical reactions are overestimated (Appendix 3, [73]).

A simple analogy for this phenomenon can be made out of mist and raindrops [64]. The tiny droplets of water which form mist are so tightly coupled to the continuous phase (air) that gravity does not have a significant influence on their motion. However, when many of these tiny droplets join together to form a larger raindrop (cluster of droplets), it drops to the ground readily under the influence of gravity. Simulating a fluidized bed reactor on a grid that is too coarse to resolve the raindrops (clusters of droplets) can make the system behave like a mist cloud instead of a rainstorm – a very large difference indeed.
Aside from the primary problem of requiring small cell sizes to resolve clusters, the TFM/KTGF approach still faces some uncertainty in the various closure laws and model coefficients involved, especially with respect to wall interactions. Problems can also arise in cases involving wide particle size distributions because the TFM is designed for a single averaged particle size. If necessary, more particle size classes can be included as additional phases, but each additional phase modelled adds to the computational cost. This chapter will investigate the grid-dependence behaviour of the TFM/KTGF approach and also discuss the closure sensitivity. The effect of size distribution will not be discussed as experience suggests that it is of lesser importance and, where it is important, a polydisperse powder can mostly be well approximated as a bidisperse powder by adding only one phase to the simulation [77]. More work is required to better substantiate these claims though.

6.1 Grid independence behaviour

When the grid size used in a TFM simulation is too large, both the momentum and heterogeneous reaction interactions between phases are over-predicted. This is shown for a riser case in Figure 16 and Figure 17.


Figure 16: Influence of grid width (GW) and aspect ratio (AR) on the solids flux (indicator of momentum interaction) in a periodic riser section together with instantaneous volume fraction contours for qualitative comparison (Appendix 3, [73]). The grid width and aspect ratio are given in non-dimensional coded variables (see given reference for details).

Figure 16 clearly shows that the mass flux through the periodic riser section increases sharply as the grid size (GW and AR) is increased. As can be seen in the volume fraction contours, an increase in grid size decreases the amount of cluster resolution, thereby simulating mist instead of raindrops according to the above-mentioned analogy.

Figure 17: Influence of grid width (GW) and aspect ratio (AR) on the reaction time (the time required for a fixed amount of reactant to be converted) in a periodic riser section (left). Instantaneous volume fraction (middle) and reactant mole fraction (right) contours are also given for a qualitative illustration of the effect of cluster formation on heterogeneous reactions (Appendix 3, [73]) (taken from the circled centre point simulation in the left-hand figure). The grid width and aspect ratio are given in non-dimensional coded variables (see given reference for details).

Figure 17 shows that the reaction time decreases sharply with an increase in grid size. This increase in overall reaction rate is due to a sharp decline in the mass transfer resistance as the degree of cluster resolution decreases. The instantaneous contours of volume fraction and reactant mole fraction clearly illustrate the nature of this mass transfer resistance. It can be seen that a well resolved cluster creates a situation where the majority of solids are concentrated in a region containing virtually no reactant. This situation significantly decreases the overall rate of reaction inside the reactor and, if the clusters are not properly resolved, this effect will be lost.

The limitations imposed by the clustering phenomenon illustrated in Figure 16 and Figure 17 can be quite severe for small particle sizes such as the 67 µm particles used to generate the results in these figures. Calculations were carried out on a 2D periodic domain, 0.076 m in width and 0.8 m in height, and the small cell sizes required for reasonable grid independence (~1 mm²) already made these calculations very expensive (Appendix 3, [73]). Simulating a 3D domain would increase the computational cost by about two orders of magnitude and simulating an industrial scale riser would increase the cost by another four orders of magnitude, bringing the total up to six orders of magnitude. Thus, despite the continuing exponential increase in computational capacity, fine Geldart A powders will not be simulated using the TFM within the foreseeable future.

When investigating larger particle sizes, however, this outlook changes surprisingly quickly. For example, a detailed study into the grid independence behaviour of 2D bubbling fluidized bed simulations carried out with the TFM found that an increase in particle size from 200 µm to 1000 µm allows for a factor of 63 increase in the cell size employed (Draft 1, [65]). In the four-dimensional time-space continuum, this allows for a factor of 63⁴ ≈ 10⁷ decrease in computational costs, implying that industrial scale TFM simulations are already possible in full 3D as long as very coarse powders are used. This assertion is quantified in Figure 18.
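The "foreseeable future" claim for fine powders can be made quantitative with a simple Moore's-law estimate: closing a cost gap of roughly six orders of magnitude, at one doubling of capacity every two years, takes about four decades. A sketch of this arithmetic (the function name is illustrative):

```python
import math

def years_until_affordable(cost_gap_orders: float, doubling_years: float = 2.0) -> float:
    """Years needed for exponentially growing computational capacity to
    close a cost gap of `cost_gap_orders` orders of magnitude, assuming
    capacity doubles every `doubling_years` years (Moore's law)."""
    doublings_needed = cost_gap_orders * math.log2(10.0)
    return doublings_needed * doubling_years

# The ~6 orders of magnitude separating affordable 2D lab-scale simulations
# from 3D industrial scale simulations of fine powders:
print(round(years_until_affordable(6.0)))  # 40
```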

[Figure 18 plot: particle size simulated (microns), 200 to 1000, on the vertical axis against reactor diameter (m), 1 to 10, on the horizontal axis, with separate curves for 2D and 3D simulations.]

Figure 18: The size of reactor that can be simulated within a reasonable time using standard computational facilities available today as a function of particle size (Draft 1, [65]).

When considering the expected extension of Moore's law (doubling of computational capacity every two years) into the foreseeable future, the potential for using the TFM in industrial scale simulations is shown in Figure 19.


[Figure 19 plot: particle size simulated (microns), 200 to 1000, on the vertical axis against year, 2012 to 2032, on the horizontal axis, with separate curves for 2D and 3D simulations.]

Figure 19: The particle size that can be safely simulated in a 5 m ID reactor in the foreseeable future assuming that Moore's law is maintained (Draft 1, [65]).

It is evident that, even though computational capacities might be 2¹⁰ ≈ 10³ times greater in 2032 than they are today, TFM simulations of fine powders in a reactor size that might be of industrial interest will remain out of reach for many decades to come. The newer models formulated especially for larger scale simulations discussed in Section 7.3 are therefore mandatory for conducting simulations of industrial interest for such fine powders. For coarser powders, on the other hand, the TFM can already offer reliable predictions at scales of industrial interest.

The sufficiently grid independent cell size (the size at which another halving of the cell size results in less than a 10% change in the solution (Draft 1, [65])) also depends strongly on the reaction rate employed. Figure 18 and Figure 19 were derived from the cell sizes required for grid independence of the reactor performance (amount of overall conversion achieved) because, for that particular simulation setup, grid independence for the reactor performance was achieved on smaller grid sizes than grid independence for reactor hydrodynamics. However, if slower kinetic rates are simulated, the grid independence requirements for correctly predicting reactor performance become less stringent for reasons evident from Figure 20.


Figure 20: Contours of instantaneous solids volume fraction (left in each image pair) and reactant mole fraction (right in each image pair) for fast reaction kinetics (pair of images on the left) and slow reaction kinetics (pair of images on the right) (Paper 1, [51]). The colour map on the left shows the solids volume fraction and the one on the right shows the reactant mole fraction.

It is clear that the overall reaction rate in the fast kinetics case is almost entirely determined by the bubble-to-emulsion mass transfer (no reactant reaches the particles within the clusters), while the reaction kinetics are limiting the overall reaction rate in the slow kinetics case (a substantial amount of reactant is present within the clusters where it reacts relatively slowly). Since inadequate cluster resolution on coarser grids will influence only the mass transfer resistance and not the reaction kinetics, the simulation becomes much less sensitive to poor cluster resolution when slower reaction kinetics are simulated. This is illustrated in Figure 21.
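The shift from mass-transfer control to kinetic control described above can be captured by treating bubble-to-emulsion mass transfer and reaction kinetics as two first-order resistances in series. The sketch below (all rate values are illustrative assumptions, not from the thesis) shows why an error in the mass transfer rate, such as that caused by poor cluster resolution, barely affects the overall rate when kinetics are slow:

```python
def overall_rate(k_kinetic: float, k_mass_transfer: float) -> float:
    """Overall first-order rate constant (1/s) for reaction kinetics and
    bubble-to-emulsion mass transfer acting as resistances in series."""
    return 1.0 / (1.0 / k_kinetic + 1.0 / k_mass_transfer)

k_mt = 1.0  # illustrative mass transfer conductance (1/s)
# Mimic poor cluster resolution by over-predicting k_mt by a factor of 2:
fast_error = overall_rate(4.0, 2.0 * k_mt) / overall_rate(4.0, k_mt)
slow_error = overall_rate(0.25, 2.0 * k_mt) / overall_rate(0.25, k_mt)
print(round(fast_error, 2), round(slow_error, 2))  # error is amplified by fast kinetics
```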

[Figure 21 plot: sufficiently grid independent cell size (m), 0 to 0.12, on the vertical axis against particle relaxation time indicator, 0 to 0.3, on the horizontal axis, with separate series for the slowest, slow, medium, fast and fastest kinetic rates.]

Figure 21: The influence of the kinetic rate on the grid independence behaviour of a fluidized bed reactor simulated by the TFM (Draft 2, [66]). Each kinetic rate is double that of the previous one, making the fastest rate 16 times faster than the slowest rate.


The cell sizes achieving satisfactory grid independence were plotted as a function of a particle relaxation time indicator because, as illustrated in Figure 21, a fairly general proportionality was observed between these two variables. The data points in Figure 21 were collected for different particle sizes, particle densities, gas densities and gas viscosities (all variables influencing the particle relaxation time). More details can be found in Draft 2 [66]. It is clear from Figure 21 that slower reaction kinetics allows sufficiently grid independent solutions to be achieved on much coarser grids. For example, at a particle relaxation time indicator of 0.081, the grid independent cell size is 0.0075 m for the fastest kinetics and 0.053 m for the slowest kinetics – a factor of seven difference in cell size leading to a factor of 7⁴ = 2401 difference in computational cost for 3D simulations. Slower kinetics can therefore lead to large savings in computational costs when the TFM is employed.

In addition, it must be recognized that bubbling fluidized bed reactors employing reaction kinetics as fast as the fastest kinetics shown in Figure 21 will typically result in virtually 100% reactant conversion, making a fully accurate, grid independent solution for reactor performance essentially unnecessary. For example, the reactors simulated in the study from which Figure 21 was compiled (Draft 2, [66]) typically achieved conversions greater than 99.9% for the two fastest kinetic rates employed. For such high conversions, a substantial over-prediction of the reactor performance as a result of inadequate cluster resolution would have no practical influence. This situation implies that grid independence can be achieved on much coarser grids for slower reaction kinetics and, when fast kinetics are implemented, a grid independent solution of the reactor performance will in many cases be unnecessary due to virtually complete conversion.
For these reasons, a more generic grid independence criterion might be found from the hydrodynamics. The most practical hydrodynamic measure for judging grid independence was found to be a volume average of the RMS solids volume fraction statistic collected after a period of time averaging. A high RMS value indicates that a substantial amount of fluctuation in the solids volume fraction took place over the sampling period and that a high degree of cluster resolution was achieved. Similarly, a low RMS value indicates poor cluster resolution. In addition to being a very convenient direct measure of the degree of cluster resolution achieved by any particular simulation, the volume average of the RMS solids volume fraction also correlated very well with the particle relaxation time as shown in Figure 22. The influence of the particle relaxation time on the cell size achieving sufficiently grid independent solutions of the reactor performance (amount of conversion achieved) is fairly complex because it must indicate both the degree of cluster formation and the permeability of the cluster (Draft 2, [66]). This is why a more complex "particle relaxation time indicator" had to be used in Figure 21. When considering the volume averaged RMS solids volume fraction, however, the grid independence behaviour depends only on the degree of cluster formation, hence the good direct fit to the particle relaxation time achieved in Figure 22 over a range of particle sizes, particle densities, gas densities and gas viscosities.


[Figure 22 plot: sufficiently grid independent cell size (m), 0 to 0.04 m, versus particle relaxation time (s), 0 to 0.5 s, with a fitted trend line, R² = 0.9861.]

Figure 22: The correlation between the cell size giving a sufficiently grid independent solution of the volume average of the RMS solids volume fraction and the particle relaxation time (Draft 2, [66]).

When comparing Figure 22 with Figure 21, it becomes clear that the cell sizes achieving sufficiently grid independent results for the RMS solids volume fraction are close to those achieving sufficiently grid independent results for the reactor performance with very fast kinetics. This implies that a grid independent cell size which resolves clusters with adequate grid independence will in most cases also resolve the overall reactor performance with adequate grid independence. The very good fit shown in Figure 22 also implies that the grid independent cell size could be described as a correlation, thereby obviating the need for tedious grid independence studies before every investigation carried out with the TFM. Work is under way to derive such a correlation, but it could not be included in this thesis due to time constraints. If such a correlation is successfully derived, however, it could be of great use to 1) clearly identify the conditions under which the TFM can already be used to simulate reactor sizes of practical interest, 2) significantly accelerate the development of filtered modelling approaches which require large numbers of grid independent TFM simulations as inputs for model development and 3) possibly lead to improved filtered correlations based on the particle relaxation time.

6.2 Sensitivity to closure laws and coefficients

Fluidized bed reactor models will never be 100% accurate, but good approximations of reality can be achieved by ensuring that the most important closure laws and coefficients are developed to a high level of maturity. It is therefore important to determine the sensitivity of the TFM/KTGF approach to changes in the various closure laws and coefficients employed. Such an analysis was carried out for the same well-resolved periodic riser section as used to generate the data in Figure 16 and Figure 17 (Appendix 4, [74]). This study identified a substantial number of closures which have a statistically small or negligible effect on the overall riser behaviour. These included the drag law, the bulk viscosity, the frictional viscosity, the gas-phase turbulence and the particle-wall restitution coefficient.


The negligibly small effect of the drag law may seem surprising, but is easily understood when acknowledging that the overall gas-solid momentum exchange in a well-resolved TFM simulation is dominated by the resolved pressure field around the clusters and not by the particle-scale drag law implemented. This was also confirmed in a subsequent model validation study [75]. Cluster resolution also explains the very small effect of gas-phase turbulence on overall riser behaviour. Momentum dispersion caused by the presence of clusters will be much greater than that caused by small subgrid velocity fluctuations, implying that the most important physics can be captured by only resolving the clusters and treating the gas phase as laminar. For reactive systems, inclusion of gas-phase turbulence proved to have a small positive effect on reactor performance by enhancing reactant diffusion from the bubble to the emulsion (Paper 1, [51]), but this effect will mostly not be large enough to merit the added model complexity. The absence of an effect of bulk viscosity and frictional viscosity is to be expected in a riser case, but the small effect of the particle-wall restitution coefficient was somewhat surprising. Closer investigation showed that changes in this coefficient did indeed significantly influence the granular temperature at the wall, but that this did not translate into significant changes in the overall riser behaviour in this particular case. In addition to the particle-wall restitution coefficient, standard fluidized bed wall treatment also considers a specularity coefficient – an indicator of the degree of slip between the solids and the wall. This coefficient has been widely found to have a very large effect on the flow whenever walls are important and the aforementioned riser study (Appendix 4, [74]) confirmed this large influence. 
In this case, changes in the wall shear stress through changes in the specularity coefficient influenced the flow patterns by controlling the degree of cluster formation as illustrated in Figure 23. A high specularity coefficient (large particle-wall shear stress) stopped particles from slipping down the wall at high velocities, causing clusters to build up at the wall until the point where they extrude into the domain and are swept upwards by the rising gas stream. When low specularity coefficients were used, however, long and thin streaks of particles could slip down the walls without building up, thereby completely changing the behaviour of the riser. The importance of the specularity coefficient was also confirmed in a narrower, faster riser (Appendix 1, [71]) and in a pseudo-2D bubbling fluidized bed (Paper 5, [55]). In general, the model describing wall effects will become increasingly important as the wall/volume ratio of the domain and the gas flow rate through the domain are increased. This presents a substantial problem (especially in lab-scale geometries with a high wall/volume ratio) because both the particle-wall restitution coefficient and the specularity coefficient depend greatly on the materials used and are not easily measured, implying that they are mostly used as tuning parameters for the model. Therefore, CFD validation experiments conducted in a small domain will always contain a substantial amount of uncertainty stemming from the walls.


Figure 23: Instantaneous solids volume fraction contours showing the influence of different specularity coefficients (from left to right: 1, 0.1, 0.01, 0.001, 0.0001) (Appendix 4, [74]). The two excerpts show the axial velocity in the selected regions. The colour map on the left shows the solids volume fraction and the one on the right shows the solids velocity.

Another highly influential parameter associated with substantial uncertainty is the particle-particle restitution coefficient. This parameter was found to be one of the most influential factors in the periodic riser study (Appendix 4, [74]) because it directly influenced the degree of cluster formation. A lower particle-particle restitution coefficient would rapidly dissipate granular temperature and reduce granular pressure in dense regions, thereby removing the primary force counteracting the compaction of particles into clusters. Since cluster formation is the dominant influence on the behaviour of riser flows, this direct influence of the particle-particle restitution coefficient on the cluster formation makes it a highly influential parameter. This high level of importance of at least two of the three largely unknown coefficients (the specularity coefficient, the particle-wall restitution coefficient and the particle-particle restitution coefficient) introduces a substantial amount of uncertainty and it would therefore appear logical to minimize this uncertainty as much as possible. This can be done by simulating domains with lower wall/volume ratios and fluidization velocities. A study into this effect showed that even small changes in the specularity and particle-particle restitution coefficients can lead to dramatic non-linear effects in a narrow, fast riser (Appendix 1, [71]) (Figure 24). When a wider bubbling fluidized bed was simulated, however, even large changes in these parameters had much smaller and much more predictable effects on the flow behaviour.


Figure 24: Response of the total pressure drop (Pa) and the CO2 absorption (%) to changes in the specularity coefficient and the particle-particle restitution coefficient. Note that the specularity coefficient is given in coded variables (c), where the actual specularity coefficient is given by ς = 0.002 × 2^c.

Denser bubbling fluidized beds therefore appear to be much easier to simulate accurately than fast risers (especially the very narrow risers typically used in lab-scale experiments), but the physics of dense particle regions dominated by sustained contacts also presents substantial modelling challenges. For example, including a model for frictional pressure (the dominant part of the normal component of the solids stress tensor in dense flow regions) to represent the tendency of dense particle flows to resist maximum packing due to constant shear motion can have a significant effect on the flow behaviour. In a pseudo-2D bubbling fluidized bed, the inclusion of frictional pressure caused a significant decrease in the emulsion phase volume fraction, which caused more gas to flow through the emulsion and therefore less gas to rise as bubbles (Draft 6, [70]). The frictional pressure is also used in the frictional viscosity formulation, thereby causing simulations including the frictional pressure to behave much more solid-like than beds neglecting it. In the case of the pseudo-2D bed investigated in the aforementioned paper (Draft 6, [70]), this resulted in the formation of central gas channels which significantly reduced the gas/solid contact, leading to much poorer reactor performance (more reactant slippage). However, the added complexity brought by the necessity to include frictional models in bubbling fluidized beds (models which are very often neglected in the scientific literature) is offset by the fact that granular temperature transport effects can be neglected. For example, riser simulations can fail completely if granular temperature is not transported by solving the full partial differential equation (Appendix 4, [74]). In bubbling fluidized beds, however, the convection and diffusion terms in the granular temperature equation can be neglected because local generation and dissipation terms are dominant in dense regions. 
This enhances simulation stability and avoids the complexity of modelling granular temperature diffusion.
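The local generation-dissipation balance invoked above can be made concrete with a deliberately simplified sketch. The closure forms and the coefficient A below are invented for illustration only; what carries over to the real KTGF is the qualitative behaviour: granular temperature rises with shear and falls with particle inelasticity, consistent with the earlier discussion of the particle-particle restitution coefficient.

```python
# Toy local-equilibrium balance behind the algebraic granular temperature:
# generation by solids shear ~ A*sqrt(theta)*S**2 is equated to collisional
# dissipation ~ B*theta**1.5, with B proportional to (1 - e_pp**2). Both
# closures are schematic stand-ins, so only the trends are meaningful.

def algebraic_granular_temperature(shear_rate, e_pp, A=1.0e-4):
    """Solve A*sqrt(theta)*S**2 = B*theta**1.5 for theta; sqrt(theta)
    cancels, leaving theta = A*S**2 / B with B = 1 - e_pp**2."""
    B = 1.0 - e_pp ** 2
    return A * shear_rate ** 2 / B

# More inelastic particles (lower e_pp) dissipate more and sustain a lower
# granular temperature, favouring compaction into clusters:
print(algebraic_granular_temperature(50.0, e_pp=0.90))
print(algebraic_granular_temperature(50.0, e_pp=0.99))
```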


6.3 Summary

Experience from the work outlined in the previous two sections suggests that achieving good simulation accuracy for wider bubbling and turbulent fluidized beds is easier by some orders of magnitude than doing the same for narrow risers. This is due to two primary factors: 1) bubbling fluidized beds usually use larger particles which allow for the use of much larger cell sizes and 2) simulations of wider bubbling and turbulent fluidized beds are much less sensitive to the uncertainties brought by the specularity coefficient, the particle-wall restitution coefficient and the particle-particle restitution coefficient. Thus, even though riser simulations will present a substantial modelling challenge for many years to come, industrially interesting simulations of bubbling fluidized beds can already be completed with a reasonable degree of confidence even with the TFM/KTGF approach. It is important to again emphasize the magnitude by which the challenge of modelling narrow risers using fine powders exceeds the challenge of modelling wider bubbling fluidized beds using coarse powders. This vast difference is further amplified by the fact that fast kinetics (which are again very difficult to model accurately) will result in virtually complete conversion in bubbling fluidized beds, rendering accurate modelling of the overall reaction rate unnecessary, except for very shallow beds. Only in the case of slow kinetics (which are much easier to model) must a bubbling fluidized bed reactor model perform well from a reactant conversion point of view. Practically speaking, the most important conclusion from this chapter is that industrial end-users must understand that modelling of bubbling (and turbulent) fluidized bed reactors using larger particle sizes is already practically feasible, while modelling of risers (and circulating fluidized beds) still requires significant research attention. 
This understanding is vital to start the process of demonstrating simulation-based process design in real-world applications in order to gradually build industrial confidence.


Chapter 7: Simulating larger reactors

Even though it appears as if the well-established TFM/KTGF approach would be able to simulate large scale bubbling fluidized beds using coarse particles with a high level of accuracy, there is still a wide range of regularly employed fluidization conditions (involving smaller particle sizes and faster fluidization) which will not be accessible to the TFM approach for many decades into the future. If these fluidization conditions are to be simulated with reasonable accuracy at industrial scales, alternative modelling approaches need to be adopted. This chapter will investigate four such approaches: 2D simulations, the Dense Discrete Phase Model (DDPM), the filtered Two Fluid Model (fTFM) and phenomenological 1D modelling.

7.1 2D modelling

Simulating a 3D fluidized bed geometry on a 2D plane is the simplest method for reducing computational costs and allowing simulations of larger reactors to be carried out. Depending on the number of cross-stream cells employed, this assumption will typically reduce computational costs by about two orders of magnitude, thereby allowing the simulation of a reactor diameter (reactor width in 2D) which is about 5 times greater than that which would be possible in 3D. The simplicity of 2D simulation and the large computational savings it brings have therefore made 2D planar fluidized bed simulations very common in the scientific literature. Naturally, however, the 2D assumption brings a significant amount of uncertainty to the simulation results. This uncertainty stems from two primary sources: 1) the removal of one degree of freedom will alter the way in which the gas interacts with the particle clusters and 2) cross-stream transport in a 2D planar geometry will differ substantially from radial transport in a typical cylindrical 3D geometry. In order to investigate these factors, a dedicated study was completed to compare the behaviour of 2D planar simulations to that of 3D cylindrical simulations. The results showed that the 2D simulations predicted qualitatively correct trends, but produced substantial quantitative differences from 3D simulations. As an example, the systematic differences between the reactor performance (degree of conversion achieved) predicted by 2D and 3D simulations are shown in Figure 25. The explanations behind the trends in Figure 25 are somewhat complex and the reader is referred to Paper 7 [57] for more details. In brief, it was found that the difference in the bubble-to-emulsion mass transfer caused by the missing degree of freedom was the primary source of discrepancy. 
At low flow rates, the small bubbles forming close to the inlet could not be resolved in 2D, leading to substantial over-predictions of the mass transfer rate and the reactor performance. In addition, the mass transfer in the splash region at the top of the bed was over-predicted because the gas could not slip out of the bed efficiently due to the missing degree of freedom. Both these factors caused the 2D simulations to generally predict better reactor performance than the 3D simulations.


Figure 25: Response of the difference between 2D and 3D reactor performance to changes in the fluidization velocity (U), the static bed height (H), the particle diameter (d) and the reactor temperature (T) (Paper 7, [57]). Differences should be interpreted as the percentage by which the 2D reactor performance is greater than the 3D reactor performance.

The geometrical differences between the 2D and 3D geometries also caused errors in the bed expansion ratio and the overall recirculatory flow pattern as the gas flow rate was increased (Figure 26). These hydrodynamic effects had a smaller influence on the overall reactor performance though.

Figure 26: Time averaged contours of solids volume fraction (two images on the left) and axial velocity (two images on the right) for a 2D simulation (left in each image pair) and a 3D simulation (right in each image pair) (Paper 7, [57]).

Despite these clear discrepancies, 2D simulations can deliver quantitatively accurate hydrodynamic results [74, 75] (discussed in Section 8.1.1). However, a study attempting to validate a 2D reactive fluidized bed model could not capture a counter-intuitive experimental trend in reactor performance (Paper 1, [51]) (more details in Section 8.2.1). These results suggest that, although 2D modelling can deliver surprisingly good representations of reality in some cases, the substantial amount of uncertainty it introduces makes it risky in any situation where good quantitative predictions are desired over a range of design and operating variables (e.g. for an optimization study). 2D simulations should therefore only be used for applications such as model testing and rapid prototyping of new concepts. A limited amount of time was also dedicated to the idea of using smaller domains with periodic boundaries (e.g. a pie-slice of a cylindrical geometry with rotationally periodic boundaries), but it was found that the improvements brought by these arrangements were not worthwhile. The primary issue not addressed by this method is the splash zone, which primarily forms in the centre of the vessel. If this region is not resolved in full 3D, the gas will not be able to exit the bed cleanly and the reactor performance will be over-predicted as discussed above. It is possible that this problem will not be present in riser simulations which have no splash zone, but this option has not yet been investigated.

7.2 The DDPM

The parcel-based Lagrangian modelling approach, known as the Dense Discrete Phase Model (DDPM) in ANSYS FLUENT and as the Particle-in-Cell method in the specialized fluidized bed simulation software package Barracuda VR, offers a very attractive alternative to the TFM. In this approach, the solids phase is tracked in a Lagrangian framework as parcels of particles (each parcel can contain millions or billions of particles), while the gas phase is represented as an Eulerian continuum on a standard computational grid. The solids phase is also represented on the Eulerian grid as it is in the TFM, but the difference is that the solids volume fraction and velocity are interpolated from the Lagrangian parcels instead of being calculated via the Navier-Stokes transport equations. A full description of the DDPM equation set can be found in Paper 4 [54]. The primary advantage of the DDPM over the TFM is that the Lagrangian particle tracking does not result in any numerical diffusion of the volume fraction field. In practice, this means that a particle cluster requiring around 5-10 cells across to be properly resolved with the TFM can now be resolved on only one cell. In 3D this would result in a computational speedup in the range of 2-4 orders of magnitude. The much larger cell sizes that can be used with the DDPM have been practically demonstrated both for risers (Conference 1, [59]) and for bubbling fluidized beds (Conference 2, [60]). For the riser case, it was shown that the DDPM allowed for at least a factor of 3 increase in cell size, although this estimate might be too low because the cell size became so large that only 14 cells were deployed across the width of the domain – a number which could be too small to properly capture the macroscopic core-annular flow structure. 
The bubbling fluidized bed study showed that the DDPM could give accurate solutions using a cell size at least 4 times larger than the TFM (the largest grid size investigated with the DDPM still gave grid independent solutions) and could affordably complete industrial scale bubbling fluidized bed simulations in 3D. The aforementioned studies considered only hydrodynamic performance, however, and another study (Draft 3, [67]) found that the grid independence of the reactor performance (amount of conversion achieved) behaves differently. In this study, the excellent hydrodynamic grid independence was confirmed, but significant grid dependencies remained for the reactor performance. This is because of the problem illustrated in Figure 27.
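The parcel-to-grid interpolation described above can be sketched in one dimension. Nearest-cell binning is the crudest possible deposition scheme and is used here purely for illustration; production codes use smoother interpolation kernels.

```python
# Sketch of the parcel-to-grid step that distinguishes the DDPM from the TFM:
# the solids volume fraction is not transported on the grid but deposited
# from Lagrangian parcels (here via nearest-cell binning in 1D).
import numpy as np

def deposit_volume_fraction(parcel_x, parcel_volume, n_cells, domain_length):
    """Bin each parcel's solids volume into its host cell and divide by the
    cell size (which plays the role of cell volume in this 1D illustration)."""
    dx = domain_length / n_cells
    alpha = np.zeros(n_cells)
    idx = np.minimum((parcel_x / dx).astype(int), n_cells - 1)
    np.add.at(alpha, idx, parcel_volume / dx)  # unbuffered accumulation
    return alpha

x = np.array([0.05, 0.07, 0.45, 0.95])  # parcel positions
alpha = deposit_volume_fraction(x, np.full(4, 0.02), n_cells=10, domain_length=1.0)
print(alpha)  # two parcels land in cell 0, one each in cells 4 and 9
```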


[Figure 27 plot: solids volume fraction (left axis, 0 to 0.6) and reactant mole fraction (right axis, 0 to 1) plotted across cell boundaries 0 to 3, each shown both as in reality and as represented by the model.]

Figure 27: Simple illustration of the interpolation error resulting in over-predicted gas-solid contact (Draft 3, [67]).
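The over-prediction illustrated in Figure 27 can be quantified with a deliberately simple numerical sketch. The step profiles below are made up: a cluster occupies one half of a coarse cell while the reactant sits mostly in the other half, so the true overlap of the two fields is far smaller than the product of the cell-centre values.

```python
# Numerical illustration of the interpolation error of Figure 27 for a
# first order reaction (rate ~ solids volume fraction * reactant mole
# fraction), using invented step profiles across a single coarse cell.
import numpy as np

x = np.linspace(0.0, 1.0, 1000)          # one coarse cell, resolved finely
alpha_s = np.where(x < 0.5, 0.6, 0.0)    # cluster in the left half
y_react = np.where(x < 0.5, 0.02, 0.5)   # reactant mostly outside the cluster

true_rate = np.mean(alpha_s * y_react)             # sub-cell integral of the product
coarse_rate = np.mean(alpha_s) * np.mean(y_react)  # product of cell-centre values

print(f"true {true_rate:.4f}  coarse {coarse_rate:.4f}  "
      f"over-prediction x{coarse_rate / true_rate:.1f}")
```

Consistent with the text, the cell-centre product over-predicts the gas-solid contact by roughly an order of magnitude for these profiles.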

Even if the DDPM requires only one cell to resolve the cluster interface, that cell would still substantially over-predict the gas-solid contact. In the case of a first order reaction, the reaction rate is proportional to the product of the solids volume fraction and the reactant mole fraction. When looking at Figure 27, the integrated product of the solids volume fraction and the reactant mole fraction would be roughly one order of magnitude higher when the cell centre values are used in the reaction rate calculation than it would be in reality. This error would be small if the cells containing such cluster interfaces occupy only a small percentage of the total volume, but, as the cell size is increased to draw maximum value from the cluster resolution capabilities of the DDPM, the volume occupied by the cluster interfaces would increase proportionately. This effect can be captured by a relatively simple filtered model which decreases the reaction rate based on the gradients of solids volume fraction and reactant mole fraction in any given cell. As already demonstrated nearly two decades earlier [78], such a model is fairly straightforward to derive, but work in this area could not yet be completed due to time constraints. If such a model could be successfully derived, however, grid dependence behaviour regarding the overall reactor performance should be mostly eliminated and simulation speed-ups of 2-4 orders of magnitude over the TFM could become a reality even for fully reactive simulations. The DDPM therefore holds great promise of delivering simulation accuracy similar to that of the TFM at a greatly reduced computational cost. However, substantial fundamental problems must first be resolved before the DDPM (as implemented in ANSYS FLUENT 13.0) can be generally employed with a high degree of confidence. These problems exist both in the dense and dilute regions of fluidized bed reactors. 
In the dense regions, the discrete nature of the volume fraction derived from the particle parcels creates a need for some non-physical numerical tweaks in order to maintain model stability. For example, if the number of parcels is chosen such that 10 parcels represent a maximum packing volume fraction of 0.6, the addition of the 10th parcel to the cell (moving the volume fraction from 0.54 to 0.6) would cause the solids pressure to suddenly jump by many orders of magnitude as a result of the formulation of the radial distribution function. It is not possible to achieve a numerically stable solution for such a situation. In ANSYS FLUENT, a "dense packing treatment" option is provided which ensures a stable solution in regions of dense packing, but this is done at the expense of correct physics. The developers of the DDPM in FLUENT decline to disclose any details about this dense packing treatment, but practical experience with this modelling approach will be discussed here. When the dense packing treatment is activated, cells containing more parcels than required to reach maximum packing appear to transfer the excess volume fraction to adjacent cells in order to conserve mass. This phenomenon can become quite pronounced as shown in Figure 28.

Figure 28: A typical correlation between the volume fraction field and the parcel positions in a region of dense packing. Note that some cells have a high volume fraction on the Eulerian grid despite not containing any Lagrangian parcels.

Aside from the uncertainty brought by this undisclosed mechanism, the dense packing treatment appears to override all KTGF modelling applied to influence the motion of the parcels. Access to the particle force balance through User Defined Functions (UDFs) is also denied in regions where the dense packing treatment is activated. This implies that the particle motion in regions of dense packing is influenced not by the KTGF modelling framework, but by the undisclosed numerical treatment required to ensure model stability. This sounds like a potentially very large source of error, but the DDPM in ANSYS FLUENT provides predictions which are quite similar to the TFM when bubbling fluidization is simulated. The major difference observed is that the bubbles simulated by the DDPM are virtually devoid of any particles, while the TFM generally simulates bubbles which still contain a solids volume fraction in the range of 0.01-0.05. This has a substantial influence on reactive simulations (especially when fast kinetics are considered) and causes the DDPM to predict substantially poorer reactor performance due to the poorer gas-solid contact simulated (Draft 3, [67]).

The aforementioned study did not include frictional pressure because the very rapid increase in frictional pressure as maximum packing is approached cannot be accommodated in the DDPM. The DDPM will therefore behave similarly to the TFM with no frictional pressure, simulating a fully compacted emulsion phase which behaves very liquid-like. This shortcoming will introduce some degree of error, but the nature and severity of this error have not yet been quantified. In dilute regions, the current formulation of the DDPM in ANSYS FLUENT (version 13.0) does not transport the granular temperature – something which leads to very large errors as mentioned in Section 6.2 – and also neglects shear stresses (only accounting for normal stresses caused by the simulated granular pressure). The granular temperature transport problem can be corrected by simply transporting the granular temperature as a scalar variable on the particle parcels (Paper 4, [54]). In this way, granular temperature can be transported in the Lagrangian sense by solving an ordinary differential equation for each parcel based on the underlying information provided by the cell in which it currently resides. The aforementioned paper (Paper 4, [54]) also included the effect of the full stress tensor in the particle force balance in order to evaluate the effect that this would have on simulation results. As illustrated in Figure 29, both these improvements have large effects on model behaviour. If no transport of granular temperature is included, the granular temperature in the domain decreases greatly and cluster formation is dominated by hindered settling effects. If the shear stress is neglected, on the other hand, the granular temperature is over-predicted because the neglected shear stress allows for very large flow gradients, leading to excessive generation of granular temperature. It is only when both these effects are included that the expected smooth cluster formation is resolved.
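The per-parcel ordinary differential equation described above can be sketched as follows. The source terms are illustrative placeholders, not the actual KTGF generation and dissipation expressions of Paper 4 [54]; the point is only the Lagrangian time-integration structure.

```python
# Sketch: Lagrangian granular temperature transport per parcel. Each parcel
# integrates d(theta)/dt = generation - dissipation using rates sampled from
# its host cell; the rate forms below are schematic stand-ins.

def advance_parcel_theta(theta, gen_rate, diss_coeff, dt):
    """One explicit Euler step of d(theta)/dt = gen_rate - diss_coeff*theta**1.5,
    clipped at zero because granular temperature cannot be negative."""
    return max(theta + dt * (gen_rate - diss_coeff * theta ** 1.5), 0.0)

# A parcel starting nearly at rest relaxes towards the local equilibrium
# theta_eq = (gen_rate / diss_coeff)**(2/3):
theta = 1.0e-3
for _ in range(5000):
    theta = advance_parcel_theta(theta, gen_rate=0.05, diss_coeff=10.0, dt=1.0e-3)
print(f"{theta:.4f}")
```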

Figure 29: Instantaneous snapshots of particle volume fraction obtained with the full model (left), with shear forces neglected (middle) and with granular temperature transport neglected (right) (Paper 4, [54]). Dark regions contain high solids volume fractions (clusters).

An additional advantage of the DDPM approach is that it avoids the unphysical merging of dilute crossing particle jets in the TFM (a phenomenon known as "delta-shocks"). This advantage was also explicitly proven in the aforementioned paper (Paper 4, [54]). The elimination of delta-shocks would reduce the degree of clustering by allowing dilute particle streams to pass through each other but, although the effect of delta-shocks was found to be significant, it is not large enough to be of major importance in most fluidization cases. The final fundamental advantage of the DDPM is the natural inclusion of a particle size distribution. Investigations into the importance of this advantage are planned for the near future, but could not be completed within the current project.

7.3 The fTFM

The filtered Two Fluid Model (fTFM) is the most popular approach for large scale fluidized bed modelling today. It is derived through a multiscale modelling approach by filtering well resolved TFM simulations to derive a new set of closures which can model the effects of clusters on a much coarser grid. The full set of fTFM equations (hydrodynamics only) can be viewed in Paper 6 [56]. Theoretically, this multiscale modelling approach has the potential to achieve a computational speedup of many orders of magnitude in comparison to the standard TFM, while still preserving reasonable accuracy. In order to reach this ideal, however, the effects of clustering on all transport phenomena must be accurately modelled – a task which is proving to be very challenging. When compared to the DDPM approach, the fTFM benefits from being based on the stable and mature TFM and from being potentially many orders of magnitude faster, especially for smaller particle sizes which form very small clusters (these small clusters must still be resolved by the DDPM). On the other hand, the fTFM requires a large number of additional models, while the DDPM requires only the well-established KTGF. However, when considering the substantial problems still facing the DDPM discussed in the previous section, it is unclear which of these approaches will emerge as the dominant large scale fluidized bed flow model in the future. One fTFM paper (Paper 6, [56]) was completed within the current project in order to gain some practical experience with the filtered models. State of the art models developed for riser flows by the group of Prof Sundaresan at Princeton University were applied in dense fluidized beds. No fundamental filtered model development was completed within the current project. Practically speaking, the filtered models proved to be easy to implement and numerically very stable. 
In terms of accuracy, the models performed well over a range of flow conditions and particle sizes compared to experiments and resolved TFM simulations (Figure 30). Aside from the standard filtered models for drag and solids stresses, a relatively new addition in the form of wall corrections was also found to be very important for ensuring accurate results (also evident from Figure 30). It was postulated that these wall functions could be derived in a more generic way by applying corrections in all regions which contain substantial flow gradients within the filtered variables (not only the wall regions).


Figure 30: Time-averaged solids vertical velocity (top row) and volume fraction (bottom row) for the filtered case without wall corrections and with 16 cm cells (left), the resolved case with 4 cm cells (centre), and the filtered case with wall corrections and 16 cm cells (right) (Paper 6, [56]).

Further work to develop reliable filtered models for species and heat transfer as well as heterogeneous reaction kinetics is currently under way within a number of research groups. Once these models become available, it will be possible to critically compare the performance of the DDPM and fTFM in large scale fluidized bed simulations.

7.4 1D phenomenological modelling

Simplified 1D modelling has been used by industry for a number of decades due to its relative simplicity and low computational cost. For bubbling fluidized bed applications, the two-phase theory has emerged as the preferred approach; it models the bed as consisting of two interacting phases: the bubble phase and the emulsion phase. Interactions between these two phases (primarily the rate of mass transfer) are then modelled through a set of phenomenological closures. The full equation set for the 1D approach can be viewed in Paper 3 [53].

When compared to the DDPM and fTFM approaches, the 1D approach benefits from its simplicity and very fast simulation times (order of seconds to minutes), making it ideal for industrial application. However, since it is much less fundamental, it is also likely to be much less generic, implying that it must be used with great care in order to avoid potentially very costly modelling errors.
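The two-phase idea can be sketched with a minimal steady-state model: a plug-flow bubble phase exchanging mass with a quasi-steady emulsion phase in which a first-order reaction occurs. All numbers below are hypothetical and chosen only to illustrate the structure; the closures of Paper 3 are more elaborate:

```python
import numpy as np

# Minimal two-phase bubbling-bed sketch in the Davidson/Harrison spirit.
# All parameter values are hypothetical illustrations, not from Paper 3 [53].
H = 1.0       # expanded bed height (m)
u_b = 0.8     # bubble rise velocity (m/s)
K_be = 2.0    # bubble-to-emulsion mass exchange coefficient (1/s)
k_rxn = 5.0   # first-order reaction rate constant in the emulsion (1/s)
C_in = 1.0    # inlet reactant concentration (mol/m^3)

# Quasi-steady emulsion balance: transfer from bubbles = consumption by
# reaction, giving C_e = K_be * C_b / (K_be + k_rxn) at every height.
# Substituting into the plug-flow bubble-phase balance,
#   u_b * dC_b/dz = -K_be * (C_b - C_e),
# yields a single effective rate with transfer and kinetics acting in series:
k_eff = K_be * k_rxn / (K_be + k_rxn)
C_out = C_in * np.exp(-k_eff * H / u_b)   # analytic plug-flow solution
conversion = 1.0 - C_out / C_in
print(f"outlet concentration: {C_out:.4f} mol/m^3, conversion: {conversion:.1%}")
```

Because transfer and reaction act in series, the effective rate is bounded by the slower of the two, which is why 1D predictions hinge on the quality of the interphase mass transfer closure.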


Establishing the generality of phenomenological closure laws is therefore a very important task, and work was initiated during this project to compare the predictions of a 1D phenomenological approach to a 2D TFM approach (Paper 3, [53]). As shown in Figure 31, results were encouraging even for the variables showing the greatest differences.

Figure 31: Response surfaces of the difference between the predictions made by the CFD and 1D approaches (Paper 3, [53]). The left-hand response surface shows the reactor performance (-log(outlet_reactant_concentration)) and the right-hand surface shows the expanded bed height (m). The two most influential independent variables, fluidization velocity (U) and particle diameter (d), are plotted.

From the perspective of the study comparing 2D and 3D simulation results (Paper 7, [57]), discussed in Section 7.1 (which was completed after this 1D/2D study), the majority of the differences in Figure 31 may well be explained by discrepancies resulting from the 2D assumption rather than discrepancies resulting from the highly simplified 1D approach. Further work will therefore be carried out to compare 1D phenomenological modelling to 3D CFD modelling. Nonetheless, the reasonable comparison between more fundamental CFD modelling and phenomenological 1D modelling opens up very attractive pathways to the development of reliable 1D models with well-established safe operating boundaries. Traditionally, the phenomenological closures used in 1D models are empirically derived based on experimental data and this process can be significantly enhanced by the rich flow data available from CFD simulations. It is therefore theoretically possible to derive a set of phenomenological closures that replicate fundamental CFD model behaviour over a wide range of flow conditions – something which would be of great industrial interest.

7.5 Summary

The four pathways to sufficiently accurate large scale fluidized bed reactor simulations covered in this chapter all have their own pros and cons, as summarized in Table 3. This wide range of modelling possibilities implies that, even though model development remains very important, research into best practices for model application also becomes a high research priority.


Table 3: Pros and cons of four options for large scale fluidized bed reactor simulations.

Approach: 2D modelling
Pros:
• Simplest computational cost saving option for a CFD modeller
• Grants large computational savings in the range of two orders of magnitude
• When applied to DDPM simulations, these savings can be augmented by another two orders of magnitude
Cons:
• Fundamentally alters the domain and the mass transfer interactions
• This results in systematic differences between 2D and 3D simulations
• Can lead to large quantitative errors

Approach: The DDPM
Pros:
• Being a resolved approach, it needs no new closures
• Grants large computational savings in the range of 2-4 orders of magnitude
• Particle size distributions are easily included
• The phenomenon of delta shocks is inherently avoided
Cons:
• Modelling uncertainties in regions of dense packing
• Current model formulations (ANSYS FLUENT 13.0) are incomplete in dilute situations
• Will still be limited for fine powders which form very fine particle structures

Approach: The fTFM
Pros:
• Can theoretically simulate any reactor size as long as the grid is sufficiently fine to capture large scale flow patterns
• The only CFD approach that can simulate large scale reactors with fine powders
Cons:
• Requires a wide range of new closure models to be developed
• The clustering phenomenon which must be modelled is very complex, thereby making model development challenging

Approach: 1D phenomenological models
Pros:
• Very fast simulation times for any size reactor
• Can be implemented in a simple Matlab code
• Simple to use
Cons:
• Relies completely on models (almost no fundamental transport included)
• Therefore less general

A large amount of work is necessary to weigh the pros and cons in Table 3 for different modelling purposes (prototyping, scale-up, optimization, real-time operational guidelines, troubleshooting or multiscaling) and for different reactor conditions (particle size, flow regime, reactor size or reaction rate). No single approach will be able to accurately and affordably cover all of these options and it is therefore vital to have a good understanding of where these different approaches (the standard TFM included) can offer the greatest value. In addition, focussed model development is necessary for the DDPM, the fTFM and the phenomenological approach. The DDPM requires significant amounts of work both in dense and dilute regions, the fTFM requires accepted models for heterogeneous reactions, and the phenomenological approach requires further benchmarking against CFD simulations in order to improve the various closure laws involved.


Parallel progress both in terms of model development and model application should result in a rapid and well-documented expansion of the number of real applications in which fluidized bed reactor modelling can add value. Model developers and modelling service providers must keep up to date with these developments so that industry can be gradually engaged through the fluidized bed reactor applications which fall within the capabilities of current state of the art models. Further progress beyond the state of the art will then continue to accelerate this process up to the point where simulation-based process design becomes the industry standard.


Chapter 8: Model validation

As outlined in Section 4.3.2, dedicated model validation studies (based on experiments designed and executed exclusively for this purpose) are essential for two primary reasons: 1) accelerating model development by clearly indicating the areas where further improvement is necessary and 2) establishing proof of model fidelity through successful validation so that industry can start investing in simulation-based process design. In acknowledgement of this central role of experimental validation, the current project has invested much of its time and resources in this area. A two-year postdoc was dedicated to the construction and operation of two pseudo-2D fluidized beds – one cold flow unit and one reactive unit – especially for the purpose of CFD model validation. Results from this study will be reported in this chapter together with some additional validation efforts against published experimental data. The chapter will be presented in two parts: cold flow validation and reactive validation.

8.1 Cold flow validation

Validation experiments performed at room temperature are much easier and less costly to complete than reactive experiments at elevated temperatures. It therefore makes sense to extract as much information from cold flow experiments as possible, reserving the reactive experiments only for data which cannot be collected in any other way. Validation against cold flow experiments will be presented in two sections: comparisons to published data and comparisons to data collected within the current project.

8.1.1 Comparisons to published data

Comparisons to published cold flow experiments yielded encouraging results, even for 2D TFM simulations of bubbling beds and risers [74-76] (e.g. Figure 32), confirming the fundamental usefulness of the TFM. 2D DDPM simulations also yielded good results in a riser case (Conference 1, [59]), although such good performance could not be repeated for more dilute cases where the transport of granular temperature added in a later work (Paper 4, [54]) became very important. Larger scale comparisons were also made in 3D using the DDPM (Conference 2, [60]) and the fTFM (Paper 6, [56]) to show adequate performance in industrial scale reactors. The generality of the fTFM was also evaluated in 3D against a fairly extensive data set collected in a bubbling bed with Geldart A particles to show satisfactory generic applicability (Paper 6, [56]) (Figure 33). As outlined in Section 7.5, it is very important that the fTFM performs well for fine powders, where not even the DDPM can simulate industrial scale units due to the fine particle structures formed.


[Plot: axial profiles of percentage solids vs height (m); curves: Experimental, 2D, pseudo 3D]

Figure 32: Comparison of axial volume fraction profiles from 2D TFM simulations and 3D experiments [75]. The pseudo 3D line was generated by taking a weighted average across the cross-section of the 2D riser as if it were a 3D cylinder. This adjustment caused solids residing at the wall to have a substantially higher weight than solids towards the centre, thereby increasing the cross-stream average volume fraction in areas of core-annular flow.
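The weighting described in the caption can be sketched as follows, using a hypothetical core-annular profile rather than the actual riser data: each lateral position in the 2D plane is treated as a radius in a cylinder, so the cross-sectional average picks up annulus-area weights proportional to r.

```python
import numpy as np

# Sketch of the "pseudo 3D" averaging described for Figure 32. The profile
# below is a hypothetical core-annular shape, not the actual simulation data.
R = 0.05                          # hypothetical riser radius / half-width (m)
r = np.linspace(0.0, R, 101)      # lateral coordinate from centreline to wall

# Dilute core, dense annulus near the wall
alpha = 0.02 + 0.18 * (r / R) ** 4

plain_avg = alpha.mean()                          # naive 2D cross-stream average
pseudo3d_avg = np.sum(alpha * r) / np.sum(r)      # annulus-area (2*pi*r*dr) weighting

print(plain_avg, pseudo3d_avg)
```

Because the dense annulus sits at large r, the weighted average exceeds the plain one, reproducing the increase in cross-stream average volume fraction described in the caption for core-annular flow.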

[Plots: height (m) vs pressure drop (Pa/m) (left) and volume fraction vs normalized radius (right); curves: Experiment, fTFM, TFM]

Figure 33: Axial and radial comparisons of the performance of the fTFM against experiments collected in a lab-scale 3D turbulent fluidized bed using a fine Geldart A powder (Paper 6, [56]). The performance of the unfiltered TFM is also included to illustrate the importance of filtering when simulating such fine powders.


8.1.2 Comparisons to data collected within the project

A transparent pseudo-2D fluidized bed reactor was constructed and operated during the current project in order to collect data on the hydrodynamics and species transfer for the purpose of CFD model validation. The unit was originally designed for maximum flexibility by inserting a separation plate one third of the distance from one of the walls so that three different bed diameters could be investigated. However, this arrangement led to gas leakages and asymmetric flows from the two gas distributors employed, and the bed had to be simplified to a standard arrangement with one distributor and a fixed bed width. More details on the experimental setup can be viewed in Paper 5 [55].

As outlined in Section 4.3.2, experiments were designed from two fundamental guiding principles:

1. The primary purpose of any validation campaign should be to test the generality of a model. Detailed comparisons at one or two flow conditions are of limited use because a model will inevitably be employed under flow conditions far removed from those under which it was validated. It is therefore very important to explicitly evaluate the performance of the model against experimental data collected over a wide range of flow conditions.

2. Experiments should be designed in such a way that the abilities of the model to predict hydrodynamics, species transfer, heat transfer and reaction kinetics are evaluated in isolation. All of these phenomena will occur simultaneously in a real reactor and, if a systematic approach is not followed, differences between model and experiments will be very difficult to interpret because they can result from any combination of these four phenomena.

These principles were used as general guidelines, but practical circumstances often demanded some compromise.
For example, PIV studies could only be completed for large particles, the mass spectrometer (MS) for gas species measurements was not readily available, and time constraints did not allow for investigations into heat transfer.

The first hydrodynamic study evaluated the generality of the 2D TFM approach for comparisons to pseudo-2D experimental data (Paper 5, [55]). The bed expansion ratio was used as the primary variable for comparison over a range of fluidization velocities, static bed heights and particle sizes. Velocity profiles collected via PIV facilitated more detailed comparisons for selected cases. The 2D TFM achieved a reasonable match against experimental measurements, but showed systematic differences with regard to the response to changes in the particle size and especially the static bed height. This is illustrated in Figure 34, where it is shown that the model increasingly over-predicted the bed expansion as the particle size was decreased and under-predicted the bed expansion as the static bed height was decreased. On the other hand, the model responded very accurately to changes in the fluidization velocity.


Figure 34: Response surface of the changes in the percentage deviation between simulation and experiment to changes in the static bed height and the particle size (Paper 5, [55]). The figure should be interpreted as the percentage by which the CFD bed expansion ratio was greater than the experimental bed expansion ratio.

Closer investigations into the flow fields using PIV data revealed that the 2D TFM approach greatly over-predicted the average velocity inside the bed. This was found to be the result of 2D simulations neglecting the friction on the large front and back walls of the pseudo-2D fluidized bed unit and was corrected by running a 3D simulation with a partial slip boundary condition on the walls. As illustrated in Figure 35, this large discrepancy was not a result of the 2D assumption itself, but only of the neglect of wall friction on the large front and back walls.

[Plot: time-averaged y-velocity (m/s) vs x-coordinate (m); curves: Experiments, 2D, 3D: S = 0.0, 3D: S = 0.5]

Figure 35: Comparisons of experimental time-averaged y-velocity (vertical component) data to simulation results from a 2D geometry as well as a 3D geometry with specularity coefficients (S) of 0 and 0.5 on the large front and back walls (Paper 5, [55]).
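The partial slip condition referred to here is typically imposed through a Johnson and Jackson type wall boundary condition, in which the specularity coefficient (S in Figure 35, often written φ) scales the solids shear stress at the wall. A commonly used form is sketched below; the exact formulation implemented in Paper 5 may differ in detail:

```latex
\tau_{s,w} \;=\; -\,\frac{\sqrt{3}\,\pi}{6}\,
  \phi\,\frac{\alpha_s}{\alpha_{s,\max}}\,
  \rho_s\, g_0\, \sqrt{\Theta_s}\; \vec{u}_{s,\mathrm{slip}}
```

Here φ = 0 recovers free slip (no wall friction), while larger values transfer progressively more particle momentum to the wall, consistent with the lower bed velocities obtained with S = 0.5 in Figure 35.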


Following these findings, another study was completed to investigate the effects of three highly influential factors: the frictional pressure, the 3D geometry and wall friction (Draft 6, [70]). Including all of these factors managed to eliminate the systematic difference in the response to changes in particle size, primarily because 3D simulations of smaller particle sizes which formed smaller 3D particle structures gave the gas an extra degree of freedom to slip past the solids and thereby reduce the bed expansion. The large difference in the response to changes in the static bed height remained unchanged though.

Figure 36: Response surface of the changes in the percentage deviation between simulation and experiment to changes in the static bed height and the particle size (Draft 6, [70]). The figure should be interpreted as the percentage by which the CFD bed expansion ratio was greater than the experimental bed expansion ratio.

The curves in Figure 34 and Figure 36 appear to be very similar, with the primary difference being the elimination of the response to changes in the particle size. The quantitative match at static bed heights above about 0.3 m appears to be very good, but the agreement rapidly deteriorates at lower static bed heights. Significant efforts were expended to try to correct this discrepancy through changes in model coefficients, changes to the gas inlet condition and the inclusion of two particle size classes, but satisfactory results could not be attained. For this reason, experiments will be repeated in a new project starting directly after the completion of the current project in order to ensure that no experimental error was made. Experimental data will also be collected over a wider range of static bed heights in order to see whether the good agreement between simulation and experiment persists at higher static bed heights.

Despite this unexplained discrepancy at low static bed heights, however, the likeness between complete 3D TFM simulations and experiments was very encouraging. This likeness is best observed in an animation, where it becomes apparent that the complete simulation accurately captures complex behaviour such as central channelling of gas due to dense phase shear stresses and wall stresses, as well as entrainment of small structures into the bubble phase due to the 3D geometry and wall stresses. Figure 37 gives a snapshot of such a visual comparison.

Figure 37: Visual comparison of the bed dynamics of the experiments and different simulation setups. From left to right: the experiment, 2D simulation, 2D simulation with frictional pressure, 3D simulation with frictional pressure, 3D simulation with frictional pressure and wall friction, and 3D simulation with frictional pressure and wall friction with grey levels adjusted to better match the experiment.

Apart from the unexplained discrepancy at low static bed heights, the TFM/KTGF approach therefore appears capable of simulating complex fluidization behaviour with good accuracy. From an experimental point of view, it would have been preferable if the influence of the wall friction could be reduced in order to limit the sensitivity of simulation results to the unknown specularity coefficient. For this reason, future experiments will be designed in a larger 3D reactor with a much lower wall-to-volume ratio.

Subsequently, the same experimental setup was used to study species transfer by injecting a steady point source of CO2 tracer into the unit. More details of this experimental setup can be viewed in Paper 5 and Conference 5 [55, 63]. Initially, a study with the complete TFM setup (as was used to generate the rightmost image in Figure 37) was done to compare detailed species measurements for a case in which PIV measurements were also available (Conference 5, [63]). In addition, the effect of the 2D assumption on the ability of the model to predict gas dispersion was also evaluated. Results for the 3D and 2D simulations are shown in Figure 38.


[Plots: CO2 volume fraction vs x-coordinate (m) at heights of 10.5, 34.5 and 60.5 cm; experiment vs simulation for 3D (left) and 2D (right)]

Figure 38: Comparisons of local experimental measurements of CO2 tracer concentrations with 3D simulations (left) and 2D simulations (right).

It is immediately evident that the 3D simulation produced a reasonable fit, while the 2D simulation predicted species concentrations which were generally too low. The under-prediction in the 2D simulations was the result of the 3D nature of the tracer dispersion through the bed. Tracer gas was injected from the back plate of the pseudo-2D bed and generally maintained higher concentrations at the back plate than at the front, especially in the lower reactor regions. Since the tracer concentrations were also measured at the back plate, the 2D simulations could not capture the higher tracer gas measurements caused by this 3D effect.

For the 3D case, it appears as if insufficient gas diffusion takes place as the gas rises through the bed. In the aforementioned paper (Conference 5, [63]), it was speculated that this lack of diffusion was due to particle induced gas dispersion in the emulsion (such as is known to occur in fixed beds) which was not modelled in the standard TFM setup. Such a model [79] was evaluated together with a simple Smagorinsky-type LES model for gas phase turbulence, and the effects were compared over a wide range of flow conditions. Results showed that the added physics did not fully correct the insufficient gas dispersion shown in Figure 38.

The results in Figure 39 clearly indicate that, although the model captured the qualitative trend quite accurately, the tracer concentrations predicted at the common location (17 cm directly above the tracer injection point) are consistently higher than the experimental observations. This indicates that all model setups predicted insufficient gas dispersion between the injection and measurement points. Specifically, the standard 3D simulations over-predicted the tracer concentrations by 46.5%, the simulations with particle induced diffusion by 33.0% and the simulations with particle induced diffusion and turbulent diffusion by 32.6%.
It therefore appears as if particle induced diffusion explains some of the observed discrepancy, but that a major cause of gas species diffusion is still unaccounted for. An additional source of diffusion caused by particle motion is currently being investigated based on the work of Derksen [80, 81], but results are not yet available at the time of this writing.

Figure 39: Variation in the tracer concentration measured at a common location in the experiments (top left), standard 3D simulations (top right), 3D simulations with particle induced diffusion in the emulsion (bottom left) and 3D simulations with particle induced diffusion and turbulent diffusion (bottom right) as a function of fluidization velocity (U0) and particle size (dp).

8.2 Reactive model validation

As in the previous section, results in this section will be split between comparisons to published data and comparisons to data collected within the project.

8.2.1 Comparisons to published data

Sufficiently detailed experimental data from reactive fluidized beds are very scarce in the literature. In fact, this scarcity of reactive flow data was one of the primary motivations behind the experiments conducted within this project. The data which are available were also found to be very challenging to use for the purpose of model validation, simply because experiments were done as proof of concept and not for the purpose of model validation. The primary challenges involved were typically related

to correctly defining boundary conditions, material properties and reaction rates, running very expensive simulations and comparing to a limited data set.

Two comparisons to published experimental data were made: one for the fuel reactor in a lab-pilot CLC plant (Paper 1, [51]) and one for the carbonator in a lab-pilot potassium looping plant (Appendix 1, [71]). For the CLC plant, data was available over a range of fuel gas flow rates so that model generality could be evaluated. This study was particularly interesting because it reported a counter-intuitive trend where the amount of conversion achieved in the reactor increased as the fuel gas flow rate was increased. Under normal circumstances, one would expect the reactor to achieve greater conversion as the fuel gas flow rate is decreased because of longer gas residence times and smaller bubbles leading to better gas-solid contact. In simulations with standard fluidized bed reactors (using a distributor plate), this expected trend is observed and is normally very large (e.g. Conference 3, [61]).

As shown in Figure 40, the model failed to capture this counter-intuitive trend correctly, even though quantitative comparisons at higher gas flow rates were very encouraging. This is a good illustration of the importance of validating model performance over a wide parameter space. Had comparisons only been carried out at the highest two flow rates, it could have been wrongly concluded that the model performs well, and over-confidence in model accuracy could subsequently have led to costly design errors.

[Plot: fraction fuel gas converted vs fuel power (kW); Experiment vs Simulation]

Figure 40: Comparison between experiment and simulation for a CLC fuel reactor operated with a highly reactive oxygen carrier at different fuel flow rates (Paper 1, [51]).

The primary reason for the discrepancy in Figure 40 is the fact that simulations were carried out in 2D. This was done simply because the reactor size was too large and the particle size too small to complete 3D simulations. As discussed in Section 7.1, the 2D assumption significantly alters the bubble-to-emulsion mass transfer characteristics of the bed, primarily close to the inlet (for low gas

flow rates) and in the splash zone at the top of the bed. In this case, correct predictions of the mass transfer were very important because a highly reactive NiO oxygen carrier material was used, implying that the overall reaction was limited primarily by the bubble-to-emulsion mass transfer. These effects were accentuated by the unconventional fuel injection mechanism employed, which used three point injections instead of a standard distributor plate. At lower flow rates, small but well defined bubbles were formed at the highly localized point injection sources in the experiments, but this could not be properly captured in the 2D simulations and mass transfer was substantially over-predicted in this region. In addition, the lower flow rates fell in the bubbling fluidized bed regime where the top of the bed was well defined, and the missing degree of freedom in 2D simulations caused a substantial increase in the mass transfer in this region (see Section 7.1). For these reasons, the reactor performance was substantially over-predicted at low gas flow rates.

Further investigations revealed, however, that if a much less reactive oxygen carrier material is used, the expected trend of decreasing reactor performance with increasing fluidization velocity is observed. In this case, the reaction is limited primarily by the kinetics and the correct resolution of the bubble-to-emulsion mass transfer becomes less important. Figure 41 shows that the model captured the trend correctly in this case because the errors in the prediction of mass transfer resistance were of much lesser importance.

[Plot: fraction fuel gas converted vs fuel power (kW); Experiments 1-3 vs Simulation]

Figure 41: Comparison between experiment and simulation for a CLC fuel reactor operated with a slowly reacting oxygen carrier at different fuel flow rates (Paper 1, [51]).

The second study on the potassium looping process (Appendix 1, [71]) was completed primarily to evaluate the performance of a reaction rate equation derived from various literature sources. Experimental data was not of a sufficiently high standard for use in anything other than a first indication of model validity, and the narrow riser geometry was very sensitive to the unknown model coefficients (the specularity coefficient and the particle-particle restitution coefficient) discussed in

Section 6.2. For these reasons, the primary purpose of this study was actually to evaluate the sensitivity of model results to changes in these unknown coefficients under different flow regimes. The potassium sorbent used in this case exhibited relatively slow kinetics, implying that the accuracy of the kinetic model employed would have a very large impact on model accuracy. Comparisons to experimental data showed that the reaction rate equation performed reasonably well, but appeared to over-predict reactor performance. Dedicated experiments will therefore be required to derive a generically applicable reaction rate for this particular process.

8.2.2 Comparisons to data collected within the project

The primary purpose of the experimental campaign was to construct, operate and collect data from a fluidized bed reactor operating at high temperatures under real CLC conditions. This reactor was designed to have identical pseudo-2D dimensions to the cold flow unit. Heating was supplied from the large back wall, while the front wall was reserved for a number of local experimental access ports. Specialized experimental ports were designed in order to allow for the collection of temperature, pressure and gas species data from a single location. Further details of the experimental setup can be viewed in Draft 4 [68].

Predictably, the novelty of this endeavour brought a wide range of unforeseen problems which are discussed in detail in the aforementioned paper (Draft 4, [68]). The most important of these problems are briefly outlined below:

• The reactor had to be heated up very slowly to prevent bending due to thermal expansion of the body. For this reason, the reactor had to be kept at high temperatures for as long as possible.
• This resulted in the thick insulation layer being gradually heated up, increasing its thermal conductivity. As a result, the reactor started losing large amounts of heat after a few days of operation, creating axial temperature gradients and making it difficult to maintain high temperature operation.
• Fines were lost from the reactor at much lower than expected gas feed rates, requiring the reactor to first be cooled down before new material could be added.
• The NiO oxygen carrier used was highly reactive and converted the fuel gas almost instantly as it entered the reactor under normal CLC temperatures, thus not providing any useful gas species data for model validation. For this reason, the bed had to be operated at much lower temperatures in order to achieve the incomplete conversion necessary for collecting useful gas species data.
• The large front and back walls exerted a large influence on the flow inside the reactor, causing counter-intuitive flow patterns.
• The probe used to extract the gas species measurements caused a long signal delay which made data interpretation challenging.
• Methane or hydrogen could not be used as fuel because water in the product gases condensed in the capillary tubes leading to the MS used for gas analysis. For this reason, CO had to be used as fuel gas, and this resulted in problems with carbon deposition.


Naturally, the experience gained in this project will be used to guide future model validation campaigns. The following recommendations can be made for future work:

• A burner can be used to rapidly heat up the bed with hot gas in order to complement external heating elements.
• Insulation should be carried out with a thick layer of material with a high heat capacity. A particulate insulation medium should not be used as this can result in rapid radiative heat transfer at higher temperatures. If the reactor can be rapidly heated up and/or the insulation layer can be made to function much better at higher temperatures, much better temperature control will be possible.
• The reactor should include an expanding freeboard region in order to prevent the elutriation of fines.
• Measures must be taken to prevent complete reactant conversion. This can be done either by reducing the kinetics (operating at lower temperatures or using a less reactive material) or by increasing the mass transfer resistance (by injecting the reactant in a highly concentrated manner).
• The choice between pseudo-2D and 3D must be carefully considered based on the large influence of the front and back walls observed in this study. Pseudo-2D should only be used if the collection of local gas species concentrations or pressure fluctuations within the bed is a high research priority. Otherwise, if the evaluation of model generality over a wide range of flow conditions is the primary priority, a 3D cylindrical reactor will be more practical.
• Local gas species concentrations must be sampled as close to the bed material as possible in order to prevent an excessively long time delay in signal transmission.
• It is advisable to choose reactions which do not result in wet product gases and, if this is unavoidable, special measures must be taken to dry the gas before analysis. This will probably not be possible for online local measurements where a large signal delay cannot be afforded.
• Reactants with known carbon deposition issues (such as CO with NiO) should be avoided, but, if this is unavoidable, carbon deposition can be minimized by feeding a significant amount of CO2 with the CO feed.

This long list of unforeseen problems delayed progress substantially, but initial experimental data allowed for an evaluation of the ability of the TFM approach to predict reactor performance over a range of fluidization velocities and reactor temperatures (Appendix 2, [72]). In this study, the reactor was operated with periodic gas switching to alternately reduce and oxidize the oxygen carrier. Only the reduction stage was studied for the purpose of model validation because the oxygen carrier had to be maintained in a highly oxidized state in order to prevent carbon deposition. This meant that oxidation could not be started from a fully reduced particle in order to ensure reliable data collection. The resulting comparison is shown in Figure 42 and Figure 43. The experimental curves of the amount of CO exiting the bed without reacting generally trend downwards and then upwards again. The initial downwards trend is due to a temporary activation phase that the oxygen carrier experiences at the start of each reduction stage. TGA studies confirmed this behaviour at the low temperatures considered here (Appendix 2, [72]). The subsequent upward trend is expected to be due to the oxygen carrier becoming more reduced and therefore less reactive as time goes by.

Experiments were stopped after the time it would take the oxygen carrier to become 20% reduced if 100% CO conversion were achieved, after which the oxygen carrier was fully oxidized in preparation for the next experiment.

[Figure: fraction of unconverted CO (0–0.9) vs. reduction time (0–800 s); experimental and simulated curves at 0.2, 0.3 and 0.4 m/s]
Figure 42: Comparison between experimental and simulation results for three fluidization velocities at a reactor temperature of 360 °C (Appendix 2, [72]).
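The stopping criterion described above (the time for 20% reduction at 100% CO conversion) follows directly from the oxygen balance: each mole of CO converted removes one mole of atomic oxygen from the carrier. A minimal sketch, with entirely illustrative carrier properties and flow rates (none taken from the experiments):

```python
def reduction_time_20pct(m_carrier_kg, x_active, o_capacity_kg_per_kg,
                         n_dot_co_mol_s):
    """Time (s) to reach 20% carrier reduction assuming 100% CO conversion.

    Each mole of CO removes one mole of atomic oxygen (CO + [O] -> CO2).
    All parameter names and example values below are hypothetical.
    """
    M_O = 0.016  # kg/mol of atomic oxygen
    n_o_total = m_carrier_kg * x_active * o_capacity_kg_per_kg / M_O  # mol O
    return 0.20 * n_o_total / n_dot_co_mol_s

# Example: 1 kg of carrier, 40% active material holding 0.2 kg O per kg active
# material, fed with 0.005 mol/s of CO.
t_stop = reduction_time_20pct(1.0, 0.4, 0.2, 0.005)  # = 200 s
```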

[Figure: fraction of unconverted CO (0–1) vs. reduction time (0–500 s); experimental and simulated curves at 360, 380 and 400 °C]
Figure 43: Comparison between experimental and 2D TFM simulation results for three reactor temperatures at a fluidization velocity of 0.4 m/s (Appendix 2, [72]).

For comparison, simulations were run for only 30 seconds of real-time steady-state operation at the reaction rate displayed by a highly oxidized and fully activated powder. The dashed lines in Figure 42 and Figure 43 represent the time-averaged reactor performance over the simulation period. It is clear that both figures show a satisfactory match between the simulation and the part of the experimental curve following the initial activation time, even for the simple 2D TFM approach. In general, the simulation slightly over-predicted reactor performance (under-predicted the amount of unconverted CO exiting the reactor), but captured the trends with changing temperatures and fluidization velocities quite successfully.
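The time-averaging used to produce the simulation points can be sketched as follows; the sampled signal here is synthetic, standing in for the instantaneous outlet composition reported by the solver:

```python
import numpy as np

# Synthetic instantaneous outlet signal over a 30 s pseudo-steady period:
# the fraction of unconverted CO fluctuates as bubbles erupt at the surface.
t = np.linspace(0.0, 30.0, 301)                      # uniform sample times (s)
x_co = 0.25 + 0.05 * np.sin(2.0 * np.pi * t / 3.0)   # illustrative signal

# With uniform sampling, the time average reduces to a simple mean; this is
# the quantity plotted as a single dashed line per operating condition.
x_co_avg = x_co.mean()                               # ~0.25
```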


Part of the reason for the good simulation performance, even with the simple 2D TFM approach, is the fact that the reaction rate was quite slow at these low temperatures, implying that the overall reactor performance was mostly limited by the kinetics and not as much by the bubble-to-emulsion mass transfer. Mass transfer errors caused by the 2D assumption therefore had only a limited influence on the overall prediction.

This is a very important point to consider in future fluidized bed reactor validation experiments because the ability of the model to predict the bubble-to-emulsion mass transfer is the most crucial aspect to be validated. For this purpose, fast reaction kinetics is required so that mass transfer becomes the limiting factor. However, in bubbling fluidized bed reactors, such fast kinetics would result in complete reactant conversion only a short distance above the distributor plate, making the collection of meaningful validation data impossible. This problem can be solved in two ways: 1) by constructing and operating a circulating fluidized bed and extracting flow information from the riser section, or 2) by designing a bubbling fluidized bed with a concentrated point reactant source designed especially to minimize the degree of gas/solid contact. It is likely that option 2 will be the most practical for CFD validation purposes, primarily because it is much easier to operate experiments and run accurate simulations for bubbling fluidized beds than for risers.

The surprisingly good results in Figure 42 and Figure 43 are put into better perspective by local gas species measurements collected in the experiments. These measurements revealed a highly counter-intuitive trend where reactant (CO) concentrations within the bed were very low, but concentrations in the freeboard region were higher. The intuitive trend would be a gradual reduction in CO concentration along the height of the bed until the concentration reaches a constant value in the freeboard. This surprising behaviour is shown in Figure 44.

[Figure: fraction of unconverted CO (0–0.5) vs. reactor height for the 2D, 3D, 3D dT, 3D 5 deg and 3D 10 deg simulations and the experiment]
Figure 44: Comparison of several simulation setups (see Appendix 2, [72]) to the counter-intuitive axial species concentrations measured in the experiments.

It is clear from Figure 44 that the simulation did not compare satisfactorily to the counter-intuitive experimental trend, even when unconventional effects such as a large temperature difference across the reactor thickness (3D dT) or a forward inclination of the reactor (3D 5 deg and 3D 10 deg) were included. More work is planned to better understand this surprising reactor behaviour.

Another important conclusion to be drawn from Figure 44 is that the large influence of the walls in the pseudo-2D reactor seems to introduce a very large amount of unwanted uncertainty. It is likely that larger reactors (with much less wall influence) would exhibit the intuitive axial species concentration profiles predicted by the simulations and that the counter-intuitive trend observed here will only apply to this small pseudo-2D geometry. Future experimental work should therefore focus on reducing such unwanted wall effects by using larger 3D geometries.

8.3 Summary

The analysis in this section has proven the fundamental usefulness of the traditional TFM for fluidized bed reactor modelling. In addition, the merits of the DDPM and fTFM were also confirmed from a hydrodynamic point of view. The promise of accurate simulation-based process design based on generic fundamental flow models was therefore confirmed, and it can be stated with a high degree of confidence that further investments in this area will be very fruitful.

As the promise of simulation-based process design attracts more funding, it is highly likely that an increased amount of research effort will be directed towards the thorough validation of various modelling approaches, both for the purpose of accelerating model development and for demonstrating model fidelity to industrial end-users. The lessons learnt in this project can be of great use for guiding such validation studies. Very briefly summarized, four recommendations can be drawn from the experience gained in this project:

1. Experiments carried out for no purpose other than generic model validation are vital to the future of simulation-based process design. Relying on published experimental data collected for other purposes was found to be very time-consuming and inefficient.
2. Experiments should be designed for the primary purpose of evaluating model generality over as wide a parameter space as possible. Excellent generality is absolutely mandatory for successful simulation-based process design and should therefore be the primary principle guiding the design of validation experiments.
3. Expensive and time-consuming reactive experiments should be very carefully designed in order to properly evaluate the most important aspect of reactive model performance: predictions of the bubble-to-emulsion mass transfer. Much cheaper and more practical cold flow experiments should be employed for everything else.
4. Wall effects in small and/or pseudo-2D geometries have a very large and poorly understood influence on simulation results. It is therefore recommended that larger 3D geometries be used in validation studies and that techniques for gathering local flow data in such geometries be developed.

Aside from these four general recommendations, a good deal of experience has also been gained on the finer details of how dedicated validation experiments should (and should not) be carried out. Information on this specialized practice is likely to expand rapidly from this small base over coming years.


Chapter 9: Model application

As outlined in Section 4.1, simulation-based process design has a wide range of fundamental advantages over traditional methods of process design and scale-up. As soon as these advantages are unlocked through successful experimental validation, simulation-based process design can start adding value to fluidized bed reactor technology at all stages of development, from the conceptual design phase to the existing industrial scale reactor. Given the fact that fluidized bed reactor modelling is already practically feasible for a range of applications, some of the project time was invested into gaining practical experience with perhaps the two most appealing applications of simulation-based process design: virtual prototyping of new concepts and reactor optimization.

9.1 Virtual prototyping of new reactor concepts

The ability of the fundamental modelling framework of CFD to numerically evaluate almost any conceivable process concept opens up a vast range of creative possibilities in chemical process design. In the case of second generation CO2 capture processes, which are still at a relatively early stage of development, this potential for rapid virtual prototyping can prove particularly beneficial. Given the unique funding challenges faced by second generation CO2 capture processes, the rapid and economical screening of promising process concepts facilitated by simulation-based process design can add great value by focussing the limited amount of funded research only on the most promising concepts.

Since current state of the art models can offer the most practical value in bubbling fluidized bed applications, this flow regime would appear to be the most logical first target for simulation-based process design of second generation CO2 capture processes. However, standard chemical looping, as used in CLC, CLR and post-combustion CO2 capture using solid sorbents, involves at least one riser reactor which is necessary to transport the solids upwards against gravity so that a loop can be established. Most chemical looping applications are essentially a circulating fluidized bed with another reactor in the downcomer section.

Alternatives to the traditional chemical looping concept are of interest not only because riser flows using fine particles are still very difficult to model accurately, but primarily because of the large practical challenges presented by this concept. The interconnected nature of the two reactors and the substantial amount of additional process equipment separating them (cyclones and loop seals) complicate process operation and increase capital costs, thereby making scale-up both technically challenging and expensive. The high pressure operation which is necessary to make CLC economically viable, at least for gaseous fuels, presents an especially difficult challenge. As a result, the scale-up of chemical looping technology has occurred at a fairly slow pace.

For this reason, one paper was completed to test two concepts which avoid the complications of the chemical looping process: the Gas Switching Reactor (GSR) and the Internally Circulating Reactor (ICR) (Draft 5, [69]). Both concepts are very simple. In the GSR, the solids are kept in one reactor and alternately exposed to the two different process gases that would be fed to the two reactors in a chemical looping concept. The ICR, on the other hand, is a single bubbling fluidized bed which is split in two by a wall with two openings allowing for the circulation of solids between the two compartments.

The advantage of the GSR concept is that it replaces the complex interconnected solids loop with a simple valve system capable of feeding two different feed gases and directing outlet streams in two different directions. This simplicity will make process scale-up much easier and also allow for much easier high pressure operation. In the case of the ICR, the looping concept is preserved, but all the additional process equipment between the two reactors is removed, thus simplifying the concept and reducing capital costs.

When looking at CLC reactions, both the GSR and ICR concepts sacrifice a small amount of CO2 capture efficiency and CO2 purity for a large reduction in process complexity and cost. One of the primary selling points of the CLC concept is that, as an oxy-fuel approach, it can theoretically achieve close to 100% CO2 capture, whereas pre- and post-combustion approaches only achieve about 90% capture. As Figure 45 illustrates, it turns out that this large CLC advantage can be preserved even if the loop seals and other solids transport lines between the reactors are replaced by simple ducts between two reactor sections in the ICR process.

[Figure: CO2 purity and CO2 capture efficiency (95–100%) vs. gas/solid leakage ratio (0–10)]

Figure 45: Performance of the ICR concept at different gas/solid leakage ratios (the ratio between the volumetric flowrates of gas and solids being transferred from one reactor section to the other through the simple ducts) (Draft 5, [69]). CO2 purity refers to the purity of the CO2 being sent to storage and CO2 capture efficiency refers to the percentage of CO2 captured (not emitted to the atmosphere).

It is clearly beneficial to minimize the gas/solid leakage ratio by intelligently designing the ducts between the two reactor sections in the ICR concept. This can be done with the very simple duct design shown on the right hand side of Figure 46, which ensures that both ducts are always filled with solids. In this way, the gas/solid leakage ratio can be kept around 1, giving a CO2 capture efficiency which, according to Figure 45, remains virtually 100% despite the large simplifications achieved.
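The qualitative relationship in Figure 45 can be illustrated with a very simple mixing balance (this is an illustrative sketch, not the model used in Draft 5 [69]): the gas leaking from the fuel section along with the circulating solids escapes capture, so efficiency degrades as the gas/solid leakage ratio grows.

```python
def icr_capture_efficiency(leakage_ratio, q_solids=1.0, v_co2_product=100.0):
    """Illustrative CO2 capture efficiency (%) for an ICR-type reactor.

    A gas volume of (leakage_ratio * q_solids) per unit time leaks with the
    circulating solids and is emitted instead of captured. All parameter
    names, default values and the functional form are hypothetical.
    """
    v_leaked = leakage_ratio * q_solids  # CO2-rich gas escaping with the solids
    return 100.0 * v_co2_product / (v_co2_product + v_leaked)

# At a leakage ratio around 1 (achievable with well-designed ducts), almost
# nothing is lost; the penalty grows steadily with the leakage ratio.
```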


Figure 46: Two different duct configurations for the ICR concept (Draft 5, [69]). Note that the simulations were carried out in 2D and that the left and right boundaries of the domain are periodic. The central section of each domain represents one reactor section, while the left and right hand sections of the domain represent the other reactor section (joined by the periodic boundary).

A similar philosophy can be followed for the GSR concept, where CO2 and N2 will mix for a short period of time after the feed gas is switched. This effect was experimentally tested in the hot reactor setup constructed during this project (Paper 8, [58]) and, as shown in Figure 47, only a minimal amount of CO2 capture efficiency is lost. Technically, the CO2 purity and CO2 capture efficiency can be further improved by inserting a purging stage between the oxidation and reduction stages, but it is highly doubtful whether this added complexity would merit the very small improvement left to be gained.
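The trend in Figure 47 can be rationalized with a similarly simple balance (again purely illustrative, not the analysis of Paper 8 [58]): a roughly fixed amount of CO2-rich gas is lost to the air stream at each feed switch, so longer fuel times dilute this fixed loss and raise the capture efficiency.

```python
def gsr_capture_efficiency(fuel_time_min, switch_loss_min=0.2):
    """Illustrative GSR CO2 capture efficiency (%).

    switch_loss_min is the fuel-time-equivalent of CO2 lost during each feed
    switch; both the functional form and the value are hypothetical.
    """
    return 100.0 * fuel_time_min / (fuel_time_min + switch_loss_min)

# Efficiency climbs towards 100% as the fuel stage is lengthened, at the cost
# of deeper carrier reduction per cycle.
```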

[Figure: CO2 purity and CO2 capture efficiency (89–99%) vs. fuel time (0–10 min)]

Figure 47: The CO2 purity and CO2 capture efficiency for different fuel times (the time over which a fixed flowrate of fuel was fed to the reactor during each cycle) (Paper 8, [58]).


Both the ICR and GSR concepts therefore appear to offer attractively simple solutions to the challenges faced by the chemical looping concept traditionally employed in second generation CO2 capture solutions based on fluidization. Also, since both concepts utilize bubbling fluidized beds, current state of the art models can be used to further investigate these concepts with good accuracy in a very economical manner.

Finally, another study was completed to investigate the effect of introducing obstructions within a bubbling fluidized bed, both for the purpose of enhancing bubble-to-emulsion mass transfer and for introducing reactant gases in a more dispersed manner (Paper 2, [52]). Such an arrangement would increase complexity, but also decrease the reactor size due to improved gas-solids contact. Results showed that obstructions in the bed could substantially increase reactor performance by improving gas-solid contact, but that the introduction of additional fuel gas through these obstructions would not have a meaningful positive impact. This first analysis concluded that, although obstructions in the bed can have a positive impact, the effect is not sufficiently large to merit the associated process complexity. Obstructions should therefore only be included in cases where the process itself demands it (e.g. CLR with integrated membranes for pure H2 production).

9.2 Process optimization

For effective optimization of complex processes characterized by non-linear interactions, it is necessary to determine process behaviour over a wide parameter space. Very often, physical experimentation is simply not practical for such an investigation, especially when looking at design variables such as reactor size, shape or configuration. This is where simulation-based process design based on generic multiphase flow modelling can add great value.

In second generation CO2 capture processes such as CLC, the reactors are typically the most complex and the most costly components. Thus, if one can optimize the reactors, the process itself should be close to optimal performance as well. Following this philosophy, a strategy which combines process flowsheeting and CFD was proposed to design a CLC system [76]. In this strategy, the performance required from the reactor (the degree of conversion achieved) is decided beforehand and used in the process flowsheet, after which reactors which can deliver this performance are designed by CFD.

Naturally, a very wide range of reactor designs can realistically deliver a desired reactor performance. Recognizing this, a strategy based on a form of experimental design known as a central composite design was used to map out the variable combinations that would result in a given reactor performance (Conference 3, [61]). However, simply mapping out reactor performance in this manner will not enable reactor optimization unless economic considerations are also accounted for. In order to address this factor, the aforementioned strategy was extended by also accounting for variations in the reactor cost through the parameter space (Conference 4, [62]).

An example of the results from this strategy is given in Figure 48. In this case, all the combinations of fluidization velocity, reactor temperature and reactor pressure which would result in 99% conversion of the CO in the syngas used as fuel were determined. Subsequently, this information was combined with a simple example reactor cost function to calculate the reactor cost per rate of fuel conversion. Naturally, it is desired that the reactor throughput is high while the reactor cost is low, implying that the aforementioned ratio should be as low as possible. For the example in Figure 48, this would occur roughly at P = 0.5 (7.91 atm) and T = -1 (700°C) (please see Conference 4 [62] for details about this normalization of variables).
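The optimization logic just described can be sketched as follows: map cost and throughput over the coded parameter space, form the cost-per-throughput ratio, and locate its minimum. The surfaces below are invented stand-ins (not the response surfaces of Conference 4 [62]); only the procedure is representative.

```python
import numpy as np

# Coded temperature and pressure on [-1, 1], as in a central composite design.
T, P = np.meshgrid(np.linspace(-1, 1, 41), np.linspace(-1, 1, 41),
                   indexing="ij")

cost = np.exp(0.5 * T + 0.8 * P)      # reactor cost rises with T and P (illustrative)
throughput = 1.0 + 0.6 * P - 0.2 * T  # fuel conversion rate along the 99% contour

ratio = cost / throughput             # reactor cost per rate of fuel conversion
i, j = np.unravel_index(np.argmin(ratio), ratio.shape)
t_opt, p_opt = T[i, j], P[i, j]       # economic optimum in coded variables
```

With these invented surfaces, the optimum sits at the lower temperature bound with an interior optimum in pressure, mirroring the qualitative outcome reported above.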

Figure 48: Curves used in the economic optimization of a reactor. Top left: All combinations of gas flow rate (U), reactor temperature (T) and reactor pressure (P) that would result in 99% conversion of the CO in the syngas used as fuel. Top right: The same figure with the gas flow rate converted to the syngas mass flow rate. Bottom left: A simple reactor cost function assuming an exponential increase in reactor cost as the operating pressure and temperature are increased. Bottom right: The economic performance curve of the reactor for all circumstances achieving 99% conversion. Please see Conference 4 [62] for details about the coded variables used for U, T and P.

It is certainly possible to devise more general strategies where the economic performance of the entire process (power plant or hydrogen plant) is optimized and the reactor performance is not decided beforehand. For example, if a reliable model of the entire process is available (e.g. a flowsheet model with a reasonably generic 1D model representing the reactors), an economic evaluation can be carried out for many different combinations of the wide range of parameters that define such a system. More experience is necessary to find the strategy which offers the desired level of sophistication without becoming excessively complex. In particular, the chosen strategy would depend on the fidelity and ease of use of the model (e.g. a 1D model can be integrated in a process flowsheet, but will be less generic, while a CFD model will be too slow for process flowsheeting, but will be more generic) and also on the system cost estimations (e.g. determining the cost of electricity for many points within a given parameter space could be challenging and time consuming). This question of balance will be answered through practical experience over coming years.

9.3 Summary

The use of generic CFD models opens up a very wide range of simulation-based process design applications. Two such applications, virtual prototyping of new concepts and process optimization, appear to be especially attractive.

In the case of virtual prototyping, the design freedom offered by CFD modelling opens up the possibility to rapidly evaluate a wide range of ideas and concepts in order to select only the most promising candidates for physical experimentation. Practical experience has shown that this mindset is especially conducive to creative thought and can therefore lead to very exciting ideas which can ultimately result in more practically and economically attractive processes.

For process optimization, the strength of simulation-based process design is the ease with which the performance of a reactor or an entire process can be mapped out over a wide parameter space. If the costs associated with the reactor or process can also be determined across that parameter space, it becomes a very simple exercise to find the cost-optimized point within the parameter space. Such procedures can greatly accelerate and economize the scale-up and commercialization process.


Chapter 10: Conclusions and recommendations

This final chapter will present conclusions and recommendations categorized under the following four headings:

• Fluidized bed simulations cannot be painted with a common brush
• A range of modelling approaches is required to cover all applications
• Dedicated validation experiments are of central importance
• Simulation-based process design is a potential gamechanger

10.1 Fluidized bed simulations cannot be painted with a common brush

The first general conclusion to be drawn is that the scale of the challenge posed by fluidized bed reactor simulations varies by many orders of magnitude over the multi-dimensional parameter space. It therefore makes a lot of sense to already engage industry through the easier applications while further model development gradually extends the range of safe model application. Generally speaking, the modelling problem rapidly reduces in difficulty as the particle size is increased, the influence of walls is decreased (wider and slower beds) and the kinetic rate is decreased. These criteria suggest that bubbling or turbulent fluidization presents the most logical application for current state of the art models.

Larger particle sizes significantly simplify the modelling problem because they reduce the difficulties created by the clustering phenomenon in fluidized beds. As the particle size increases, larger and more permeable clusters are formed, implying that clusters can be captured on much coarser grids and that correct cluster resolution becomes less important. Larger grid sizes can be afforded because larger particles have longer relaxation times, thereby not requiring such sharp streamline curvature to be resolved before departing sufficiently from the streamline to join the bulk of a cluster. Overall cluster resolution decreases in importance because the more permeable clusters formed by larger particles allow more flow through the cluster itself. These flows are governed by the generally mature set of closure laws implemented and do not depend on the degree of cluster resolution.

Wall effects are problematic because the model coefficients governing the wall friction and particle-wall restitution are largely unknown. This introduces a great deal of uncertainty in cases where walls are highly influential, such as narrow fast risers. In wider bubbling fluidized beds, however, wall effects have a very limited influence and these uncertainties become largely insignificant.

The kinetic rate influences the importance of the bubble-to-emulsion mass transfer on the overall reactor performance. If kinetics is very fast, bubble-to-emulsion mass transfer is almost completely dominant, implying that any error in the resolution of bubbles or clusters will lead directly to large errors in the overall reactor performance. For slower kinetics, however, the influence of bubble-to-emulsion mass transfer reduces and the final solution becomes less sensitive to the accuracy of cluster resolution. Bubbling fluidized beds would generally run slower kinetics (resulting from material chemistry, mass transfer resistance within larger particles or pressurization), implying that simulations become easier. Also, when fast kinetics are used in a bubbling bed, conversion will typically be virtually complete, implying that even large errors in bubble-to-emulsion mass transfer will have essentially no influence on the simulated reactor performance (except for very shallow beds).
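The relaxation-time argument above can be made concrete. This work found the particle relaxation time to be a surprisingly reliable predictor of the cell size required for grid independence; the standard Stokes relaxation time below shows how strongly it grows with particle size (the property values are illustrative):

```python
def stokes_relaxation_time(rho_p, d_p, mu_g):
    """Stokes particle relaxation time: tau_p = rho_p * d_p**2 / (18 * mu_g).

    rho_p: particle density (kg/m^3), d_p: particle diameter (m),
    mu_g: gas dynamic viscosity (Pa s). The example values are illustrative.
    """
    return rho_p * d_p ** 2 / (18.0 * mu_g)

# Quadrupling the particle diameter raises the relaxation time 16-fold,
# allowing correspondingly coarser grids before cluster resolution suffers.
tau_small = stokes_relaxation_time(2500.0, 100e-6, 1.8e-5)  # ~0.077 s
tau_large = stokes_relaxation_time(2500.0, 400e-6, 1.8e-5)  # 16 x tau_small
```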

This work quantified the influence of these effects for the TFM, but this should also be done for the DDPM and the fTFM. The DDPM has the potential to greatly extend the range of safe applicability, but will still be subject to the influences described above. For the fTFM, on the other hand, the above-mentioned influences will largely determine the sensitivity of the final solution to the closure laws which model sub-grid cluster formation. If cluster formation is highly significant (e.g. small particles and fast kinetics) or very complex (e.g. in wall regions) any errors in the closure models will translate directly to errors in the overall prediction. Much more work is necessary to quantify these influences and map out the regions over which different modelling approaches can be applied with high levels of confidence.

10.2 A range of modelling approaches is required to cover all applications

It is highly unlikely that any single modelling approach will emerge as the out-and-out best candidate in the foreseeable future. The classic TFM, simple 2D approximations, the DDPM, the fTFM and phenomenological 1D models all have distinctive fundamental strengths and weaknesses which will greatly influence their attractiveness over the wide range of possible model applications and flow conditions. It is therefore recommended that modelling service providers become familiar with all of these approaches so that the most appropriate approach can be selected for each particular simulation. Informed selection and application of these approaches can have very large positive effects on simulation accuracy and cost.

Meanwhile, a lot of room for focussed model development exists within the DDPM, fTFM and 1D frameworks. For the DDPM, numerical uncertainties in dense regions must be clarified and reduced. For example, the current dense packing treatment implemented in ANSYS FLUENT is largely unknown, blocks any access to the particle force balance and creates substantial mismatches between the Lagrangian and Eulerian fields. In addition, the improvements in dilute regions made in this project (transport of granular temperature and the implementation of the full stress tensor) should be implemented in a more user-friendly manner. The excellent cluster resolution capabilities of the DDPM also make it ideal for gradient-based filtering of the gas-solid contact on the cluster interface. Such filtering will allow grid independent solutions of reactor performance to be achieved on much coarser grids.

Closure development for the fTFM is under way in various groups, but, due to the complexity of the clustering phenomenon, a large amount of testing and validation will be necessary before these models can be safely applied. Closures for bubble-to-emulsion mass transfer are the top priority at present since they will allow reactive filtered simulations to be completed. Derivation of a model which accounts for gradients within the filtered variables could also be a worthwhile research focus.

Phenomenological closures utilized in 1D modelling could benefit greatly from the rich flow information available from well-resolved multiphase flow models such as the TFM or DDPM. Studies aiming to improve phenomenological closures from well-resolved simulations are therefore highly recommended. The generality of 1D approaches also remains largely unknown and, if performance turns out to be reasonable, such modelling approaches can add great value to industry.

Scientists specializing in applied multiphase flow modelling should be sure to keep up to date with developments in these different fields so that the best approach can always be selected for any given application.

10.3 Dedicated validation experiments are of central importance

One of the most important conclusions that came out of this work is the importance of dedicated experimental campaigns where experiments are designed and executed for no purpose other than model validation. Such experiments can greatly accelerate the development of reactive multiphase flow models and, most importantly, provide industry with the confidence necessary to start utilizing these models in real applications. On the other hand, the seemingly easier pathway of comparing against published experiments which were conducted for reasons other than model validation turned out to be much less valuable and very time-consuming.

Dedicated validation experiments should be carried out with the primary goal of evaluating model performance over as wide a parameter space as possible. Only in this way can model generality be properly evaluated. Since a model will inevitably be used for applications far beyond the conditions under which it was validated, such proof of generality is mandatory before the model can be safely applied. In addition, experiments should be designed so that the different sets of physical phenomena (hydrodynamics, species transfer, heat transfer and reactions) can be independently evaluated. If not, non-linear interactions between these different phenomena can make model results very difficult to interpret.

Much was learned about the science of dedicated validation experiment design during this campaign, especially in the area of reactive experiments. The two most important lessons learnt were the following:

1. The reactor must be designed to properly evaluate the ability of the model to correctly capture bubble-to-emulsion mass transfer. This would probably be most effectively accomplished by a concentrated point injection of reactant into a dense fluidized bed so that the mass transfer limitation is accentuated as much as possible while simultaneously ensuring incomplete conversion so that meaningful species measurements can be taken. Simply using a slow reaction does not properly test the predictive capabilities of the model.
2. There is a trade-off between pseudo-2D and 3D reactors. The pseudo-2D reactor constructed in this campaign allowed for convenient local species concentration measurements, but introduced a large amount of uncertainty due to the large influence of the walls on the hydrodynamics. Using a 3D cylindrical reactor would reduce such problems, but local species measurements towards the centre of the reactor (where the majority of reactant rises) would be much more challenging. Experience gained in this project suggests that 3D reactors equipped with specialized local data collection techniques are likely to be the most efficient option for dedicated validation of fluidized bed reactor models over coming years.

Such dedicated model validation studies according to the standards outlined above will only be possible on the laboratory scale. However, when resolved modelling approaches (the TFM, and also the DDPM when it has been sufficiently improved) have been properly validated, they can be safely used in the derivation and verification of large-scale models such as the fTFM and the 1D phenomenological approach.


10.4 Simulation-based process design is a potential game-changer

A thoroughly validated and highly generic reactive multiphase flow model has the potential to add great value to industry. The fundamental advantages of simulation-based process design over traditional methods in terms of process screening, optimization and scale-up cannot be disputed, but they will remain untapped until models have been properly validated against dedicated experiments and proven in a number of real applications. It is crucial that this stage of development is reached as soon as possible.

Two applications of simulation-based process design were investigated in this project: virtual prototyping of new concepts and reactor optimization. Both studies confirmed the promise of this approach by illustrating how progress from process conception through scale-up to the ultimate deployment of an economically optimized commercial-scale reactor could be greatly accelerated and economized. This promise is especially applicable to second-generation CO2 capture technologies such as CLC, and it is highly recommended that this method is pursued further. Judging by current trends, it appears unlikely that this crucial set of technologies will be ready for deployment when the policy environment finally turns favourable. Given the sustainability crisis facing the global economy in the 21st century, this is a situation that should be avoided at almost any cost.


Chapter 11:

Nomenclature

11.1 Abbreviations

AR: Coded variable of the Aspect Ratio (Height/Width)
BP: British Petroleum
CCS: Carbon (Dioxide) Capture and Storage
CFD: Computational Fluid Dynamics
CLC: Chemical Looping Combustion
CLR: Chemical Looping Reforming
DDPM: Dense Discrete Phase Model (as implemented in ANSYS FLUENT 13.0)
DNS: Direct Numerical Simulation
EEG: Erneuerbare-Energien-Gesetz (Renewable Energy Act)
EIA: US Energy Information Administration
EROEI: Energy Return on Energy Invested
EWG: Energy Watch Group
FLTR: From Left To Right
fTFM: filtered Two Fluid Model
GW: Coded (dimensionless) variable of the Grid Width
GDP: Gross Domestic Product
Gtoe: Gigaton oil equivalent
ID: Inner Diameter
IEA: International Energy Agency
KTGF: Kinetic Theory of Granular Flows
MS: Mass Spectrometer
RMS: Root Mean Square
SBPD: Simulation-based process design
TFM: Two Fluid Model


11.2 Clarification of concepts

Model: Mostly refers to the complete reactor model assembled from the various sub-scale closures, rather than to any individual sub-scale model.

Resolved: Direct resolution of transport phenomena on the mesoscale (cluster scale); not the direct resolution of all transport phenomena in and around each particle.

Frictional pressure: The solids stress tensor consists of three components: kinetic, collisional and frictional. In dense regions the frictional component is normally dominant, and the frictional pressure refers to the normal component of this part of the solids stress tensor.
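The three-part decomposition of the solids pressure described above can be written schematically as follows (the notation is chosen here purely for illustration and is not taken from the papers):

```latex
p_s = \underbrace{p_{s,\mathrm{kin}}}_{\text{kinetic}}
    + \underbrace{p_{s,\mathrm{col}}}_{\text{collisional}}
    + \underbrace{p_{s,\mathrm{fr}}}_{\text{frictional}}
```

The kinetic and collisional contributions are supplied by the KTGF and dominate in dilute to moderately dense flow, while the frictional contribution is typically activated only above a critical solids volume fraction, where sustained particle contacts dominate the stress.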


Chapter 12:

References

[1] Angus Maddison. Historical Statistics of the World Economy: 1-2008 AD. 2008. http://www.ggdc.net/maddison/Historical_Statistics/horizontal-file_02-2010.xls
[2] USDA. International Macroeconomic Dataset. http://www.ers.usda.gov/dataproducts/international-macroeconomic-data-set.aspx
[3] John Williams. Shadow Government Statistics. 2012. http://www.shadowstats.com/alternate_data/unemployment-charts
[4] Ylan Q. Mui. Americans saw wealth plummet 40 percent from 2007 to 2010, Federal Reserve says. 2012. http://articles.washingtonpost.com/2012-06-11/business/35461572_1_median-balancemedian-income-families
[5] United States Census Bureau. Historical Income Tables: Households. 2012. http://www.census.gov/hhes/www/income/data/historical/household/
[6] Eurostat. GDP per capita growth rate. 2012. http://epp.eurostat.ec.europa.eu/tgm/table.do?tab=table&plugin=1&language=en&pcode=tsdec100
[7] GFN. Footprint for Nations. Global Footprint Network. 2012. http://www.footprintnetwork.org/en/index.php/GFN/page/footprint_for_nations/
[8] IEA. World Energy Outlook 2012. International Energy Agency. 2012. http://www.oecdilibrary.org/energy/world-energy-outlook-2012_weo-2012-en
[9] BP. Energy Outlook 2030. British Petroleum. 2013. http://www.bp.com/extendedsectiongenericarticle.do?categoryId=9048887&contentId=7082549
[10] EIA. International Energy Outlook 2011. US Energy Information Administration. 2011. http://www.eia.gov/forecasts/ieo/world.cfm
[11] Werner Zittel, Jan Zerhusen, Martin Zerta. Fossil and Nuclear Fuels – the Supply Outlook. Energy Watch Group. 2013. http://www.energywatchgroup.org/fileadmin/global/pdf/EWGupdate2013_long_18_03_2013.pdf
[12] Exxon Mobil. The Outlook for Energy. 2012. http://www.exxonmobil.com/Corporate/energy_outlook.aspx
[13] Shell. New Lens Scenarios. 2013. http://www.shell.com/global/future-energy/scenarios/newlens-scenarios.html
[14] MIT. Energy and Climate Outlook 2012. Massachusetts Institute of Technology. 2012. http://globalchange.mit.edu/research/publications/other/special/2012Outlook
[15] Jessica Lambert, Charles Hall, Steve Balogh, et al. EROI of Global Energy Resources: Preliminary Status and Trends. State University of New York, College of Environmental Science and Forestry. 2012. http://www.dpuc.state.ct.us/DEEPEnergy.nsf/fb04ff2e3777b0b98525797c00471aef/a546c841171f7a8485257ac90053565a/$FILE/R.%20Fromer%20Attachment%20%20EROI%20of%20Global%20Energy%20Resoruces.pdf
[16] Jessica Lambert, Gail Lambert. Life, Liberty, and the Pursuit of Energy: Understanding the Psychology of Depleting Oil Resources. London: Karnak Books; 2013.
[17] David J. Murphy, Charles A. S. Hall. Year in review—EROI or energy return on (energy) invested. Annals of the New York Academy of Sciences. 2010;1185:102-18.
[18] Graham Palmer. Household Solar Photovoltaics: Supplier of Marginal Abatement, or Primary Source of Low-Emission Power? Sustainability. 2013;5:1406-42.
[19] Matthew Kuperus Heun, Martin de Wit. Energy return on (energy) invested (EROI), oil prices, and energy transitions. Energy Policy. 2012;40:147-58.
[20] BP. Statistical Review of World Energy. 2012. http://www.bp.com/sectionbodycopy.do?categoryId=7500&contentId=7068481
[21] Schalk Cloete. The Energy Collective. 2013. http://theenergycollective.com/posts/published/user/410661
[22] Ailun Yang, YiYun Cui. Global Coal Risk Assessment: Data Analysis and Market Research. World Resources Institute. 2012. http://pdf.wri.org/global_coal_risk_assessment.pdf
[23] Global Status of CCS. Global CCS Institute. 2012. http://www.globalccsinstitute.com/publications/global-status-ccs-2012
[24] The Costs of CO2 Capture, Transport and Storage. European Zero Emissions Platform (ZEP). 2012. http://www.zeroemissionsplatform.eu/library/publication/165-zep-cost-report-summary.html
[25] Gerard Wynn. The Growing Cost of Germany's Feed-In Tariffs. 2013. http://www.businessspectator.com.au/article/2013/2/21/policy-politics/growing-cost-germanysfeed-tariffs
[26] Bruno Burger. Electricity Production from Solar and Wind in Germany in 2012. Fraunhofer Institute for Solar Energy Systems. 2013. http://www.ise.fraunhofer.de/en/downloads-englisch/pdffiles-englisch/news/electricity-production-from-solar-and-wind-in-germany-in-2012.pdf
[27] Bert Metz, Ogunlade Davidson, Heleen de Coninck, et al. Carbon Capture and Storage. Intergovernmental Panel on Climate Change. 2005. http://www.ipcc.ch/pdf/specialreports/srccs/srccs_wholereport.pdf
[28] Ming Zhao, Andrew I. Minett, Andrew T. Harris. A review of techno-economic models for the retrofitting of conventional pulverised-coal power plants for post-combustion capture (PCC) of CO2. Energy & Environmental Science. 2013;6:25-40.
[29] Oxy Combustion with CO2 Capture. Global CCS Institute. 2012. http://www.globalccsinstitute.com/publications/co2-capture-technologies-oxy-combustion-co2capture


[30] M. Ishida, D. Zheng, T. Akehata. Evaluation of a chemical-looping-combustion power-generation system by graphic exergy analysis. Energy. 1987;12:147-54.
[31] Juan Adanez, Alberto Abad, Francisco Garcia-Labiano, et al. Progress in Chemical-Looping Combustion and Reforming technologies. Progress in Energy and Combustion Science. 2012;38:215-82.
[32] Fontina Petrakopoulou, Alicia Boyano, Marlene Cabrera, et al. Exergoeconomic and exergoenvironmental analyses of a combined cycle power plant with chemical looping technology. International Journal of Greenhouse Gas Control. 2011;5:475-82.
[33] Clas Ekström, Frank Schwendig, Ole Biede, et al. Techno-Economic Evaluations and Benchmarking of Pre-combustion CO2 Capture and Oxy-fuel Processes Developed in the European ENCAP Project. Energy Procedia. 2009;1:4233-40.
[34] Schlumberger. Bringing Carbon Capture and Storage to Market. SBC Energy Institute. 2012. http://cdn.globalccsinstitute.com/publications/factbook-bringing-carbon-capture-and-storagemarket
[35] Douglas N. Ball. Contributions of CFD to the 787 - and Future Needs. Boeing. 2008. http://www.hpcuserforum.com/presentations/Tucson/Boeing%20Ball%20IDC%20pdf.pdf
[36] Masaru Ishida, Hongguang Jin. A new advanced power-generation system using chemical-looping combustion. Energy. 1994;19:415-22.
[37] B. Kronberger, T. Pröll, H. Hofbauer, et al. Chemical-Looping Combustion: The GRACE Project. 50th IEA FBC Meeting. 2005.
[38] Colin Henderson. Visit to 2nd International Conference on Chemical Looping, 26-28 Sep, Darmstadt, Germany. 2012. http://www.iea-coal.org.uk/site/2010/blog-section/blog-posts/visit-to2nd-international-conference-on-chemical-looping-darmstadt-germany
[39] Iqbal Abdulally, Corinne Beal, Herbert E. Andrus, et al. Alstom's Chemical Looping Prototypes Program Update. 11th Annual Carbon Capture, Utilization and Sequestration Conference. 2012.
[40] Kim Johnsen, Kaare Helle, Tore Myhrvold. Scale-up of CO2 capture processes: The role of Technology Qualification. Energy Procedia. 2009;1:163-70.
[41] J. T. Jenkins, S. B. Savage. Theory for the rapid flow of identical, smooth, nearly elastic, spherical particles. Journal of Fluid Mechanics. 1983;130:187-202.
[42] D. Gidaspow, R. Bezburuah, J. Ding. Hydrodynamics of Circulating Fluidized Beds, Kinetic Theory Approach. 7th Engineering Foundation Conference on Fluidization. 1992.
[43] M. Syamlal, W. Rogers, T.J. O'Brien. MFIX Documentation: Volume 1, Theory Guide. Springfield: National Technical Information Service; 1993.
[44] Y. Igci, A. T. Andrews, S. Sundaresan, et al. Filtered two-fluid models for fluidized gas-particle suspensions. AIChE Journal. 2008;54:1431-48.


[45] Yesim Igci, Sankaran Sundaresan. Constitutive Models for Filtered Two-Fluid Models of Fluidized Gas–Particle Flows. Industrial & Engineering Chemistry Research. 2011;50:13190-201.
[46] D.M. Snider. An Incompressible Three-Dimensional Multiphase Particle-in-Cell Model for Dense Particle Flows. Journal of Computational Physics. 2001;170:523-49.
[47] B. Popoff, M. Braun. A Lagrangian Approach to Dense Particulate Flows. 6th International Conference on Multiphase Flow. 2007.
[48] J. F. Davidson, D. Harrison. Fluidized particles. 1963.
[49] J. R. Grace. Generalized Models for Isothermal Fluidized Bed Reactors. Wiley Eastern. 1984.
[50] J. Werther. Modelling and scale-up of industrial fluidized bed reactors. Chemical Engineering Science. 1980;35:372-9.
[51] Schalk Cloete, Stein Tore Johansen, Shahriar Amini. An assessment of the ability of computational fluid dynamic models to predict reactive gas–solid flows in a fluidized bed. Powder Technology. 2012;215-216:15-25.
[52] Robert Hommel, Schalk Cloete, Shahriar Amini. Numerical Investigations to Quantify the Effect of Horizontal Membranes on the Performance of a Fluidized Bed Reactor. International Journal of Chemical Reactor Engineering. 2012;10.
[53] Schalk Cloete, Abdelghafour Zaabout, Stein Tore Johansen, et al. Comparison of phenomenological and fundamental modelling approaches for predicting fluidized bed reactor performance. Powder Technology. 2012;228:69-83.
[54] Schalk Cloete, Stein Tore Johansen, Shahriar Amini. Performance evaluation of a complete Lagrangian KTGF approach for dilute granular flow modelling. Powder Technology. 2012;226:43-52.
[55] Schalk Cloete, Abdelghafour Zaabout, Stein Tore Johansen, et al. The generality of the standard 2D TFM approach in predicting bubbling fluidized bed hydrodynamics. Powder Technology. 2013;235:735-46.
[56] Schalk Cloete, Stein Tore Johansen, Shahriar Amini. Evaluation of a filtered model for the simulation of large scale bubbling and turbulent fluidized beds. Powder Technology. 2013;235:91-102.
[57] Schalk Cloete, Stein Tore Johansen, Shahriar Amini. Investigation into the effect of simulating a 3D cylindrical fluidized bed reactor on a 2D plane. Powder Technology. 2013;239:21-35.
[58] A. Zaabout, S. Cloete, S. Johansen, et al. Experimental demonstration of a novel gas switching combustion reactor for power production with integrated CO2 capture. Industrial & Engineering Chemistry Research. 2013;In press.
[59] Schalk Cloete, Stein Tore Johansen, Marcus Braun, et al. Evaluation of a Lagrangian Discrete Phase Modeling Approach for Resolving Cluster Formation in CFB Risers. 7th International Conference on Multiphase Flow. 2010.


[60] Schalk Cloete, Stein Tore Johansen, Marcus Braun, et al. Evaluation of a Lagrangian Discrete Phase Modelling Approach for Application to Industrial Scale Bubbling Fluidized Beds. 10th International Conference on Circulating Fluidized Bed and Fluidized Bed Technology. 2011.
[61] Schalk Cloete, Shahriar Amini. Mapping of the Operating Window of a Lab Scale Bubbling Fluidized Bed Reactor by CFD and Designed Experiments. 8th International Conference on CFD in the Oil & Gas, Metallurgical and Process Industries. 2011.
[62] Schalk Cloete, Shahriar Amini. Numerical evaluation of a pressurized CLC fuel reactor for process intensification. 2nd International Conference on Chemical Looping. 2012.
[63] A. Zaabout, S. Cloete, M. Van Sint Annaland, et al. An assessment of the ability of the TFM approach to predict gas mixing in a pseudo-2D bubbling fluidized bed. Fluidization XIV. 2013.
[64] Schalk Cloete, Shahriar Amini. Reacting to Emissions. ANSYS Advantage. 2011. http://www.cavendishcfd.com/pdf/AA-V5-I1-Reacting-to-Emissions.pdf
[65] Schalk Cloete, Stein Tore Johansen, Shahriar Amini. Grid independence behaviour of reactive TFM simulations: The effect of particle size. To be submitted. 2013.
[66] Schalk Cloete, Stein Tore Johansen, Shahriar Amini. Grid independence behaviour of reactive TFM simulations: Detailed parametric study. To be submitted. 2013.
[67] Schalk Cloete, Stein Tore Johansen, Shahriar Amini. Comparison of the grid independence behaviour of the TFM and DDPM in bubbling fluidized bed reactors. To be submitted. 2013.
[68] Abdelghafour Zaabout, Schalk Cloete, Stein Tore Johansen, et al. Operating experience with a high-temperature pseudo-2D fluidized bed reactor designed especially for detailed data collection. To be submitted. 2013.
[69] Schalk Cloete, Stein Tore Johansen, Shahriar Amini. Initial evaluation of two alternative approaches to traditional solids looping. To be submitted. 2013.
[70] Schalk Cloete, Stein Tore Johansen, Shahriar Amini. The effect of frictional pressure, geometry and wall friction on the modelling of a pseudo-2D bubbling fluidized bed reactor. To be submitted. 2013.
[71] Schalk Cloete, Stein Tore Johansen, Shahriar Amini. The parametric sensitivity of fluidized bed reactor simulations carried out in different flow regimes. To be completed. 2013.
[72] Schalk Cloete, Abdelghafour Zaabout, Stein Tore Johansen, et al. Model validation against data from a high-temperature pseudo-2D CLC batch reactor. To be completed. 2013.
[73] Schalk Cloete, Shahriar Amini, Stein Tore Johansen. On the effect of cluster resolution in riser flows on momentum and reaction kinetic interaction. Powder Technology. 2011;210:6-17.
[74] Schalk Cloete, Shahriar Amini, Stein Tore Johansen. A fine resolution parametric study on the numerical simulation of gas-solid flows in a periodic riser section. Powder Technology. 2011;205:103-11.


[75] Naoko Ellis, Min Xu, C. Jim Lim, et al. Effect of Change in Fluidizing Gas on Riser Hydrodynamics and Evaluation of Scaling Laws. Industrial & Engineering Chemistry Research. 2011;50:4697-706.
[76] Schalk Cloete, Shahriar Amini. Design strategy for a Chemical Looping Combustion system using process simulation and Computational Fluid Dynamics. Progress in Computational Fluid Dynamics. 2012;12:80-91.
[77] J. A. Murray, S. Benyahia, P. Metzger, et al. Continuum representation of a continuous size distribution of particles engaged in rapid granular flow. Physics of Fluids. 2012;24.
[78] Hans Rune Gammelsæter. Modelling of structural effects on chemical reactions in turbulent flows. NTNU PhD thesis. 1996.
[79] J. M. P. Q. Delgado. A critical review of dispersion in packed beds. Heat Mass Transfer. 2006;42:279-310.
[80] J. J. Derksen. Scalar mixing by granular particles. AIChE Journal. 2008;54:1741-7.
[81] J. J. Derksen. Scalar mixing with fixed and fluidized particles in micro-reactors. Chemical Engineering Research and Design. 2009;87:550-6.


Part 3: Collection of papers

The relevant papers referenced in Part 2 of the thesis are presented here in the following order:

Journal papers:
Paper 1: An assessment of the ability of computational fluid dynamic models to predict reactive gas–solid flows in a fluidized bed [51]
Paper 2: Numerical investigations to quantify the effect of horizontal membranes on the performance of a fluidized bed reactor [52]
Paper 3: Comparison of phenomenological and fundamental modelling approaches for predicting fluidized bed reactor performance [53]
Paper 4: Performance evaluation of a complete Lagrangian KTGF approach for dilute granular flow modelling [54]
Paper 5: The generality of the standard 2D TFM approach in predicting bubbling fluidized bed hydrodynamics [55]
Paper 6: Evaluation of a filtered model for the simulation of large scale bubbling and turbulent fluidized beds [56]
Paper 7: Investigation into the effect of simulating a 3D cylindrical fluidized bed reactor on a 2D plane [57]
Paper 8: Experimental demonstration of a novel gas switching combustion reactor for power production with integrated CO2 capture [58]

Conference papers:
Conference 1: Evaluation of a Lagrangian discrete phase modelling approach for resolving cluster formation in CFB risers [59]
Conference 2: Evaluation of a Lagrangian discrete phase modelling approach for application to industrial scale bubbling fluidized beds [60]
Conference 3: Mapping of the operating window of a lab scale bubbling fluidized bed reactor by CFD and designed experiments [61]
Conference 4: Numerical evaluation of a pressurized CLC fuel reactor for process intensification [62]
Conference 5: An assessment of the ability of the TFM approach to predict gas mixing in a pseudo-2D bubbling fluidized bed [63]

Draft papers:
Draft 1: Grid independence behaviour of reactive TFM simulations: The effect of particle size [65]
Draft 2: Grid independence behaviour of reactive TFM simulations: Detailed parametric study [66]
Draft 3: Comparison of the grid independence behaviour of the TFM and DDPM in bubbling fluidized bed reactors [67]
Draft 4: Operating experience with a high-temperature pseudo-2D fluidized bed reactor designed especially for detailed data collection [68]
Draft 5: Initial evaluation of two alternative approaches to traditional solids looping [69]
Draft 6: The effect of frictional pressure, geometry and wall friction on the modelling of a pseudo-2D bubbling fluidized bed reactor [70]

Incomplete drafts:
Appendix 1: The parametric sensitivity of fluidized bed reactor simulations carried out in different flow regimes [71]
Appendix 2: Model validation against data from a high-temperature pseudo-2D CLC batch reactor [72]

Supplementary papers:
Appendix 3: On the effect of cluster resolution in riser flows on momentum and reaction kinetic interaction [73]
Appendix 4: A fine resolution parametric study on the numerical simulation of gas-solid flows in a periodic riser section [74]


Chapter 13:

Journal papers

13.1 Paper 1: An assessment of the ability of computational fluid dynamic models to predict reactive gas–solid flows in a fluidized bed [51]

13.2 Paper 2: Numerical investigations to quantify the effect of horizontal membranes on the performance of a fluidized bed reactor [52]

13.3 Paper 3: Comparison of phenomenological and fundamental modelling approaches for predicting fluidized bed reactor performance [53]

13.4 Paper 4: Performance evaluation of a complete Lagrangian KTGF approach for dilute granular flow modelling [54]

13.5 Paper 5: The generality of the standard 2D TFM approach in predicting bubbling fluidized bed hydrodynamics [55]

13.6 Paper 6: Evaluation of a filtered model for the simulation of large scale bubbling and turbulent fluidized beds [56]

13.7 Paper 7: Investigation into the effect of simulating a 3D cylindrical fluidized bed reactor on a 2D plane [57]

13.8 Paper 8: Experimental demonstration of a novel gas switching combustion reactor for power production with integrated CO2 capture [58]

Chapter 14:

Conference papers

14.1 Conference 1: Evaluation of a Lagrangian discrete phase modelling approach for resolving cluster formation in CFB risers [59]

14.2 Conference 2: Evaluation of a Lagrangian discrete phase modelling approach for application to industrial scale bubbling fluidized beds [60]

14.3 Conference 3: Mapping of the operating window of a lab scale bubbling fluidized bed reactor by CFD and designed experiments [61]

14.4 Conference 4: Numerical evaluation of a pressurized CLC fuel reactor for process intensification [62]

14.5 Conference 5: An assessment of the ability of the TFM approach to predict gas mixing in a pseudo-2D bubbling fluidized bed [63]

Chapter 15:

Draft papers

15.1 Draft 1: Grid independence behaviour of reactive TFM simulations: The effect of particle size [65]

15.2 Draft 2: Grid independence behaviour of reactive TFM simulations: Detailed parametric study [66]

15.3 Draft 3: Comparison of the grid independence behaviour of the TFM and DDPM in bubbling fluidized bed reactors [67]

15.4 Draft 4: Operating experience with a high-temperature pseudo-2D fluidized bed reactor designed especially for detailed data collection [68]

15.5 Draft 5: Initial evaluation of two alternative approaches to traditional solids looping [69]

15.6 Draft 6: The effect of frictional pressure, geometry and wall friction on the modelling of a pseudo-2D bubbling fluidized bed reactor [70]

Chapter 16:

Incomplete drafts

16.1 Appendix 1: The parametric sensitivity of fluidized bed reactor simulations carried out in different flow regimes [71]

16.2 Appendix 2: Model validation against data from a high-temperature pseudo-2D CLC batch reactor [72]

Chapter 17:

Supplementary papers

17.1 Appendix 3: On the effect of cluster resolution in riser flows on momentum and reaction kinetic interaction [73]

17.2 Appendix 4: A fine resolution parametric study on the numerical simulation of gas-solid flows in a periodic riser section [74]