PSZ 19:16 (Pind. 1/97)

UNIVERSITI TEKNOLOGI MALAYSIA

THESIS STATUS DECLARATION FORM (BORANG PENGESAHAN STATUS TESIS)♦

TITLE: IMAGING OF SOLID FLOW IN A GRAVITY RIG USING INFRA-RED TOMOGRAPHY

ACADEMIC SESSION: 2004/2005

I, MOHD AMRI B MD YUNUS (CAPITAL LETTERS), declare that this thesis (PSM/Master's/Doctor of Philosophy)* is kept at the Universiti Teknologi Malaysia Library under the following conditions of use:

1. The thesis is the property of Universiti Teknologi Malaysia.
2. The Library of Universiti Teknologi Malaysia is permitted to make copies for study purposes only.
3. The Library is permitted to make copies of this thesis as exchange material between institutions of higher learning.
4. ** Please tick (√):

   CONFIDENTIAL (Contains information of national security or interest to Malaysia as defined in the OFFICIAL SECRETS ACT 1972)

   RESTRICTED (Contains restricted information as determined by the organisation/body where the research was carried out)

   UNRESTRICTED

Certified by:

(AUTHOR'S SIGNATURE)
Permanent address: LOT 2008, JALAN JAMBU, BATU 7 1/2, MERU, 41050 KLANG, SELANGOR DARUL EHSAN.
Date: OCTOBER 2005

(SUPERVISOR'S SIGNATURE)
Name of Supervisor: DR. SALLEHUDDIN B IBRAHIM
Date: OCTOBER 2005

NOTES: * Delete whichever is not applicable. ** If the thesis is CONFIDENTIAL or RESTRICTED, please attach a letter from the relevant authority/organisation stating the reason and the period for which the thesis must be classified as CONFIDENTIAL or RESTRICTED. ♦ "Thesis" means a thesis for the degree of Doctor of Philosophy or a Master's degree undertaken by research, a dissertation for a programme undertaken by coursework and research, or a Bachelor's Degree Project Report (PSM).

“I hereby declare that I have read this thesis and in my opinion this thesis is sufficient in terms of scope and quality for the award of the degree of Master of Engineering (Electrical)”

Signature : .....................................................

Name of Supervisor : DR. SALLEHUDDIN B IBRAHIM

Date : 30/10/2005

PART A – Certification of Cooperation*

It is hereby certified that this thesis research project was carried out through cooperation between _______________________ and _______________________

Certified by:

Signature : …………………………………………… Date : ……………

Name : ……………………………………………

Position : ……………………………………………

(Official stamp)

* If the preparation of the thesis/project involved cooperation.

PART B – For Office Use of the School of Graduate Studies

This thesis has been examined and certified by:

Name and Address of External Examiner: Prof. Madya Dr. Mohd Zaid Bin Abdullah, School of Electrical & Electronic Engineering, Universiti Sains Malaysia, Engineering Campus, Nibong Tebal, 14300 Penang.

Name and Address of Internal Examiner: Prof. Madya Dr. Ruzairi Bin Abd Rahim, Fakulti Kejuruteraan Elektrik, Universiti Teknologi Malaysia, 81310 Skudai, Johor Darul Takzim.

Name of Other Supervisor (if any): …………………………………………………

Certified by the Assistant Registrar at SPS:

Signature : …………………………………

Name : GANESAN A/L ANDIMUTHU

Date : ………………….

IMAGING OF SOLID FLOW IN A GRAVITY FLOW RIG USING INFRA-RED TOMOGRAPHY

MOHD AMRI B MD YUNUS

A thesis submitted in fulfilment of the requirements for the award of the degree of Master of Engineering (Electrical)

Faculty of Electrical Engineering Universiti Teknologi Malaysia

OCTOBER 2005


“I declare that this thesis entitled “Imaging of Solid Flow in a Gravity Flow Rig using Infra-red Tomography” is the result of my own research except as cited in the references. The thesis has not been accepted for any degree and is not concurrently submitted in candidature of any other degree.”

Signature : …………………………………………………

Name : Mohd Amri b Md Yunus

Date : 30 October 2005

To Mama and Papa


ACKNOWLEDGEMENTS

Praise to Allah S.W.T., the Most Gracious, the Most Merciful, whose blessing and guidance have helped me through my thesis smoothly. There is no power nor strength save in Allah, the Highest and the Greatest. Peace and blessing of Allah be upon our Prophet Muhammad S.A.W. who has given light to mankind.

I would like to express my deepest gratitude to my supervisor Dr. Sallehuddin Ibrahim for his support and supervision. This research would not have been successful without his invaluable guidance, constant help as well as constructive criticisms and opinions throughout the research.

I would like to express my sincere thanks to Pangeran, who provided guidance and assistance during the research. Also, to the lab assistant, Abang Faiz, thank you for your assistance and support during my research.

Special thanks to my mother, father, brothers, sisters, and families for their assistance and sedulous guidance. Also, thanks to all my friends and syabab, whose sacrifices had made it possible.

Last but not least, to the Ministry of Science, Technology, and Innovations (MOSTI) and Universiti Teknologi Malaysia for providing the funds and allowing me to use the facilities during my research.


ABSTRACT

Information on flow regimes is vital in the analysis and measurement of industrial process flows. Almost all currently available methods of measuring the flow of two-component mixtures in industrial pipelines endeavor to average a property of the flow over the pipe cross-section. They do not give information on the nature of the flow regime, and they are unsuitable for accurate measurement where the component distribution varies spatially or with time. The overall aim of this project is to investigate the use of an optical tomography method based on infra-red sensors for real-time monitoring of solid particles conveyed by a rotary valve in a pneumatic pipeline. The infra-red tomography system can be divided into two distinct development processes: hardware and software. The hardware development covers infra-red sensor selection, fixtures, signal conditioning circuits, and control circuits. The software development involves the data acquisition system, sensor modeling, image reconstruction algorithms, and programming for a tomographic display that provides solids flow information in the pipeline, such as concentration and velocity profiles. Collimating the radiated beam from a light source and passing it through the flow regime ensures that the intensity of radiation detected on the opposite side is linked to the distribution and the absorption coefficients of the different phases in the path of the beam. The information is obtained from a combination of two orthogonal and two diagonal light projection systems and 30 cycles of real-time measurements. The flow information captured by the upstream and downstream infra-red sensors is digitized by the DAS before being passed to a computer for analysis, such as image reconstruction and the cross-correlation process that provides velocity profiles represented by 16 × 16 pixels mapped onto the pipe cross-section. This project successfully developed and tested an infra-red tomography system to display two-dimensional images of concentration and velocity.


ABSTRAK

Maklumat tentang regim aliran adalah sangat penting di dalam analisis dan pengukuran aliran proses pengindustrian. Hampir kesemua kaedah pengukuran aliran gabungan dua komponen di dalam paip pengindustrian berfungsi untuk mendapatkan purata aliran merangkumi keratan rentas paip. Mereka tidak memberi maklumat asal kawasan aliran dan tidak sesuai untuk pengukuran tepat di mana taburan komponen berubah secara ruang atau masa. Matlamat utama projek ini adalah untuk mengkaji penggunaan kaedah tomografi optik berasaskan kepada penderia infra-merah untuk pengawasan masa-nyata partikel pepejal yang dialirkan oleh injap berputar di dalam satu paip pneumatik. Sistem tomografi infra-merah boleh dibahagikan kepada dua bahagian proses pembangunan iaitu perkakasan dan perisian. Proses pembangunan perkakasan meliputi pemilihan penderia infra-merah, peralatan dan litar penyesuaian isyarat, dan litar kawalan. Proses pembangunan perisian melibatkan sistem perolehan data, pemodelan penderia, algoritma imej, dan pengaturcaraan untuk paparan tomografi di dalam menghasilkan maklumat aliran pepejal di dalam laluan paip seperti profil tumpuan dan halaju. Penumpuan sinar pancaran daripada satu punca cahaya dan melalukannya di dalam kawasan aliran, memastikan kecerahan sinar telah dikesan pada bahagian yang bertentangan berkait kepada taburan dan pekali penyerapan fasa yang berbeza di sepanjang laluan pancaran. Maklumat diperoleh daripada gabungan dua ‘orthogonal’ dan dua ‘diagonal’ sistem projeksi dan 30 kitar pengukuran masa-nyata. Maklumat aliran yang diambil menggunakan penderia inframerah ‘upstream’ dan ‘downstream’ di digitalkan oleh sistem DAS sebelum memasuki sebuah komputer untuk analisis seperti pembinaan semula imej dan proses sekaitan-silang yang menghasilkan profil halaju yang dipetakan pada 16 × 16 piksel keratan rentas paip. Projek ini dengan jayanya telah membangunkan dan menguji satu sistem tomografi infra-merah untuk paparan imej dua-dimensi penumpuan dan halaju.


TABLE OF CONTENTS

CHAPTER   TITLE

TITLE PAGE
DECLARATION
DEDICATION
ACKNOWLEDGEMENTS
ABSTRAK
ABSTRACT
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES
LIST OF ABBREVIATIONS
LIST OF APPENDICES

1  INTRODUCTION
   1.1 Background of Problems
   1.2 Problem Statement
   1.3 Significance and Objective of the Study
   1.4 Scope of Study
   1.5 Organization of the Thesis

2  TOMOGRAPHY SYSTEM
   2.1 What is Process Tomography?
   2.2 Types of Process Tomography
       2.2.1 Electrical Capacitance Tomography (ECT)
       2.2.2 Electrical Impedance Tomography (EIT)
       2.2.3 Ultrasonic Tomography
       2.2.4 X-ray Tomography
       2.2.5 Nuclear Magnetic Resonance Tomography (NMRT)
       2.2.6 Positron Emission Tomography
       2.2.7 Mutual Inductance Tomography
       2.2.8 Microwave Tomography
       2.2.9 Optical Tomography
   2.3 Infra-red Characteristics
   2.4 Summary

3  MATHEMATICAL MODELING
   3.1 Introduction
   3.2 Measurement Configuration
   3.3 Sensor Modeling
       3.3.1 Sensor Output during No-Flow Condition
       3.3.2 Sensor Output during Flow Condition
   3.4 Software Signal Conditioning
   3.5 Image Reconstruction Algorithms
       3.5.1 Spatial Domain
       3.5.2 Frequency Domain
       3.5.3 Frequency and Spatial Domain
       3.5.4 Hybrid Reconstruction Algorithms
   3.6 Reconstruction Image Error Measurements
   3.7 Flow Velocity
       3.7.1 Cross Correlation Method
   3.8 Summary

4  DEVELOPMENT OF INFRA-RED TOMOGRAPHY SYSTEM
   4.1 Introduction
   4.2 Development of the Measuring System
       4.2.1 Selection and Preparation of Fiber Optic
       4.2.2 Selection of Infra-red Transmitter and Receiver
       4.2.3 Measurement Fixture Design
       4.2.4 Circuit Design
           4.2.4.1 Light Projection Circuit
           4.2.4.2 Signal Conditioning Circuit
           4.2.4.3 Digital Control Signal
           4.2.4.4 Printed Circuit Board Design
       4.2.5 Data Acquisition System (DAS)
       4.2.6 Flow Rig
           4.2.6.1 Calibration of the Flow Rig
   4.4 Summary

5  FLOW MODELING
   5.1 Modeling of Flowing Particles
       5.1.1 Single Pixel Flow Model
       5.1.2 Multiple Pixels Flow Model
       5.1.3 Half Flow Model
       5.1.4 Full Flow Model
   5.2 Image Reconstruction Algorithms for Flow Models
       5.2.1 Results of Reconstructed Model Images
           5.2.1.1 Measurement of C (Concentration Percentage)
           5.2.1.2 Measurement of NMSE
           5.2.1.3 Measurement of PSNR
           5.2.1.4 Measurement of Vmax
           5.2.1.5 Single Pixel Flow Model
           5.2.1.6 Multi Pixels Flow Model
           5.2.1.7 Half and Full Flow Model
   5.3 Comparison of Algorithms Performance
   5.4 Conclusion from Results
   5.5 Summary

6  CONCENTRATION MEASUREMENT AND CONCENTRATION PROFILES
   6.1 Introduction
   6.2 Concentration Measurement
       6.2.1 Experimental Results for Single Pixel and Multiple Pixels Flow
       6.2.2 Experimental Results for Half Flow
       6.2.3 Experimental Results for Full Flow
   6.3 Concentration Profile
       6.3.1 Experimental Results for Single Pixel and Multiple Pixels Flow
       6.3.2 Experimental Results for Half Flow
       6.3.3 Experimental Results for Full Flow
   6.4 Summary

7  MEASUREMENT AND PROFILES OF VELOCITY
   7.1 Introduction
       7.1.1 Free Fall Motion
       7.1.2 Falling with Air Resistance
   7.2 Velocity Measurement (Sensor-to-Sensor)
       7.2.1 Plastic Beads Flow
       7.2.2 Discussion on Sensor-to-Sensor Cross Correlation Results
   7.3 Velocity Profile
       7.3.1 Results of Velocity Profile
       7.3.2 Discussion on Pixel-to-Pixel Cross Correlation Results
   7.4 Summary

8  CONCLUSION AND RECOMMENDATIONS
   8.1 Conclusion
   8.2 Contribution to the Field of Tomography System
   8.3 Recommendations for Future Work

REFERENCES

LIST OF TABLES

TABLE   DESCRIPTION

3.1 x1' values for 64 upstream and downstream sensors
4.1 Circuit description
4.2 Flow rig components
4.3 Measured flow rate for plastic beads at solid loading of flow indicator
5.1 Expected values of Vmax, C and Top for each flow model
5.2 Comparison of the time taken for each algorithm
6.1 Types of concentration measurement
6.2 Values of scaling factor, total voltage reading and error for a single pixel flow
6.3 Values of scaling factor, total voltage reading and error for multiple pixels flow
6.4 Values of scaling factor, total voltage reading and error for half flow
6.5 Values of scaling factor, total voltage reading and error for full flow
6.6 Cav values for single pixel flow using the CFLBPi+s and HLBP algorithms
6.7 Cav values for multiple pixels flow using the CFLBPi+s and HLBP algorithms
6.8 The count of Cn,b values at every quarter of 100% for flow rates of 27 gs-1 to 126 gs-1 using the CFLBPi+s and HLBP algorithms
6.9 The count of Cn,b values at every quarter of 100% for flow rates of 27 gs-1 to 126 gs-1 using the CFLBPi+s and HLBP algorithms
6.10 Tabulated Vf values for the flow rates of full flow using the CFLBPi+s and HLBP algorithms
7.1 Information from sensor-to-sensor cross-correlation measurement

LIST OF FIGURES

FIGURE   DESCRIPTION

2.1 Sensor selection for process tomography
2.2 An electrical capacitance tomography system
2.3 The basic concept of NMRT
2.4 Block diagram of a typical EMT system
2.5 The electromagnetic spectrum
3.1 An object f(x, y) and its projection pφ(x1'), shown for angle φ°
3.2 The arrangement of fiber optic holes for each projection
3.3 The fixture cross-section rear view
3.4(a) Fiber optics configuration at 0°
3.4(b) Fiber optics configuration at 45°
3.4(c) Fiber optics configuration at 90°
3.4(d) Fiber optics configuration at 135°
3.5(a) The flow pipe area divided into an 8 × 8 pixels resolution map
3.5(b) The flow pipe area divided into a 16 × 16 pixels resolution map
3.5(c) The flow pipe area divided into a 32 × 32 pixels resolution map
3.6 A flow chart for producing the sensitivity map for views of x' at 0° projection and 16 × 16 pixels resolution
3.7 A view from the 18th emitter Tu,dx18 to the 18th receiver Ru,dx18
3.8 A beam (view) from Tu,dx03 to Ru,dx03 with a 16 × 16 pixels resolution
3.9 Beam (view) from Tu,dx03 to Ru,dx03 blocked by object f5,2(x1,y1)0
3.10 Low pass convolution mask
3.11 Filter kernel in frequency domain
3.12 Principle of cross correlation flow measurement
4.1 A block diagram of the infra-red tomography system
4.2 Preparation of fiber optic
4.3 LED characteristics: (a) relative spectral emission versus wavelength, (b) radiation characteristic – relative spectral emission versus half-angle
4.4 Photodiode characteristics: (a) relative spectral sensitivity versus wavelength, (b) directional characteristic – relative spectral intensity versus half-angle
4.5 Mechanical drawing and dimensions of the measurement fixture
4.6 A mechanical drawing with the appropriate dimensions for (a) the fiber optic connector and (b) the transmitter/receiver
4.7 Light emitting diode transmitter circuit
4.8 The first and second stages of the circuit
4.9 The filter and buffer circuit
4.10 The receiver output of Rx03
4.11 (a) Block diagram of NE5537, (b) the sample and hold circuit
4.12 Control signals produced by the PIC18F458 controller
4.13 PCB design of the tomography system
4.14 Gravity flow rig
4.15 A calibration graph
5.1 (a) Pixel coordinate P8(-2,1), (b) pixel coordinate P16(-4,3), and (c) pixel coordinate P32(-7,6) on the image plane
5.2(a) Sensors output (projection data) for a single pixel flow model at 8 × 8 pixels resolution
5.2(b) Sensors output (projection data) for a single pixel flow model at 16 × 16 pixels resolution
5.2(c) Sensors output (projection data) for a single pixel flow model at 32 × 32 pixels resolution
5.3 Pixel coordinates P8(-2,2), P8(1,2), P8(-2,-2), and P8(1,-2) on the image plane
5.4 Sensors output (projection data) for the multiple pixels flow model at 8 × 8 pixels resolution
5.5 Pixel coordinates P16(-4,3), P16(3,3), P16(-4,-4), and P16(3,4) on the image plane
5.6 Sensors output (projection data) for the multiple pixels flow model at 16 × 16 and 32 × 32 pixels resolutions
5.7 Left-hand side of the pipe filled with solid particles
5.8 Sensors output (projection data) for the half flow model
5.9 The pipe filled with solid particles
5.10 Sensors output (projection data) for the full flow model
5.11 Results obtained using various image reconstruction algorithms for single pixel flow models at a resolution of 32 × 32 pixels
5.12 Results obtained using various image reconstruction algorithms for multiple pixels flow models at a resolution of 32 × 32 pixels
5.13 Results obtained using various image reconstruction algorithms for half flow models at a resolution of 32 × 32 pixels
5.14 Results obtained using various image reconstruction algorithms for full flow models at a resolution of 32 × 32 pixels
5.15 Graphs representing values of C (%) readings for reconstructed flow model images
5.16 Graphs representing the NMSE readings (%) for reconstructed flow model images
5.17 Graphs of PSNR readings (dB) for the reconstructed flow model images
5.18 Graphs of Vmax readings (volt) for reconstructed flow model images
5.19 Graphs of PT (s) readings for reconstructed flow model images
6.1 One cycle of measurement
6.2 The location of an iron rod representing a single pixel flow
6.3 The locations of four iron rods representing multiple pixels flow
6.4 A graph representing the differences between the AsTx,Rx(x',φ) values before and after normalization for a single pixel flow measurement
6.5 A graph representing the differences between UTTx,Rx(x',φ) (upstream), DTTx,Rx(x',φ) (downstream), and PTx,Rx(x',φ) for a single pixel flow
6.6 A graph representing the differences between UTTx,Rx(x',φ) (upstream), DTTx,Rx(x',φ) (downstream), and PTx,Rx(x',φ) for a single pixel flow
6.7 A graph representing the differences between the AsTx,Rx(x',φ) values before and after normalization for multiple pixels flow
6.8 A graph representing the differences between UTTx,Rx(x',φ) (upstream), DTTx,Rx(x',φ) (downstream), and PTx,Rx(x',φ) for multiple pixels flow
6.9 A graph representing the differences between UTTx,Rx(x',φ) (upstream), DTTx,Rx(x',φ) (downstream), and PTx,Rx(x',φ) for multiple pixels flow
6.10 Top and side views for a half flow model inside the distribution pipe
6.11 A graph representing the differences between the AsTx,Rx(x',φ) values before and after normalization for half flow (27 gs-1)
6.12 A graph representing the differences between UTTx,Rx(x',φ) (upstream), DTTx,Rx(x',φ) (downstream), and PTx,Rx(x',φ) for half flow (27 gs-1)
6.13 A graph representing the differences between UTTx,Rx(x',φ) (upstream), DTTx,Rx(x',φ) (downstream), and PTx,Rx(x',φ) for half flow (27 gs-1)
6.14 Top view of full flow
6.15 A graph representing the AsTx,Rx(x',φ) values for full flow (27 gs-1)
6.16 A graph representing the differences between UTTx,Rx(x',φ) (upstream), DTTx,Rx(x',φ) (downstream), and PTx,Rx(x',φ) for full flow (27 gs-1)
6.17 The regression line of sensor outputs versus measured flow rates
6.18 Concentration profiles for single pixel flow
6.19 Concentration profiles for multiple pixels flow
6.20(a) Concentration profiles for half flow at a flow rate of 27 gs-1 (CFLBPi+s and HLBP): (i) cycle = 3, buffer = 45, (ii) cycle = 11, buffer = 38, (iii) cycle = 16, buffer = 38; PT = processing time
6.20(b) Graphs of upstream and downstream Cn,b values for half flow using the (i) CFLBPi+s and (ii) HLBP algorithms at a flow rate of 27 gs-1
6.21(a) Concentration profiles for full flow at a flow rate of 27 gs-1 (CFLBPi+s and HLBP): (i) cycle = 2, buffer = 16, (ii) cycle = 8, buffer = 48, (iii) cycle = 12, buffer = 95; PT = processing time
6.21(b) Graphs of upstream and downstream Cn,b values for full flow using the (i) CFLBPi+s and (ii) HLBP algorithms at a flow rate of 27 gs-1
6.22 A graph of Vf (V) versus the flow rates (gs-1) using the CFLBPi+s algorithm
6.23 A graph of Vf (V) versus the flow rates (gs-1) using the HLBP algorithm
7.1 One cycle of measurement (254 buffers)
7.2 Free fall motion of two objects under the influence of gravity (9.81 ms-2)
7.3 An object falling under the influence of air resistance and the value of acceleration produced at each instant of time
7.4 Distance between the upstream/downstream sensors and the rotary valve
7.5 Output signal for sensor 06: (a) upstream and (b) downstream; (c) the correlation function for the upstream and downstream sensors at 49 gs-1 and cycle = 2
7.6 Output signal for sensor 06: (a) upstream and (b) downstream; (c) the correlation function for the upstream and downstream sensors at 93 gs-1 and cycle = 4
7.7 Output signal for sensor 00: (a) upstream and (b) downstream; (c) the correlation function for the upstream and downstream sensors at 126 gs-1 and cycle = 2
7.8 (a) Velocity profiles for solid particle flows at a flow rate of 49 gs-1: (i) cycle = 1, (ii) cycle = 4, (iii) cycle = 6; (b) Rmax(x,y) profiles at each measurement cycle
7.9 Output signal for Pixel(2,-4): (a) upstream and (b) downstream; (c) the correlation function for upstream and downstream Pixel(2,-4) at 49 gs-1 and cycle = 1
7.10 Output signal for Pixel(-2,1): (a) upstream and (b) downstream; (c) the correlation function for upstream and downstream Pixel(-2,1) at 49 gs-1 and cycle = 4
7.11 Output signal for Pixel(0,1): (a) upstream and (b) downstream; (c) the correlation function for upstream and downstream Pixel(0,1) at 49 gs-1 and cycle = 6
7.12 (a) Velocity profiles for solid particle flows at a flow rate of 71 gs-1: (i) cycle = 1, (ii) cycle = 3, (iii) cycle = 4, (iv) cycle = 6, (v) cycle = 8, (vi) cycle = 10; (b) Rmax(x,y) profiles at each measurement cycle
7.13 Output signal for Pixel(1,1): (a) upstream and (b) downstream; (c) the correlation function for upstream and downstream Pixel(1,1) at 71 gs-1 and cycle = 1
7.14 Output signal for Pixel(11,0): (a) upstream and (b) downstream; (c) the correlation function for upstream and downstream Pixel(11,0) at 71 gs-1 and cycle = 3
7.15 Output signal for Pixel(2,-1): (a) upstream and (b) downstream; (c) the correlation function for upstream and downstream Pixel(2,-1) at 71 gs-1 and cycle = 4
7.16 Output signal for Pixel(-4,-1): (a) upstream and (b) downstream; (c) the correlation function for upstream and downstream Pixel(-4,-1) at 71 gs-1 and cycle = 6
7.17 Output signal for Pixel(4,-1): (a) upstream and (b) downstream; (c) the correlation function for upstream and downstream Pixel(4,-1) at 71 gs-1 and cycle = 8
7.18 Output signal for Pixel(12,6): (a) upstream and (b) downstream; (c) the correlation function for upstream and downstream Pixel(12,6) at 71 gs-1 and cycle = 10
7.19 (a) Velocity profiles for solid particle flows at a flow rate of 126 gs-1: (i) cycle = 5, (ii) cycle = 6, (iii) cycle = 7, (iv) cycle = 8, (v) cycle = 10, (vi) cycle = 11; (b) Rmax(x,y) profiles at each measurement cycle
7.20 Output signal for Pixel(5,-1): (a) upstream and (b) downstream; (c) the correlation function for upstream and downstream Pixel(5,-1) at 126 gs-1 and cycle = 5
7.21 Output signal for Pixel(0,-2): (a) upstream and (b) downstream; (c) the correlation function for upstream and downstream Pixel(0,-2) at 126 gs-1 and cycle = 6
7.22 Output signal for Pixel(-5,3): (a) upstream and (b) downstream; (c) the correlation function for upstream and downstream Pixel(-5,3) at 126 gs-1 and cycle = 7
7.23 Output signal for Pixel(-5,3): (a) upstream and (b) downstream; (c) the correlation function for upstream and downstream Pixel(-5,3) at 126 gs-1 and cycle = 8
7.24 Output signal for Pixel(-3,-3): (a) upstream and (b) downstream; (c) the correlation function for upstream and downstream Pixel(-3,-3) at 126 gs-1 and cycle = 10
7.25 Output signal for Pixel(-5,1): (a) upstream and (b) downstream; (c) the correlation function for upstream and downstream Pixel(-5,1) at 126 gs-1 and cycle = 11

LIST OF ABBREVIATIONS

ADC - Analog-to-digital conversion
AsTx,Rx(x',φ) - The total average values for the upstream and downstream sensors based on 30 measurement cycles, before normalization
AsTx,Rx(x',φ) - The total average values for the upstream and downstream sensors based on 30 measurement cycles, after normalization
a - Acceleration due to gravity (ms-2)
B(x',φ) - Hybrid reconstruction algorithm constant value for each receiver
C - Concentration percentage
CT - Computerized tomography
Cav - Total average concentration profile percentage value for 30 cycles
Cn,b - Total concentration profile percentage value from each cycle (n) and buffer (b)
DAC - Digital-to-analog conversion
DAS - Data acquisition system
DI - Digital input
DMA - Direct memory access
DO - Digital output
Dn - Maximum line number of the infra-red light beam
DT - Total voltage reading from the downstream sensors after normalization
DT - Total voltage reading from the downstream sensors before normalization
DTTx,Rx(x',φ) - The total average values for each downstream sensor before normalization
DTTx,Rx(x',φ) - The total average values for each downstream sensor after normalization
dx,y(t − τ) - Downstream sensor array profile values
eU - The upstream error
eD - The downstream error
eU - The normalized upstream error
eD - The normalized downstream error
F−1 - One-dimensional inverse Fourier transform
FLBP - Filtered linear back projection
f(x, y) - Representation of an object by a two-dimensional function
f̂(x, y) - Approximation of the object function in volts
gn - The pixel matrix value at the n position
KU - Upstream scaling factor
KD - Downstream scaling factor
K - Overall scaling factor
K(ω) - Filter kernel in the frequency domain
LBP - Linear back projection
LED - Light emitting diode
MFR - Mass flow rates
Mx',φ(x,y) - The sensitivity map of the infra-red views (Vx',φ(x,y)) before the pixels outside the flow pipe are zeroed
NMSE - Normalized mean square error
N(x',φ) - Normalized sensor reading during flow condition for each flow model
PSNR - Peak signal-to-noise ratio
PT - Processing time
PT - The total voltage reading from the expected values
PTx,Rx(x',φ) - The expected value which has been rescaled
pφ(x1') - Projection data for the AB line
pφ(x') - Convolved projection data in the time domain
Rmax - Maximum cross-correlation function value
Sφ(x,y) - Sensitivity map for each projection
Smax(x,y) - Total distribution of the infra-red light beam in a specific rectangle
Sφ(x,y) - Normalized sensitivity map for each projection
s - Distance (m)
Top - Total number of pixels occupied by any infra-red light from all the infra-red transmitters
t - Time (s)
UT - The total voltage reading from the upstream sensors before normalization
UT - Total voltage reading from the upstream sensors after normalization
UTTx,Rx(x',φ) - The total average values for each upstream sensor before normalization
UTTx,Rx(x',φ) - The total average values for each upstream sensor after normalization
u - Initial velocity (ms-1)
ux,y(t) - Upstream sensor array profile values
V - Measured velocity value
Vt - Calculated (theoretical) velocity value
VTX - Digital signal for the infra-red transmitter circuit
VSup - Digital signal for the upstream sample and hold circuit
VMuxup - Digital signal for the upstream sample and hold circuit
VSdown - Digital signal for the downstream sample and hold circuit
VMuxdown - Digital signal for the downstream sample and hold circuit
VTrig - Digital signal fed into the TGIN terminal of the data acquisition system
VBurst - Digital signal fed into the XPCLK terminal of the data acquisition system
VrefTx,Rx(x',φ) - Sensor reading for the view from emitter Tx to receiver Rx during no-flow (V)
Vcal - The standardized/calibrated value of each view, assumed to be 5 V
VsTx,Rx(x',φ) - Amplitude of the signal loss at the receiver for the Tx-to-Rx view
VTx,Rx(x',φ) - Received signal from the transmitter during flow condition
Vx',φ(x,y) - The sensitivity map of the infra-red views before the pixels outside the flow pipe are zeroed
v(x,y) - Velocity profile
wn - The convolution-filter mask value at the n position
x' - Detector coordinate in the investigated area
x1' - Detector coordinate in the scaled investigated area
x'1 - The coordinate of the AB line in the x' plane
(x1, y1) - Scaled spatial domain
(x1', y1') - Scaled projection domain
x1'0,1...t - The receivers' positions in the scaled projection domain, where t = 10
μ - Permeability
σ - Conductivity
τm - Transit time
φ - Projection angle
%e - Error of the velocity value compared with the calculated velocity value

LIST OF APPENDICES

APPENDIX   TITLE

A.I The sensitivity map of infra-red views (Vx',φ(x,y)) before exemption processes at 32 × 32 pixels resolution and 45° projection
A.II The sensitivity matrices for infra-red light views after exemption processes at 32 × 32 pixels resolution and 45° projection
A.III The total and normalized sensitivity
B The final output of receiver Rx03 (downstream and upstream) and controller output signals
C Picture of FVR-C9S controller
D.I Results of flow model image reconstruction for 8 × 8 and 16 × 16 pixels resolution
D.II Quantity and quality data tabulation
E Publications relating to the thesis
F.I DriverLINX programming
F.II Image reconstruction programming
F.III Velocity cross-correlation method
G.I Experimental Results for Half Flow (Concentration Measurement)
G.II Experimental Results for Full Flow (Concentration Measurement)
G.III Experimental Results for Half Flow (Concentration Profile)
G.IV Experimental Results for Full Flow (Concentration Profile)

CHAPTER 1

INTRODUCTION

Wilhelm Roentgen discovered X-rays in 1895, and his discovery contributed one of the most important diagnostic methods in modern medicine. Since then, it has been possible to look inside both non-living and living things without cutting into the subject by taking X-ray radiographs (Ellenberger et al., 1993). This method of projection is far from a perfect image of the real subject, since the images are a superposition of all planes normal to the direction of X-ray propagation. In the 1930s, conventional tomography became the tomographic method that used X-ray radiation and made it possible to recover information for 2D and 3D images (Williams and Beck, 1995).

The word 'tomography' is derived from the Greek, where 'tomo' means 'slice' or 'section' and 'graphy' means image. In 1970 the possibilities envisaged in the 1930s became reality when this technique utilized X-rays to form images of tissues based on their X-ray attenuation coefficients. However, the technique did not stop at medical studies; it was successfully developed for the industrial field, where it is commonly known as Industrial Process Tomography (IPT). This technique aims to measure local concentration, phase proportions, and velocity (Chan, 2003), retrieved from the quantitative interpretation of an image or, more likely, many hundreds of images corresponding to different spatial and temporal conditions, using direct, real-time measurement owing to the dynamic changes of the internal characteristics.

There are many parameters, such as 2D and 3D images, velocity, and mass flow rates (MFR), which can be retrieved using tomographic visualization techniques within a process or unit operation. These parameters give information on the distribution of material in a pipeline. From the knowledge of material distribution and material movement, a mathematical model can be derived and used to optimize the design of the process (Tapp et al., 2003).

1.1 Background of Problems

Process tomography has become one of the fastest growing technologies nowadays. The tomographic imaging of objects offers a unique opportunity to unravel the complexities of structure without the need to invade the object (Beck and Williams, 1996). It is a diversification from the original research on X-ray tomography, which focused on how to obtain 2-D cross-sectional images of animals, humans, and non-living things (Syed Salim, 2003). Process tomography can be applied to many types of processes and unit operations, including pipelines (Neuffer et al., 1999), stirred reactors (Wang et al., 1999), fluidized beds (Halow and Nicoletti, 1992), mixers, and separators (Alias, 2002). Process tomography is an essential area of research involving flow imaging (image reconstruction) and velocity measurement. For example, in the research carried out by Ibrahim (2000), the linear back projection (LBP) algorithm, which was originally designed for X-ray tomography, was used to obtain the concentration profiles of bubbles in liquid contained in a vertical flow rig. This project investigated two-phase flow (solid particles and air) using a vertical pneumatic conveyor flow rig.

Flow imaging usually involves obtaining images of particles and gas bubbles (Yang and Liu, 2000), and the measurements can be carried out either on-line (real time) or off-line. For on-line measurement, there are many performance aspects that must be considered, such as hardware performance, data acquisition (signal interfacing), and algorithm performance. A limited number of measurements affects the quality of the images obtained, and the number of input channels of the data acquisition system has to be increased with the number of sensors used.

The LBP algorithm is the most popular technique; it was originally applied in medical tomography. Research conducted by Chan (2003) improved flow imaging using 16 alternating fan-beam projections with an image reconstruction rate of 20 fps, but this image reconstruction rate is not sufficient to achieve an accurate measurement of velocity. Generally, this project investigated how to improve the sensing method developed by Abdul Rahim (1996), which used fiber optics in flow visualization. Instead of using one light source, this project focused on using individual light sources, meaning one infra-red LED emitter for each photodiode. This method was then combined with an infra-red tomography system consisting of a hardware fixture, a signal conditioning system, and a data acquisition system, with the whole process operation synchronized.

Furthermore, image reconstruction in the spatial domain and the frequency domain was investigated in this project. Generally, the information retrieved from the measurement system can be used to determine both the instantaneous volumetric concentration and the velocity of solids over the pipe cross-section.

1.2 Problem Statement

The process tomography system requires knowledge of various disciplines, such as instrumentation, process engineering, and optics, to assist in the design and development of the system. Generally, the solutions to the problems addressed in this project are:

• Development of a suitable sensor configuration for the selected infra-red emitter and receiver. The fixture design must shield the infra-red sensors from any kind of ambient light (daylight, lamps, etc.) and place them around the boundary of the pipe so that light emitted from the emitter is the only light that interacts with the solid particles in the pipeline.

• Determining the best infra-red emitter and receiver based on the physical nature and design of the materials involved in the transmission of the infra-red light, light emission, spectral characteristics, sensor radiation characteristics, receiver response, optical power, and availability from suppliers.

• Selecting suitable signal conditioning and electronic controllers. The characteristics of the components used, such as power consumption, offset current, input impedance, slew rate, and common-mode input voltage range, determine the overall measurement result (Tan, 2002).

• Increasing the number of sensor measurements (128 pairs of infra-red transmitters and receivers for the upstream and downstream planes). The number of measurements and projection angles subsequently affects the quality of the reconstructed image (Ibrahim, 2000).

• Synchronization of the data acquisition with the circuitry operation, using a digital controller with sufficient memory, an easy programming language, programmability, stability, and a high operating speed.

• The programming language that drives and controls the interface to the hardware developed must be compatible with the application programming language in the Windows environment.

• Implementation of the image reconstruction algorithm and velocity measurement. The image reconstruction estimates the distribution of material within the pipe which would produce the measured sensor outputs, and the velocity measurement provides the solid particle velocities.

The idea is based on Hartley et al. (1995) and Chan (2003), whose research methods cover:

• Two orthogonal parallel projections that are perpendicular to each other.

• The design of the system started with the aim of flow imaging.

• The outputs of several sensors for each projection are multiplexed in order to minimize system complexity and cost.

• Hartley's system (Hartley et al., 1995) used 8 × 8 sensors, in which each projection has 8 views for image reconstruction, but when a larger number of views is needed the reconstruction has to be carried out off-line since the transputer used was slow.

In conjunction with the previous research, the solutions required are listed as follows:

• The SFH485P infra-red LED transmitter and the SFH203P photodiode receiver were selected for their matching wavelength at 990 nm, flat top surface allowing full collimation of the light before it is distributed through fiber optics, fast switching characteristics, and low optical power.

• An appropriate signal conditioning circuit, which is essential to convert the amount of incident infra-red light on the photodiode into a suitable voltage level; a sample and hold circuit then holds the measured signal.

• Increasing the number of views/measurements by optimizing the time required to capture 128 sensor channels, using a data acquisition system with 64 analog input channels.

• Synchronization between signal conditioning and data acquisition using a PIC controller, where the timing between the data acquisition system and the circuitry (including the settling time for sample and hold) must be configured so that data obtained from the upstream and downstream sensors can be differentiated.

• Microsoft Visual C++ 6.0 was selected because the C language has the advantages of small size, speed, support for modular programming, and memory efficiency (Bronson, 1999). Microsoft Visual C++ 6.0 is a powerful language with a standard user interface and enables device-independent programs.

• DriverLINX, a software driver for real-time data acquisition in the Microsoft Windows environment provided by Keithley, was customized to support the interface between the software and the hardware developed.

• Solving the forward and inverse problems based on the projection theorem. The forward problem provides the theoretical output of each sensor under no-flow and flow conditions when the sensing area is considered to be two-dimensional, and the inverse problem estimates the distribution of material within the pipe which would produce the measured sensor outputs (Ibrahim et al., 1999).

• Numerous image reconstruction techniques have been adapted in tomography, such as linear back projection and Fourier reconstruction. In this study the reconstruction covered image reconstruction in the spatial domain, the frequency domain, and a hybrid approach (Ibrahim, 2000).

• The application of the cross-correlation technique for velocity measurement (a minimal sketch of this step is given below).
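The last item can be illustrated with a short sketch. The following C++ fragment (the project itself used Microsoft Visual C++ 6.0) is only a minimal, generic illustration of transit-time estimation by cross-correlation; the sensor spacing, sampling interval, and all function and variable names are illustrative assumptions and are not taken from the actual system code.

```cpp
#include <cstddef>
#include <vector>

// Minimal sketch of transit-time estimation by cross-correlation.
// upstream/downstream: equally sampled sensor (or pixel) signals,
// dt: sampling interval in seconds,
// sensorSpacing: distance between the upstream and downstream planes in metres.
double correlationVelocity(const std::vector<double>& upstream,
                           const std::vector<double>& downstream,
                           double dt, double sensorSpacing)
{
    const std::size_t n = upstream.size();
    std::size_t bestLag = 0;
    double bestR = 0.0;

    // Evaluate R(tau) = sum over t of u(t) * d(t + tau) for every candidate lag
    // and keep the lag that maximises the correlation function.
    for (std::size_t lag = 0; lag < n; ++lag) {
        double r = 0.0;
        for (std::size_t t = 0; t + lag < n; ++t)
            r += upstream[t] * downstream[t + lag];
        if (r > bestR) {
            bestR = r;
            bestLag = lag;
        }
    }

    double transitTime = bestLag * dt;                                   // tau_m
    return (transitTime > 0.0) ? sensorSpacing / transitTime : 0.0;      // v = L / tau_m
}
```

Applied pixel-by-pixel to the upstream and downstream concentration images, the same idea yields the velocity profile mapped onto the pipe cross-section.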

1.3 Significance and Objective of the Study

Uchiyama et al. (1985) pointed out that thermography is an inappropriate technique for measuring the temperature distribution in flames, as the infra-red radiation received by the sensor is the line integral of the emitted radiation along the optical path. Infra-red radiation, having wavelengths much longer than visible light, can pass through dusty regions of space without being scattered. This means that objects hidden by gas and dust, which cannot be seen in visible light, can be studied in the infra-red (Mass, 1972). These are the advantages of using infra-red light: the dust or gas produced or carried along by the conveyed particles does not affect the measuring system.

Studies have shown that both the contrast and the spatial resolution of optical images are affected by the optical properties of the background medium, and high absorption and scattering are generally beneficial. Based on these observations, shorter wavelengths could be profitable for optical measuring systems (Taroni et al., 2004). X-rays, gamma rays, and ultraviolet light have shorter wavelengths, but problems arise in handling such radiation properly because it is dangerous to living things.

Research by Ibrahim et al. (1999) proved that the use of fiber optics can enhance the image resolution, with the purpose of measuring the concentration and velocity of gas bubbles in a vertical water column. Chan (2003) utilized the concept of a fan-beam switching mode to increase the total number of projections, the image resolution, and the total number of measurements to analyze images of solid particle flow. Pang (2004) developed an optical tomography system to perform real-time mass flow rate measurement using two locally networked PCs and five programs. Based on that research, the tomography system in this project enhances the image resolution and increases the total number of measurements in order to image the flow of solid particles and perform velocity measurement, based on the use of fiber optics and a parallel-beam switching mode between the measurement planes (upstream and downstream), using one PC and one program.

The objectives of this investigation are:

1) To become familiar with the concept of process tomography, and associated sensors.

2) To understand the application of the data acquisition system and tomographic image reconstruction.


3) To study the interaction between the collimated infra-red light and the targeted object (which is dropped into the flow pipe).

4) To solve the forward and reverse problems (Ibrahim, 2000).

5) To calculate the velocity of dropping particles using results from the cross correlation method (Plaskowski et al., 1997) and free fall motion (a small worked sketch follows this list).

6) To design a hardware system for the infra-red tomography system.

7) To incorporate the signal conditioning (circuitry operation) with the data acquisition system by synchronizing the signal conditioning with the data sampling processes.

8) To determine the best reconstruction algorithm for flow imaging among spatial domain, frequency domain, and hybrid image reconstruction.

9) To implement a measurement system that will obtain data from the infra-red sensors for concentration and velocity measurements for various flow rates.

10) To test this system on a pneumatic flow conveyor by distributing solid particles into a vertical pipe and to investigate the concentration and velocity profiles using the experimental data that have been obtained for various flow rates.
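Objective 5 can be made concrete with a small worked sketch. The fragment below simply evaluates the free-fall prediction v = sqrt(u² + 2gs) as the reference value Vt against which a cross-correlated velocity V would be compared; the drop distance and the measured velocity used here are placeholder figures, not dimensions or results taken from the rig.

```cpp
#include <cmath>
#include <cstdio>

// Theoretical free-fall velocity at a plane a distance s (m) below the
// release point, ignoring air resistance: v = sqrt(u^2 + 2*g*s).
double freeFallVelocity(double initialVelocity, double dropDistance)
{
    const double g = 9.81;                          // gravitational acceleration (m/s^2)
    return std::sqrt(initialVelocity * initialVelocity + 2.0 * g * dropDistance);
}

int main()
{
    double vTheory   = freeFallVelocity(0.0, 0.5);  // released from rest, 0.5 m above the sensors (placeholder)
    double vMeasured = 3.0;                         // velocity from cross-correlation (placeholder)
    double errorPct  = 100.0 * (vMeasured - vTheory) / vTheory;   // %e in the nomenclature
    std::printf("Vt = %.2f m/s, V = %.2f m/s, error = %.1f %%\n", vTheory, vMeasured, errorPct);
    return 0;
}
```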

1.4 Scope of Study

The aim of the study is to investigate the flow regimes (image reconstruction) and the velocity of dropping particles in a pipe (Kaplan, 1993) using a conveyor flow rig. The scope of the study includes:

1) Absorption by the emission of infra-red light.

2) Flow rates, concentration profiles, and velocity of dropping particles.

3) Signal conditioning and data acquisition system.

4) Process modeling: includes sensors fixture, flow rig model, signal conditioning circuit’s design and software development.

5) Image reconstruction algorithm and cross-correlation method.

6) Thesis writing.

1.5 Organization of the Thesis

Chapter 1 presents an introduction covering the background of the process tomography research problem, the problem statement, the significance and objectives, and the scope of the study.

Chapter 2 presents a brief review of process tomography and the types of process tomography, such as electrical capacitance, electrical impedance, ultrasonic, X-ray, nuclear magnetic resonance, positron emission, mutual inductance, microwave, and optical tomography.

Chapter 3 presents a discussion on the relationship between projection and object functions, modeling of the infra-red optical system sensor, software signal conditioning, image reconstruction algorithms, reconstructed image error measurement, and velocity of flow.

Chapter 4 presents the development of the infra-red tomography measuring system.

Chapter 5 presents a discussion on the single pixel flow, multiple pixels, half, and full flow modeling, image reconstruction algorithm for flow models, results of reconstructed model images, comparison of algorithms performance, and conclusion from the results.

Chapter 6 presents a discussion on the concentration measurement and the concentration profiles.

Chapter 7 presents a discussion covering the introduction of free fall and air resistance theory, velocity measurement, and velocity profiles.

Chapter 8 presents the conclusion and the recommendations for future work.

CHAPTER 2

TOMOGRAPHY SYSTEM

2.1 What is Process Tomography?

The use of "Process Tomography" is analogous to the application of medical tomographic scanners to examine the human body, but applied to an industrial process (tank, pipe, etc.), because there is a widespread need for the direct analysis of the internal characteristics of process plants in order to improve the design and operation of equipment. The computerized tomography (CT) methods used in medical imaging provide a useful means of obtaining instantaneous information on the distribution of components over a cross-section of the pipe, and thus lead to the possibility of much more accurate measurement (Beck et al., 1986). Process tomography can be applied to many types of processes and unit operations, including pipelines, stirred reactors, fluidized beds, mixers, and separators. Depending on the sensing mechanism used, it is non-invasive, inert, and non-ionizing. It is therefore applicable to the processing of raw materials, to large-scale and intermediate chemical production, and to the food and biotechnology areas.

Process tomography will improve the operation and design of processes handling multi-component mixtures by enabling boundaries between different components in a process to be imaged in real time using non-intrusive sensors (Dyakowski, 1996). Information on the flow regime, vector velocity, and component concentration distribution in process vessels and pipelines will be determined from the images. The basic idea is to install a number of sensors around the pipe or vessel to be imaged. There are two types of sensor: "hard-field" and "soft-field" sensors. A "hard-field" sensor is equally sensitive to the measured parameter inside and outside the measurement region (Chan, 2003). For a "soft-field" sensor, the sensitivity to the measured parameter depends on the position in the measurement volume, as well as on the distribution of parameters inside and outside the region. For this reason, image reconstruction is much simpler for "hard-field" sensors such as gamma-ray sensors than for "soft-field" sensors such as capacitance sensors (Isaksen, 1996; Chan, 2003). The sensors reveal information on the nature and distribution of components within the sensing zone, and the sensor output signals depend on the position of the component boundaries within their sensing zones. Most tomographic techniques are concerned with abstracting information to form a cross-sectional image, and a computer is used to reconstruct a tomographic image of the cross-section being observed by the sensors. This will provide, for instance, identification of the distribution of mixing zones in stirred reactors, interface measurement in complex separation processes, and measurement of two-phase flow boundaries in pipes, with applications to multiphase flow measurement. The image data can be analyzed quantitatively for subsequent use to improve process control or to develop models describing individual processes.

The basic components of any process tomography instrument/system are hardware (sensors, signal/data control) and software (signal reconstruction, display and interpretation facilities, and generation of output control signals to process hardware). The sensor system is at the heart of any tomographic technique. The basis of any measurement is to exploit differences or contrast in the properties of the process being examined. A variety of sensing methods can be used, based on measurements of transmission, diffraction, or electrical phenomena. Whilst most devices employ a single type of sensor, there are a number of opportunities for multi-mode systems using two (or more) different sensing principles. The choice of sensing system will be determined largely by:

• The nature of the components contained in the pipeline, vessel, reactor, or material being examined (principally whether they exist as a solid, liquid, gas, or a multi-phase mixture, and if so in what proportions).

• The information sought from the process (steady-state, dynamic, resolution, and sensitivity required) and its intended purpose (laboratory investigations, optimization of equipment, process measurement, or control).

• The process environment (ambient operating conditions, safety implications, ease of maintenance, etc.).

• The size of the process equipment and the length-scale of the process phenomena being investigated.

The spatial resolution required, speed, and robustness would also need to be taken into account by the potential user when selecting a sensor system. Whichever sensing system is used to acquire the data, the next stage of any tomographic imaging system is to process that raw data using an appropriate image reconstruction algorithm run on suitable computer hardware. Figure 2.1 shows the strategy for selecting a sensor system.

Figure 2.1: Sensor selection for process tomography (Beck and Williams, 1996)

However, process tomographic instrumentation must be relatively low cost and able to make measurements rapidly, using an array of non-invasive sensors placed around the periphery of a process vessel (Beck and Williams, 1996), so that it is possible to image the concentration and movement of components inside the measurement area. The subsequent stage of any tomographic imaging system is to process the acquired data using an appropriate image reconstruction algorithm (Xie, 1995). There are two types of tomographic reconstruction algorithms: analytic (e.g. Fourier inversion) and algebraic (e.g. the algebraic reconstruction technique, ART) (Kak and Slaney, 1999). The choice of reconstruction algorithm depends on the tomography technique selected (Williams and Beck, 1995). For example, transmission tomography techniques (e.g. X-rays) use analytic reconstruction algorithms (Fourier inversion and filtered back-projection) as well as the algebraic reconstruction technique for multiphase flow imaging, mixing studies, and fluidized bed imaging (Xie, 1995).
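Of the algorithms named above, back projection is the simplest, and linear back projection (LBP) forms the basis of the algorithms examined later in this thesis. The fragment below is only a generic sketch of the idea, assuming one measurement per view and a precomputed (normalised) sensitivity map for each view; the image size, container types, and names are illustrative and do not reproduce the thesis implementation.

```cpp
#include <cstddef>
#include <vector>

// Generic linear back projection (LBP) sketch: each view's measured loss
// is smeared back over the pixels it crosses, weighted by a precomputed
// sensitivity map. Array sizes and names are illustrative only.
const int N = 16;                                   // pixels per side of the image
typedef std::vector<double> Image;                  // N*N pixel values, row-major

Image linearBackProjection(const std::vector<double>& measurements,      // one value per view
                           const std::vector<Image>& sensitivityMaps)    // one map per view
{
    Image reconstruction(N * N, 0.0);
    for (std::size_t view = 0; view < measurements.size(); ++view)
        for (int p = 0; p < N * N; ++p)
            reconstruction[p] += measurements[view] * sensitivityMaps[view][p];
    return reconstruction;                          // concentration-like profile, to be rescaled or thresholded
}
```

Filtered and hybrid variants differ mainly in convolving the projection data (or the resulting image) with a filter before or after this back projection step.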

2.2 Types of Process Tomography

Many flow maps have been constructed on the basis of results obtained by flow visualization and by the measurement of void or pressure fluctuations using a point sensor. The main drawbacks of such methods are that the sensors disturb the flow field and that subjective criteria are used to distinguish the boundaries between the various patterns (Dyakowski, 1996). There are several types of sensing method in process tomography in which the sensing technique used does not disturb the nature of the flow field. The main process tomography techniques used in industry are discussed below.

2.2.1 Electrical Capacitance Tomography (ECT)

Capacitance sensing methods are suitable for imaging flows consisting of two components of very different permittivity, such as water/oil, gas/oil, and solids/gas flows. Capacitance sensors have the advantages of being non-invasive, inexpensive, simple to construct, and fast in measurement speed (Huang et al., 1989). Basically, a capacitance flow imaging system consists of capacitance electrode sensors, a data collection system, and an image reconstruction computer. Figure 2.2 shows the general ECT system, in which the sensor is made by mounting metal plates on the outer surface of an insulating pipeline/vessel.

Figure 2.2: An electrical capacitance tomography system

The data collection system measures the capacitance between every possible pair of sensors and then feeds the data into the computer for further analysis. In one of the early studies, Huang et al. (1989) investigated flow imaging using eight capacitance electrodes to perform a 'body scan' of fluid conveyed in a horizontal pipe. The test results showed that the images produced by the prototype system agreed qualitatively with the physical models but still had several limitations in terms of resolution and linear-approximation error. The resolution limitation occurs because the electrode size must be large enough to maximize the signal-to-noise ratio (SNR), while the linear-approximation error arises because changes in the permittivity distribution produce a distortion of the sensing field; this can be minimized by using a stray-immune measurement circuit (Jaworski and Dyakowski, 2001). This type of sensor is also sensitive to particles passing near the gap between the electrode and the conveyor wall (Hammer and Green, 1983). Compared with a tomography system that uses electromagnetic radiation, such as an optical system, this technique cannot produce high-definition image boundaries, and the sensitivity to the measured properties is not constant within the investigated area.

2.2.2 Electrical Impedance Tomography (EIT)

The resurgence of research into medical electrical impedance tomography about 20 years ago was soon accompanied by a parallel development in process impedance tomography (Wang et al., 2002). Electrical impedance tomography systems also include electrical tomography, impedance tomography, impedance imaging, resistance imaging, electrical resistance tomography, and applied potential tomography. All of the methods mentioned are actually methods of imaging the distribution of conductivity or permittivity within a volume. The basic stages of producing an impedance image are:

• The collection of a set of N independent transfer impedances, and

• The solution of an inverse problem in order to produce an image from the set of impedances.

In practical applications, a set of electrodes is arranged around a vessel or human body, and the transfer impedance is then measured by injecting currents and measuring the voltages at the non-current-carrying electrodes. If the number of electrodes used is 16, the total number of independent transfer impedance measurements is 104 (N(N-3)/2) (Dickin and Wang, 1995). However, the medical and process applications are very different and have many constraints. For example, the geometry is usually well controlled in process tomography, where the electrodes are usually placed on a rigid and regular structure, whereas electrodes placed around human body segments will have an irregular geometry whose shape may change with time; moreover, the dynamic range of conductivity and permittivity values in process tomography is much greater than in medical diagnostic applications. The limited number of measurements and the presence of electrical noise cause difficulties in obtaining solutions (Dyakowski, 1996), since the number of electrodes affects the total number of independent measurements that will be used to reconstruct images. The system design must also have a facility to either increase or decrease the amplitude of the injected currents to optimize the signal-to-noise ratio. The accuracy of the EIT technique is restricted by its complexity in sensor modeling, noise reduction, and image reconstruction. However, there are several reasons why electrical impedance is still under investigation and applied under certain conditions, including low cost, high speed of data collection (25 fps), and the possibility of material characterization. For example, in research on detecting leakage from buried pipelines, field measurements (electrical resistance) have allowed users to detect a simulated conductive leak from an insulating tube (Jordana et al., 2001).

2.2.3    Ultrasonic Tomography

The basic idea of ultrasonic sensors is quite simple: they transmit acoustic waves (frequencies of 18 kHz to 10 MHz) and receive them after the ultrasonic wave has interacted with the investigated process (Hauptmann et al., 2002); this interaction may then be sensed to yield information about the object or field. Ultrasonic process tomography systems are concerned with obtaining two- or three-dimensional images from the information and projections gathered using the required equipment, namely an ultrasonic generator, transducers to transmit and receive the ultrasonic waves, and a computerized image-processing system. The advantages of this system can be summarized as non-invasive measurement, in-line measurement, rapid response, excellent long-term stability, and high resolution (Hauptmann et al., 2002). Ultrasonic tomography methods have a broad spectrum of applications in industry, including process control, testing, cleaning, machining, welding, and many other types of mixing flow measurement. Ultrasonic tomography is difficult to operate in gas-solids flow (pneumatic conveyors) because the dynamic nature of the flow and the relatively low propagation velocity of sound impose demanding temporal constraints on data acquisition, and the ultrasonic measurement of the solids distribution depends on the complex interaction between the transmitted ultrasound and the flow (Brown et al., 1995). Furthermore, the beam is difficult to collimate, and problems occur due to reflections within enclosed spaces, such as metal pipes (Daniel, 1996). As the suspended solids' concentration fluctuates, the ultrasonic beam is scattered and the received signal fluctuates in a random manner about a mean value (Mohd Hafiz, 2002).

2.2.4    X-ray Tomography

The X-ray imaging technique can be divided into two categories: X-ray transmission radiography and X-ray transmission tomography. This technique is important for processes where there is a substantial density difference between the components (Beck, 1995). In X-ray transmission radiography, the image is static and it does not provide continuous imaging of the object. In X-ray transmission tomography, the objective is to obtain images or slices through an object at different depths, which are obtained by carefully computed and controlled relative motion of the X-ray source and the detector during the exposure. This technique is also known as computerized tomography. The hardware used in transmission tomography is big and bulky because heavy shielding is required to collimate the beams and for safety. It is also not suitable for flow visualization because the materials involved have low attenuation coefficients (Tapp et al., 1998). An X-ray tomography system consisting of a 60 keV X-ray source and an X-ray detector has been developed to investigate flow structures in circulating fluidized beds. The system can measure average solids concentrations of up to 20 vol-% in a tube with a 0.19 m inner diameter, with a minimum spatial resolution of 0.2 mm. The results obtained are reliable within an error range of about 5% (Grassler and Wirth, 1999).

2.2.5    Nuclear Magnetic Resonance Tomography (NMRT)

Nuclear magnetic resonance tomography, or magnetic resonance imaging (MRI), has enabled researchers to visualize the structure of the body's soft tissues using a radiation-based technique. Figure 2.3 depicts the basic steps of how nuclear magnetic resonance tomography works. The MRI method is most suitable for imaging two- or higher-component flows that contain water as one of the components, which makes NMRT not viable for imaging gas-solid component flows.

Figure 2.3: The basic concept of NMRT

The behavior of the hydrogen atoms contained in the subject or material of interest is the key to MRI. Figure 2.3(a) shows the atomic nuclei (hydrogen atoms) spinning randomly around their magnetic axes. When the material is exposed to a strong magnetic field, the nuclei of the hydrogen atoms all line up, like iron filings scattered around a magnet, as shown in Figure 2.3(b). When the nuclei are hit by radio waves (Figure 2.3(c)) the spin axes are deflected, and Figure 2.3(d) shows that as the nuclei realign in the magnetic field they emit a radio signal that is picked up by the MRI scanner to create an image. MRI methods typically provide better spatial resolution and specificity than the other methods at the expense of imaging time, equipment cost, and geometric and material constraints (Gibbs and Halls, 1996). In addition, an NMRT system is expensive due to the cost of the magnet, and presently it is only suitable for laboratory investigations and not suitable for imaging flows in large process vessels or pneumatic conveyors (McKee, 1995).

2.2.6    Positron Emission Tomography

Positron emission tomography (PET) is based upon the distribution of activity of positron emitters. The fundamental measurement is the coincident detection of two almost anti-parallel photons, which are emitted as a result of the annihilation of a positron with an electron. A function of position and time in relation to the emitters is obtained from external coincidence measurements. The imaging systems make use of either circular geometry, with rings of sensors around the flow to be imaged, or position-sensitive detectors, such as gamma cameras (Hawkesworth et al., 1986) or multi-wire proportional chambers. The advantage of PET imaging is that it pinpoints the locations of individual particles in the medium, rather than bulk masses as in X-ray tomography. The disadvantages include health hazards as well as the cost of the detectors. Although a major strength is that sufficient access is possible for the detection equipment, to date its application has been restricted to that of a research tool rather than process monitoring (Parker and McNeil, 1996).

2.2.7    Mutual Inductance Tomography

Electromagnetic tomography (EMT) is based on the measurement of complex mutual inductance. This method is relatively new and is so far largely unexploited for process tomography applications (Peyton, 1995). This form of EMT could more accurately be termed mutual inductance tomography and can extract data on permeability (μ) and conductivity (σ) distributions. A typical EMT system is shown in Figure 2.4.

Figure 2.4: Block diagram of a typical EMT system

The principle of operation of EMT can be split into four generic operations (Peyton et al., 1999):

•    Excitation of the region of interest with an energizing field.

•    Distortion of the field contours by the material positioned within the object space.

•    Boundary measurement of the resulting peripheral field values.

•    Image reconstruction, often termed the inverse problem, which involves converting the measurements back into an image of the original material distribution.

EMT systems are dependent on the configuration of the sensor array, the operating frequencies, and the type of object material to be measured (Peyton, 1995). For example, an EMT system developed by Yu et al. (1993) employs a parallel excitation field when the object space is empty and concerns the detection of electrically conductive and ferromagnetic materials. The images obtained from the system are, on the whole, representative of the real object distribution; however, the quality is restricted by the limited number of projections, and reconstruction algorithms such as the linear or filtered back-projection methods do not accurately take into account the considerable variations in the actual distribution of the magnetic flux caused by the presence of objects (Peyton, 1995). In addition, a high hardware cost is required to generate high-intensity magnetic fields capable of penetrating large objects (Daniels, 1996).

2.2.8    Microwave Tomography

Tomographic imaging by microwaves can make use of the diffraction properties of electromagnetic fields; this contrasts with the capacitive, resistive, and inductive methods of tomography, which are based on line-integral effects of their fields. Therefore microwaves have advantages in imaging objects where the material boundaries or grain size have dimensions comparable to the microwave wavelengths employed (Plaskowski et al., 1997). However, the mechanisms of interaction between microwaves and materials are rather complicated (Bolomey, 1995). Microwave sensors have been considered for use in industrial testing (Nyfor and Vainekainen, 1989). A possible industrial application of microwave tomography is the inspection of conveyed products (Bolomey, 1995). Conveyed products, especially when they are not too thick, offer a convenient configuration for microwave inspection, and the products are often primary products to be used for further transformation, e.g. paper or wood (Bolomey and Pichot, 1991). With respect to other existing inspection modalities, microwaves offer non-contacting capabilities, no thermal or ionizing effects, relative immunity against environmental constraints, as well as significant contrast with respect to factors such as humidity and temperature (Bolomey, 1995).

2.2.9    Optical Tomography

Optical systems can be used where the conveyed object is transparent or opaque to the incident optical radiation. The method involves projecting a beam from a light source (e.g. an LED, infra-red source, or halogen bulb) through a medium from one boundary point and detecting the level of light received at another boundary using a sensor (e.g. a phototransistor, photomultiplier, or photodiode) (Abdul Rahim, 1996). The type of projection used in the system can be parallel beam (orthogonal) projection, rectilinear projection, fan beam projection or a mix of these projections (Pang, 2004). The voltage generated by the sensor is related to the amount of attenuation of the light beam along its path, caused by the flow regime, using a suitable electronic circuit capable of measuring physical parameters that contain information on the flow regime. The analogue signal is then converted into digital form before being passed to a measurement system in order to perform the analysis (e.g. image reconstruction). The advantages of optical sensors are (Jackson, 1995):

•    The technique is non-intrusive.

•    Response times are negligible.

•    Small wavelengths can potentially provide high resolution.

•    The propagation of light through media can be functionally described.

•    Emitters and detectors for various absorption/transmission windows are readily available.

•    Measurements are immune to electrical noise/interference.

•    At low light energies it is intrinsically safe.

The disadvantages of using light are:

•    Many materials forming the object fields are opaque, either due to absorption or multiple scattering.

•    When considering applications in the process industries, many reactors and pipes cannot be made of transparent materials or be permitted to have windows or lead-throughs for safety reasons.

•    With inhomogeneous fields, multiple reflection/refraction may make a functional interpretation of the received projection very difficult.

Optical tomography can also give a good spatial resolution of 1 mm by using small lenses or optical fibers (Green et al., 1996; Abdul Rahim et al., 1996; Ibrahim et al., 1999; Ramli et al., 1999). Furthermore, Yang and Szuster (1996) developed a low-cost serial data transmission link using transputer link adaptor chips and optical fibers for data communication between the sensing electronics and an image reconstruction computer. Zeni et al. (2000) proposed a new technique based on optical tomography that is able to reconstruct the doping profiles in semiconductor wafers starting from reflected intensity measurements taken at infrared wavelengths. In that research, several numerical simulations showed the effectiveness of the proposed approach; in particular, the reconstruction of typical shallow doping profiles, generated by a process simulator, was performed with relatively high accuracy. Feng et al. (2002) presented the visualization of a 3D gas density distribution, which arose from a safety analysis of a high-temperature gas-cooled reactor. The density distribution is reconstructed from holographic interferograms using the techniques of computed tomography. In the reconstruction process, a new reduced bandlimit technique is used in the filtered back-projection algorithm to deal with the truncated projection data with noise. The results reconstructed from 12 view directions are verified qualitatively by an oxygen molarity detector. This shows that the filtered back-projection algorithm, integrated with the reduced bandlimit technique, can reconstruct the 3D density distribution from truncated and noisy projections, and that holographic interferometry is a non-disturbing and powerful tool in flow visualization for 3D gas flows. Rzasa and Plaskowski (2002) discussed the design and application study of a prototype optical tomography system for the detection and definition of the shapes of gas bubbles moving in a liquid. In the applied measurement method, the column is exposed to a homogeneous light beam and the rays are then detected with the use of optical waveguide detectors. Spatial reconstruction of the bubble shapes is done on the basis of the signals coming from two perpendicular systems of detectors. In this design, the bubble shape is approximated by an ellipsoid with suitably determined semi-axes, with the use of genetic algorithms based on a neural network. A novel optical fiber process tomography (OFPT) probe was developed by Yan et al. (2005), in which eight optical fiber sensor units are uniformly distributed around a cylindrical container. Each unit comprises three optical fiber collimators, one photodetector and one optical window. The window is specially designed to let the light pass through while keeping the container waterproof. All devices in the probe are simply and easily collimated owing to exact calculation and precision manufacture. In that research, genetic algorithms (GAs) are used. The experimental results show that the novel OFPT system can measure gas–solid two-component distributions.

2.3    Infra-red Characteristics

Human eyes are detectors designed to detect visible light waves (or visible radiation). Visible light is one of the few types of radiation that can penetrate less-opaque material. There are forms of light (or radiation) which humans cannot see, as human beings can only see a very small part of the entire radiation spectrum, called the electromagnetic spectrum, shown in Figure 2.5.


Figure 2.5: The electromagnetic spectrum

The electromagnetic spectrum includes gamma rays, X-rays, ultraviolet, visible, infra-red, microwaves, and radio waves. The only difference between these types of radiation is their wavelength or frequency. Wavelength increases as frequency (as well as energy and temperature) decreases from gamma rays to radio waves. All of these forms of radiation travel at the speed of light (186,000 miles or 300,000,000 meters per second in a vacuum). Infra-red radiation lies between the visible and microwave portions of the electromagnetic spectrum. Infra-red waves have wavelengths longer than visible light and shorter than microwaves, and frequencies lower than visible light and higher than microwaves. Infra-red is divided into three categories: near, mid and far infra-red. Near infra-red refers to the part of the infra-red spectrum that is closest to visible light and far infra-red refers to the part that is closer to the microwave region. Mid infra-red is the region between these two. There are two methods of infra-red signalling:

i.    Direct infra-red signal – a direct infra-red signal needs a clear line of sight to make a connection. The most familiar direct-signal device is the TV remote control, in which a connection is made by transmitting data using two different intensities of infra-red light to represent the 1s and 0s. Infra-red light is transmitted in a 30-degree cone, giving some, but not much, flexibility in the orientation of the equipment. Direct connections also have limitations, one of which is the range, which is usually less than 3 feet. In addition, because a clear line of sight is needed, the equipment must point towards the general area of the receiver or the connection is lost.

ii.    Diffuse infra-red signal – a diffuse infra-red signal operates by flooding an area with infra-red light, in much the same way as a conventional light bulb illuminates a room. The light bounces off the walls and ceiling so that a receiver can pick up the signal regardless of orientation. The advantage of this signal is the freedom of movement it allows, but it is restricted to a certain range, such as a room; it does not go through a wall.

From the discussion above, the advantages of infra-red light can be summarized as: simple to implement, low power consumption, low circuitry costs, invulnerability to interference from traditional sources, the signal cannot be jammed (diffused), and high noise immunity. The disadvantages are: it cannot penetrate a solid object, it requires a direct line of sight (small half angle), and its response range is typically 3 feet.

2.4    Summary

In this chapter, a brief introduction to process tomography, a brief discussion on how to choose the sensing method, a discussion on the types of process tomography and a discussion on the characteristics of infra-red have been presented.

CHAPTER 3

MATHEMATICAL MODELING

3.1    Introduction

Image reconstruction is the application of a particular transform in order to obtain an image from projections, such as the Radon transform, the linear back projection transform, filtered linear back projection (FLBP), and hybrid reconstruction algorithms (Ibrahim, 2000). A line integral, as the name implies, represents the integral of some parameter of the object along a line. This section discusses a typical example of the attenuation of incident infra-red light reaching a photodiode (Vanzetti, 1972) as it traverses the circumference of dropping particles. The two-dimensional model of the infra-red attenuation constant and a line integral represent the total attenuation suffered by a beam of infra-red light as it travels in a straight line through a modeled object. The coordinate systems defined in Figure 3.1 are used to describe line integrals and projections. The object is represented by a two-dimensional function f(x, y) and each line integral is represented by the projection angle φ and a detector position x' (Kak and Slaney, 1999). The equation of line AB in Figure 3.1 is

x cos φ + y sin φ = x'                                                          (3.1)

where:

⎡x'⎤   ⎡ cos φ   sin φ⎤ ⎡x⎤
⎢  ⎥ = ⎢               ⎥ ⎢ ⎥                                                    (3.2)
⎣y'⎦   ⎣−sin φ   cos φ⎦ ⎣y⎦

Figure 3.1: An object f(x, y) and its projection pφ(x'₁), shown for angle φ

where:
pφ(x'₁)  = projection data for line AB.
x'₁      = the coordinate of line AB in the x' plane.

The projection data can be written in discrete form as (Brown et al., 1999):

pφ(x') = Σ (over y') f(x' cos φ − y' sin φ, x' sin φ + y' cos φ) Δy'            (3.3)
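As an illustration of Equation 3.3, the short C++ sketch below sums an object function along the line x cos φ + y sin φ = x' over a discrete grid. It is a minimal sketch written for this report, not the thesis code of Appendix F; the 16 × 16 grid, the shape of the object function f and the half-pixel tolerance used to decide whether a grid point lies on the ray are assumptions made for illustration only.

#include <cmath>
#include <cstdio>

// f(x, y): a hypothetical object function on a 16 x 16 pixel grid,
// 1.0 where an object is present and 0.0 elsewhere (purely illustrative).
double f(int x, int y) {
    return (x >= 2 && x <= 4 && y >= 5 && y <= 7) ? 1.0 : 0.0;
}

// Discrete form of Equation 3.3: sum f along the ray x cos(phi) + y sin(phi) = xp.
double projection(double xp, double phiDeg) {
    const double pi = std::acos(-1.0);
    const double phi = phiDeg * pi / 180.0;
    const double dy = 1.0;                 // grid spacing, assumed to be one pixel
    double sum = 0.0;
    for (int y = 0; y < 16; ++y)
        for (int x = 0; x < 16; ++x)
            // a grid point contributes if it lies within half a pixel of the ray
            if (std::fabs(x * std::cos(phi) + y * std::sin(phi) - xp) < 0.5)
                sum += f(x, y) * dy;
    return sum;
}

int main() {
    // projection value for detector position x' = 3 at the 0 degree projection
    std::printf("p_0(3) = %f\n", projection(3.0, 0.0));
    return 0;
}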

3.2    Measurement Configuration

In medical tomography, there are many projections between 0° and 180°, with the smallest interval being 1°. In this project, the projections employ a set of sensors (16 × 16) arranged in orthogonal and diagonal parallel positions, each with 16 sensors, equally spaced outside a pipe with an 82 mm outer diameter and a 78 mm inner diameter at the 0°, 45°, 90°, and 135° projections. The orthogonal projections have a total of 32 sensors and the diagonal projections have 32 sensors. Hence, for both the upstream and downstream positions, the total number of sensors used is 128 pairs of optical sensors. Each light transmitter made use of an infra-red LED connected to a fiber optic. Each light receiver consists of a photodiode (having a flat-top diameter of 5.5 mm) linked to a fiber optic. Figure 3.2 shows the arrangement of fiber optic holes for each projection.

Figure 3.2: The arrangement of fiber optic holes for each projection

Each part (downstream/upstream) consists of a combination of two orthogonal (0°, 90°) and two diagonal (45°, 135°) projections. The diagonal projections were arranged in a similar manner to the orthogonal projections. The light emitted and received by the sensors through the fiber optics was collimated using 1.5 mm diameter holes, as shown in Figure 3.3. The flow pipe is mapped onto 16 × 16 pixels (Figure 3.3). Each pixel has a dimension of 4.875 mm × 4.875 mm (78 mm/16 sensors). The fiber optic (with an outer diameter of 2.3 mm) was placed at the centre of each pixel.

Figure 3.3: The fixture cross section rear view

The 64 upstream emitters are labeled Tux0 to Tux63 and the 64 upstream receivers are designated Rux0 to Rux63, whereas Tdx0 to Tdx63 are designated for the 64 downstream emitters and Rdx0 to Rdx63 for the 64 downstream receivers. The sensors are arranged numerically across the four projections (0°, 45°, 90°, and 135°) as shown in Figure 3.4. A controller unit based on a PIC controls the projections using a generated burst clock that simultaneously switches on the emitters (128 pairs) for both upstream and downstream. The overall frequency of the emitters is set to 88.03 Hz. The emitted infra-red light causes the receivers (photodiodes) to produce a small current that is proportional to the intensity of the infra-red rays. The conditioning circuit converts the current into a voltage signal. The DAS (data acquisition system) card KPCI-1802HC, with 64 channels and 333k samples per second, was used to capture the data. Since the DAS can only read 64 readings at a time, the system was configured so that the DAS reads the 64 upstream readings first and then takes the readings from the 64 downstream sensors. Details of the hardware operation are discussed in Chapter 4. The overall investigated area is tabulated using a scale from (−528, 495) to (495, −528), giving a total of 1024 × 1024 points named x1 and y1, while the 78 mm diameter pipe is mapped onto 512 × 512 points scaled from (−272, 239) to (239, −272). By default, the pipe area is divided into 16 sections, giving each section 32 points (512/16), and the fiber optic was centrally positioned in each section. Since each collimated hole is 1.5 mm and each section is 4.875 mm wide with 32 points, the hole width on the new scale is 1.5/4.875 × 32 ≈ 10 points. The important part of the modeling is to reference x' to the overall investigated area; x' can be described as the infra-red light beam's path from emitter to receiver. Table 3.1 shows the tabulated x' and scaled x1'. Assumptions made to simplify the modeling are:

i.    The light (individual projections) transmitted from a fiber optic transmitter that travels through the pipe area is assumed to be dispersed only to the matching fiber optic receiver and not to the adjacent fiber optics (Figure 3.8).

ii.    The total number of sections is 32, with 16 temporary and 16 permanent sections, where the temporary sections represent the pixels/areas outside the flow pipe and the permanent sections represent the pixels/areas inside the flow pipe.


Figure 3.4(a): Fiber optics configuration at 0˚, where u and d stand for upstream and downstream respectively

Figure 3.4(b): Fiber optics configuration at 45˚, where u and d stand for upstream and downstream respectively


Figure 3.4(c): Fiber optics configuration at 90˚, where u and d stand for upstream and downstream respectively

Figure 3.4(d): Fiber optics configuration at 135˚, where u and d stand for upstream and downstream respectively


Table 3.1: x1’ values for 64 upstream and downstream sensors x’ -16 -15 -14 -13 -12 -11 -10 -9 -8 -7 -6 -5 -4 -3 -2 -1

Receiver Ru,dtemporary0 Ru,dtemporary1 Ru,dtemporary2 Ru,dtemporary3 Ru,dtemporary4 Ru,dtemporary5 Ru,dtemporary6 Ru,dtemporary7 Ru,dx0,16,32,48 Ru,dx1,17,33,49 Ru,dx2,18,34,50 Ru,dx3,19,35,51 Ru,dx4,20,36,52 Ru,dx5,21,37,53 Ru,dx6,22,38,54 Ru,dx7,23,39,55

x1’ rescale -517 to -508 -485 to -476 -453 to -444 -421 to -412 -389 to -380 -357 to -348 -325 to -316 -293 to -284 -261 to -252 -229 to -220 -197 to -188 -165 to -156 -133 to -124 -101 to -92 -69 to -60 -37 to -28

x’ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Receiver x1’ rescale Ru,dx8,24,40,56 -5 to 4 Ru,dx9,25,41,57 27 to 36 Ru,dx10,26,42,58 59 to 68 Ru,dx11,27,43,59 91 to 100 Ru,dx12,28,44,60 123 to 132 Ru,dx13,29,45,61 155 to 164 Ru,dx14,30,46,62 187 to 196 Ru,dx15,31,47,63 219 to 228 Ru,dtemporary8 251 to 260 Ru,dtemporary9 283 to 292 Ru,dtemporary10 315 to 324 Ru,dtemporary11 347 to 356 Ru,dtemporary12 379 to 388 Ru,dtemporary13 411 to 420 Ru,dtemporary14 443 to 452 Ru,dtemporary15 475 to 484

The x1’ coordinates/nodes will be used as references, at every 4 projections, where they also represent the receivers’ positions in the scaled projection domain (x1’, y1’) to simplify the solution of forward problem in order to obtain the output of each sensor under no-flow and flow conditions. The forward problem will produce sensitivity matrices for each upper and lower plane and since both of them are assumed identical, the following discussion will cover both planes. By using the principle of projection discussed in Section 3.1, the relationship between scaled spatial domain (x1, y1) and scaled projection domain (x1’, y1’) could be seen, the pipe area is assumed to be a scaled image/object plane f(x1, y1). The image plane is divided into small rectangles where the dimension is equal to the decided sensitivity matrix’s dimension (Chan, 2003). When light travels through the divided rectangle, it gives the sensitivity matrices a value that represents the amount of point on the light’s beam for the corresponding elements in image plane. The sensitivity matrices can be achieved from the relationship of Equation 3.1, scaled spatial coordinates (x1, y1), and projection parameters (x1’, φ ).

The process was implemented for the 0°, 45°, 90°, and 135° projections with different sensitivity element data names, and the results were stored in "file.dat" format. The reason temporary x1' values are required is that certain values of x1' at the 45° and 135° projections exceed the real x1'; for example, at point (239, 239) and at 45°, x1' = 239 × cos 45° + 239 × sin 45° = 338. However, the temporary x1' values will be neglected later as the pixels outside the flow pipe are zeroed. Sensitivity map resolution is an important design aspect, and therefore different mapping resolutions provide opportunities to investigate the accuracy of image reconstruction for different sizes of flowing particles (Chan, 2003). This project investigated three different resolutions:

i.    8 pixels × 8 pixels,

ii.   16 pixels × 16 pixels, and

iii.  32 pixels × 32 pixels.

Figures 3.5(a), (b), and (c) show how the flow pipe area ((x, y)) is divided at each resolution. The sensitivity map of the infra-red views (Vx',φ(x, y)) before the pixels outside the flow pipe are zeroed, at a 32 × 32 pixels resolution and at the 45° projection, is shown in Appendix A.I.

Figure 3.5(a): The flow pipe area divided into an 8 × 8 pixels resolution map


Figure 3.5(b): The flow pipe area divided into a 16 × 16 pixels resolution map

Figure 3.5(c): The flow pipe area divided into a 32 × 32 pixels resolution map

After the pixels outside the flow pipe are zeroed, the investigated area can be narrowed down to 512 × 512 coordinates/points, where the image plane is divided into n × n small rectangles (pixels), n being the desired dimension of the sensitivity map. The programming of the sensitivity matrices was carried out using the

Microsoft Visual C++ software. From the previous discussion, the x1' values point out the positions of the receivers (Ru,d) and the size of the infra-red light beam/view. By counting the total number of x1' values that match the condition expressed in Equation 3.4, the program measures the total number of infra-red light beam/view points occupying each xy rectangle, as shown in Figure 3.7. The values obtained provide a relative measurement of how many points in a small rectangle are occupied by a specific light beam/view. The sensitivity matrices for the infra-red light views after the pixels outside the flow pipe are zeroed, at a 32 × 32 pixels resolution and the 45° projection, are given in Appendix A.II. Equation 3.4 represents the algorithm in Figure 3.6 in mathematical form (Chan, 2003).

Mx',φ(x, y) = Σ (over y1 = my to ly, x1 = lx to mx) Ex,y, where for each point (x1, y1):
    Ex,y += 1   if x1'0,1...t == x1 cos φ + y1 sin φ and (x1 + 16)² + (y1 + 16)² ≤ 256²
    Ex,y += 0   otherwise                                                       (3.4)

where:
Mx',φ(x, y)  = sensitivity map for the view of x' at the φ projection after the pixels outside the flow pipe are zeroed.
Ex,y         = the element used to represent an individual xy rectangle. The values of the element are obtained according to the condition in Equation 3.4.
lx           = −272 + ((512/n − 1) × (x + n/2)) + (x + n/2)
mx           = −272 + ((512/n − 1) × (x + n/2 + 1)) + (x + n/2)
ly           = 239 − ((512/n − 1) × (n/2 − (y + 1))) + (n/2 − (y + 1))
my           = 239 − ((512/n − 1) × (n/2 − (y + 1) + 1)) + (n/2 − (y + 1))
x1'0,1...t   = the receivers' positions in the scaled projection domain, where t = 10.

Both x and y are in the range from −8 to 7 for n = 16 (Figure 3.5(b)), −4 to 3 for n = 8 (Figure 3.5(a)) and −16 to 15 for n = 32 (Figure 3.5(c)). Both x1 and y1 are in the range from −272 to 239.
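The counting procedure of Equation 3.4 can be illustrated with the short C++ sketch below, which, for one view at one projection angle, counts how many grid points of the light beam fall inside each pixel of an n × n map. It is a minimal sketch written for this report rather than the thesis code of the appendices: the 512 × 512 pipe grid, the 256-point pipe radius and the simple integer division used to assign a grid point to a pixel follow the description above, while everything else is an illustrative assumption.

#include <cmath>
#include <vector>

// Build the (unnormalized) sensitivity map for one view at projection angle phi,
// following the counting idea of Equation 3.4. n is the map resolution (8, 16 or 32).
// [x1pLow, x1pHigh] is the scaled x1' range of this view (about 10 points wide).
std::vector<std::vector<int>> sensitivityMap(int n, double x1pLow, double x1pHigh, double phiDeg) {
    const double pi = std::acos(-1.0);
    const double phi = phiDeg * pi / 180.0;
    std::vector<std::vector<int>> M(n, std::vector<int>(n, 0));
    // scan every point of the 512 x 512 pipe grid, x1 and y1 from -272 to 239
    for (int y1 = -272; y1 <= 239; ++y1) {
        for (int x1 = -272; x1 <= 239; ++x1) {
            // keep only points inside the 256-point-radius flow pipe
            if ((x1 + 16.0) * (x1 + 16.0) + (y1 + 16.0) * (y1 + 16.0) > 256.0 * 256.0) continue;
            // keep only points lying on the beam x1 cos(phi) + y1 sin(phi) = x1'
            double proj = x1 * std::cos(phi) + y1 * std::sin(phi);
            if (proj < x1pLow || proj > x1pHigh) continue;
            // assign the point to a pixel of the n x n map (512/n points per pixel)
            int px = (x1 + 272) / (512 / n);
            int py = (239 - y1) / (512 / n);
            if (px >= 0 && px < n && py >= 0 && py < n) ++M[py][px];
        }
    }
    return M;
}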


Figure 3.6: A flow chart for producing sensitivity map for views of x’ at 0˚ projection and 16 × 16 pixels resolution

Figure 3.7 shows how an infra-red light ray is projected from emitter Tu,dx18 to receiver Ru,dx18 with a resolution of 16 × 16 pixels at the 45° projection. Figure 3.7 also depicts the values displayed for each rectangle, which represent the total number of infra-red light points occupying the rectangle, or the total number of x1,y1 coordinates equal to the x1' value of Tu,dx18. From Figure 3.7 we can see that the element in pixel E−5,−4 contains the value 217. Since the image reconstruction algorithm is based on Equation 3.1, each corresponding rectangle is determined mathematically, and only one x' containing certain element values is picked at every projection. For example, the rectangle positioned at (−5, −4) at the 45° projection gives x' = −5 × cos 45° + (−4) × sin 45° ≈ −6. Each projection has its own sensitivity map represented by a sensitivity matrix. The equation for the above conditions can be represented as:

Sφ(x, y) = Mx',φ(x, y)   for x' = x cos φ + y sin φ                             (3.5)

where:
Sφ(x, y)     = sensitivity map for each projection.
Mx',φ(x, y)  = sensitivity map for the view of x' at the φ projection.

Both x and y are in the range from −8 to 7 for n = 16 (Figure 3.5(b)), −4 to 3 for n = 8 (Figure 3.5(a)) and −16 to 15 for n = 32 (Figure 3.5(c)).

Figure 3.7: A view from the 18th emitter Tu,dx18 to the 18th receiver Ru,dx18

However, this measurement does not provide the net contribution percentage of a specific infra-red light beam in a specific rectangle, because the distribution of the infra-red light beam over part of the rectangle is non-uniform. Therefore, the normalized sensitivity map can be determined based on the contribution of each infra-red light beam/view in a specific rectangle. The total and normalized sensitivities shown in Appendix A.III are based on the following equations.

Smax(x, y) = Σ (over φ = 0°, 45°, 90° and 135°) Sφ(x, y)                        (3.6)

S̄φ(x, y) = Sφ(x, y) / Smax(x, y)                                                (3.7)

where:
Smax(x, y) = total distribution of the infra-red light beam in a specific rectangle.
Sφ(x, y)   = sensitivity map for each projection.
S̄φ(x, y)   = normalized sensitivity map for each projection.

Both x and y are in the range from −8 to 7 for n = 16 (Figure 3.5(b)), −4 to 3 for n = 8 (Figure 3.5(a)) and −16 to 15 for n = 32 (Figure 3.5(c)).
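Equations 3.6 and 3.7 amount to a per-pixel normalization across the four projection maps; a minimal C++ sketch is given below, assuming the four maps produced by the counting step have already been computed (the container type and the handling of pixels crossed by no beam are illustrative assumptions).

#include <vector>

// Normalize four per-projection sensitivity maps in place (Equations 3.6 and 3.7).
// maps[p][y][x] is the raw count for projection p (0, 45, 90, 135 degrees).
void normalizeMaps(std::vector<std::vector<std::vector<double>>>& maps) {
    const std::size_t n = maps[0].size();
    for (std::size_t y = 0; y < n; ++y) {
        for (std::size_t x = 0; x < n; ++x) {
            double sMax = 0.0;                        // Smax(x, y), Equation 3.6
            for (const auto& m : maps) sMax += m[y][x];
            if (sMax <= 0.0) continue;                // pixel crossed by no beam
            for (auto& m : maps) m[y][x] /= sMax;     // normalized map, Equation 3.7
        }
    }
}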

3.3    Sensor Modeling

The inverse problem estimates the distribution of material within the pipe which produces the measured sensor outputs (Ibrahim, 2000). As stated in Ibrahim's research, both the path length model and the optical attenuation model have been employed in optical tomography, owing to the different signal conditioning techniques, where AC coupling measures output voltages proportional to the particle flow rates (Ibrahim et al., 1999). The current project is based on the measurement of light intensity (Chan, 2003), so the optical path length model is not utilized. The investigated flowing particles are a solid material with a high absorbance characteristic (Chan, 2003), so the optical attenuation model is also not suitable.

Basically, this project models the sensor output based on isocentric sets of parallel projections, where individual projections have the same infra-red light beam width along a straight line and the conveyed particles obstruct the beam. The attenuation factor for air is assumed to be zero while the attenuation factor for a solid particle is assumed to be one (Chan, 2003), and all light incident on the surface of a solid particle is fully absorbed by the particle (Chan, 2003). Since this project applies parallel projections, only one example of an individual projection is required to complete the modeling. Figure 3.8 shows the beam (view) from emitter Tu,dx03 to receiver Ru,dx03 with a 16 × 16 pixels resolution, and the maximum number of lines in the infra-red light beam is obtained from:

Dn = x1'left − x1'right                                                         (3.8)

where:
Dn       = maximum number of lines in the infra-red light beam.
x1'left  = left boundary of the transmitter/receiver.
x1'right = right boundary of the transmitter/receiver.

Figure 3.8: A beam (view) from Tu,dx03 to Ru,dx03 with a 16 × 16 pixels resolution

3.3.1    Sensor Output during No-Flow Condition

The previous discussion clearly shows that during no-flow the infra-red light is totally received by the sensor and the total number of lines is fixed for every view at the chosen resolution. Each line represents a step of the received voltage's amplitude in digital value (Chan, 2003), according to Vstep = 5 V/Dn. Therefore, the sensor output during the no-flow condition is equal to the standardized value of each view, which was assumed to be 5 volts, and can be represented as:

VrefTx,Rx(x', φ) = Vcal                                                         (3.9)

where:
VrefTx,Rx(x', φ) = sensor reading for the view from emitter Tx to receiver Rx during no-flow (V).
Vcal             = the standardized/calibrated value of each view, assumed to be 5 V.

3.3.2    Sensor Output during Flow Condition

Based on the above principle, a flowing particle can block the passage of light from the transmitter to the receiver. Hence, the modeled sensor output is directly proportional to the total number of light lines absorbed. The two main aspects of the flow modeling are:

i.    Determination of the coordinates (x1, y1) which represent the positions of the modeled object.

ii.   Comparison of each receiver coordinate x1' (Table 3.1) with the coordinates x1, y1 using Equation 3.1.

Each corresponding coordinate can be considered as a light line. Figure 3.9 illustrates the modeling for a single pixel flow (fx,y(x1,y1)0). For this flow model, each pixel can be fixed to contain 32 × 32 = 1024 x1,y1 coordinates (Section 3.2), and the location of the pixel model in the x,y plane is f−5,2(x1,y1)0. It can be observed that for this single pixel flow model, the location of fx,y(x1,y1)0 as a whole can prevent light from being received by receiver Ru,dx03. Therefore, if the total number of light lines is computed it will be 10 × 32 = 320 light lines, but only a value of 10 is needed to represent the total maximum number of light lines (Section 3.2). Hence, a sum checker is required for this computation. The equations below represent the above conditions (Chan, 2003):

DTx,Rx(x', φ) = Σ (over y1 = −272 to 239, x1 = −272 to 239) Sum, where for each point (x1, y1):
    Sum += 1 and c'[t][Tx,Rx] = 1   if x1'0,1...t == x1 cos φ + y1 sin φ, fx,y(x1, y1) > 0 and c'[t][Tx,Rx] == 0
    Sum = 10                        else if Sum > 10
    Sum += 0                        otherwise                                   (3.10)

VmTx,Rx(x', φ) = (DTx,Rx(x', φ) / Dn) × VrefTx,Rx                               (3.11)

where:
DTx,Rx(x', φ)  = total number of infra-red light beam lines for each receiver/sensor.
Sum            = total infra-red light beam sum-checker.
c'[t][Tx,Rx]   = a series of variables used to check whether there is another object in the same path.
VmTx,Rx(x', φ) = sensor reading during the flow condition (V).
Dn             = maximum number of lines in the infra-red light beam.

Figure 3.9: Beam (view) from Tu,dx03 to Ru,dx03 blocked by object f5,2(x1,y1)0
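A compact C++ sketch of this counting and scaling step is given below; it mirrors Equations 3.10 and 3.11 for a single view, with the object function, the per-view beam boundaries, the per-line flag playing the role of c'[t][Tx,Rx], and the 5 V reference treated as illustrative assumptions rather than the thesis implementation.

#include <cmath>
#include <functional>

// Modeled sensor reading for one view (Equations 3.10 and 3.11).
// isObject(x1, y1) returns true where the modeled flow object is present.
double modeledReading(double x1pLow, double x1pHigh, double phiDeg,
                      const std::function<bool(int, int)>& isObject,
                      double Vref = 5.0, int Dn = 10) {
    const double pi = std::acos(-1.0);
    const double phi = phiDeg * pi / 180.0;
    bool flagged[512] = {false};   // one flag per beam line, like c'[t][Tx,Rx]
    int sum = 0;                   // the sum-checker of Equation 3.10
    for (int y1 = -272; y1 <= 239; ++y1) {
        for (int x1 = -272; x1 <= 239; ++x1) {
            double proj = x1 * std::cos(phi) + y1 * std::sin(phi);
            if (proj < x1pLow || proj > x1pHigh) continue;        // point not on this beam
            int line = static_cast<int>(std::lround(proj - x1pLow));
            if (isObject(x1, y1) && !flagged[line]) {             // an affected line not yet counted
                flagged[line] = true;
                if (sum < Dn) ++sum;                              // cap the count at Dn (10 lines)
            }
        }
    }
    return (static_cast<double>(sum) / Dn) * Vref;                // Equation 3.11
}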

3.4    Software Signal Conditioning

The image reconstruction is the qualitative representation of the sensor signal readings converted from the data acquisition system. Direct integration of the modeled signal into the image reconstruction algorithm will not provide the best output because of the non-uniform output signal during the no-flow condition. Such a non-uniform no-flow signal also occurs in most process tomography measurements, such as ECT, EIT, PET, and ultrasonic tomography (Xie, 1995). The improvement of the digitised modeled output signal can be carried out by two methods: software and hardware. Software is the implementation of the solution through the programming of a general-purpose processor/computer, which offers flexibility. Hardware is the implementation of the solution through custom hardware; it has the advantage of speed and size, but its major disadvantage is a longer design and implementation time. This project improved the non-uniform measurement by a hardware method, controlling the emitter current (see sub-section 4.2.4.1), and a software method, in which a differential measurement was used because the lower limit of the reading is zero (no infra-red light received by the photodiode) and the upper limit can be predetermined under the no-flow condition. In sub-section 3.3.1 the expected sensor voltage during no-flow is treated as the reference voltage, while in the experiment the reference voltage was obtained from a calibration process. Equation 3.12 is used to determine the signal loss for the projections.

VsTx,Rx(x', φ) = VrefTx,Rx(x', φ) − VTx,Rx(x', φ)                               (3.12)

where:
VsTx,Rx(x', φ)   = amplitude of the signal loss of the receiver for the Tx to Rx view.
VrefTx,Rx(x', φ) = receiver signal under the no-flow condition.
VTx,Rx(x', φ)    = received signal from the transmitter during the flow condition.

The use of differential measurement in this project resulted in projection data similar in form to that of the previous research carried out by Chan (2003). The output signals from the circuit are proportional to the received light, which means a higher flow volume results in a lower output signal. Equation 3.11 inverts the relation of flow to sensor output (voltage), so that the signals used to reconstruct the image become proportional to the volume flow.

3.5    Image Reconstruction Algorithms

Image processing is the application of a particular process to an image in order to obtain another image. Depending on the physical principle of the sensing system, the reconstructed image, in essence, may contain information on the cross-sectional distribution of a constituent parameter of the object; for example, the permittivity distribution for an electrical capacitance system (Xie, 1995). In Section 3.1 it was discussed that the projection data pφ(x') is the sum of line integrals along the normal straight-line paths (Hoyle et al., 1995). The projection data pφ(x') will be the base data for the image to be reconstructed. Therefore, the value of the projection data equals the measured amplitude of the signal loss of the receiver for the Tx to Rx view:

pφ(x') = VsTx,Rx(x', φ)                                                         (3.13)

where:
VsTx,Rx(x', φ) = amplitude of the signal loss of the receiver for the Tx to Rx view.
pφ(x')         = projection data.

For a given point (x, y) of the image plane and for a projection direction φ, a corresponding projection value is placed at x' = x cos φ + y sin φ. For all projections, the contributions to the point (x, y) are summed and multiplied by its computed sensitivity. This mechanism is called back projection; it is also called the summation method. The discrete back projection can be written as (Brown et al., 1999):

f̂(x, y) = Σ (m = 0 to M−1) [ Σ (n = 0 to N−1) VsTx,Rx(x'n, φm) · S̄φm(x, y) Δx' ] Δφm          (3.14)

where:
f̂(x, y)          = approximation of the object function in volts.
VsTx,Rx(x'n, φm) = amplitude of the signal loss of the receiver for the Tx to Rx view, which is equal to the projection data.
S̄φm(x, y)        = normalized sensitivity map for each projection.
φm               = the m-th projection angle.
x'n              = the n-th receiver position.
Δφ               = angular distance between projections, with the summation extending over all M projections.
N                = total number of receivers.
M                = total number of projections.

The programming code of the back projection is given in Appendix F.II.
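The thesis back-projection code itself is in Appendix F.II; as a rough stand-in, the C++ sketch below applies Equation 3.14 with the signal-loss readings and per-view sensitivity maps assumed to have been obtained from the earlier steps. The array shapes, and taking Δx' = Δφ = 1, are simplifying assumptions of the sketch.

#include <vector>

// Linear back projection (Equation 3.14) for M projections of N views onto an n x n image.
// Vs[m][v] is the signal loss for view v at projection m; S[m][v][y][x] is the normalized
// sensitivity of pixel (x, y) to that view (zero for pixels the view does not cross).
std::vector<std::vector<double>> backProject(
    const std::vector<std::vector<double>>& Vs,
    const std::vector<std::vector<std::vector<std::vector<double>>>>& S,
    int n) {
    std::vector<std::vector<double>> img(n, std::vector<double>(n, 0.0));
    for (std::size_t m = 0; m < Vs.size(); ++m)              // projections (0, 45, 90, 135 degrees)
        for (std::size_t v = 0; v < Vs[m].size(); ++v)       // views/receivers in that projection
            for (int y = 0; y < n; ++y)
                for (int x = 0; x < n; ++x)
                    img[y][x] += Vs[m][v] * S[m][v][y][x];   // smear each measurement along its ray
    return img;
}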

Reconstructing the image in this way will, however, result in a blurred image (Badea, 2000). To improve the quality of the image, this project has categorized the reconstruction algorithm methods into four sections:

i.    Spatial domain – the domain in which an image is represented by intensities at given points in space. This is the most common representation for image data.

ii.   Frequency domain – the domain in which an image is represented by a sum of periodic signals with varying frequencies.

iii.  Frequency and spatial domain – the combination of the frequency and spatial domains.

iv.   Hybrid reconstruction algorithm – the hybrid reconstruction algorithm (Ibrahim, 2000) integrated with all the above reconstruction methods.

3.5.1    Spatial Domain

Once the reconstructed image has been obtained using linear back projection, two important expectations are that the quality of the reconstructed image can be improved and that empirical information can be extracted from the reconstructed image automatically. That is why image processing is needed to improve the image prior to further analysis or display. Image processing corrects errors in the image by the use of certain techniques. As discussed before, an image/object can be represented by the function f(x, y), which represents light intensity. This means that the value of f at any point (x, y) is proportional to the intensity or grey level (Plaskowski et al., 1997). In this project the number of pixels depends on 2^n, where n is the decided resolution, and the intensity is I = 2^m = 2^8 = 256 levels. The number of rows and columns (and consequently the number of pixels) changes with the application of the image and the availability of sensing methods to provide the required solution. The spatial domain technique implies enhancement using filtering techniques. The main objective is to highlight the flow pattern, that is, to improve the separation of the objects from the background. The technique requires a certain convolution mask, and by using the correct convolution mask the prominent peaks may appear in the object intensity display. A typical 3 × 3 convolution-filter mask is:

        ⎡ w4  w3  w2 ⎤
(1/N)   ⎢ w5  w0  w1 ⎥                                                          (3.15)
        ⎣ w6  w7  w8 ⎦

where:

N = Σ (i = 0 to 8) wi

The coefficients w0 to w8 can take any real value. The coefficient w0 is centered on the current pixel being examined; this pixel is modified and takes a new intensity value, which depends on the intensity values of the neighboring pixels and the coefficients w1 to w8. The process requires extracting a 3 × 3 area of pixels from a matrix frame of an image, where the centre pixel is P0, as shown below:

⎡ P4  P3  P2 ⎤
⎢ P5  P0  P1 ⎥
⎣ P6  P7  P8 ⎦

where the value of each element P0…P8 denotes the physical property of the material in the object space. In practice, this is measured as an intensity level. Suppose the intensity level matrix for the above pixel matrix is:

⎡ i4  i3  i2 ⎤
⎢ i5  i0  i1 ⎥
⎣ i6  i7  i8 ⎦

where i0 to i8 correspond to the intensity levels of pixels P0 to P8 respectively. Convolving the 3 × 3 convolution-filter mask with the pixel matrix of intensity values i0 to i8 results in the equation below:

g'0 = ( Σ (n = 0 to 8) wn gn ) / ( Σ (n = 0 to 8) wn )                          (3.16)

where:
wn = the convolution-filter mask value at position n.
gn = the pixel matrix value at position n.

This operation is performed on each and every pixel in the matrix and, as a result, the new pixel matrix will have new intensity levels. This project applies low pass filtering. Low pass filtering is used for suppressing noise. Noise in general has a higher spatial frequency spectrum than the normal components, and may effectively be smoothed by simple low pass filtering (Graham et al., 1997). Figure 3.10 shows the low pass convolution mask.

⎡ 1/9  1/9  1/9 ⎤
⎢ 1/9  1/9  1/9 ⎥
⎣ 1/9  1/9  1/9 ⎦

Figure 3.10: Low pass convolution mask
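As an illustration of applying Equation 3.16 with the 3 × 3 low pass mask of Figure 3.10, a minimal C++ sketch is shown below; the image container and the simple edge handling (pixels outside the image are skipped and the divisor adjusted accordingly) are assumptions made for the sketch only.

#include <vector>

// Apply the 3 x 3 averaging (low pass) mask to an image, following Equation 3.16.
std::vector<std::vector<double>> lowPassFilter(const std::vector<std::vector<double>>& img) {
    const int n = static_cast<int>(img.size());
    std::vector<std::vector<double>> out(n, std::vector<double>(n, 0.0));
    for (int y = 0; y < n; ++y) {
        for (int x = 0; x < n; ++x) {
            double sum = 0.0, weight = 0.0;
            for (int dy = -1; dy <= 1; ++dy) {
                for (int dx = -1; dx <= 1; ++dx) {
                    int yy = y + dy, xx = x + dx;
                    if (yy < 0 || yy >= n || xx < 0 || xx >= n) continue; // skip points outside the image
                    sum += img[yy][xx] / 9.0;      // mask coefficient 1/9 for every neighbour
                    weight += 1.0 / 9.0;
                }
            }
            out[y][x] = sum / weight;              // normalize by the sum of coefficients actually used
        }
    }
    return out;
}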

3.5.2    Frequency Domain

In the spatial domain, the image is retrieved using the discrete back projection and then enhanced using low pass and high pass convolution masks. Although the convolution mask improves the image quality, it is applied after the image has been reconstructed (Plaskowski et al., 1997); in the frequency domain, by contrast, the projection data is filtered using a certain windowing function/filter kernel before the image is reconstructed (Brown et al., 1999). This method is also called the CBP (convolution back projection) method and it is the most commonly used method in computed tomography for parallel beam projection data (Herman, 1980). The reasons for this are its ease of implementation combined with good accuracy, and the fact that it can be implemented in both the spatial/time and frequency domains. If the method is implemented in the time domain, the filter kernel has to be inverse transformed into the time domain, which can be slightly difficult. The frequency domain offers a direct implementation of the filter kernel, and the basic kernel is called the ramp function. First, each set of projection data has to be Fourier transformed. The fastest way to calculate the Fourier transform of a projection is to use the FFT algorithm, which requires only (5/2)N log N floating point operations to calculate the spectrum of the projection (Badea, 2000). The transformed projection data is filtered by multiplying the filter kernel with the FFT-transformed projection data; convolution in the spatial/time domain is multiplication in the frequency domain, and this can be described by the following equations (Brown et al., 1999):

p*φ(x') = F⁻¹[K(ω)] * VsTx,Rx(x', φ)                                            (3.17)

p*φ(ω) = K(ω) F[VsTx,Rx(x', φ)]                                                 (3.18)

where:
p*φ(x')        = convolved projection data in the time domain.
VsTx,Rx(x', φ) = amplitude of the signal loss of the receiver for the Tx to Rx view, which is equal to the projection data.
F⁻¹            = one-dimensional inverse Fourier transform.
K(ω)           = filter kernel in the frequency domain.

(3.19)

where:

K R (n ) = Modified Ramp filter kernel with exponential falloff

N

= Order of the filter/projection data bandwidth

A

=

− (N − n ) × (N − n )

(4.8)2

.

2π ⎧ ⎪α + (1 − α ) cos K H ( n) = ⎨ N −1 ⎪⎩0

N −1 2 otherwise for n ≤

(3.20)

where:

K H (n ) = Hanning and Hamming filter kernel

α

= for Hanning kernel α =0.5, and in the Hamming kernel α =0.54.

Figure 3.11 shows the filter kernels in the Fourier/frequency domain.

Figure 3.11: Filter kernels H(ω) in the frequency domain (ramp with exponential falloff, Hamming, and Hanning) plotted against frequency ω
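The two kernel families of Equations 3.19 and 3.20, plotted in Figure 3.11, are straightforward to tabulate. The C++ sketch below generates them for an N-point projection purely as an illustration of the formulas above; returning the kernels as std::vector and zero-filling the upper half are incidental choices of the sketch.

#include <cmath>
#include <vector>

// Modified ramp kernel with exponential falloff, Equation 3.19.
std::vector<double> rampKernel(int N) {
    std::vector<double> k(N, 0.0);
    for (int n = 0; n <= (N - 1) / 2; ++n) {
        double A = -static_cast<double>(N - n) * (N - n) / (4.8 * 4.8);
        k[n] = std::exp(A) * 0.75;
    }
    return k;
}

// Hamming (alpha = 0.54) or Hanning (alpha = 0.5) kernel, Equation 3.20.
std::vector<double> windowKernel(int N, double alpha) {
    const double pi = std::acos(-1.0);
    std::vector<double> k(N, 0.0);
    for (int n = 0; n <= (N - 1) / 2; ++n)
        k[n] = alpha + (1.0 - alpha) * std::cos(2.0 * pi * n / (N - 1));
    return k;
}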

The filtering stages of the filtered back projection can be summarized as below (a sketch of these stages is given after the list):

i.    Each row of the input projection data is converted into the frequency domain via the FFT,

ii.   Filtered with the chosen filter kernel (ramp, Hamming, or Hanning),

iii.  Converted back into the spatial domain via an inverse FFT, and

iv.   The image is reconstructed using Equation 3.14.
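The sketch below, written for this report, filters one projection row in the frequency domain as described in stages i–iii; a naive discrete Fourier transform is used instead of the FFT of Appendix F.II purely to keep the example self-contained, and the kernel is assumed to come from one of the generators shown earlier.

#include <cmath>
#include <complex>
#include <vector>

// Filter one projection row with a frequency-domain kernel (stages i-iii above).
// A naive O(N^2) DFT replaces the FFT used in the thesis, to keep the sketch short.
std::vector<double> filterProjection(const std::vector<double>& p,
                                     const std::vector<double>& kernel) {
    const std::size_t N = p.size();
    const double pi = std::acos(-1.0);
    // stage i: forward transform of the projection data
    std::vector<std::complex<double>> P(N, {0.0, 0.0});
    for (std::size_t k = 0; k < N; ++k)
        for (std::size_t m = 0; m < N; ++m)
            P[k] += p[m] * std::polar(1.0, -2.0 * pi * double(k) * double(m) / double(N));
    // stage ii: multiplication by the filter kernel (Equation 3.18)
    for (std::size_t k = 0; k < N; ++k) P[k] *= kernel[k];
    // stage iii: inverse transform back to the spatial domain (Equation 3.17)
    std::vector<double> out(N, 0.0);
    for (std::size_t m = 0; m < N; ++m) {
        std::complex<double> s{0.0, 0.0};
        for (std::size_t k = 0; k < N; ++k)
            s += P[k] * std::polar(1.0, 2.0 * pi * double(k) * double(m) / double(N));
        out[m] = s.real() / double(N);
    }
    return out;
}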

3.5.3    Frequency and Spatial Domain

The smoothing filter (filter kernel) used in the frequency domain is capable of suppressing the highest spatial frequencies and reducing the noise caused by the aliasing effect (Webb, 1992). However, if the number of sensors used is low, a small amount of aliasing will still appear (Shung et al., 1992) and leave a significant spoke effect on the image reconstructed with the linear back projection algorithm (Equation 3.14). Therefore, in order to reduce the aliasing effect, a filtering process is applied to the reconstructed image that has already been filtered using the filters in the frequency domain. The method can be described as follows:

i.    The projection data is filtered using the filtering method in the frequency domain and a filter kernel.

ii.   The image is reconstructed using the linear back projection algorithm.

iii.  The image is filtered using a 3 × 3 low-pass filter convolution mask.

3.5.4    Hybrid Reconstruction Algorithm

This reconstruction algorithm is based on the research done by Ibrahim (2000). The algorithm determines the position of the projection data and improves the reconstructed image by zeroing the empty space while reconstructing an image. The algorithm treats the value from a sensor as zero if no object exists in the pixel, or one if an object is present. If any sensor output is equal to zero, then all the pixels passed through by that sensor view will be assumed zero and ignored.

A signal to noise parameter was used, in which a signal loss value below a certain threshold might be caused by noise and light reflection effects (Chan, 2003). Furthermore, every image reconstruction method used in this project was combined with the hybrid reconstruction algorithm. The mathematical representation of the hybrid reconstruction algorithm can be described as follows:

f̂(x, y) = Σ (m = 0 to M−1) [ Σ (n = 0 to N−1) VsTx,Rx(x'n, φm) · S̄φm(x, y) · B(x'n, φm) Δx' ] Δφm          (3.21)

in which:

B(x'n, φm) = 0  ⇒  VsTx,Rx(x'n, φm) < VTh
B(x'n, φm) = 1  ⇒  VsTx,Rx(x'n, φm) ≥ VTh                                       (3.22)

where:
f̂(x, y)          = an approximation of the object function in volts.
VsTx,Rx(x'n, φm) = amplitude of the signal loss of the receiver for the Tx to Rx view, which is equal to the projection data.
S̄φm(x, y)        = normalized sensitivity map for each projection.
B(x', φ)         = hybrid reconstruction algorithm constant value for each receiver.
VTh              = threshold voltage.
φm               = the m-th projection angle.
x'n              = the n-th receiver position.
Δφ               = angular distance between projections, with the summation extending over all M projections.
N                = total number of receivers.
M                = total number of projections.
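A small C++ sketch of the thresholding factor of Equation 3.22 combined with the back projection of Equation 3.21 is given below; it reuses the shape conventions of the earlier back-projection sketch, and the threshold value is an assumed parameter rather than a value taken from the thesis.

#include <vector>

// Hybrid reconstruction (Equations 3.21 and 3.22): identical to linear back projection
// except that views whose signal loss falls below the threshold Vth are zeroed (B = 0).
std::vector<std::vector<double>> hybridReconstruct(
    const std::vector<std::vector<double>>& Vs,   // Vs[m][v]: signal loss per projection m and view v
    const std::vector<std::vector<std::vector<std::vector<double>>>>& S, // per-view sensitivity maps
    int n, double Vth) {
    std::vector<std::vector<double>> img(n, std::vector<double>(n, 0.0));
    for (std::size_t m = 0; m < Vs.size(); ++m) {
        for (std::size_t v = 0; v < Vs[m].size(); ++v) {
            double B = (Vs[m][v] >= Vth) ? 1.0 : 0.0;      // Equation 3.22
            if (B == 0.0) continue;                         // pixels on this view are ignored
            for (int y = 0; y < n; ++y)
                for (int x = 0; x < n; ++x)
                    img[y][x] += Vs[m][v] * S[m][v][y][x];  // Equation 3.21 contribution
        }
    }
    return img;
}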

3.6    Reconstructed Image Error Measurements

The difference image between the original and reconstructed images gives quantitative and qualitative measurements that compare the quality of the reconstructed image with the original (Chan, 2003). The same approach as used by

Chan was adopted, namely the measure of normalized mean square error (NMSE) and the measure of peak signal to noise ratio (PSNR). The NMSE between the original image and the reconstructed image can be used as a quantitative measurement of the closeness of the reconstructed image to the original image, while the PSNR is a measure of the peak error. The formula for the NMSE is given by:

NMSE = [ Σ (x = −n/2 to n/2−1) Σ (y = −n/2 to n/2−1) (f(x, y) − f̂(x, y))² ] / [ Σ (x = −n/2 to n/2−1) Σ (y = −n/2 to n/2−1) f(x, y)² ]          (3.23)

where:
f(x, y)  = original image.
f̂(x, y)  = reconstructed image.
n        = image resolution.

while the PSNR is given by:

PSNR = 10 log10 [ f(x, y)max² / ( (1/n²) Σ (x = −n/2 to n/2−1) Σ (y = −n/2 to n/2−1) (f(x, y) − f̂(x, y))² ) ] dB          (3.24)

where f(x, y)max is the maximum value obtained from the original image. A low NMSE value means the reconstructed image contains a low error; the relationship is inverse for PSNR, where a higher PSNR value shows that the quality of the reconstructed image is better. The comparison between the NMSE and PSNR of the reconstructed images for each flow model at each resolution leads to the conclusion that the resolution closest to the size of the flow object provides the lowest NMSE and the highest PSNR.
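A direct C++ transcription of Equations 3.23 and 3.24 is sketched below; the image container, the guard against an all-zero reference image, and the use of output parameters are assumptions of the sketch only.

#include <algorithm>
#include <cmath>
#include <vector>

// NMSE (Equation 3.23) and PSNR (Equation 3.24) between an original image f and a
// reconstructed image fr, both n x n.
void imageErrors(const std::vector<std::vector<double>>& f,
                 const std::vector<std::vector<double>>& fr,
                 double& nmse, double& psnrDb) {
    const std::size_t n = f.size();
    double errSum = 0.0, refSum = 0.0, fmax = 0.0;
    for (std::size_t y = 0; y < n; ++y) {
        for (std::size_t x = 0; x < n; ++x) {
            double d = f[y][x] - fr[y][x];
            errSum += d * d;                       // numerator of Equation 3.23
            refSum += f[y][x] * f[y][x];           // denominator of Equation 3.23
            fmax = std::max(fmax, f[y][x]);        // f(x, y)max for Equation 3.24
        }
    }
    nmse = (refSum > 0.0) ? errSum / refSum : 0.0;
    double mse = errSum / static_cast<double>(n * n);   // (1/n^2) times the error sum
    psnrDb = 10.0 * std::log10((fmax * fmax) / mse);
}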

3.7    Flow Velocity

The velocity of a solid object can be obtained by measuring a known distance between two points over the measured transit time. In some cases, it may be possible to use conventional single-component meters such as orifice plates, venturi meters and turbine meters to measure velocity (Plaskowski et al., 1997). However, in general these measurements can only be used for two-component flows when the dispersed component is a small quantity of gas or liquid, or a very small quantity of non-abrasive and non-blocking solid. Even in these circumstances, it is often difficult to predict the accuracy of the measurement with any certainty. The velocity measurement techniques for two-component flows are divided into two main groups: the Doppler and interference fringe methods, and the tracer methods. The tracer methods can measure the velocity component by using either recognizable markers that are injected into the flowing stream and whose progress is timed between two points, or signals due to natural disturbances that are detected at two points in the flow stream, with the time difference determined by cross-correlation techniques. This project made use of the second tracer technique, where the infra-red light intensities received by the infra-red receivers measure the level of disturbance at the upstream and downstream sensors.

3.7.1 The Cross-Correlation Method

Two sets of sensors are positioned upstream and downstream to measure the infra-red light intensities, which are converted into voltage signals proportional to the natural flow of the object. The output signal of each sensor is therefore modulated, in an apparently random manner, by spatial and temporal variations in the detected property of the passing object. However, assuming that the pattern of the conveyed discontinuity remains unaltered as it travels between the sensors, the output signal generated by the downstream sensor will be a time-delayed replica of the upstream sensor's output. Figure 3.12 shows the principle of the cross-correlation method of velocity measurement.


Figure 3.12: Principle of cross correlation flow measurement

If u(t) and d(t) represent the signals from the upstream and downstream sensors respectively, the time delay between the output signals of the two sensors can be found by computing the cross-correlation of their time records over a measurement period. The cross-correlation function is given by:

$$R(\tau) = \lim_{T\to\infty}\frac{1}{T}\int_{0}^{T} u(t)\,d(t-\tau)\,dt \qquad (3.25)$$

The peak in the plot of $R(\tau)$ versus time corresponds to the most probable time required for the flow to travel between the upstream and downstream sensors, $\tau_m$. Therefore, the mean flow velocity can be obtained from the following expression:

$$V = \frac{L}{\tau_m}\ \ (\mathrm{m\,s^{-1}}) \qquad (3.26)$$

where L is the spacing between the upstream and downstream sensors. The infra-red tomography system applies cross-correlation using two methods: sensor-to-sensor cross-correlation and pixel-to-pixel cross-correlation. The sensor-to-sensor cross-correlation involves the following:

i. There are 64 upstream sensors and 64 downstream sensors.
ii. Each upstream sensor is paired with the corresponding downstream sensor so that the velocity can be measured using the point-to-point cross-correlation method (Azrita, 2002).

The pixel-to-pixel cross-correlation method involves the following:

i. The sensitivity of each pixel at the 16 × 16 pixels resolution is calculated according to Equation 3.7.
ii. The analogue flow noise signals generated by the flowmeter's two arrays of sensors are converted into digital form and the sensor array profiles are determined. These profiles are then used to compute the cross-correlation function, where the converted signals are multiplied by the respective sensitivity map values at the (x, y) coordinates, as described by the following equations:

$$u_{x,y}(t) = \left(\sum_{m=0}^{M-1}\left[\sum_{n=0}^{N-1} V_{Tux,Rux}(x'_n,\varphi_m)\,S_{\varphi_m}(x,y)\,\Delta x'\right]\Delta\varphi\right)(t) \qquad (3.27)$$

$$V_{Tux,Rux}(x'_n,\varphi_m) = V_{Tux,Rux}(x'_n,\varphi_m) - V_{ref\,Tux,Rux}(x'_n,\varphi_m) \qquad (3.28)$$

$$d_{x,y}(t-\tau) = \left(\sum_{m=0}^{M-1}\left[\sum_{n=0}^{N-1} V_{Tdx,Rdx}(x'_n,\varphi_m)\,S_{\varphi_m}(x,y)\,\Delta x'\right]\Delta\varphi\right)(t-\tau) \qquad (3.29)$$

$$V_{Tdx,Rdx}(x'_n,\varphi_m) = V_{Tdx,Rdx}(x'_n,\varphi_m) - V_{ref\,Tdx,Rdx}(x'_n,\varphi_m) \qquad (3.30)$$

where:
$u_{x,y}(t)$ = upstream sensor array profile values.
$d_{x,y}(t-\tau)$ = downstream sensor array profile values.
$S_{\varphi_m}(x,y)$ = normalized sensitivity map values for the (x, y) coordinates.

iii. The cross-correlation function of these two signals (at matching (x, y) coordinates) is computed, and the peak position of the resulting correlogram is detected to give an estimate of the transit time of the flow between the two sensors. The time delay at the maximum value of the cross-correlation function indicates the flow transit time between the upstream and downstream pixels. This process is repeated for all pixels, enabling the actual flow velocity profile v(x, y) to be obtained; for any pixel whose $S_{\varphi_m}(x,y)$ value is zero, v(x, y) is automatically set to zero.

The programming code for the sensor-to-sensor cross-correlation and the pixel-to-pixel cross-correlation is given in Appendix F.III.
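As an illustration of Equations 3.25 and 3.26, the sketch below estimates the transit time from a pair of upstream/downstream signals and converts it into a velocity; it is a minimal NumPy example under the assumption of uniformly sampled signals and is not the code of Appendix F.III.

```python
import numpy as np

def transit_time(u, d, fs):
    """Estimate the transit time between upstream signal u and downstream
    signal d sampled at fs (Hz) by locating the peak of their
    cross-correlation (Eq. 3.25)."""
    u = u - u.mean()
    d = d - d.mean()
    # Correlate d against u; keep only non-negative lags (physical delays).
    r = np.correlate(d, u, mode="full")[len(u) - 1:]
    return np.argmax(r) / fs               # lag of the correlogram peak (s)

# Example with a synthetic flow noise signal delayed by 50 samples.
fs = 88.03                                  # sampling frequency used in this work (Hz)
L = 0.1                                     # sensor spacing (m)
u = np.random.rand(1024)
d = np.roll(u, 50)                          # downstream = delayed upstream
tau_m = transit_time(u, d, fs)
print("velocity =", L / tau_m, "m/s")       # Eq. 3.26: V = L / tau_m
```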

3.8 Summary

The modeling of the infra-red tomography system, which consists of 64 pairs of upstream and downstream sensors, was elaborated in Section 3.2. The sensor output during the no-flow condition was explained in Section 3.3. The linear back projection and filtered linear back projection algorithms in the spatial and frequency domains were discussed in Section 3.5, and the reconstructed image error measurements in Section 3.6. Finally, Section 3.7 discussed the concept of flow velocity.

CHAPTER 4

DEVELOPMENT OF INFRA-RED TOMOGRAPHY SYSTEM

4.1 Introduction

This chapter focuses on the development of the hardware. It elaborates on the criteria used in selecting the infra-red sensors and their arrangement around an industrial flow pipe, how control signals are used to synchronize the operation of the whole circuit, and how the signals received by the photodiode receivers are translated using electronic circuits.

4.2 Development of the Measuring System

This section focuses on the development of an optical tomography system based on investigations of previous optical tomography systems, with emphasis on the parallel projection approach as carried out by Dugdale (1994), Abdul Rahim (1996), and Ibrahim (2000). This method has been proven able to produce concentration profile analysis (Abdul Rahim, 1996; Dugdale, 1994; Ibrahim et al., 1999) and velocity measurement (Ibrahim, 2000). Although the capability of optical sensors can be enhanced using the fan beam method (Chan, 2003) for a gas-solid flow imaging system, that method affects the measurement time interval for other parameters such as velocity and mass flow rate, since the number of measurements must be increased. In this system the upstream and downstream sensing arrangements, with 64 measurements each, provide a total of 128 measurements to form two image frames in real time. Measurement is therefore realized by synchronizing the projection system, the signal processing (signal conditioning and data acquisition system), and the data manipulation, taking into account the frequency response and time response, i.e. the factors which influence the measurement signal. Figure 4.1 shows an illustration of the infra-red tomography system, which can be divided into three parts: the sensor configuration, the signal conditioning and data acquisition system, and a computer.

Figure 4.1: A block diagram of the infra-red tomography system

As a whole, the system operation is controlled by a digital timing controller. From Figure 4.1, the sensing array is arranged in a parallel manner to form upstream and downstream arrays, each providing four parallel projections (at angles of 0°, 45°, 90°, and 135°) with 16 pairs of sensors per projection. The upstream sensing array and the downstream array are placed 0.1 m apart; this distance was chosen based on the sampling rate used in the experiment (Section 7.2). In the second part, the peak voltages received by the photodiodes correspond to the infra-red light intensities and consequently to the two different components (solid and air) in the pipe traversed by the infra-red beam. The analog signal is processed by a signal conditioning circuit. Finally, this analog signal is converted into a digital signal by the data acquisition system, which is also controlled by the digital controller's synchronization signals. The other parts of the tomography system are discussed in detail in the following sections.


4.2.1 Selection and Preparation of Fiber Optic

Fiber optic is chosen to carry the infra-red beam based on two aspects: firstly, it maximizes the total number of views obtained at each projection and, secondly, it maximizes light transmission into and out of the fiber (Abdul Rahim, 1996). The fiber optic is made of polymethyl methacrylate coated with a polymer cladding, and it has a protective sheath with an overall diameter of 2.3 mm (the plastic fiber has an inner radius of 1 mm). As discussed in Section 3.2, each infra-red light transmitter and receiver, with a diameter of 5.9 mm, is inserted into a specially constructed rod meant to collimate the light and to prevent the transmitter/receiver from being exposed to external light such as lamp light or sunlight. Light is then guided between the rod and the flow regime section by the fiber optic. Both ends of the transmitter and receiver fiber optics are treated to form lenses using heat from a candle flame (Abdul Rahim, 1996). The procedure for forming the lenses is as follows:

i. Cautiously remove a small section of the fiber optic cladding using a knife, as shown in Figure 4.2(a).
ii. Ensure that 2 mm of the fiber optic is exposed without cladding, as shown in Figure 4.2(b).
iii. Expose the end of the fiber optic to the candle heat for several seconds until a convex surface is formed at the end of the fiber optic, as shown in Figure 4.2(c).


Figure 4.2: Preparation of Fiber Optic

4.2.2 Selection of Infra-red Transmitter and Receiver

A fiber optic can provide a high resolution when used as a light transmitter (Ibrahim et al., 1999), and electrical interference can be isolated from the process (Abdul Rahim, 1996). This influences the selection of components and the design of the tomography system; throughout the project, the transmitter and receiver were selected on the basis of physical construction, cost, high radiant brightness, and switching speed. Three potential light sources were identified: the laser, the laser diode, and the LED (light emitting diode). The laser is the most powerful and coherent light source; however, several problems can be expected when using a laser source, as it is costly and difficult to control. The laser diode is smaller and cheaper than the laser, but the overall cost when applied to the tomography system is still not reasonable. Therefore, this system uses LEDs, even though an LED has a lower brightness than the previous two light sources. An LED has the advantages that it is available in various physical forms, is lightweight, has a longer life span, is reasonably priced, and does not produce a high operating temperature.

The LED light source chosen is the GaAs (gallium arsenide) infra-red SFH485P LED. It was chosen due to its low cost of RM2.67 per LED. It has a flat surface which enables the infra-red light to be transmitted directly into the fiber optic (through a short length of epoxy resin) with a half-angle of ±40°. This minimizes the length of the fiber optic connections to the transmitter and receiver which have to be constructed. In addition, it can be driven by DC operation or by pulses in the forward direction. It can also transmit sufficient radiant brightness in the near infra-red region, with a peak wavelength of 880 nm, as illustrated in Figures 4.3(a) and 4.3(b), which show a steep radiant slope due to the narrow beam.

Figure 4.3: LED characteristics. (a) Relative spectral emission versus wavelength (b) Radiation characteristic – relative spectral emission versus half-angle

Among the factors considered in selecting a receiver are the responsivity (current output for a specified radiant power) at the desired wavelength, the noise limit (i.e. the smallest signal that can be detected), the response time, and the leakage current. Sensors which convert light into an electrical signal can be divided into four categories: photovoltaic cells, phototransistors, photodiodes, and photomultipliers. Photovoltaic cells, phototransistors, and photodiodes are semiconductor sensors, whereas photomultipliers are tube devices. A photomultiplier amplifies the initial photocurrent signal with extremely low noise (Abdul Rahim, 1996) by accelerating electrons onto successive surfaces (dynodes), from which additional electrons are readily ejected. A photomultiplier has the advantage of high quantum efficiency when operating at high speed (a rise time of 2 ns). However, a photomultiplier is costly, bulky, and requires a stable high-voltage source, as the tube gain increases exponentially with the applied voltage. A semiconductor sensor has several advantages such as a rapid response time, a smaller leakage current, and a bigger surface area, which ensures that a higher resolution can be achieved in the tomographic image (Hartley et al., 1995). A photovoltaic semiconductor receiver has a bigger sensitive area, i.e. a bigger physical size, compared to a phototransistor and a photodiode; as such, a photovoltaic cell is not suitable. Furthermore, a photodiode is more suitable than a phototransistor because a phototransistor has a lower responsivity, poorer linearity, and is sensitive to temperature. Basically, a photodiode consists of p-n semiconductor material forming a p-n junction; when the junction is exposed to photons of appropriate energy, it produces a photocurrent which is proportional to the illumination. The spectral response depends on the intrinsic semiconductor material which forms the photodiode (Vanzetti, 1972). The addition of an intrinsic layer to the photodiode structure can increase the spectral response range by expanding the depletion region, and this new structure is called a PIN photodiode. Hence, taking the above factors into account, the SFH203P silicon PIN photodiode was selected to be combined with the SFH485P emitter. It has a short switching time, low noise, and low capacitance. Figure 4.4(a) shows the spectral sensitivity versus wavelength, in which the peak is at 880 nm, equal to the emission peak of the SFH485P, whereas Figure 4.4(b) illustrates the spectral sensitivity versus the half-angle of the SFH203P.

Figure 4.4: Photodiode characteristics. (a) Relative spectral sensitivity versus wavelength (b) Directional characteristic – relative spectral sensitivity versus half-angle

4.2.3 Measurement Fixture Design

This section discusses two aspects of the measurement fixture: the design of the fixture around the pipe and the design of the connection between the fiber optic and the receiver/transmitter. The points emphasized in the design process are the materials used and the mechanical construction. Figure 4.5 depicts the mechanical drawing of the optical fiber sensor fixture. The fixture consists of a PVC plate with dimensions of 112 × 112 × 12 mm. The size of the hole is based on the pipe used, which has an external diameter of 82 mm and an internal diameter of 78 mm. The fiber optic has an overall diameter of 2.3 mm, whereas the fiber optic core has a diameter of 1.5 mm. As shown in Figure 4.5, two holes with diameters of 82 mm and 78 mm are formed at the centre of the PVC plate surface, with depths of 7 mm and 5 mm respectively. At each corner surface, a total of 16 slots is needed for the fiber optics, hence each slot has a spacing of 78/16 = 4.875 mm. At the central point of each slot, a hole with a diameter of 2.3 mm is formed to a depth of 10 mm, and this is then connected to a hole with a diameter of 1.5 mm that reaches the flow regime area.


Figure 4.5: Mechanical drawing and dimensions of the measurement fixture

Figure 4.6(a) shows a mechanical drawing of the connector between the fiber optic and the transmitter/receiver. The connector is made from a rod with a diameter of 12 mm and a length of 20 mm, in which a hole is formed so that a fiber optic transmitter/receiver can be inserted into it, as depicted in Figure 4.6(b). A hole of 2.3 mm diameter is then made in the middle so that the fiber optic can be inserted. As the system is meant to monitor gas-solid flow, the connector does not have any seal.

Figure 4.6: Mechanical drawings with the appropriate dimensions for (a) the fiber optic connector and (b) the transmitter/receiver


4.2.4 Circuit Design

The circuit design can be divided into four parts: the infra-red light projection circuit, the signal conditioning circuit for light reception, the sample and hold circuit, and the digital timing controller. The infra-red light projection circuit consists of a group of infra-red LED drivers which provide a high current source and act as a current control for each infra-red LED. The signal conditioning circuit comprises a conversion circuit which changes the signal received from the photodiode into a proportional voltage. The sampling circuit enables a distinction to be made between the upstream and the downstream sensors. The overall system operation is controlled by digital timing using a PIC programmable controller, which is programmed to produce the infra-red light projection signal, the sampling circuit signals, and the signal that synchronizes the data acquisition system so that it acquires the voltages from the sampling circuit and distinguishes between the upstream and downstream sensor signals.

4.2.4.1 Light Projection Circuit

This circuit consists of 128 identical transmitter driver circuits, as shown in Figure 4.7, which are used to individually trigger each infra-red LED. The emitters send pulsed infra-red light, which allows a much higher drive current and therefore an enhanced radiation intensity. This circuit is similar to that developed by Chan (2003), where the NPN ZTX689B transistor is selected to obtain a high switching speed. The trigger stage of this circuit has characteristics similar to those of a Darlington transistor: it can deliver a constant collector current of up to 3 A or an impulse current of up to 8 A, and can accommodate a switching frequency of up to 1 MHz. This transistor replaces LED triggers which normally consist of a combination of a switching transistor and a high-current transistor (Zetex, 1994).


Figure 4.7: Light emitting diode transmitter circuit

The impulse signal from the controller is sent to all light projection triggering circuits simultaneously, providing either a HIGH signal of 5 V or a LOW signal of 0 V to the transistor base. This signal produces a base current Ib which switches on the transistor. A variable resistor, Rb, is used to control the current flowing into the transistor base, and thereby directly controls the current flowing through the emitter (Chan, 2003). A fixed resistor Rc is used to protect the emitter from overcurrent when Ic exceeds the maximum forward current allowed by the emitter, provided the duty cycle of the pulsed signal does not exceed 25% (Hartley et al., 1995). From the data sheet, the maximum forward current Icmax is 100 mA and the emitter forward voltage VF is 1.5 V. The current is at its maximum during saturation, and Rc can be determined from the following formula:

$$R_c = \frac{V_{cc} - V_{ce,sat} - V_F}{I_{c\,max}} = \frac{5\,\mathrm{V} - 0.5\,\mathrm{V} - 1.5\,\mathrm{V}}{100\,\mathrm{mA}} = 30\ \Omega \qquad (4.1)$$

where
$V_{ce,sat}$ = transistor saturation voltage = 0.5 V
$V_F$ = emitter forward voltage = 1.5 V.

Hence practically, a resistor having a value of 33 Ω is chosen. A coupling capacitor Cp was used to improve the signal, which suffered from ripples caused by the power supply.


4.2.4.2 Signal Conditioning Circuit

The signal received by the photodiode is converted into a voltage by a signal conditioning circuit. A total of 128 identical circuits were constructed, of which 64 are for the upstream sensors and the rest for the downstream sensors. The circuits process the signals in parallel, so the signals from the photodiodes can be processed simultaneously. The first stage of the circuit consists of a current-to-voltage converter (pre-amplification), the second stage comprises an amplifier, the third is the signal conditioning (filter) stage, and the fourth stage consists of a buffer. For the current-to-voltage converter, the photodiode is configured in the photovoltaic mode (Chan, 2003). This configuration converts the variable input current (from the photodiode output) into a proportional output voltage (Graeme, 1995), as shown in Figure 4.8.

Figure 4.8: The first and second stages of the circuit

The signal conditioning circuit utilizes the TLE2072 operational amplifier, which has a high input impedance, twice the bandwidth (10 MHz), three times the slew rate (45 V/µs), and a lower noise voltage (11.6 nV/√Hz) compared to the TL082 and TL072. The circuit in Figure 4.8 shows the pre-amplification circuit, where the photodiode is operated in the differential-input converter configuration. The current IRx, which is produced when the p-n photodiode junction is exposed to infra-red light, flows from cathode to anode according to the equation:

$$I_{Rx} = \frac{\text{Radiant flux energy}}{\text{Flux}} \qquad (4.2)$$

where:
$I_{Rx}$ = current produced by the zero-biased photodiode, in amperes/cm².
Flux = the photodiode's sensitivity, in watts/ampere, which is equal to 0.25 W/A for the SFH203P photodiode.

A resistor (R2) with a value identical to that of R1 is added, producing a differential-input current-to-voltage converter which improves the noise sensitivity and the DC offset error. The converter changes the photodiode current into a voltage according to the equation:

$$V_{out} = I_{Rx} \times R_1 \qquad (4.3)$$

In order to reduce the ripple effect from the power supply, capacitors C1 and C2 are added in parallel at the +15 V and -15 V sources. The signal is then fed into an inverting amplifier with a gain of -5.1, as shown in Figure 4.8. In the signal conditioning circuit (third stage), an inverting amplifier with a first-order low-pass filter (-20 dB/decade) is used to filter out high-frequency noise due to ambient conditions. Figure 4.9 shows the schematic diagram of the filter and buffer circuit.

Figure 4.9: The filter and buffer circuit

Prior to this, the signal from the circuit is amplified with a gain Ac which can be obtained from Equation 4.4, in which R5 and R6 are 12 kΩ and 75 kΩ respectively:

$$A_c = -\frac{R_6}{R_5} = -\frac{75\,\mathrm{k\Omega}}{12\,\mathrm{k\Omega}} = -6.25 \qquad (4.4)$$

Given that the cut-off frequency is 2.08 kHz (discussed in the next section), C3 can be determined from the equation:

$$C_3 = \frac{1}{2\pi R_6 f_c} \qquad (4.5)$$

Hence C3 is chosen as 1 nF. The filtered signal then enters the buffer amplifier, which provides a small output impedance (Bell, 1990). Figure 4.10 shows the receiver output Rx03 at this stage, using an impulse frequency of 88.03 Hz. The diagram shows that the signal takes 920 µs to reach its steady-state response.
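As a quick numerical check of Equations 4.4 and 4.5, the following short sketch recomputes the amplifier gain and the filter capacitor from the component values quoted above (illustrative Python only; the variable names are assumptions).

```python
import math

R5, R6 = 12e3, 75e3        # inverting-amplifier resistors (ohms)
fc = 2.08e3                # desired cut-off frequency (Hz)

Ac = -R6 / R5                        # Eq. 4.4: closed-loop gain
C3 = 1 / (2 * math.pi * R6 * fc)     # Eq. 4.5: first-order low-pass capacitor

print(Ac)                  # -6.25
print(C3 * 1e9, "nF")      # ~1.02 nF, so a standard 1 nF part is used
```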

Figure 4.10: The receiver output of Rx03

In the final stage of the circuit, the sample and hold circuit, the signal is sampled using a sample and hold amplifier, the NE5537 manufactured by Philips Semiconductors. The NE5537 is a closed-loop, non-inverting, unity-gain sample-and-hold system (Figure 4.11(a)). The input buffer amplifier supplies the current necessary to charge the hold capacitor (C4), while the output buffer amplifier closes the loop so that the output voltage is identical to the input voltage (allowing for the input offset voltage, offset current, and temperature variations which are common to all sample-and-hold devices, be they monolithic, hybrid, or modular).

Figure 4.11: (a) Block diagram of the NE5537 (b) The sample and hold circuit

When the sampling switch is opened (in the hold mode), the clamping diodes close the loop around the input amplifier to keep it from being driven into saturation. The switch control is driven by an external logic level via a timing sequence remote from the sample-and-hold device. As shown in Figure 4.11(b), the switch control has a floating reference, referred to as the logic reference, which makes the sample-and-hold device compatible with several types of external logic signals (TTL, PMOS, and CMOS). The switching device operates at a threshold level of 1.4 V. The charge sampled and stored on capacitor C4 is then discharged through the MOSFET switch ZVN2016A, which is connected in parallel with C4. When a high signal (5 V) is applied to the gate of the MOSFET, the current Ihold is forced to discharge to earth; conversely, if the signal at the MOSFET gate is low (0 V), the charge is stored by C4 (Philips Semiconductors, 1994). The selection of C4 must be optimum: if the value is too small the holding period is not long enough, but if it is too big the charging time becomes too long. Appendix B shows the final output of receiver Rx03 (upstream and downstream).

4.2.4.3 Digital Control Signal

The digital control signals can be divided into three functions: the digital signal for the infra-red transmitter circuit, VTX; the digital signals for the sample and hold circuit, VSup, VMuxup, VSdown, and VMuxdown; and the digital signal for synchronizing the data acquisition system with the signal conditioning system. Figure 4.12 shows the signals produced using the PIC18F458 programmable controller. A time interval of 1420 µs was set for Tmin, which makes the frequency of the signal VTX equal to 1/(8 × 1420 µs) = 88.03 Hz; this signal is given to all 128 transmitter circuits, upstream and downstream. The control signal operation explains how the DAS input can distinguish between the upstream and downstream sets of data, because the upstream and downstream outputs with the same number are connected to the same DAS input. For example, the outputs from Rx00 upstream and Rx00 downstream are connected together to DAS input terminal number 0, and so on. Consider the VTX signal divided into a two-cycle reference period (Period0 and Period1), with each period divided into eight parts of equal duration (Tmin). During Period0, VSup is HIGH or 5 V (sample) and VMuxup is LOW or 0 V (set), while VSdown is LOW (hold) and VMuxdown is HIGH (reset). At this point the upstream output signal is sampled into the DAS.

Figure 4.12: Control signals produced by the PIC18F458 controller (VTx, VSup, VMuxup, VSdown, VMuxdown, VTrig and VBurst timing waveforms over the conversion periods TC0, TC1, ..., with the smallest time division Tmin)

VTrig and VBurst are fed into the TGIN and XPCLK DAS terminals respectively, where VTrig acts as the trigger that starts the data acquisition process, while VBurst serves as the external burst clock. The DAS was set to start on the falling edge of VTrig, to convert the data on the rising edge of the VBurst signal from channels 0 to 63 continuously, and to stop when all 254 buffers were filled (254 VBurst rising edges). Therefore, from the control signals, each even-numbered period TC0,2,4,...,126 represents the conversion time for an upstream buffer (each buffer contains data from channels 0 to 63) and each odd-numbered period TC1,3,5,...,125 represents the conversion time for a downstream buffer. The minimum time selected for each conversion is 4 µs; therefore, if every buffer has 64 conversions, TCmin is 64 × (4 µs + 1 µs) = 320 µs, in which case each small part of the signal would be set to 60 µs (Tmin). The conversion period TC0,1,2,...,254 must be longer than TCmin, so the maximum sampling frequency is 1/(8 × 60 µs) = 2.08 kHz. If this sampling frequency were used, the signal would fill the buffers very quickly, but the application is not fast enough to process them; in other words, the second buffer would be filled too early and the DAS would send a stop message. To solve this problem, the Tmin selected for this project is 1420 µs. The sampling frequency is then 1/(1420 µs × 8) = 88.03 Hz, and the TC0,1,2,3,...,256 period is 6 × 1420 µs = 8.52 ms. Appendix B shows the digital control output signals.
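To make the interleaving of upstream and downstream buffers concrete, the sketch below separates a simulated stream of 64-channel buffers into the two frames; it is an illustrative NumPy example and not the DriverLINX code of Appendix F.I (the array names and the use of random data are assumptions).

```python
import numpy as np

N_CHANNELS = 64          # DAS analog input channels 0-63
N_BUFFERS = 254          # buffers filled per acquisition (one per VBurst period)

# Simulated acquisition: each row is one converted buffer of 64 channel voltages.
raw = np.random.rand(N_BUFFERS, N_CHANNELS) * 5.0

# Even-numbered conversion periods (TC0, TC2, ...) hold the upstream samples,
# odd-numbered periods (TC1, TC3, ...) hold the downstream samples.
upstream = raw[0::2, :]      # shape (127, 64)
downstream = raw[1::2, :]    # shape (127, 64)

print(upstream.shape, downstream.shape)
```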

4.2.4.4 Printed Circuit Board Design

After designing the circuit, the layout was produced using Protel 99 SE, a complete 32-bit electronics design package for Windows 95/98/NT. Protel 99 SE provides a completely integrated suite of design tools that makes it easy to take a design from the basic concept to the final board layout. Due to the complexity of the circuit and the requirement for an expandable system, all of the circuit layouts were designed on PCBs. Figure 4.13 shows the PCB circuits and Table 4.1 describes the PCB components.

Figure 4.13: PCB design of the tomography system

Table 4.1: Circuit description

Figure 4.13   Description
(a)           Controller Circuit
(b)           Light Projection Circuit (64 transmitters)
(c)           8-channel Receiver Circuit (upstream and downstream)
(d)           Connector Circuit for the KPCI-1802HC
(e)           Photodiode and Infra-red LED circuit (placed around the pipe)

4.2.5 Data Acquisition System (DAS)

There are many practical ways of transferring an output signal from the hardware system to the computer; several popular methods use the serial port, parallel port, USB port, RS232, or Ethernet. For example, the transputer interfacing system used by Hartley et al. (1995) converted the analog output voltages from the sensors into digital form and passed them into a transputer system. This method requires a longer conversion time compared to the industrial-standard computer data acquisition system (DAS) used in this project. The general functions provided by an industrial-standard DAS include multiple-channel switching, analog-to-digital conversion (ADC), digital-to-analog conversion (DAC), digital input (DI), digital output (DO), operation of computer interrupts and direct memory access (DMA), and storage of the acquired data. The DAS used in this project is the Keithley MetraByte KPCI-1802HC with a maximum conversion rate of 333k samples/s. The analog outputs from the sensors are connected to the analog input channels (CH00-CH63), while VBurst is connected to the timing/clock channel (XPCLK) and VTrig is connected to the external trigger channel (TGIN). The data acquisition board is controlled using a software driver called DriverLINX. DriverLINX is a language- and hardware-independent Dynamic Link Library designed to support the manufacturer's high-speed analog and digital data acquisition boards in Windows (Keithley, 2001). The programming code for the analog input operation is given in Appendix F.I.

4.2.6 Flow Rig

The components of the gravity flow rig are shown in Figure 4.14 and listed in Table 4.2. The conveyed material consists of plastic beads with a mean size of 3.3 mm. An automatic loader conveys the material from the tank through a material feed vane and stops when the hopper is full. The solid material is then fed down from the hopper into the pipe through the measurement/sensing area. The flow rate of the solid material is controlled by the rotary valve linked to an AC motor, whose rotation speed is set by the control unit.


A three-phase motor driver controlled the AC motor. This controller has 33 programmable functions which can change the AC motor functions such as base frequency, acceleration time, and torque boost. Appendix C shows the picture of a FVR-C9S controller. The distance between the rotary valve and upstream sensors is 1.4 meters, while the distance between the upstream and downstream sensors is 0.1 meter.

Figure 4.14: Gravity flow rig

Table 4.2: Flow rig components

Figure 4.14   Component                 Figure 4.14   Component
(a)           Tank                      (g)           Barrel
(b)           Pipeline                  (h)           Filling hose
(c)           AC motor                  (i)           Barrel control wire
(d)           Rotary valve              (j)           Suction air hose
(e)           Hopper                    (k)           Automatic vacuum loader
(f)           Material feed vane

4.2.6.1 Calibration of the Flow Rig

The calibration relating the solid flow rate to the flow indicator was established manually by collecting the flowing material over a recorded time of 15 seconds (Azrita, 2002). Figure 4.15 shows the resulting calibration graph. Using Microsoft Excel, a linear trendline (regression line) was fitted to the data, giving an R² coefficient of 0.994, a slope of 21.189 g/s per flow indicator unit, and a y-axis intercept of -4.3672 g/s. The equation of the linear trendline is given by:

$$\text{Flow Rate}\ (\mathrm{g\,s^{-1}}) = 21.189 \times \text{Flow Indicator} - 4.3672 \qquad (4.6)$$

Figure 4.15: Calibration curve of the gravity flow rig: measured flow rate (g/s) versus flow indicator, with the fitted trendline y = 21.189x - 4.3672 and R² = 0.994

By using Equation 4.6, the measured mass flow rate of the plastic beads at each solid loading (flow indicator setting) is summarized in Table 4.3.

Table 4.3: Measured flow rate for plastic beads at each solid loading of the flow indicator

Flow Indicator   Measured Flow Rate (g/s)
1                16.822
2                38.011
3                59.200
4                80.389
5                101.578
6                122.767
7                143.956
8                165.145
9                186.334
10               207.523
11               228.712
12               249.901
13               271.090
14               292.279
15               313.468
16               334.657
17               355.846
18               377.035
19               398.224
20               419.413
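As a cross-check on Table 4.3, the short sketch below evaluates Equation 4.6 for each flow indicator setting (illustrative Python only; the constant names are assumptions).

```python
SLOPE = 21.189        # g/s per flow-indicator unit (Equation 4.6)
INTERCEPT = -4.3672   # g/s

def flow_rate(indicator):
    """Measured mass flow rate (g/s) predicted by the calibration line."""
    return SLOPE * indicator + INTERCEPT

for i in range(1, 21):
    print(i, round(flow_rate(i), 3))   # reproduces the values in Table 4.3
```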

4.4 Summary

An infra-red tomography measuring system has been discussed in Section 4.2.

CHAPTER 5

FLOW MODELING

5.1 Modeling of Flowing Particles

Three image resolutions were selected for testing, i.e. 8 × 8, 16 × 16, and 32 × 32 pixels. It is therefore important to determine which resolution and which reconstruction algorithm give the best performance in terms of image quality and quantitative accuracy. The reconstruction algorithm identified here will be utilized in the next part, i.e. for concentration measurement (Chapter 6). Four models were considered: a single pixel flow model, a multiple pixels flow model, a half flow model, and a full flow model.

5.1.1 Single Pixel Flow Model

For each resolution, one pixel is taken as the object representation in the image plane. The single pixel model for each resolution is listed below:

i. An 8 × 8 pixels resolution with a pixel area of 95.06 mm² ((pipe diameter/resolution)² = (78/8)²). The percentage occupied by a single pixel compared to the total measurement area is 1.98% (pixel area/measurement area = 95.06/π(78/2)²), and the pixel is located at the (x, y) coordinate (-2, 1).
ii. A 16 × 16 pixels resolution with a pixel area of 23.77 mm². The percentage occupied by a single pixel compared to the total area is 0.497%, and the pixel is located at the coordinate (-4, 3).
iii. A 32 × 32 pixels resolution with a pixel area of 11.50 mm². The percentage occupied by a single pixel compared to the total area is 0.241%, and the pixel is located at the coordinate (-7, 6).

These models test the accuracy of the reconstruction algorithms when executed at different resolutions. Figure 5.1 shows the pixel position for each model and Figure 5.2 depicts the sensors output (projection data) for each resolution, computed as elaborated in sub-section 3.3.2.

Figure 5.1: (a) Pixel coordinate P8(-2,1) on the 8 × 8 image plane (b) Pixel coordinate P16(-4,3) on the 16 × 16 image plane (c) Pixel coordinate P32(-7,6) on the 32 × 32 image plane (colour scale 0 V to 5 V)

Figure 5.2(a): Sensors output (projection data) for a single pixel flow model at 8 × 8 pixels resolution (voltage amplitude in volts versus receiver number, upstream and downstream)

Figure 5.2(b): Sensors output (projection data) for a single pixel flow model at 16 × 16 pixels resolution (voltage amplitude in volts versus receiver number, upstream and downstream)

Figure 5.2(c): Sensors output (projection data) for a single pixel flow model at 32 × 32 pixels resolution (voltage amplitude in volts versus receiver number, upstream and downstream)


5.1.2 Multiple Pixels Flow Models

To test various aliasing effects, four object-representation pixels are placed on the 8 × 8 pixels resolution image plane, each with an area of 95.0625 mm², at coordinates (-2,2), (1,2), (-2,-2), and (1,-2), as shown in Figure 5.3. Figure 5.4 shows the sensors output (projection data) for the multiple pixels flow model at 8 × 8 pixels resolution. The 16 × 16 and 32 × 32 pixels resolutions were tested using four pixels, each with a size identical to that of the single pixel flow model at a resolution of 16 × 16 (23.77 mm²), placed at coordinates (-4,3), (3,3), (-4,-4), and (3,-4). Figure 5.5 shows the pixel locations and Figure 5.6 shows the sensors output (projection data) for the 16 × 16 and 32 × 32 pixels resolutions, also calculated as elaborated in sub-section 3.3.2.

Figure 5.3: Pixel coordinates P8(-2,2), P8(1,2), P8(-2,-2), and P8(1,-2) on the image plane (colour scale 0 V to 5 V)

Figure 5.4: Sensors output (projection data) for the multiple pixels flow model at 8 × 8 pixels resolution (voltage amplitude in volts versus receiver number, upstream and downstream)

Figure 5.5: Pixel coordinates P16(-4,3), P16(3,3), P16(-4,-4), and P16(3,-4) on the image plane (colour scale 0 V to 5 V)

Figure 5.6: Sensors output (projection data) for the multiple pixels flow model at 16 × 16 and 32 × 32 pixels resolutions (voltage amplitude in volts versus receiver number, upstream and downstream)

5.1.3 Half Flow Model

The left-hand side of the measurement area is filled with solid particles as shown in Figure 5.7. Figure 5.8 shows the corresponding sensors output (projection data), computed as elaborated in sub-section 3.3.2.

Figure 5.7: Left-hand side of the pipe filled with solid particles (colour scale 0 V to 5 V)

Figure 5.8: Sensors output (projection data) for the half flow model (voltage amplitude in volts versus receiver number, upstream and downstream)

5.1.4 Full Flow Model

In this model, the whole pipe contains solid particles, as shown in Figure 5.9, so that no light can penetrate the flow medium. It produces sensors output (projection data) equal to the reference voltage, as shown in Figure 5.10.

Figure 5.9: The pipe filled with solid particles (colour scale 0 V to 5 V)

Figure 5.10: Sensors output (projection data) for the full flow model (voltage amplitude in volts versus receiver number, upstream and downstream)

5.2 Image Reconstruction Algorithm for Flow Models

The projection data resulting from the absorption factor function of each flow model discussed in Section 5.1 were processed using several image reconstruction algorithms, as listed below:

i. LBP (Linear Back Projection) (Section 3.5).
ii. SFLBP (Spatial Filtered Linear Back Projection) (sub-section 3.5.1).
iii. FFLBPi (Frequency Filtered Linear Back Projection - Ideal) (sub-section 3.5.2).
iv. FFLBPh (Frequency Filtered Linear Back Projection - Hamming) (sub-section 3.5.2).
v. CFLBPi+s (Combined Filtered Linear Back Projection - Ideal + Spatial) (sub-section 3.5.3).
vi. CFLBPh+s (Combined Filtered Linear Back Projection - Hamming + Spatial) (sub-section 3.5.3).
vii. HLBP (Hybrid Linear Back Projection) (sub-section 3.5.4).
viii. HSFLBP (Hybrid Spatial Filtered Linear Back Projection) (sub-section 3.5.4).
ix. HFFLBPi (Hybrid Frequency Filtered Linear Back Projection - Ideal) (sub-section 3.5.4).
x. HFFLBPh (Hybrid Frequency Filtered Linear Back Projection - Hamming) (sub-section 3.5.4).
xi. HCFLBPi+s (Hybrid Combined Filtered Linear Back Projection - Ideal + Spatial) (sub-section 3.5.4).
xii. HCFLBPh+s (Hybrid Combined Filtered Linear Back Projection - Hamming + Spatial) (sub-section 3.5.4).

NMSE (normalized mean square error) provides quantitative information regarding the spreading effect of objects in the reconstructed image (Chan, 2003). A high NMSE means that the error between the reconstructed image and the flow model image is large; conversely, the lower the NMSE percentage, the closer the reconstructed image is to the flow model. PSNR (peak signal-to-noise ratio) is the square of the maximum value of the reconstructed image divided by the sum of squared differences between the reconstructed image and the flow model, expressed on a logarithmic scale. A higher PSNR value therefore indicates a better reconstructed image.

The processing time required by each image reconstruction algorithm is measured by determining the period of one programming instruction cycle (which depends on the processor speed used) and multiplying it by the number of cycles needed to complete the subroutine containing the image reconstruction code. In this project the measured time represents the period required to reconstruct two image frames (upstream and downstream). Meanwhile, the quantitative measurements comprise the maximum value Vmax, which represents the maximum value at a spot, and C (concentration percentage), which represents the percentage of material distributed over the flow cross-section area (Chan, 2003). C can be expressed as:

$$C = \frac{\displaystyle\sum_{x=-n/2}^{n/2-1}\;\sum_{y=-n/2}^{n/2-1}\hat{f}(x,y)}{V_{ref}\times T_{op}} \qquad (5.1)$$

where:
$\hat{f}(x,y)$ = approximation of the object function in volts.
$n$ = image resolution.
$V_{ref}$ = the predicted maximum value when the sensor is fully blocked by an object, equal to 5 V.
$T_{op}$ = total number of pixels covered by the infra-red light from all infra-red transmitters.
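A minimal sketch of Equation 5.1 is given below: it computes the concentration percentage C from a reconstructed image and the pixel count Top (illustrative Python/NumPy; the full flow example and the assumption that the four corner pixels of the 8 × 8 grid lie outside the illuminated area are mine, chosen so that Top = 60 as in Table 5.1).

```python
import numpy as np

V_REF = 5.0     # predicted maximum value when a sensor is fully blocked (volts)

def concentration(f_hat, t_op):
    """Concentration percentage C (Eq. 5.1) from the reconstructed image
    f_hat (volts per pixel) and Top, the number of pixels covered by the
    infra-red views. The result is expressed as a percentage."""
    return 100.0 * f_hat.sum() / (V_REF * t_op)

# Example: full flow model at 8 x 8, assuming 60 of 64 pixels are illuminated
# (corner pixels excluded) and every illuminated pixel reconstructs to 5 V.
f_hat = np.zeros((8, 8))
mask = np.ones((8, 8), dtype=bool)
mask[0, 0] = mask[0, -1] = mask[-1, 0] = mask[-1, -1] = False
f_hat[mask] = 5.0
print(concentration(f_hat, t_op=60))   # -> 100.0
```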

As expounded by Chan (2003), the values of Vmax and C can be predicted for each model. Table 5.1 shows the predicted values of Vmax and C for each of the above flow models. These values are useful in evaluating the performance of each image reconstruction algorithm.

Table 5.1: Expected values of Vmax, C and Top for each flow model

Model Type                   Resolution   Vmax     C        Top
Single Pixel Flow Model      8 × 8        5.0 V    1.67%    60
                             16 × 16      5.0 V    0.45%    220
                             32 × 32      3.10 V   0.12%    856
Multiple Pixels Flow Model   8 × 8        5.0 V    6.67%    60
                             16 × 16      5.0 V    1.82%    220
                             32 × 32      5.0 V    1.87%    856
Half Flow Model              8 × 8        5.0 V    50.00%   60
                             16 × 16      5.0 V    50.00%   220
                             32 × 32      5.0 V    50.00%   856
Full Flow Model              8 × 8        5.0 V    100%     60
                             16 × 16      5.0 V    100%     220
                             32 × 32      5.0 V    100%     856


5.2.1 Results of Reconstructed Model Images

Figures 5.11, 5.12, 5.13, and 5.14 respectively show the results of each image reconstruction algorithm for each flow model at a resolution of 32 × 32 pixels, whereas the results for the 8 × 8 and 16 × 16 pixels resolutions are provided in Appendix D.I. As a whole, it can be observed how the expected projection data values and the projection data function (x', φ) change their distribution over the image plane at each resolution. This affects the choice of image reconstruction algorithm, which is discussed in the following sub-sections.


Figure 5.11: Results obtained using various image reconstruction algorithms for single pixel flow models at a resolution of 32 × 32 pixels (colour scale 0 V to 5 V). The metrics for each algorithm are:

(i) LBP: NMSE 801.9%, C 4.22%, PT 5.3 ms, PSNR 16.90 dB, Vmax 3.097 V
(ii) SFLBP: NMSE 491.6%, C 4.17%, PT 6.09 ms, PSNR 16.13 dB, Vmax 2.22 V
(iii) FFLBPi: NMSE 272.2%, C 3.17%, PT 6.141 ms, PSNR 13.37 dB, Vmax 1.20 V
(iv) FFLBPh: NMSE 201.1%, C 2.43%, PT 6.141 ms, PSNR 13.18 dB, Vmax 1.0 V
(v) CFLBPi+s: NMSE 240.6%, C 3.14%, PT 7.67 ms, PSNR 12.21 dB, Vmax 0.99 V
(vi) CFLBPh+s: NMSE 179.0%, C 2.41%, PT 7.67 ms, PSNR 11.93 dB, Vmax 0.825 V
(vii) HLBP: NMSE 173.1%, C 0.14%, PT 0.36 ms, PSNR 23.05 dB, Vmax 3.10 V
(viii) HSFLBP: NMSE 86.67%, C 0.14%, PT 1.44 ms, PSNR 13.29 dB, Vmax 0.67 V
(ix) HFFLBPi: NMSE 0%, C 0%, PT 0.56 ms, PSNR 0 dB, Vmax 0.0 V
(x) HFFLBPh: NMSE 0%, C 0%, PT 0.56 ms, PSNR 0 dB, Vmax 0 V
(xi) HCFLBPi+s: NMSE 0%, C 0%, PT 1.28 ms, PSNR 0 dB, Vmax 0 V
(xii) HCFLBPh+s: NMSE 0%, C 0%, PT 1.28 ms, PSNR 0 dB, Vmax 0 V

Figure 5.12: Results obtained using various image reconstruction algorithms for multiple pixels flow models at a resolution of 32 × 32 pixels (colour scale 0 V to 5 V). The metrics for each algorithm are:

(i) LBP: NMSE 319.3%, C 18.98%, PT 5.3 ms, PSNR 13.02 dB, Vmax 4.99 V
(ii) SFLBP: NMSE 247.0%, C 18.80%, PT 6.09 ms, PSNR 11.56 dB, Vmax 3.72 V
(iii) FFLBPi: NMSE 161.4%, C 14.19%, PT 6.141 ms, PSNR 8.53 dB, Vmax 2.12 V
(iv) FFLBPh: NMSE 118.1%, C 10.33%, PT 6.14 ms, PSNR 8.26 dB, Vmax 1.757 V
(v) CFLBPi+s: NMSE 157.7%, C 14.19%, PT 7.668 ms, PSNR 7.10 dB, Vmax 1.78 V
(vi) CFLBPh+s: NMSE 115.9%, C 10.26%, PT 7.668 ms, PSNR 6.80 dB, Vmax 1.47 V
(vii) HLBP: NMSE 25.08%, C 1.38%, PT 0.42 ms, PSNR 24.07 dB, Vmax 4.99 V
(viii) HSFLBP: NMSE 52.15%, C 1.38%, PT 1.64 ms, PSNR 11.25 dB, Vmax 1.65 V
(ix) HFFLBPi: NMSE 51.30%, C 0.52%, PT 0.75 ms, PSNR 13.51 dB, Vmax 1.76 V
(x) HFFLBPh: NMSE 57.79%, C 0.52%, PT 0.75 ms, PSNR 11.36 dB, Vmax 1.76 V
(xi) HCFLBPi+s: NMSE 75.84%, C 0.52%, PT 2.81 ms, PSNR 1.97 dB, Vmax 0.683 V
(xii) HCFLBPh+s: NMSE 79.50%, C 0.52%, PT 2.81 ms, PSNR 0.13 dB, Vmax 0.57 V

Figure 5.13: Results obtained using various image reconstruction algorithms for half flow models at a resolution of 32 × 32 pixels (colour scale 0 V to 5 V). The metrics for each algorithm are:

(i) LBP: NMSE 42.56%, C 80.53%, PT 5.3 ms, PSNR 7.50 dB, Vmax 4.99 V
(ii) SFLBP: NMSE 42.29%, C 80.11%, PT 6.089 ms, PSNR 7.52 dB, Vmax 4.99 V
(iii) FFLBPi: NMSE 33.11%, C 60.28%, PT 6.141 ms, PSNR 6.11 dB, Vmax 3.760 V
(iv) FFLBPh: NMSE 35.21%, C 43.72%, PT 6.141 ms, PSNR 3.35 dB, Vmax 2.82 V
(v) CFLBPi+s: NMSE 33.49%, C 59.96%, PT 7.668 ms, PSNR 6.05 dB, Vmax 3.75 V
(vi) CFLBPh+s: NMSE 36.00%, C 43.49%, PT 7.67 ms, PSNR 3.13 dB, Vmax 2.78 V
(vii) HLBP: NMSE 1.35%, C 49.22%, PT 2.45 ms, PSNR 22.49 dB, Vmax 4.99 V
(viii) HSFLBP: NMSE 3.23%, C 48.94%, PT 3.62 ms, PSNR 18.69 dB, Vmax 4.99 V
(ix) HFFLBPi: NMSE 8.69%, C 35.96%, PT 2.75 ms, PSNR 11.9 dB, Vmax 3.76 V
(x) HFFLBPh: NMSE 22.86%, C 26.45%, PT 2.75 ms, PSNR 5.23 dB, Vmax 2.82 V
(xi) HCFLBPi+s: NMSE 11.27%, C 35.75%, PT 4.76 ms, PSNR 10.78 dB, Vmax 3.74 V
(xii) HCFLBPh+s: NMSE 25.4%, C 26.3%, PT 4.76 ms, PSNR 4.65 dB, Vmax 2.78 V

Figure 5.14: Results obtained using various image reconstruction algorithms for full flow models at a resolution of 32 × 32 pixels (colour scale 0 V to 5 V). The metrics for each algorithm are:

(i) LBP: NMSE 0.85%, C 98.99%, PT 5.3 ms, PSNR 21.49 dB, Vmax 4.99 V
(ii) SFLBP: NMSE 1.40%, C 98.98%, PT 6.09 ms, PSNR 19.31 dB, Vmax 4.99 V
(iii) FFLBPi: NMSE 7.11%, C 74.24%, PT 6.141 ms, PSNR 9.76 dB, Vmax 3.75 V
(iv) FFLBPh: NMSE 21.81%, C 53.73%, PT 6.141 ms, PSNR 2.05 dB, Vmax 2.71 V
(v) CFLBPi+s: NMSE 8.10%, C 73.86%, PT 7.668 ms, PSNR 9.19 dB, Vmax 3.75 V
(vi) CFLBPh+s: NMSE 22.88%, C 53.46%, PT 7.67 ms, PSNR 1.85 dB, Vmax 2.71 V
(vii) HLBP: NMSE 0.85%, C 98.99%, PT 5.3 ms, PSNR 21.49 dB, Vmax 4.99 V
(viii) HSFLBP: NMSE 1.40%, C 98.48%, PT 6.09 ms, PSNR 19.31 dB, Vmax 4.99 V
(ix) HFFLBPi: NMSE 7.11%, C 74.24%, PT 6.14 ms, PSNR 9.76 dB, Vmax 3.75 V
(x) HFFLBPh: NMSE 21.81%, C 53.73%, PT 6.14 ms, PSNR 2.05 dB, Vmax 2.71 V
(xi) HCFLBPi+s: NMSE 8.10%, C 73.86%, PT 7.668 ms, PSNR 9.19 dB, Vmax 3.75 V
(xii) HCFLBPh+s: NMSE 22.88%, C 53.46%, PT 7.67 ms, PSNR 1.85 dB, Vmax 2.71 V

5.2.1.1 Measurement of C (Concentration Percentage)

Referring to Table 5.1, it can be expected that at each resolution the value of C increases linearly with the number of objects representing the flow. The graphs in Figure 5.15 show the C values for each flow model, resulting from the use of the various image reconstruction algorithms at each resolution, based on the data tabulated in Appendix D.II. As a whole, the larger the number of flow modeling objects, the closer the obtained value of C is to the expected value; this is an effect of the algorithm used, i.e. the linear back projection (LBP) algorithm. It can be observed that the largest difference between the ideal value of 1.67% (Table 5.1) and the computed value of C of 4.22% (Figure 5.11(i)) occurred for the single pixel flow model at the 8 × 8 pixels resolution. This is because the distribution of the infra-red light optical absorption (projection data) along the whole projection path is taken into account in the computation of C, whereas the object lies only on the intersection of the projection paths. However, the value of C can be brought close to the expected value by executing the hybrid algorithms.

Figure 5.15: Graphs of the C (%) readings for the reconstructed flow model images (pure and hybrid algorithms)

For the flow models at each resolution, with the exception of the single pixel flow model at the 8 × 8 pixels resolution, the values of C obtained from the algorithms listed in Section 5.2, which are reduced relative to the expected value, approach the expected value when the hybrid algorithms are used. If filtering in the spatial domain is used, the C value does not change significantly, but any algorithm involving filtering in the frequency domain produces a significant difference in the value of C, resulting from a reduction in the projection data values.

5.2.1.2 Measurement of NMSE

Values obtained from the NMSE measurement are inversely proportional to the effectiveness of the algorithms used. Figure 5.16 shows graphs of the NMSE measurement for each flow model, plotted using the data tabulated in Appendix D.II, resulting from the use of the various image reconstruction algorithms at each resolution. This value is obtained by comparing the expected image with the reconstructed image. As a whole, the NMSE decreases as the resolution and the number of flow objects increase.

Figure 5.16: Graphs of the NMSE readings (%) for the reconstructed flow model images (pure and hybrid algorithms)

From Figure 5.16 it can be seen that the NMSE is reduced drastically when the hybrid algorithms are used, because the hybrid algorithms remove the smearing effect. The HLBP algorithm can provide an NMSE reading of zero for each flow model and resolution, which shows that the reconstructed image matches the model accurately. Apart from the hybrid algorithms, the algorithm that gives low and optimum NMSE values at the 32 × 32 pixels resolution, as depicted in Figure 5.12, is the CFLBPi+s reconstruction algorithm, owing to the drop in projection data values when the frequency domain filter is used.

It can be observed from Figure 5.12 that for the multiple pixels flow model the behaviour of the NMSE is the reverse of that discussed previously, but this is to be expected when a single object is involved, because the smallest single object size that can be effectively detected by the projections is (78 mm/16)² = 23.77 mm². However, the cross-section of the single object used for the single pixel flow model at a resolution of 32 × 32 pixels is 5.94 mm². These problems can be disregarded because the average size of the objects to be measured in this project is 10.89 mm² (3.3 mm × 3.3 mm), and the flows involved in the gravity flow rig are mainly flows other than single pixel flow and multiple pixels flow.

5.2.1.3 Measurement of PSNR

The PSNR measurement has the opposite sense to the NMSE. The algorithm which produces the highest value of PSNR (in dB) yields a reconstructed flow image closest to the actual flow model image. Figure 5.17 shows the values of PSNR obtained for each model, plotted based on the data tabulated in Appendix D.II.

Figure 5.17: Graphs of PSNR readings (dB) for the reconstructed flow model images (pure and hybrid algorithms)

As a whole, the highest value of PSNR for each flow model and resolution occurred when using the HLBP algorithm. This algorithm is able to recover the actual image through the separation of the actual signal from the noise signal. At a resolution of 8 × 8 pixels, the HLBP algorithm gave a consistent value of PSNR, but the 8 × 8 pixels resolution is not suitable for comparing flow rates. For the 16 × 16 pixels resolution, the results obtained using the HLBP algorithm show a drastic reduction between the multiple pixels flow model and the half flow model. At a resolution of 32 × 32 pixels, the results obtained using the HLBP algorithm show only a small difference. Among the algorithms other than the hybrid algorithms, the highest value of PSNR occurred for the LBP algorithm, and only for the full flow model. When the results of the CFLBPi+s algorithm are observed at a resolution of 32 × 32 pixels, the computed value of PSNR for each flow model is consistent and the algorithm is capable of giving a clearer image.

5.2.1.4 Measurement of Vmax

Figure 5.18 shows the graphs of the Vmax values for each model, based on the data tabulated in Appendix D.II.

Figure 5.18: Graphs of Vmax readings (volts) for the reconstructed flow model images (pure and hybrid algorithms)

The values of Vmax indicate that the higher the resolution and the more projection angles used, the higher the value of Vmax obtained. In this project, the maximum value of Vmax occurred only for the algorithms not involving any filters in the spatial or frequency domains. This is because in spatial filtering each pixel in the image is summed and averaged with its neighbours, depending on the kernel used, while in frequency domain filtering the projection data values are reduced by up to 65%. It can be observed that at the 32 × 32 pixels resolution, i.e. the highest resolution, the value of Vmax increases as the size of the flow model increases; for example, for the single pixel flow model using the LBP algorithm the value of Vmax is 3.097 V (Figure 5.11(i)), while for the multiple pixels flow model using the LBP algorithm the value of Vmax is 4.999 V (Figure 5.12(i)).

5.2.1.5 Single Pixel Flow Model

Referring to the image at a resolution of 8 × 8 pixels (Appendix D.I), a single pixel flow object cannot be properly detected due to the limited total number of projections and the overlapping around the single pixel object. At the 16 × 16 pixels resolution the single pixel flow is more distinctive. For the 32 × 32 pixels resolution the size of the object tested is 5.94 mm², but this system will only give a clear image of a single pixel flow if the minimum size of the single pixel object is 23.77 mm², where the single pixel flow produces at least a matching maximum projection data value at every projection angle. However, the HLBP algorithm is capable of giving a pronounced single pixel flow image because the smearing effect along the projection data path is cancelled out.

Generally, at a resolution of 16 × 16 pixels (Appendix D.I), the most suitable single object size to be detected is 4.875 × 4.875 = 23.77 mm², and it can be observed that the NMSE values are among the lowest and the PSNR values the highest when the CFLBPi+s algorithm is used. As for the hybrid algorithms, HLBP is capable of giving a zero value of NMSE and the highest value of the PSNR measurement.

5.2.1.6 Multiple Pixels Flow Model

In this part the algorithms were tested using four single flow objects. As analyzed for the single pixel flow model, the size of the objects used at a resolution of 32 × 32 pixels is 23.77 mm². At the 8 × 8 pixels resolution (Appendix D.I(a-i)), the objects overlapped when the LBP algorithm was used because too many pixels are occupied by the infra-red views. The overlapping effect can be reduced if the projection data is filtered using the FFLBPi+s algorithm, which subsequently gives a lower value of NMSE and a higher value of PSNR; however, the NMSE values are still too high and the PSNR values too low. The HLBP algorithm is also suitable at the 8 × 8 pixels resolution for determining the locations of multiple objects, since the overlapping effect along the projection data path can be eliminated. At resolutions of 16 × 16 (Appendix D.I(f)) and 32 × 32 pixels (Figure 5.12), the multiple pixels flow objects in the image plane approximate the actual object flow. Both resolutions display a reduction in NMSE and an increase in PSNR for the same algorithm and the same model. The only difference is that at a resolution of 32 × 32 pixels, the pure CFLBPi+s algorithm gives a low value of NMSE, a high value of PSNR, and a smooth multiple pixels flow image, in which the smearing effects around the objects (as observed for the other pure algorithms) are reduced. The HLBP algorithm has the lowest value of NMSE and the highest value of PSNR compared to the other hybrid algorithms.

5.2.1.7 Half and Full Flow Models

For the half flow, it is clear that the results at the 8 × 8 pixels resolution using the pure LBP algorithm (Appendix D.I) do not accurately describe the true half flow, whereas at the 16 × 16 pixels resolution (Appendix D.I) and the 32 × 32 pixels resolution (Figure 5.13) the images are more distinctive; however, a mirror effect occurs on the right-hand half of the image, and this causes an increase in the value of C compared to the expected value (Table 5.1). The use of SFLBP for the half flow model at a resolution of 8 × 8 pixels does not help much, whereas at resolutions of 16 × 16 and 32 × 32 pixels the image spot effect on the right-hand side produced by the LBP algorithm is removed, although this changes the values around the boundary of the ideal half flow model (right-hand side). As a whole, for the half flow model at a resolution of 32 × 32 pixels, the CFLBPi+s algorithm provides a smooth half flow image, a low value of NMSE, and a high value of PSNR; however, there is a drop of around 25% in the value of Vmax compared to the expected value. The HLBP algorithm provides a distinctive half flow image, lower values of NMSE, higher values of PSNR, and a value of Vmax equal to the expected value at a resolution of 32 × 32 pixels. For the full flow model, only the pure LBP and hybrid LBP algorithms are suitable, because all the projection data take the full value of 5 V, and the use of algorithms other than those stated above only degrades the quality and quantity of the full flow model images.

5.3 Comparison of Algorithms Performance

Figure 5.19 shows the graphs of PT (processing time) readings plotted from the values tabulated in Appendix D.II. Table 5.2 shows a comparison of the time taken by the various algorithms. In this project the image display used a Pentium 4 computer with a 1.8 GHz processor and an NVIDIA GeForce MX/MX-400 display card. The measurement involves the time required to draw two picture frames (upstream and downstream).


Figure 5.19: Graphs of PT(s) readings for reconstructed flow model images

Table 5.2: Comparison of the time taken for each algorithm (tick base frequency = 3.57945 MHz)

| Algorithm | Time (ms) min | Time (ms) max | Tick min | Tick max |
|---|---|---|---|---|
| Pure LBP | 0.666 | 5.300 | 2384 | 18974 |
| Pure Spatial Domain Filter | 0.737 | 6.090 | 2638 | 21802 |
| Pure Freq Domain Filter | 0.934 | 6.141 | 3344 | 21985 |
| Pure Combined Domain Filter | 1.019 | 7.668 | 3648 | 27451 |
| Hybrid LBP | 0.012 | 5.300 | 43 | 18974 |
| Hybrid Spatial Domain Filter | 0.202 | 6.090 | 723 | 21802 |
| Hybrid Freq Domain Filter | 0.354 | 6.141 | 1267 | 21985 |
| Hybrid Combined Domain Filter | 0.530 | 7.668 | 1897 | 27451 |

As a whole, the higher the resolution the longer the time required to draw an image. The shortest time occurred at a resolution of 8 × 8 pixels using the HLBP algorithm, and the longest times occurred for all flow models at a resolution of 32 × 32 pixels using the CFLBPi+s algorithm and for the full flow model using the HCFLBPi+s algorithm. The hybrid algorithms shorten the processing time because only pixels which fulfil the required conditions are drawn, whereas the other pixels are assumed to be zero and are ignored; the exception is the full flow model, in which all projection data reach their maximum values.
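The speed-up of the hybrid algorithms described above can be pictured as back-projecting only the views whose projection data satisfy a threshold condition, so that pixels receiving no qualifying view stay at zero and are never drawn. The sketch below illustrates the idea only; the threshold value, array shapes, and names are assumptions, and it does not reproduce the thesis' actual HLBP implementation.

```python
import numpy as np

def thresholded_back_projection(projections, sensitivity_maps, threshold=0.1):
    """Back-project only the views whose measured projection exceeds a threshold.

    projections      : 1-D array of sensor readings, one value per infra-red view
    sensitivity_maps : array of shape (n_views, 32, 32), one weighting map per view
    """
    image = np.zeros(sensitivity_maps.shape[1:])
    active = projections > threshold                 # views worth drawing
    for value, smap in zip(projections[active], sensitivity_maps[active]):
        image += value * smap                        # plain linear back-projection
    return image

# Usage with dummy data: 64 views on a 32 x 32 image plane, only one active view
maps = np.random.rand(64, 32, 32)
readings = np.zeros(64)
readings[10] = 4.5
print(thresholded_back_projection(readings, maps).max())
```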

5.4 Conclusions from Results

Based on the above results, the image reconstruction algorithms selected for real-time analysis of plastic bead flow are:

i. CFLBPi+s at a resolution of 32 × 32 pixels.

ii. HLBP at a resolution of 32 × 32 pixels.

This selection is supported by the values of NMSE and PSNR and the image quality discussed above. Although the first algorithm has the longest processing time and the least precision when detecting a single object (refer to Section 5.2.1.5), an assumption has been made that most of the solid flow in the gravity flow rig lies between multiple pixels flow and full flow.

5.5 Summary

Three image resolutions were selected for testing, i.e. 8 × 8, 16 × 16, and 32 × 32 pixels. The images were tested with four flow models and various image reconstruction algorithms, from which the quality and quantity measurements were obtained. The results of the quality and quantity measurements were analyzed to determine the best resolution and reconstruction algorithms to be applied. The CFLBPi+s and HLBP algorithms, both at a resolution of 32 × 32 pixels, were selected based on the discussion of the quantity and quality measurements.

CHAPTER 6

CONCENTRATION MEASUREMENT AND CONCENTRATION PROFILES

6.1 Introduction

The main aim of this chapter is to discuss the utilization of infra-red sensors to measure the concentration of objects in the cross-section of a flow inside an industrial pipeline and to display the concentration profile in two dimensions. The measurement made use of an array of upstream and downstream sensors in real time. The main sampling frequency chosen is 88.03 Hz, based on the ability of the data acquisition system, which is the time required by the hardware to convert a series of analog signals in a buffer in at least 1 second (Keithley, 2001). Concentration measurements were performed and compared with the expected values, and the two-dimensional concentration images were then acquired from these measurements.

6.2 Concentration Measurement

Concentration measurements were obtained using 128 infra-red sensors (64 upstream and 64 downstream) by taking output voltage readings for two types of flow, i.e. static objects and flowing plastic beads, which allows the theoretical measurements of the flow models to be compared with the actual measurements carried out by the infra-red sensing arrays. Table 6.1 shows the various measurement categories used.


Table 6.1: Types of concentration measurement

| Concentration measurement | Type of flow | Total number of measurements |
|---|---|---|
| Single flow | A static object (diameter = 5 mm) | 30 cycles using 254 buffers (127 upstream and 127 downstream), 1 flow rate |
| Multiple flow | Four static objects (diameter = 5 mm) | 30 cycles using 254 buffers (127 upstream and 127 downstream), 1 flow rate |
| Half flow | Plastic bead flow | 30 cycles using 254 buffers (127 upstream and 127 downstream) for 10 different flow rates using a flow rig |
| Full flow | Plastic bead flow | 30 cycles using 254 buffers (127 upstream and 127 downstream) for 10 flow rates using a flow rig |

As elaborated in sub-section 4.2.4.3, the selected main sampling frequency is 88.03 Hz and a total of 30 cycles of readings were taken, in which each cycle contains 254 buffers and each buffer contains 64 voltage reading samples. Figure 6.1 shows how one cycle of measurement contains 254 buffers and each buffer contains 64 voltage reading samples.
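One convenient way to picture the data set implied by Figure 6.1 is as a three-dimensional array of 30 cycles × 254 buffers × 64 samples. The sketch below only illustrates this indexing; the loading step is hypothetical, and the assumption that upstream and downstream buffers alternate within a cycle follows from the 127 + 127 split described above.

```python
import numpy as np

# Hypothetical capture: 30 cycles x 254 buffers x 64 voltage samples per buffer
raw = np.zeros((30, 254, 64))          # in practice this would be loaded from the DAQ file

# Assuming upstream and downstream buffers alternate within each cycle
upstream = raw[:, 0::2, :]             # 127 upstream buffers per cycle
downstream = raw[:, 1::2, :]           # 127 downstream buffers per cycle
print(upstream.shape, downstream.shape)   # (30, 127, 64) each
```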

Figure 6.1: One cycle of measurement

The objects used to represent a static object (a single pixel flow) and the multiple flow are iron rods with a diameter of 5.0 mm. For the half flow and the full flow, the sensor readings were taken from plastic beads dropped into the flow rig at flow rates ranging from 27 gs-1 to 126 gs-1. The predicted values have been scaled to give the same mean flow rate for each concentration measurement as those discussed in Chapter 5, and are shown alongside the measured values for comparison (Rahmat, 1996). The calculation procedure for the scaling factor is as follows (Rahmat, 1996):

i. Find the total value of the sensors' sensitivity, $S_T$:

$$S_T = \sum_{x'=-8}^{7} \; \sum_{\phi = 0^{\circ}, 45^{\circ}, 90^{\circ}, 135^{\circ}} \left( V_{x',\phi}(x,y) \cdot N(x',\phi) \right) \tag{6.1}$$

where $N(x',\phi) = \dfrac{V_{mTx,Rx}(x',\phi)}{5}$ is the normalized sensor reading during flow conditions for each flow model (Chapter 5), and $V_{x',\phi}(x,y)$ is the sensitivity map of the infra-red views before the exemption processes.

ii. Find the total voltage reading from the upstream sensors, $\overline{U}_T$:

$$\overline{U}_T = \sum_{x'=-8}^{7} \; \sum_{\phi = 0^{\circ}, 45^{\circ}, 90^{\circ}, 135^{\circ}} \frac{1}{30} \sum_{n=0}^{29} \sum_{b=0}^{126} \left( Vs_{Tux,Rux}(x',\phi)_{n,b} \cdot N(x',\phi) \right) \tag{6.2}$$

where n is the cycle number and b is the buffer number.

iii. Find the total voltage reading from the downstream sensors, $\overline{D}_T$:

$$\overline{D}_T = \sum_{x'=-8}^{7} \; \sum_{\phi = 0^{\circ}, 45^{\circ}, 90^{\circ}, 135^{\circ}} \frac{1}{30} \sum_{n=0}^{29} \sum_{b=0}^{126} \left( Vs_{Tdx,Rdx}(x',\phi)_{n,b} \cdot N(x',\phi) \right) \tag{6.3}$$

iv. Find the upstream scaling factor, $K_U$:

$$K_U = \frac{\overline{U}_T}{S_T} \tag{6.4}$$

v. Find the downstream scaling factor, $K_D$:

$$K_D = \frac{\overline{D}_T}{S_T} \tag{6.5}$$

vi. Find the overall scaling factor, K:

$$K = \frac{K_U + K_D}{2} \tag{6.6}$$

vii. The expected value, rescaled:

$$P_{Tx,Rx}(x',\phi) = \overline{S}_T \cdot N(x',\phi) \cdot K \tag{6.7}$$

where:

$$\overline{S}_T = \frac{S_T}{\sum N(x',\phi)}$$
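A compact restatement of the scaling procedure in Equations 6.1 to 6.7 is sketched below (NumPy; the variable names, array shapes, and the ordering of the 16 sensors × 4 projection angles are assumptions made purely for illustration).

```python
import numpy as np

def scaling_factors(N, V, Vs_up, Vs_down):
    """Scaling procedure of Eqs. 6.1-6.7.

    N       : (16, 4) normalized model readings N(x', phi)
    V       : (16, 4) sensitivity-map values V_{x',phi}
    Vs_up   : (30, 127, 16, 4) upstream readings over cycles and buffers
    Vs_down : (30, 127, 16, 4) downstream readings over cycles and buffers
    """
    S_T = np.sum(V * N)                        # Eq. 6.1
    U_T = np.sum(Vs_up * N) / 30.0             # Eq. 6.2
    D_T = np.sum(Vs_down * N) / 30.0           # Eq. 6.3
    K_U, K_D = U_T / S_T, D_T / S_T            # Eqs. 6.4 and 6.5
    K = (K_U + K_D) / 2.0                      # Eq. 6.6
    S_T_bar = S_T / np.sum(N)
    P = S_T_bar * N * K                        # Eq. 6.7: rescaled expected values
    return K_U, K_D, K, P
```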

6.2.1 Experimental Results for Single Pixel and Multiple Pixels Flow

In this section a set of 30 cycles was acquired, in which each cycle contains 254 buffers (127 upstream and 127 downstream) and each buffer contains 64 samples (sensors 0 to 63). Figure 6.2 shows the position of an iron rod representing a single pixel flow, with a display resolution of 32 × 32 pixels.

Figure 6.2: The location of an iron rod representing a single pixel flow (at pixel P32((-8,-7),(7,6)))

Figure 6.3 represents the locations of four iron rods which are used to represent a multiple flow in which the image resolution is 32 × 32 pixels.

Figure 6.3: The locations of four iron rods representing multiple pixels flow (at pixels P32((-8,-7),(7,6)), P32((7,6),(7,6)), P32((-8,-7),(-7,-8)), and P32((-8,-7),(-7,-8)))

Figure 6.4 shows a graph of the total average values for the upstream and downstream sensors based on 30 measurement cycles, before and after normalization with the single pixel flow modeling rate values $N(x',\phi)$, where each value is obtained from the following equations:

$$As_{Tx,Rx}(x',\phi) = \frac{1}{30} \sum_{n=0}^{29} \sum_{b=0}^{126} \frac{Vs_{Tux,Rux}(x',\phi)_{n,b} + Vs_{Tdx,Rdx}(x',\phi)_{n,b}}{2} \tag{6.8}$$

$$\overline{As}_{Tx,Rx}(x',\phi) = \frac{1}{30} \sum_{n=0}^{29} \left( \sum_{b=0}^{126} \frac{Vs_{Tux,Rux}(x',\phi)_{n,b} + Vs_{Tdx,Rdx}(x',\phi)_{n,b}}{2} \cdot N(x',\phi) \right) \tag{6.9}$$

where $As_{Tx,Rx}(x',\phi)$ is the total average value for the upstream and downstream sensors based on 30 measurement cycles before normalization, and $\overline{As}_{Tx,Rx}(x',\phi)$ is the corresponding value after normalization.

Figure 6.5 shows a graph of the total average values for each sensor before normalization (upstream and downstream) and the expected values for the single pixel flow, in which the total average value of each upstream and downstream sensor before normalization, based on 30 measurement cycles, is obtained from:

$$UT_{Tx,Rx}(x',\phi) = \frac{1}{30} \sum_{n=0}^{29} \sum_{b=0}^{126} Vs_{Tux,Rux}(x',\phi)_{n,b} \tag{6.10}$$

$$DT_{Tx,Rx}(x',\phi) = \frac{1}{30} \sum_{n=0}^{29} \sum_{b=0}^{126} Vs_{Tdx,Rdx}(x',\phi)_{n,b} \tag{6.11}$$

where $UT_{Tx,Rx}(x',\phi)$ and $DT_{Tx,Rx}(x',\phi)$ are the total average values for each upstream and downstream sensor, respectively, before normalization.

Figure 6.6 shows a graph of the total average values for each sensor after normalization (upstream and downstream) and the expected values for the single pixel flow, in which the total average values of the upstream and downstream sensors after normalization, based on 30 measurement cycles, are obtained from:

$$\overline{UT}_{Tx,Rx}(x',\phi) = \frac{1}{30} \sum_{n=0}^{29} \sum_{b=0}^{126} Vs_{Tux,Rux}(x',\phi)_{n,b} \times N(x',\phi) \tag{6.12}$$

$$\overline{DT}_{Tx,Rx}(x',\phi) = \frac{1}{30} \sum_{n=0}^{29} \sum_{b=0}^{126} Vs_{Tdx,Rdx}(x',\phi)_{n,b} \times N(x',\phi) \tag{6.13}$$

where $\overline{UT}_{Tx,Rx}(x',\phi)$ and $\overline{DT}_{Tx,Rx}(x',\phi)$ are the total average values for each upstream and downstream sensor, respectively, after normalization.

Figure 6.4: A graph representing the differences between $As_{Tx,Rx}(x',\phi)$ and $\overline{As}_{Tx,Rx}(x',\phi)$ for a single pixel flow measurement (voltage amplitude (V) versus receiver number, upstream and downstream)

Figure 6.5: A graph representing the differences between $UT_{Tx,Rx}(x',\phi)$ (upstream), $DT_{Tx,Rx}(x',\phi)$ (downstream), and $P_{Tx,Rx}(x',\phi)$ for a single pixel flow

Figure 6.6: A graph representing the differences between $\overline{UT}_{Tx,Rx}(x',\phi)$ (upstream), $\overline{DT}_{Tx,Rx}(x',\phi)$ (downstream), and $P_{Tx,Rx}(x',\phi)$ for a single pixel flow

From Figure 6.4 it can be seen that, on average, there are output readings of $As_{Tx,Rx}(x',\phi)$ to the left-hand side or the right-hand side of each $\overline{As}_{Tx,Rx}(x',\phi)$ voltage reading. This is probably because the position of the rod in the flow regime is not parallel with the distribution pipe. The data from Figure 6.6 ($\overline{UT}_{Tx,Rx}(x',\phi)$ and $\overline{DT}_{Tx,Rx}(x',\phi)$) are summed using Equations 6.2 and 6.3, while the data from Figure 6.5 ($UT_{Tx,Rx}(x',\phi)$ and $DT_{Tx,Rx}(x',\phi)$) are summed using Equations 6.14 and 6.15, respectively. These equations give the total voltage reading from each sensor array before and after normalization with the single flow modeling rates $N(x',\phi)$. The total voltage reading of the expected values is given by Equation 6.16. The error of each total voltage reading of the upstream and downstream sensors, before and after multiplication by $N(x',\phi)$, is then compared with the total expected sensor output based on

Equations 6.17, 6.18, 6.19 and 6.20. The results for K, KU, KD, and the error readings are tabulated in Table 6.2.

$$U_T = \sum_{x'=-8}^{7} \; \sum_{\phi = 0^{\circ}, 45^{\circ}, 90^{\circ}, 135^{\circ}} \frac{1}{30} \sum_{n=0}^{29} \sum_{b=0}^{126} Vs_{Tux,Rux}(x',\phi)_{n,b} \tag{6.14}$$

$$D_T = \sum_{x'=-8}^{7} \; \sum_{\phi = 0^{\circ}, 45^{\circ}, 90^{\circ}, 135^{\circ}} \frac{1}{30} \sum_{n=0}^{29} \sum_{b=0}^{126} Vs_{Tdx,Rdx}(x',\phi)_{n,b} \tag{6.15}$$

$$P_T = \sum_{x'=-8}^{7} \; \sum_{\phi = 0^{\circ}, 45^{\circ}, 90^{\circ}, 135^{\circ}} P_{Tx,Rx}(x',\phi) \tag{6.16}$$

$$e_U(\%) = \frac{U_T - P_T}{P_T} \times 100 \tag{6.17}$$

$$e_D(\%) = \frac{D_T - P_T}{P_T} \times 100 \tag{6.18}$$

$$\bar{e}_U(\%) = \frac{\overline{U}_T - P_T}{P_T} \times 100 \tag{6.19}$$

$$\bar{e}_D(\%) = \frac{\overline{D}_T - P_T}{P_T} \times 100 \tag{6.20}$$

where $U_T$ and $D_T$ are the total voltage readings from the upstream and downstream sensors before normalization, $P_T$ is the total voltage reading of the expected values, $e_U$ and $e_D$ are the upstream and downstream errors, and $\bar{e}_U$ and $\bar{e}_D$ are the normalized upstream and downstream errors.
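The error measures of Equations 6.17 to 6.20 reduce to simple percentage differences against the expected total; the small sketch below reproduces the single pixel flow values of Table 6.2 as a check (the function and variable names are illustrative only).

```python
def percentage_errors(u_t, d_t, u_t_norm, d_t_norm, p_t):
    """Eqs. 6.17-6.20: percentage errors of the measured totals against P_T."""
    err = lambda total: (total - p_t) / p_t * 100.0
    return err(u_t), err(d_t), err(u_t_norm), err(d_t_norm)

# Single pixel flow totals from Table 6.2 give roughly (212, 147, 84, 59) %
print(percentage_errors(4449, 3529, 2629, 2271, 1428))
```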

Table 6.2: Values of scaling factor, total voltage reading and error for a single pixel flow

| Flow | KU | KD | K | $U_T$ | $D_T$ | $\overline{U}_T$ | $\overline{D}_T$ | $P_T$ | $e_U$ (%) | $e_D$ (%) | $\bar{e}_U$ (%) | $\bar{e}_D$ (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Single flow | 0.1048 | 0.0905 | 0.0570 | 4449 | 3529 | 2629 | 2271 | 1428 | 212 | 147 | 84 | 59 |

From Figure 6.5, the actual upstream sensor readings show a consistent maximum reading whereas the actual downstream sensor readings do not. The results show that the errors between the upstream and the downstream readings ($e_U$ and $e_D$) differ significantly. The high errors ($e_U$ and $e_D$) are also due to the position of the iron rod. Figure 6.6 indicates that the normalized sensor readings which exceed the expected value correspond to the most active areas (sensors RX04, RX23, RX43, and RX61), and the errors ($\bar{e}_U$ and $\bar{e}_D$) show that the inconsistency in sensor readings between the upstream and downstream sensors is also due to the position of the iron rod. Figure 6.7 shows a graph of the total average values for the upstream and downstream sensors based on 30 measurement cycles, before and after normalization with the multiple pixels flow modeling rate values $N(x',\phi)$, in which each value is obtained from Equations 6.8 and 6.9. Figure 6.8 shows a graph of the total average values for each sensor before normalization (upstream and downstream) and the expected values of the multiple pixels flow, obtained from Equations 6.10 and 6.11, whereas Figure 6.9 is a graph of the total average values for each sensor after normalization (upstream and downstream) and the expected values for the multiple pixels flow, obtained from Equations 6.12 and 6.13.

In Figure 6.7 the situation is identical to that of the single pixel flow, in which there are output readings of $As_{Tx,Rx}(x',\phi)$ to the left-hand side or the right-hand side of each $\overline{As}_{Tx,Rx}(x',\phi)$ voltage reading. The reason for this is the same as for the single pixel flow. The values of K, KU, KD, and the error readings, based on the data obtained from Figures 6.8 and 6.9, are given in Table 6.3.

Table 6.3: Values of scaling factor, total voltage reading and error for multiple pixels flow

| Flow | KU | KD | K | $U_T$ | $D_T$ | $\overline{U}_T$ | $\overline{D}_T$ | $P_T$ | $e_U$ (%) | $e_D$ (%) | $\bar{e}_U$ (%) | $\bar{e}_D$ (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Multiple flow | 0.1048 | 0.0905 | 0.0570 | 9509 | 7665 | 5718 | 4765 | 5242 | 81 | 46 | 9 | -9 |

From Figure 6.8, the actual upstream and downstream sensor readings both show a consistent maximum reading, but the errors between upstream and downstream ($e_U$ and $e_D$) indicate a significant difference, as the total expected value $P_T$ is not between $U_T$ and $D_T$. Figure 6.9 and the errors ($\bar{e}_U$ and $\bar{e}_D$) indicate that the normalized sensor readings which exceed the expected value correspond to the most active areas (sensors RX04, RX12, RX18, RX23, RX43, and RX61), and that the normalized sensor readings between the upstream and downstream sensor arrays are, as a whole, consistent, since the value of $P_T$ lies between $\overline{U}_T$ and $\overline{D}_T$.

Figure 6.7: A graph representing the differences between $As_{Tx,Rx}(x',\phi)$ and $\overline{As}_{Tx,Rx}(x',\phi)$ for multiple pixels flow

Figure 6.8: A graph representing the differences between $UT_{Tx,Rx}(x',\phi)$ (upstream), $DT_{Tx,Rx}(x',\phi)$ (downstream), and $P_{Tx,Rx}(x',\phi)$ for multiple pixels flow

Figure 6.9: A graph representing the differences between $\overline{UT}_{Tx,Rx}(x',\phi)$ (upstream), $\overline{DT}_{Tx,Rx}(x',\phi)$ (downstream), and $P_{Tx,Rx}(x',\phi)$ for multiple pixels flow

6.2.2 Experimental Results for Half Flow

A total of 30 reading cycles were taken, in which each cycle contains 254 buffers (127 upstream and 127 downstream) for 10 different values of solid flow rates between 27 gs-1 and 126 gs-1. Figure 6.10 shows how an obstacle was used in the distribution pipe to represent a half flow for an image resolution of 32 × 32 pixels.

Figure 6.10: Top and side views of the half flow model inside the distribution pipe (showing the sensors and the obstruction)

Figures 6.11, Appendix G.I(i), Appendix G.I(ii), and Appendix G.I(iii) show graphs of the total average values for the upstream and downstream sensors based on 30 measurement cycles, before and after normalization with the half flow modeling rate values $N(x',\phi)$, at flow rates of 27 gs-1, 49 gs-1, 93 gs-1, and 126 gs-1; these values are obtained from Equations 6.8 and 6.9. Figures 6.12, Appendix G.I(iv), Appendix G.I(v), and Appendix G.I(vi) show graphs of the total average values for each sensor before normalization (upstream and downstream) and the expected values of the half flow at the same flow rates, with the total average value of each upstream and downstream sensor obtained from Equations 6.10 and 6.11 respectively. Figures 6.13, Appendix G.I(vii), Appendix G.I(viii), and Appendix G.I(ix) show graphs of the total average values for each sensor after normalization (upstream and downstream) and the expected values for the half flow at flow rates of 27 gs-1, 49 gs-1, 93 gs-1, and 126 gs-1, with the total average value of each upstream and downstream sensor obtained from Equations 6.12 and 6.13 respectively.

Figure 6.11: A graph representing the differences between $As_{Tx,Rx}(x',\phi)$ and $\overline{As}_{Tx,Rx}(x',\phi)$ for half flow (27 gs-1)

Figure 6.12: A graph representing the differences between $UT_{Tx,Rx}(x',\phi)$ (upstream), $DT_{Tx,Rx}(x',\phi)$ (downstream), and $P_{Tx,Rx}(x',\phi)$ for half flow (27 gs-1)

Figure 6.13: A graph representing the differences between $\overline{UT}_{Tx,Rx}(x',\phi)$ (upstream), $\overline{DT}_{Tx,Rx}(x',\phi)$ (downstream), and $P_{Tx,Rx}(x',\phi)$ for half flow (27 gs-1)

From Figures 6.11, Appendix G.I(i), Appendix G.I(ii), and Appendix G.I(iii), the $As_{Tx,Rx}(x',\phi)$ readings show that the actual output is not identical to the normalized values $\overline{As}_{Tx,Rx}(x',\phi)$, for which there should be a reduction or zero output readings for several sensors at the 45°, 0°, and 135° projections. This is due to the incompleteness of the obstruction process (refer to sub-section 6.3.2) used to create the half flow, but it can be observed that the average sensor readings (upstream and downstream) increased when the flow rate was increased. However, the envelope of the average readings ($As_{Tx,Rx}(x',\phi)$ or $\overline{As}_{Tx,Rx}(x',\phi)$) is nearly identical, and the average output reading distribution pattern shows similarity, or consistency, for different flow rates. This indicates that the average output distribution pattern is independent of changes in flow rate. The values of K, KU, KD, and the error readings obtained from Figure 6.12, Appendix G.I(iv),

Appendix G.I(v), Appendix G.I(vi), Figure 6.13, Appendix G.I(vii), Appendix G.I(viii), and Appendix G.I(ix) are shown in Table 6.4. Nonetheless, the above paragraph does not provide a complete picture of the half flow. Figures 6.12, Appendix G.I(iv), Appendix G.I(v), and Appendix G.I(vi), as well as the error values ($e_U$ and $e_D$) from Table 6.4, show that almost all of the actual upstream sensor outputs exceed the actual downstream sensor outputs, and that there are unwanted sensor outputs. Figures 6.13, Appendix G.I(vii), Appendix G.I(viii), and Appendix G.I(ix) show that the most active areas are at the receiving sensors (RX02 to RX06, RX17 to RX22, RX43 to RX41 and RX52 to RX57). After normalization, the results for each flow ($\bar{e}_U$ and $\bar{e}_D$), except at the 49 gs-1 and 93 gs-1 flow rates, show that the flow regime is consistent. However, the actual reading does not equal the expected reading because, after the solid particles pass through the obstacle, the particles disperse and the flow regime in the pipe is therefore not identical to the shape of the obstacle. For example, if a half flow obstacle is used, the particles are dispersed after passing through the obstacle and the flow no longer represents a half flow. What can be pictured here is that the obstruction causes the solid flow to be concentrated at the upstream level, and the flow distribution disperses by the time it reaches the downstream level.

Table 6.4: Values of scaling factor, total voltage reading and error for half flow

| Flow rate (gs-1) | KU | KD | K | $U_T$ | $D_T$ | $\overline{U}_T$ | $\overline{D}_T$ | $P_T$ | $e_U$ (%) | $e_D$ (%) | $\bar{e}_U$ (%) | $\bar{e}_D$ (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 27 | 0.0138 | 0.0130 | 0.0134 | 3835 | 3846 | 3574 | 3370 | 3460 | 10.85 | 11.16 | 3.31 | -2.60 |
| 36 | 0.0167 | 0.0145 | 0.0156 | 4782 | 4356 | 4323 | 3753 | 4029 | 18.68 | 8.11 | 7.29 | -6.84 |
| 49 | 0.0177 | 0.0151 | 0.0164 | 5067 | 4546 | 4576 | 3912 | 4549 | 11.38 | -0.07 | 0.60 | -14.0 |
| 60 | 0.0226 | 0.0193 | 0.0209 | 6493 | 5819 | 5824 | 4983 | 5397 | 20.29 | 7.80 | 7.90 | -7.68 |
| 71 | 0.0234 | 0.0198 | 0.0216 | 6846 | 5970 | 6043 | 5117 | 5578 | 22.73 | 7.03 | 8.33 | -8.26 |
| 82 | 0.0271 | 0.0235 | 0.0253 | 7792 | 7105 | 7008 | 6069 | 6534 | 19.25 | 8.74 | -7.51 | -7.11 |
| 93 | 0.0245 | 0.0211 | 0.0228 | 7844 | 6959 | 7025 | 5914 | 5888 | 33.22 | 18.2 | 19.3 | 0.44 |
| 104 | 0.0277 | 0.0239 | 0.0258 | 7971 | 7261 | 7144 | 6171 | 6663 | 19.64 | 8.99 | 7.23 | -7.38 |
| 115 | 0.0291 | 0.0249 | 0.0270 | 8405 | 7623 | 7513 | 6481 | 6973 | 20.55 | 9.33 | 7.75 | -7.05 |
| 126 | 0.0328 | 0.0286 | 0.0307 | 9529 | 8684 | 8478 | 7382 | 7928 | 20.19 | 9.54 | 6.94 | -6.89 |

6.2.3 Experimental Results for Full Flow

A total of 30 reading cycles were taken, in which each cycle contains 254 buffers (127 upstream and 127 downstream), for 10 different solid flow rates between 27 gs-1 and 126 gs-1. Figure 6.14 represents a full flow for an image plane with a resolution of 32 × 32 pixels.

Figure 6.14: Top view of full flow

Figure 6.15, Appendix G.II(i), Appendix G.II(ii), Appendix G.II(iii), and Appendix G.II(iv) show graphs of the total average values for the upstream and downstream sensors based on 30 measurement cycles at flow rates of 27 gs-1, 49 gs-1, 71 gs-1, 93 gs-1, and 126 gs-1, in which the total average value for each upstream and downstream sensor is obtained from Equation 6.8. Figure 6.16, Appendix G.II(v), Appendix G.II(vi), Appendix G.II(vii), and Appendix G.II(viii) show graphs of the total average values for each sensor (upstream and downstream) and the expected values of the full flow at the same flow rates, in which the total average value of each upstream and downstream sensor is obtained from Equation 6.10. In this section all the flow modeling rate values $N(x',\phi)$ are equal to 1 (sub-section 5.1.4).

Figure 6.15: A graph representing the $As_{Tx,Rx}(x',\phi)$ values for full flow (27 gs-1)

Figure 6.16: A graph representing the differences between $UT_{Tx,Rx}(x',\phi)$ (upstream), $DT_{Tx,Rx}(x',\phi)$ (downstream), and $P_{Tx,Rx}(x',\phi)$ for full flow (27 gs-1)

Figure 6.15, Appendix G.II(i), Appendix G.II(ii), Appendix G.II(iii), and Appendix G.II(iv) show that the $As_{Tx,Rx}(x',\phi)$ values are not uniform. This is due to the non-uniform solid distribution produced by the rotary valve. However, the $As_{Tx,Rx}(x',\phi)$ values increase with increasing flow rate, and the average output reading distribution pattern shows similarity and consistency for different flow rates. This indicates that the average output distribution pattern is independent of changes in flow rate. The values of K, KU, KD, and the error readings are shown in Table 6.5.

Figure 6.16, Appendix G.II(v), Appendix G.II(vi), Appendix G.II(vii), and Appendix G.II(viii) show that the most active areas are at receivers RX02 to RX09, RX18 to RX27, RX37 to RX44, and RX55 to RX61. The error readings in Table 6.5 show that at the 27 gs-1 flow rate the flow is non-uniform, while the remaining flow rates (36 gs-1 to 126 gs-1) give a more consistent distribution because the $P_T$ values always lie between the $U_T$ and $D_T$ values. Figure 6.17 shows that the total output voltage of the upstream, downstream, and expected values increases with the solid flow rate and has a linear relationship with the solid mass-flow rate. The linear regression equations for the upstream, downstream, and expected total output voltage values are:

...(6.21)

V(downstream) = 47.182 × flow indicator + 1514.8

...(6.22)

V(expected) = 46.197 × flow indicator + 1596.1

...(6.23)

each regression line respectively has a R2 value of 0.9876, 0.9894, and 0.9883. The gradient of the regression line of upstream, downstream, and expected total output voltage values are 44.624 V/gs-1, 47.182 V/g/s-1, and 46.197 V/gs-1.
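The fitting step behind Equations 6.21 to 6.23 is an ordinary least-squares line through the summed sensor output versus the measured flow rate. The sketch below illustrates it using the upstream totals of Table 6.5 as example data; the exact sums behind Figure 6.17 are not reproduced here, so the fitted coefficients will not match Equation 6.21 exactly.

```python
import numpy as np

flow = np.array([27, 36, 49, 60, 71, 82, 93, 104, 115, 126], dtype=float)   # gs^-1
u_total = np.array([2843, 3421, 4022, 4334, 5116, 5414, 5650, 6180, 6782, 7630], dtype=float)

slope, intercept = np.polyfit(flow, u_total, 1)      # least-squares regression line
pred = slope * flow + intercept
r2 = 1.0 - np.sum((u_total - pred) ** 2) / np.sum((u_total - u_total.mean()) ** 2)
print(slope, intercept, r2)
```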

Table 6.5: Values of scaling factor, total voltage reading and error for full flow

| Flow rate (gs-1) | KU | KD | K | $U_T$ | $D_T$ | $P_T$ | $e_U$ (%) | $e_D$ (%) |
|---|---|---|---|---|---|---|---|---|
| 27 | 0.00904 | 0.00867 | 0.00886 | 2843 | 2725 | 2722 | 4.432 | 0.118 |
| 36 | 0.01088 | 0.01060 | 0.01074 | 3421 | 3331 | 3376 | 1.344 | -1.334 |
| 49 | 0.01280 | 0.01244 | 0.01262 | 4022 | 3911 | 3967 | 1.388 | -1.401 |
| 60 | 0.01379 | 0.01349 | 0.01364 | 4334 | 4240 | 4288 | 1.089 | -1.120 |
| 71 | 0.01628 | 0.01601 | 0.01614 | 5116 | 5032 | 5074 | 0.844 | -0.817 |
| 82 | 0.01722 | 0.01713 | 0.01718 | 5414 | 5385 | 5400 | 0.253 | -0.283 |
| 93 | 0.01798 | 0.01786 | 0.01792 | 5650 | 5616 | 5633 | 0.309 | -0.309 |
| 104 | 0.01966 | 0.01998 | 0.01982 | 6180 | 6280 | 6230 | -0.802 | 0.804 |
| 115 | 0.02157 | 0.02195 | 0.02176 | 6782 | 6899 | 6840 | -0.850 | 0.860 |
| 126 | 0.02427 | 0.02459 | 0.02493 | 7630 | 7729 | 7679 | -0.650 | 0.643 |

Figure 6.17: The regression lines of summed sensor output versus measured flow rates

6.3 Concentration Profile

This section uses the same sets of data from the flow measurements discussed in the previous section and visualizes them in two dimensions. The data were visualized and analyzed using the CFLBPi+s and HLBP algorithms (Chapter 5) on an image plane with a resolution of 32 × 32 pixels.

6.3.1 Experimental Results for Single Pixel and Multiple Pixels Flow

From Section 5.2, the expected C values for single pixel and multiple pixels flow at a resolution of 32 × 32 pixels are 0.12% and 1.87% respectively. In these experiments the single pixel flow uses one iron rod while the multiple pixels flow uses four iron rods. Each iron rod has a diameter of 5.0 mm and a cross-sectional area of $\pi (d/2)^2 = 19.63\ \mathrm{mm}^2$. The expected C values are used for comparison with the total average value of the concentration profile, which is obtained from:

$$\overline{C}_{av} = \frac{1}{30} \sum_{n=0}^{29} \left( \frac{\sum_{b=0}^{126} C_{n,b}}{127} \right) \tag{6.24}$$

where $\overline{C}_{av}$ is the total average concentration profile percentage value for 30 cycles, $C_{n,b}$ is the total concentration profile percentage value for each cycle and buffer, b is the buffer number, and n is the cycle number.
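Equation 6.24 is simply the mean concentration over all 30 × 127 buffers; a minimal sketch, assuming the per-buffer concentration percentages are held in a (30, 127) array, is given below.

```python
import numpy as np

def average_concentration(C_nb):
    """Eq. 6.24: C_nb has shape (30 cycles, 127 buffers) of concentration percentages."""
    per_cycle = C_nb.sum(axis=1) / 127.0      # inner sum divided by the buffer count
    return per_cycle.sum() / 30.0             # outer average over the 30 cycles

C = np.random.uniform(0.0, 50.0, size=(30, 127))   # placeholder data for illustration
print(average_concentration(C), C.mean())          # both give the same value
```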

Figure 6.18 shows the upstream and downstream concentration profiles obtained for the single object using the CFLBPi+s and HLBP image reconstruction algorithms. The $\overline{C}_{av}$ value obtained using the CFLBPi+s algorithm shows a significant difference because the projection data along the projection paths, or infra-red light views, are taken into account in the computation of the $\overline{C}_{av}$ values, and the object lies on the intersection of the infra-red views. Several important points about the HLBP algorithm can be observed from Figure 6.18 and Table 6.6: the object area becomes clearer, the Vmax values are higher, and the PT (processing time) values decrease. However, the $\overline{C}_{av}$ values are still considered high compared to the expected C value, due to the difficulty of placing the iron rod at its exact position in the pipe cross-section and the divergence of adjacent infra-red light sources.

Figure 6.18: Concentration profiles for single pixel flow (colour scale 0 V to 5 V): (a) CFLBPi+s upstream, PT = 7.210 ms, Vmax = 2.133 V; (b) CFLBPi+s downstream, Vmax = 2.137 V; (c) HLBP upstream, PT = 1.501 ms, Vmax = 4.914 V; (d) HLBP downstream, Vmax = 4.310 V

Table 6.6: $\overline{C}_{av}$ values for single pixel flow using the CFLBPi+s and HLBP algorithms

| C (predicted) | CFLBPi+s upstream | CFLBPi+s downstream | HLBP upstream | HLBP downstream |
|---|---|---|---|---|
| 0.12 | 7.35342 | 8.00083 | 1.29007 | 1.20125 |

Figure 6.19 shows the upstream and downstream concentration profiles obtained for the multiple objects using the CFLBPi+s and HLBP algorithms. The $\overline{C}_{av}$ values calculated using the CFLBPi+s algorithm show a significant difference from the predicted C value, for the same reason as in the single object case and because the objects are located on the complete intersection paths of the infra-red views. The HLBP algorithm discards all the unwanted infra-red light paths; as a result, as shown in Figure 6.19 and Table 6.7, the $\overline{C}_{av}$ values are reduced closer to the predicted C value, the Vmax values are increased, and the PT values are decreased. However, the same factors as in the single object experiment also affect the multiple objects experiment.

Figure 6.19: Concentration profiles for multiple pixels flow (colour scale 0 V to 5 V): (a) CFLBPi+s upstream, PT = 7.292 ms, Vmax = 2.195 V; (b) CFLBPi+s downstream, Vmax = 2.137 V; (c) HLBP upstream, PT = 1.583 ms, Vmax = 4.917 V; (d) HLBP downstream, Vmax = 4.772 V

Table 6.7: $\overline{C}_{av}$ values for multiple pixels flow using the CFLBPi+s and HLBP algorithms

| C (predicted) | CFLBPi+s upstream | CFLBPi+s downstream | HLBP upstream | HLBP downstream |
|---|---|---|---|---|
| 1.87 | 17.4456 | 15.9916 | 5.13446 | 4.01785 |


6.3.2 Experimental Results for Half Flow

Figures 6.20(a), Appendix G.III(i), Appendix G.III(iii), Appendix G.III(v), and Appendix G.III(vii) show the concentration profiles of selected upstream and downstream samples of reconstructed flow images using the CFLBPi+s and HLBP algorithms at flow rates of 27 gs-1, 49 gs-1, 93 gs-1, and 126 gs-1. Figure 6.20(b), Appendix G.III(ii), Appendix G.III(iv), Appendix G.III(vi), and Appendix G.III(viii) show graphs of the tabulated Cn,b values (C = concentration profile percentage value, n = cycle, and b = buffer) calculated from the upstream and downstream concentration profiles using the CFLBPi+s and HLBP algorithms at the same flow rates. Table 6.8 shows the count of Cn,b values in each quarter of 100% for flow rates from 27 gs-1 to 126 gs-1.

Figure 6.20(a): Concentration profiles for half flow at a flow rate of 27 gs-1 (CFLBPi+s and HLBP): (i) cycle = 3, buffer = 45; (ii) cycle = 11, buffer = 38; (iii) cycle = 16, buffer = 81; PT = processing time. CFLBPi+s (PT = 7.323/7.094/7.300 ms): upstream C3,45 = 37.84% (Vmax 3.319 V), C11,38 = 47.70% (3.672 V), C16,81 = 39.59% (3.698 V); downstream C3,45 = 28.72% (2.764 V), C11,38 = 51.02% (3.588 V), C16,81 = 42.15% (3.730 V). HLBP (PT = 2.985/4.160/3.010 ms): upstream C3,45 = 26.14% (4.931 V), C11,38 = 36.56% (4.999 V), C16,81 = 24.03% (4.999 V); downstream C3,45 = 14.33% (4.384 V), C11,38 = 48.74% (4.999 V), C16,81 = 29.85% (4.999 V)

Figure 6.20(b): Graphs of upstream and downstream Cn,b values (%) versus cycle number for half flow using (i) CFLBPi+s and (ii) HLBP algorithms at a flow rate of 27 gs-1

Table 6.8: The count of Cn,b values in each quarter of 100% for flow rates of 27 gs-1 to 126 gs-1 using the CFLBPi+s and HLBP algorithms (up = upstream, down = downstream)

CFLBPi+s:

| Half flow | 0 ≤ Cn,b < 25 (up/down) | 25 ≤ Cn,b < 50 (up/down) | 50 ≤ Cn,b < 75 (up/down) | Cn,b > 75 (up/down) |
|---|---|---|---|---|
| 27 gs-1 | 3286 / 3297 | 488 / 463 | 36 / 50 | 0 / 0 |
| 38 gs-1 | 3268 / 3295 | 428 / 404 | 114 / 111 | 0 / 0 |
| 49 gs-1 | 3251 / 3300 | 428 / 388 | 131 / 122 | 0 / 0 |
| 60 gs-1 | 3077 / 3146 | 515 / 457 | 218 / 207 | 0 / 0 |
| 71 gs-1 | 3081 / 3127 | 497 / 483 | 232 / 200 | 0 / 0 |
| 82 gs-1 | 2960 / 3031 | 516 / 466 | 334 / 313 | 0 / 0 |
| 93 gs-1 | 3000 / 3053 | 504 / 488 | 306 / 269 | 0 / 0 |
| 104 gs-1 | 2994 / 3047 | 504 / 461 | 312 / 302 | 0 / 0 |
| 115 gs-1 | 2933 / 3015 | 541 / 471 | 336 / 316 | 0 / 0 |
| 126 gs-1 | 2788 / 2877 | 617 / 544 | 405 / 389 | 0 / 0 |

HLBP:

| Half flow | 0 ≤ Cn,b < 25 (up/down) | 25 ≤ Cn,b < 50 (up/down) | 50 ≤ Cn,b < 75 (up/down) | Cn,b > 75 (up/down) |
|---|---|---|---|---|
| 27 gs-1 | 3541 / 3531 | 245 / 249 | 24 / 30 | 0 / 0 |
| 38 gs-1 | 3453 / 3487 | 270 / 239 | 87 / 84 | 0 / 0 |
| 49 gs-1 | 3456 / 3497 | 247 / 219 | 107 / 94 | 0 / 0 |
| 60 gs-1 | 3343 / 3376 | 264 / 271 | 201 / 162 | 2 / 1 |
| 71 gs-1 | 3329 / 3389 | 275 / 279 | 202 / 140 | 4 / 2 |
| 82 gs-1 | 3229 / 3270 | 302 / 295 | 275 / 238 | 4 / 7 |
| 93 gs-1 | 3275 / 3225 | 258 / 261 | 267 / 212 | 10 / 12 |
| 104 gs-1 | 3229 / 3290 | 292 / 267 | 279 / 231 | 10 / 22 |
| 115 gs-1 | 3219 / 3282 | 291 / 265 | 281 / 237 | 19 / 26 |
| 126 gs-1 | 3127 / 3173 | 327 / 313 | 311 / 281 | 45 / 43 |

As a whole for the half flow, the graphs indicate a number of Cn,b peak value sets at every flow rate; for example, there are five Cn,b peak value sets at the flow rate of 27 gs-1, at cycle numbers 3, 11, 16, 23, and 29. Each peak is obtained from the upstream and downstream sections using the CFLBPi+s and HLBP algorithms (Figure 6.20(b)). Figure 6.20(a) shows the upstream and downstream concentration profiles for the Cn,b peak value sets at cycle numbers 3, 11, and 16 using the CFLBPi+s and HLBP algorithms. The graphs show that the number of Cn,b peak value sets increased with increasing flow rate for both algorithms. Compared with the CFLBPi+s algorithm, the HLBP algorithm turns a large number of Cn,b values into zero. The data in Table 6.8 show that, for both algorithms, the highest count of Cn,b values lies in the range from 0% to 25%. The concentration profiles produced using the CFLBPi+s algorithm (Figures 6.20(a), Appendix G.III(i), Appendix G.III(iii), Appendix G.III(v), and Appendix G.III(vii)) show that the distribution of the two-phase solid-air flow was forced to converge at the left-hand side of the distribution pipe; the execution time to produce two frames of images is between 6.7 and 7.4 ms, with a maximum voltage value of 3.8 V. The HLBP algorithm produced a clearer visual image of the two-phase solid-air flow forced to converge at the left-hand side of the distribution pipe, with an execution time of between 1.5 and 5.0 ms and a maximum voltage value of 4.999 V. The obstruction interrupted the initial concentration profile of the two-phase solid-air flow and caused the flow to disperse in the downstream area at almost all flow rates, except at 126 gs-1, where a consistent flow pattern is observed for upstream and downstream. This is because at 126 gs-1 there is only a small obstruction and hence the concentration profile at the upstream plane is identical to that at the downstream plane.

6.3.3 Experimental Results for Full Flow

Figures 6.21(a) and Appendix G.IV(i) show the concentration profiles that represent the upstream and downstream selected samples of reconstructed flow images using the CFLBPi+s and HLBP algorithms at flow rates of 27 gs-1 and 49 gs-1.

134 Appendix G.IV(iii), Appendix G.IV(vi), and Appendix G.IV(ix) show the concentration profiles (upstream and downstream) of reconstructed flow images using the CFLBPi+s algorithm, while Appendix G.IV(iv), Appendix G.IV(vii), and Appendix G.IV(x) show the concentration profiles (upstrteam and downstream) that represent the selected samples of reconstructed flow images using the HLBP algorithm at flow rates of 71 gs-1, 93 gs-1, and 126 gs-1. Figures 6.21(b), Appendix G.IV(ii), Appendix G.IV(v), Appendix G.IV(viii), and Appendix G.IV(xi) show the graphs of tabulated Cn,b values using the CFLBPi+s and HLBP algorithms which were obtained from the upstream and downstream concentration profiles at flow rates of 27 gs-1, 49 gs-1, 71 gs-1, 93 gs-1, and 126 gs-1 respectively. Table 6.9 shows the count of Cn,b values at every quarter of 100% for flow rates from 27 gs-1 to 126 gs-1. As has been discussed in the half flow (sub-section 6.3.2), the distributed solid particles by the rotary valve in the distribution pipe at each flow rate value has produced a number of Cn,b peak value sets when using the CFLBPi+s and HLBP algorithms. The size of each Cn,b peak value set has been narrowed and become closer with the adjacent Cn,b peak set when the flow rate value was increased. At the flow rate values of 71 gs-1, 93 gs-1, and 126 gs-1, there were repeatition of Cn,b peak value sets, e.g. at the flow rate of 71 gs-1 (Appendix G.IV(v)) and if the Cn,b peak value set at the cycle number of 1 is assumed as a reference, the next Cn,b peak value sets occured at cycle numbers of 3, 4 and 5, 6 and 7, 8, and 10. Then a similar pattern of Cn,b peak value sets occurred at cycle numbers of 11, 13, 14 and 15, 16, 18, and between 19 and 20. As a comparison to the CFLBPi+s algorithm, the HLBP algorithm turns a large number of Cn,b to zero.

135 i)CFLBPi+s PT: 6.835ms

ii)CFLBPi+s PT: 6.881ms

iii)CFLBPi+s PT: 6.936ms

Upstream C2,16: 29.30% Vmax: 2.529V

Upstream C8,48: 60.13% Vmax: 3.746V

Upstream C12,95: 48.95% Vmax: 3.400V

Downstream C2,16: 23.31% Vmax: 2.492V i)HLBP PT: 2.736ms

Downstream C8,48: 54.30% Vmax: 3.604V ii)HLBP PT: 4.057ms

Downstream C12,95: 40.72% Vmax: 2.923V

Upstream C2,16: 17.82% Vmax: 4.210V

Upstream C8,48: 69.43% Vmax: 4.999V

Upstream C12,95: 46.98% Vmax: 4.974V

Downstream C2,16: 6.17% Vmax: 4.689V

Downstream C8,48: 55.16% Vmax: 4.999V

Downstream C12,95: 29.03% Vmax: 4.883V

0 Volts

iii)HLBP PT: 3.923ms

5 Volts

Figure 6.21(a): Concentration profiles for full flow at a flow rate of 27 gs-1

(CFLBPi+s and HLBP) (i) Cycle=2, buffer=16, (ii) Cycle =8, buffer = 48, (iii) Cycle = 12, buffer =95, and PT = processing time

Figure 6.21(b): Graphs of upstream and downstream Cn,b values (%) versus cycle number for full flow using (i) CFLBPi+s and (ii) HLBP algorithms at a flow rate of 27 gs-1

Table 6.9: The count of Cn,b values in each quarter of 100% for flow rates of 27 gs-1 to 126 gs-1 using the CFLBPi+s and HLBP algorithms (up = upstream, down = downstream)

CFLBPi+s:

| Full flow | 0 ≤ Cn,b < 25 (up/down) | 25 ≤ Cn,b < 50 (up/down) | 50 ≤ Cn,b < 75 (up/down) | Cn,b > 75 (up/down) |
|---|---|---|---|---|
| 27 gs-1 | 3447 / 3468 | 327 / 298 | 36 / 44 | 0 / 0 |
| 38 gs-1 | 3402 / 3415 | 306 / 283 | 102 / 112 | 0 / 0 |
| 49 gs-1 | 3316 / 3331 | 383 / 344 | 111 / 135 | 0 / 0 |
| 60 gs-1 | 3285 / 3310 | 361 / 328 | 164 / 172 | 0 / 0 |
| 71 gs-1 | 3176 / 3202 | 447 / 399 | 187 / 209 | 0 / 0 |
| 82 gs-1 | 3170 / 3199 | 383 / 334 | 257 / 277 | 0 / 0 |
| 93 gs-1 | 3163 / 3184 | 400 / 346 | 247 / 280 | 0 / 0 |
| 104 gs-1 | 3117 / 3104 | 376 / 350 | 317 / 356 | 0 / 0 |
| 115 gs-1 | 3006 / 2992 | 459 / 425 | 345 / 393 | 0 / 0 |
| 126 gs-1 | 2955 / 2968 | 402 / 361 | 453 / 481 | 0 / 0 |

HLBP:

| Full flow | 0 ≤ Cn,b < 25 (up/down) | 25 ≤ Cn,b < 50 (up/down) | 50 ≤ Cn,b < 75 (up/down) | Cn,b > 75 (up/down) |
|---|---|---|---|---|
| 27 gs-1 | 3624 / 3638 | 164 / 140 | 22 / 32 | 0 / 0 |
| 38 gs-1 | 3553 / 3561 | 173 / 155 | 84 / 92 | 0 / 2 |
| 49 gs-1 | 3500 / 3500 | 219 / 192 | 90 / 115 | 1 / 3 |
| 60 gs-1 | 3454 / 3462 | 218 / 195 | 128 / 145 | 10 / 8 |
| 71 gs-1 | 3401 / 3409 | 253 / 213 | 148 / 172 | 8 / 16 |
| 82 gs-1 | 3359 / 3358 | 234 / 200 | 199 / 219 | 18 / 33 |
| 93 gs-1 | 3351 / 3356 | 246 / 211 | 202 / 216 | 11 / 27 |
| 104 gs-1 | 3281 / 3283 | 249 / 211 | 267 / 279 | 13 / 37 |
| 115 gs-1 | 3129 / 3198 | 328 / 258 | 289 / 331 | 1 / 23 |
| 126 gs-1 | 3136 / 3133 | 268 / 226 | 401 / 416 | 5 / 35 |

The tabulated data in Table 6.9 show that, for each algorithm, the counts of Cn,b values are highest in the range from 0% to 25% and, as a whole, the counts of Cn,b values using the HLBP algorithm exceed those of the CFLBPi+s algorithm in the first (0 ≤ Cn,b < 25) and fourth (Cn,b > 75) quarters. The Cn,b values for both algorithms give only quantitative values for each flow rate and show that the solid particles distributed by the rotary valve in the 78 mm diameter distribution pipe do not produce a consistent full flow (the Cn,b values are not always 100%). The concentration profiles obtained using the CFLBPi+s algorithm show the reconstructed images of the two-phase solid-air flow distribution in the pipe; on average they required an execution time of between 6.7 ms and 7.4 ms and produced a maximum voltage value of 3.8 V. The concentration profiles obtained from the HLBP algorithm show clearer reconstructed images of the two-phase solid-air flow distribution in the pipe than the CFLBPi+s algorithm and required an execution time of between 1.5 ms and 5.0 ms, with a maximum voltage value of 4.999 V. The concentration profiles obtained at the flow rate of 27 gs-1 (Figure 6.21(a)) using the CFLBPi+s algorithm show that the distribution of the solid-gas flow is inconsistent, and the HLBP algorithm helps to reveal that the densities of the two-phase solid-gas flow were dispersed over the flow regime. A similar situation occurred at a flow rate of 49 gs-1, except at the second sample of concentration profiles, which shows that the distribution of the solid-gas flow changed to a consistent flow with the density of the two-phase solid-gas flow concentrated at the upper left-hand side of the flow regime; the concentration profiles obtained using the HLBP algorithm show that the central areas of the flow regime with the highest densities (5 V) were surrounded by lower density values (less than 5 V). As discussed above, the flow rates of 71 gs-1, 93 gs-1, and 126 gs-1 show a repeating pattern of Cn,b peak value sets. As a whole, at those flow rates (71 gs-1, 93 gs-1, and 126 gs-1) the concentration profiles that represent the reconstructed images of the CFLBPi+s and HLBP algorithms show that the distribution of the two-phase solid-air flow consists of either a consistent distribution of flow at the upper left-hand side of the pipe cross-section or an inconsistent distribution of flow, and that the densities of the two-phase solid-gas flow are either concentrated at the upper left-hand side of the pipe cross-section or converging. As the flow rate increased, the areas with the highest densities (5 V) grew into bigger areas. This shows that the flow pattern can be classified as an intermittent flow (Dyakowski, 1995). The accuracy of the images reconstructed from real-time measurements can be determined by comparing the results obtained from each flow. All the pixels in each concentration profile over the 30 cycles of measurements were summed and then averaged over the number of cycles, based on the equation:

$$V_f = \frac{1}{30} \sum_{n=0}^{29} \left( \sum_{y=-16}^{15} \sum_{x=-16}^{15} \hat{f}(x,y)_n \right) \tag{6.24}$$

where $V_f$ is the average total concentration profile value (volts) for each flow rate and $\hat{f}(x,y)$ is the approximation of the object function, in volts, at the xy plane (concentration profile). Table 6.10 shows the tabulated data obtained for each flow using the CFLBPi+s and HLBP image reconstruction algorithms for upstream, downstream, and average. Figures 6.22 and 6.23 show the relationship between $V_f$ and the solid flow rate using the CFLBPi+s and HLBP algorithms. Both the upstream and downstream measurements show a linear relationship between $V_f$ and the solid flow rate, in which the $V_f$ values increase with the flow rate. The equations of the regression lines for the CFLBPi+s algorithm are:

i. Upstream: Vf = 473.69 × flow rate + 17952    (6.25)
   R² = 0.9878    (6.26)

ii. Downstream: Vf = 504.54 × flow rate + 15499    (6.27)
   R² = 0.9903    (6.28)

iii. Average: Vf = 489.11 × flow rate + 16726    (6.29)
   R² = 0.9893    (6.30)

The gradient errors of the upstream and downstream regression lines for the CFLBPi+s algorithm, compared with the average gradient, are -3.15% and 3.15% respectively. The equations of the regression lines for the HLBP algorithm are:

i. Upstream: Vf = 379.27 × flow rate + 5604.3    (6.31)
   R² = 0.9855    (6.32)

ii. Downstream: Vf = 416.6 × flow rate + 3976.6    (6.33)
   R² = 0.9847    (6.34)

iii. Average: Vf = 397.93 × flow rate + 4790.4    (6.35)
   R² = 0.9854    (6.36)

The gradient errors of the upstream and downstream regression lines for the HLBP algorithm, compared with the average gradient, are -4.69% and 4.69% respectively.

Table 6.10: Tabulated Vf values for the full flow rates using the CFLBPi+s and HLBP algorithms

| Vf | CFLBPi+s upstream | CFLBPi+s downstream | CFLBPi+s average | HLBP upstream | HLBP downstream | HLBP average |
|---|---|---|---|---|---|---|
| V27 | 29959.11 | 28837.06 | 29398.09 | 15099.16 | 14993.20 | 15046.18 |
| V38 | 36104.45 | 35193.42 | 35648.93 | 20692.99 | 20790.04 | 20741.52 |
| V49 | 42408.76 | 41340.76 | 41874.76 | 24732.03 | 25144.28 | 24938.16 |
| V60 | 45696.92 | 44735.76 | 45216.21 | 28351.65 | 28314.90 | 28333.28 |
| V71 | 54020.71 | 53167.96 | 53594.34 | 33747.16 | 34272.98 | 34010.07 |
| V82 | 57162.36 | 56849.84 | 57006.10 | 37706.17 | 38971.87 | 38339.02 |
| V93 | 59571.43 | 59388.87 | 59480.15 | 37816.07 | 38827.73 | 38321.90 |
| V104 | 65108.44 | 66535.74 | 65822.09 | 43723.67 | 46113.39 | 44918.53 |
| V115 | 71443.56 | 73112.25 | 72277.90 | 48520.38 | 51900.66 | 50210.52 |
| V126 | 80415.05 | 81803.01 | 81109.03 | 55796.63 | 59133.87 | 57465.25 |

Figure 6.22: A graph of Vf (V) versus the flow rate (gs-1) using the CFLBPi+s algorithm

Figure 6.23: A graph of Vf (V) versus the flow rates (gs-1) using the HLBP algorithm

6.4 Summary

Two forms of experiments were carried out: concentration measurement and measurement of concentration profiles for flow rates from 27 gs-1 to 126 gs-1. The experiments were aimed at testing, on-line, the effectiveness of the infra-red tomography system in measuring and visualizing the two-phase solid-gas flow in a gravity flow rig. A total of 30 cycles of measurements were taken for each flow model. The results were analyzed based on the tabulated data, the plotted graphs, and the linear relationship between the voltage readings and the flow rate.

CHAPTER 7

MEASUREMENT AND PROFILES OF VELOCITY

7.1 Introduction

The main aim of this chapter is to investigate the utilization of the infra-red sensors and fiber optics, positioned in a parallel manner around the flow pipe at the upstream and downstream planes, for velocity measurement. The velocities of the particles were measured simultaneously using the output signal arrays from 64 pairs of infra-red sensors (each plane has 64 infra-red sensors). The cross correlation function (Chapter 3) was used to measure the velocities of the flowing particles at a sampling frequency of 44.015 Hz, corresponding to the inverse of the time required to take the measurement of one buffer for each plane, as discussed in sub-section 4.2.4.3. The main sampling frequency for concentration measurement at the upstream and downstream planes is 88.03 Hz. Figure 7.1 shows how the sampling frequency is determined from one cycle of measurement (254 buffers).

Figure 7.1: One cycle of measurement (254 buffers)

7.1.1 Free Fall Motion

Free fall is a type of motion in which the only force acting upon an object is gravity. Objects which are said to be undergoing free fall are not encountering a significant force of air resistance; they are falling under the influence of gravity alone. Under such a condition, all objects fall with the same rate of acceleration, regardless of their mass. Figure 7.2 shows the free-falling motion (under the influence of gravity) of a 1000 kg baby elephant and a 1 kg doll. As stated by Newton's second law, the 1000 kg baby elephant experiences a greater force of gravity:

for m = 1000 kg:  Fgrav = 9810 N,  a = Fnet / m = 9810 N / 1000 kg = 9.81 ms-2
for m = 1 kg:     Fgrav = 9.81 N,  a = Fnet / m = 9.81 N / 1 kg = 9.81 ms-2

Figure 7.2: Free fall motion of two objects under the influence of gravity (9.81 ms-2)

This greater force of gravity has a direct effect upon the baby elephant's acceleration; thus, based on force alone, it might be thought that the 1000 kg baby elephant would accelerate faster. However, acceleration depends on two factors: force and mass. The 1000 kg baby elephant obviously has more mass, or inertia, and this increased mass reduces its acceleration. The direct effect of the greater force on the 1000 kg baby elephant is therefore offset by its greater mass, and so each object accelerates at the same rate of 9.81 ms-2. The ratio of force to mass (Fnet/m) is the same for every object in free fall motion and is equal to the object's acceleration.

7.1.2 Falling with Air Resistance

As an object falls through air, it usually encounters some degree of air resistance. This air resistance is the result of collisions of the object's leading surface with air molecules. The actual amount of air resistance encountered by the object depends on two main factors: the speed of the object and its cross-sectional area. Increased speed has a direct relationship with the air resistance, and an object that encounters the air resistance effect will reach a maximum velocity called the terminal velocity. Figure 7.3 shows an object falling with air resistance and the acceleration produced at each instant in time.

(b)

(c)

(d) Fair=980N Fair=780N

Fair=380N

Fgrav=980N

Fgrav=980N

Fgrav=980N

Fgrav=980N

(a)

(b)

(c)

(d)

a = (Fnet m )

a = (Fnet m )

a = (Fnet m )

a = (Fnet m )

= (980/100 ) = 9.81ms

−2

= (600/100 ) = 6ms

−2

= (200/100 ) = 2ms

−2

= (0/100 ) = 0ms − 2

Figure 7.3: An object falls under the influence of air resistance and the value of

acceleration produced at each instance of time Figure 7.3 shows the acceleration values decreased toward zero. ‘Fnet’ is the net force obtained from the formula of ‘Fnet = Fgrav – Fair’. When the object falls, the velocity will be increased. The increase in velocity leads to an increase in the amount of air resistance. Eventually, the force of air resistance becomes large enough to balance the gravity force and produced a net force value of 0 N. At this point, the acceleration value is zero and the object encounters a terminal velocity. Terminal velocity is the maximum velocity value of an object in which the object has a constant velocity. For object with smaller amount of mass will reaches the terminal

The theoretical calculation of the velocity of objects dropped by the rotary valve through the vertical distances shown in Figure 7.4 (equation 7.1) was obtained by neglecting the air resistance factor.

[Figure 7.4 sketch: the rotary valve feeds the distribution pipe; s1 is the vertical distance from the rotary valve to the upstream sensors, s2 the distance from the rotary valve to the downstream sensors, and L the spacing between the upstream and downstream sensor planes]

Figure 7.4: Distance between the upstream/downstream sensors and the rotary valve

s = ut + (1/2)at²                                                  (7.1)

where:
s = distance (m)
u = initial velocity (ms-1)
t = time (s)
a = acceleration of gravity (ms-2)

From the above formula, with u(t = 0) = 0 ms-1, a = 9.81 ms-2 and s1 = 0.97 m, the object velocity can be calculated as follows:

For s1 = 0.97 m:
t1 = sqrt(2(0.97)/9.81) = sqrt(0.1978) = 0.4447 s

s2 = s1 + L = 0.97 + 0.1 = 1.07 m
t2 = sqrt(2(1.07)/9.81) = sqrt(0.2181) = 0.4671 s

∴ Δt = t2 − t1 = 0.4671 − 0.4447 = 0.0224 s
∴ velocity (Vt) = L/Δt = 0.1/0.0224 = 4.46 ms-1
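The same theoretical transit-time calculation can be written as a few lines of code; the following minimal sketch simply reproduces the arithmetic above using the rig dimensions s1 = 0.97 m and L = 0.1 m.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Theoretical free-fall transit time between the two sensor planes,
    // using the rig geometry of Figure 7.4: s1 = 0.97 m, L = 0.1 m.
    const double g  = 9.81;   // acceleration of gravity (ms-2)
    const double s1 = 0.97;   // rotary valve to upstream sensor plane (m)
    const double L  = 0.10;   // upstream to downstream plane spacing (m)

    double t1 = std::sqrt(2.0 * s1 / g);        // time to reach the upstream plane (s)
    double t2 = std::sqrt(2.0 * (s1 + L) / g);  // time to reach the downstream plane (s)
    double dt = t2 - t1;                        // transit time between the planes (s)
    double Vt = L / dt;                         // theoretical velocity (ms-1)

    // Prints Vt of about 4.47 ms-1; rounding t1 and t2 to four decimal places,
    // as in the hand calculation above, gives the 4.46 ms-1 quoted in the text.
    std::printf("t1 = %.4f s, t2 = %.4f s, dt = %.4f s, Vt = %.2f ms-1\n", t1, t2, dt, Vt);
    return 0;
}
```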

7.2

Velocity Measurement (Sensor-to-Sensor)

The velocity of an object can be determined from the distance between two points and the transit time over that distance, in the same way that a velocity is obtained from a known distance and a stopwatch reading. The cross correlation concept, which was discussed in Chapter 3, was applied to measure the velocity of solids within the pneumatic pipeline. Two values are required for the velocity measurement:

i. a known distance from an upstream sensor to a downstream sensor, labelled L; in this project the distance is equal to 0.1 m.

ii. the transit time (τm), which is the time taken by the solid particles to move from an upstream sensor to a downstream sensor.

Finally, the velocity of the solid particles can be calculated using Equation 3.24. The sensor-to-sensor measurement method gives the average velocity of the solid particles (Yan et al., 1995). The sensor outputs were collected at flow rates of 49 gs-1, 93 gs-1, and 126 gs-1. Thirty cycles were recorded in total, and each cycle contains 254 samples (127 upstream and 127 downstream). The time resolution for each plane is 1/44.015 Hz (the minor frequency) = 22.72 ms. As discussed above, objects that fall under the influence of gravity and have the same average size should show a uniform velocity for every sensor pair in the measurement system. The correlation coefficient between the upstream and downstream signals decreases, and the absolute variance increases, as the distance L increases (Azrita, 2002). However, the absolute variance can be reduced by selecting a proper sampling rate.
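As a concrete illustration of the sensor-to-sensor method, the sketch below cross-correlates an upstream and a downstream sample series, locates the lag of the correlation peak, and converts that lag into a velocity using the sensor spacing. It is a minimal example written for this discussion, not the acquisition software of Chapter 4; the two short signal arrays are hypothetical, and only the sensor spacing (0.1 m) and the per-plane sample interval (22.72 ms) are taken from the rig.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Cross-correlate an upstream series u with a downstream series d for
// non-negative lags and return the lag (in samples) at which the sum peaks.
std::size_t peakLag(const std::vector<double>& u, const std::vector<double>& d,
                    std::size_t maxLag) {
    std::size_t best = 0;
    double bestR = -1.0e300;
    for (std::size_t lag = 0; lag <= maxLag; ++lag) {
        double r = 0.0;
        for (std::size_t n = 0; n + lag < u.size() && n + lag < d.size(); ++n)
            r += u[n] * d[n + lag];        // un-normalised correlation sum
        if (r > bestR) { bestR = r; best = lag; }
    }
    return best;
}

int main() {
    const double Ts = 0.02272;   // per-plane sample interval, 1/44.015 Hz (s)
    const double L  = 0.10;      // upstream to downstream sensor spacing (m)

    // Hypothetical sample series: the downstream trace repeats the structure of
    // the upstream trace two samples later, as a falling batch of beads would.
    std::vector<double> up   = {0, 0, 1, 3, 2, 0, 0, 0, 0, 0};
    std::vector<double> down = {0, 0, 0, 0, 1, 3, 2, 0, 0, 0};

    std::size_t lag = peakLag(up, down, 5);   // transit time in samples
    double tau = lag * Ts;                    // transit time tau_m (s)
    double v   = L / tau;                     // particle velocity (ms-1)
    std::printf("lag = %zu samples, tau_m = %.4f s, v = %.2f ms-1\n", lag, tau, v);
    return 0;
}
```

With the 0.1 m spacing and the 22.72 ms resolution, a correlation peak at a lag of one sample corresponds to 0.1/0.02272 ≈ 4.40 ms-1, which is the velocity reported for every sensor pair in Table 7.1.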

7.2.1

Plastic Beads Flow

Figures 7.5, 7.6, and 7.7 show the selected signals from both upstream (u(t)) and downstream (d(t)) and the cross correlation coefficient (R(τ)) for plastic beads flow at 49 gs-1, 93 gs-1, and 126 gs-1.

Figure 7.5: Output signal for sensor 06 (a) Upstream and (b) Downstream, (c) the correlation function for the upstream and downstream sensors at 49 gs-1 and cycle = 2

Figure 7.6: Output signal for sensor 06 (a) Upstream and (b) Downstream, (c) the correlation function for the upstream and downstream sensors at 93 gs-1 and cycle = 4

Figure 7.7: Output signal for sensor 00 (a) Upstream and (b) Downstream, (c) the correlation function for the upstream and downstream sensors at 126 gs-1 and cycle = 2

7.2.2

Discussion on Sensor to Sensor Cross Correlation Results

Cross-correlation within two-phase flows has a major advantage: the contrast between the two phases tends to be high, giving plenty of structure on which to correlate. It also has a major disadvantage: the correlation is essentially the square of the signal, so within any volume the largest signal changes, and hence the largest flow structures, dominate the result (Andrew et al., 2003). Therefore, sensors that average across the entire flow cross section (Yan et al., 1995) give results which depend entirely on the flow structure and which cannot be reliably interpreted without knowledge of that structure. The results in Chapter 6 show that the pattern of the solid particles distributed by the rotary valve is an intermittent flow, a complex flow that combines plug, slug, and churn flows. Another important point is that the output reading of an infra-red optical sensor depends on the infra-red light intensity affected by the flowing solid particles or by static objects (saturated measurement). Based on these conditions, the chosen sampling rate of 88.03 Hz gives the system the flexibility to visualize and interpret the forms of the flow structure. For example, at a low flow rate of 27 gs-1 (Figure 6.21), one batch of solid particles dropped across measurement cycles 2 and 3, whereas at flow rates of 49 gs-1 and above each batch of solid particles can be captured within an individual measurement cycle. Table 7.1 shows the information obtained from the measurements of the 49 gs-1, 71 gs-1, and 126 gs-1 plastic beads flows. For each flow, one sensor representing each projection (0°, 45°, 90°, and 135°) was chosen, where τm is the transit time, V the measured velocity, and Rmaks the maximum cross-correlation function value. The error of the measured velocity relative to the calculated velocity (%e) was obtained using the following equation:

%e = ((V − Vt)/Vt) × 100                                           (7.2)

where:
V = measured velocity value
Vt = calculated (theoretical) velocity value
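For example, with the measured value V = 4.40 ms-1 and the calculated value Vt = 4.46 ms-1 from sub-section 7.1.2, equation 7.2 gives %e = ((4.40 − 4.46)/4.46) × 100 ≈ −1.3%, consistent with the −1.34% listed for every sensor in Table 7.1.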

Table 7.1: Information from sensor-to-sensor cross-correlation measurement

Flow       Sensor   τm        V (ms-1)   %e      Rmaks   Cycle   Cn,b range (CFLBPi+s)
49 gs-1    06       22.72 ms  4.40       -1.34   2.35    2       0 ≤ Cn,bup ≤ 4.16;  0 ≤ Cn,bdown ≤ 5.35
           21       22.72 ms  4.40       -1.34   3.89    17      0 ≤ Cn,bup ≤ 39.88; 0 ≤ Cn,bdown ≤ 21.29
           45       22.72 ms  4.40       -1.34   3.23    15      0 ≤ Cn,bup ≤ 7.26;  0 ≤ Cn,bdown ≤ 8.08
           56       22.72 ms  4.40       -1.34   2.57    12      0 ≤ Cn,bup ≤ 12.70; 0 ≤ Cn,bdown ≤ 10.01
71 gs-1    06       22.72 ms  4.40       -1.34   5.38    4       0 ≤ Cn,bup ≤ 49.35; 0 ≤ Cn,bdown ≤ 47.20
           29       22.72 ms  4.40       -1.34   2.36    7       0 ≤ Cn,bup ≤ 5.64;  0 ≤ Cn,bdown ≤ 5.39
           40       22.72 ms  4.40       -1.34   5.86    19      0 ≤ Cn,bup ≤ 55.61; 0 ≤ Cn,bdown ≤ 59.32
           55       22.72 ms  4.40       -1.34   11.19   15      0 ≤ Cn,bup ≤ 8.17;  0 ≤ Cn,bdown ≤ 9.03
126 gs-1   00       22.72 ms  4.40       -1.34   4.12    2       0 ≤ Cn,bup ≤ 34.23; 0 ≤ Cn,bdown ≤ 38.99
           19       22.72 ms  4.40       -1.34   10.78   18      0 ≤ Cn,bup ≤ 67.00; 0 ≤ Cn,bdown ≤ 64.07
           45       22.72 ms  4.40       -1.34   2.04    7       0 ≤ Cn,bup ≤ 64.80; 0 ≤ Cn,bdown ≤ 65.13
           56       22.72 ms  4.40       -1.34   16.08   2       0 ≤ Cn,bup ≤ 34.23; 0 ≤ Cn,bdown ≤ 38.99

The time taken to process the cross-correlation of all 64 sensors was 8.88 ms. Overall, Table 7.1 shows that the amplitude of the cross-correlation results for the sensor-to-sensor velocity measurement technique depends on the infra-red light absorption at particular locations within the flow rig. The results do not, however, follow the volumetric concentration of the flow regime, because the Cn,b values are obtained from all sensors at every projection. For example, at a flow rate of 49 gs-1, sensor 21 and sensor 45, with projection angles of 45° and 90° respectively, produced maximum cross-correlation (Rmaks) amplitudes of similar magnitude, yet the Cn,b values associated with each sensor are different.


7.3

Velocity Profile

The cross-sectional area of the pipe was divided into 16 × 16 pixels, as discussed in sub-section 3.7.1. For every tomogram, the individual pixels of the upstream and downstream sections were named u(x,y) and d(x,y), over the integer ranges -8 ≤ x ≤ 7 and -8 ≤ y ≤ 7. Each pixel holds a numerical value which is updated every 22.72 ms, so the u(x,y) and d(x,y) values can be represented as the time series ux,y(t) and dx,y(t). These series are cross-correlated, and the position of the peak cross-correlation coefficient gives the time required for the solid particles to move from the upstream cross section to the downstream cross section. A second profile was reconstructed to represent the maximum cross-correlation coefficient (Rmax) of each pixel in the x and y planes. The largest of the Rmax(x,y) values (HighRmax) was used as the maximum reference, the minimum reference was set at zero, and the same 256-point color scale was used.
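A minimal sketch of this pixel-to-pixel procedure is given below. It assumes that the reconstructed tomogram sequences are already available as frame arrays (the names up, down, Ts, and L are placeholders for this illustration), cross-correlates each pixel's upstream and downstream time series, and stores the resulting velocity and Rmax for every pixel; the reconstruction itself and the 256-point color scaling are omitted.

```cpp
#include <array>
#include <cstddef>
#include <vector>

constexpr int N = 16;                               // 16 x 16 pixel tomograms
using Frame = std::array<std::array<double, N>, N>; // one tomogram

// Velocity and Rmax profiles produced by the pixel-to-pixel correlation.
struct Profiles {
    Frame velocity{};   // ms-1, one value per pixel
    Frame rmax{};       // peak cross-correlation value per pixel
};

// up, down : time series of upstream/downstream tomograms, one Frame per 22.72 ms
// Ts       : time between frames (s), L : sensor plane spacing (m)
Profiles pixelToPixel(const std::vector<Frame>& up, const std::vector<Frame>& down,
                      double Ts, double L, int maxLag) {
    Profiles p;
    for (int x = 0; x < N; ++x) {
        for (int y = 0; y < N; ++y) {
            int bestLag = 0;
            double bestR = 0.0;
            for (int lag = 1; lag <= maxLag; ++lag) {          // positive lags only
                double r = 0.0;
                for (std::size_t t = 0; t + lag < up.size() && t + lag < down.size(); ++t)
                    r += up[t][x][y] * down[t + lag][x][y];    // correlation sum
                if (r > bestR) { bestR = r; bestLag = lag; }
            }
            p.rmax[x][y] = bestR;                              // Rmax(x,y)
            p.velocity[x][y] = (bestLag > 0) ? L / (bestLag * Ts) : 0.0;
        }
    }
    return p;
}
```

HighRmax is then simply the largest entry of the rmax array, and the velocity values can be mapped onto the color scale between 0 and 4.46 ms-1 used in Figures 7.8, 7.12, and 7.19.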

7.3.1

Results of Velocity Profile

Figure 7.8(a) shows the velocity profiles obtained using the pixel-to-pixel cross correlation at a flow rate of 49 gs-1 and measurement cycles 1, 4, and 6, while Figure 7.8(b) shows the corresponding Rmax(x,y) profiles at the selected measurement cycles.

[Figure 7.8(a) panels: i) PT: 33.64 ms, ii) PT: 30.46 ms, iii) PT: 31.11 ms, color scale 0 to 4.46 m/s. Figure 7.8(b) panels: i) HighRmax = 7.309, ii) HighRmax = 74.43, iii) HighRmax = 31.69, color scale 0 to HighRmax]

Figure 7.8: (a) Velocity profiles for solid particle flows at a flow rate of 49 gs-1, (i) cycle = 1, (ii) cycle = 4, (iii) cycle = 6, and (b) Rmax(x,y) profiles at each measurement cycle respectively

Figures 7.9, 7.10, and 7.11 show the output signals ux,y(t) and dx,y(t) and the cross-correlation Rx,y(τ) at measurement cycles 1, 4, and 6 respectively, for the (x,y) coordinates (2,-4), (-2,1), and (0,1).

Figure 7.9: Output signal for pixel (2,-4), (a) Upstream and (b) Downstream, (c) the correlation function for the upstream and downstream pixel (2,-4) at 49 gs-1 and cycle = 1

Figure 7.10: Output signal for pixel (-2,1), (a) Upstream and (b) Downstream, (c) the correlation function for the upstream and downstream pixel (-2,1) at 49 gs-1 and cycle = 4

Figure 7.11: Output signal for pixel (0,1), (a) Upstream and (b) Downstream, (c) the correlation function for the upstream and downstream pixel (0,1) at 49 gs-1 and cycle = 6

Figure 7.12(a) shows the velocity profiles obtained using the pixel-to-pixel cross correlation at a flow rate of 71 gs-1 and measurement cycles 1, 3, 4, 6, 8, and 10, while Figure 7.12(b) shows the corresponding Rmax(x,y) profiles at the selected measurement cycles.

[Figure 7.12(a) panels: i) PT: 31.42 ms, ii) PT: 33.17 ms, iii) PT: 30.57 ms, iv) PT: 33.29 ms, v) PT: 30.42 ms, vi) PT: 30.58 ms, color scale 0 to 4.46 m/s. Figure 7.12(b) panels: i) HighRmax = 6.31, ii) HighRmax = 66.47, iii) HighRmax = 21.22, iv) HighRmax = 64.31, v) HighRmax = 8.79, vi) HighRmax = 14.90, color scale 0 to HighRmax]

Figure 7.12: (a) Velocity profiles for solid particle flows at a flow rate of 71 gs-1, (i) cycle = 1, (ii) cycle = 3, (iii) cycle = 4, (iv) cycle = 6, (v) cycle = 8, (vi) cycle = 10, and (b) Rmax(x,y) profiles at each measurement cycle respectively

Figures 7.13, 7.14, 7.15, 7.16, 7.17, and 7.18 show the output signals ux,y(t) and dx,y(t) and the cross-correlation Rx,y(τ) at measurement cycles 1, 3, 4, 6, 8, and 10 respectively, for the (x,y) coordinates (1,1), (11,0), (2,-1), (-4,-1), (4,-1), and (12,6).

Figure 7.13: Output signal for pixel (1,1), (a) Upstream and (b) Downstream, (c) the correlation function for the upstream and downstream pixel (1,1) at 71 gs-1 and cycle = 1

Figure 7.14: Output signal for pixel (11,0), (a) Upstream and (b) Downstream, (c) the correlation function for the upstream and downstream pixel (11,0) at 71 gs-1 and cycle = 3

Figure 7.15: Output signal for pixel (2,-1), (a) Upstream and (b) Downstream, (c) the correlation function for the upstream and downstream pixel (2,-1) at 71 gs-1 and cycle = 4

Figure 7.16: Output signal for pixel (-4,-1), (a) Upstream and (b) Downstream, (c) the correlation function for the upstream and downstream pixel (-4,-1) at 71 gs-1 and cycle = 6

Figure 7.17: Output signal for pixel (4,-1), (a) Upstream and (b) Downstream, (c) the correlation function for the upstream and downstream pixel (4,-1) at 71 gs-1 and cycle = 8

Figure 7.18: Output signal for pixel (12,6), (a) Upstream and (b) Downstream, (c) the correlation function for the upstream and downstream pixel (12,6) at 71 gs-1 and cycle = 10

Figure 7.19(a) shows the velocity profiles obtained using the pixel-to-pixel cross correlation at a flow rate of 126 gs-1 and measurement cycles 5, 6, 7, 8, 10, and 11, while Figure 7.19(b) shows the corresponding Rmax(x,y) profiles at the selected measurement cycles.

[Figure 7.19(a) panels: i) PT: 30.58 ms, ii) PT: 34.34 ms, iii) PT: 30.44 ms, iv) PT: 35.43 ms, v) PT: 31.43 ms, vi) PT: 33.46 ms, color scale 0 to 4.46 m/s. Figure 7.19(b) panels: i) HighRmax = 29.37, ii) HighRmax = 9.26, iii) HighRmax = 54.45, iv) HighRmax = 60.45, v) HighRmax = 54.46, vi) HighRmax = 56.15, color scale 0 to HighRmax]

Figure 7.19: (a) Velocity profiles for solid particle flows at a flow rate of 126 gs-1, (i) cycle = 5, (ii) cycle = 6, (iii) cycle = 7, (iv) cycle = 8, (v) cycle = 10, (vi) cycle = 11, and (b) Rmax(x,y) profiles at each measurement cycle respectively

Figures 7.20, 7.21, 7.22, 7.23, 7.24, and 7.25 show the output signals ux,y(t) and dx,y(t) and the cross-correlation Rx,y(τ) at measurement cycles 5, 6, 7, 8, 10, and 11 respectively, for the (x,y) coordinates (5,-1), (0,-2), (-5,3), (-5,3), (-3,-3), and (-5,1).

Figure 7.20: Output signal for pixel (5,-1), (a) Upstream and (b) Downstream, (c) the correlation function for the upstream and downstream pixel (5,-1) at 126 gs-1 and cycle = 5

Figure 7.21: Output signal for pixel (0,-2), (a) Upstream and (b) Downstream, (c) the correlation function for the upstream and downstream pixel (0,-2) at 126 gs-1 and cycle = 6

Figure 7.22: Output signal for pixel (-5,3), (a) Upstream and (b) Downstream, (c) the correlation function for the upstream and downstream pixel (-5,3) at 126 gs-1 and cycle = 7

Figure 7.23: Output signal for pixel (-5,3), (a) Upstream and (b) Downstream, (c) the correlation function for the upstream and downstream pixel (-5,3) at 126 gs-1 and cycle = 8

Figure 7.24: Output signal for pixel (-3,-3), (a) Upstream and (b) Downstream, (c) the correlation function for the upstream and downstream pixel (-3,-3) at 126 gs-1 and cycle = 10

Figure 7.25: Output signal for pixel (-5,1), (a) Upstream and (b) Downstream, (c) the correlation function for the upstream and downstream pixel (-5,1) at 126 gs-1 and cycle = 11

7.3.2

Discussion on Pixel-to-Pixel Cross Correlation Results

Each pixel in the 16 × 16 pixel velocity profiles represents the combined output readings of four sensors, one from each of the 0°, 45°, 90°, and 135° projections. The output reading from each sensor is multiplied by the corresponding sensitivity map value at each pixel. It can be observed that the higher the series of sensor output readings at the associated upstream and downstream pixels, the higher the cross-correlation peak, as shown in the figures above. The time required to complete a pixel-to-pixel cross correlation process was between 30 and 33 ms. At the flow rate of 49 gs-1 (Figure 7.8), the first sample (cycle = 1) and its Rmax(x,y) values show that the flow is non-homogeneous, since non-uniform velocity and Rmax(x,y) profiles were obtained. The other samples (cycle = 4 and cycle = 6), however, show velocity and Rmax(x,y) values concentrated in certain areas. A similar phenomenon occurred at the flow rates of 71 gs-1 and 126 gs-1. Theoretically, the velocity of the solid particles should not change when the flow rate changes (Azrita, 2002). This condition was not observed here because the solid particle flow is influenced by the rotary valve, collisions between the solid particles, and the air resistance force.
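The weighting described at the start of this discussion can be sketched as follows. This is only an illustration of combining the four projection readings with their sensitivity maps into a single pixel value (the array names are hypothetical); it is not the CFLBPi+s or HLBP implementation of Chapter 5.

```cpp
#include <array>

constexpr int N = 16;   // resolution of the velocity profiles (16 x 16 pixels)

// One sensitivity map per projection: maps[p][x][y] is the weight of the sensor
// from projection p (0, 45, 90 or 135 degrees) whose beam covers pixel (x, y).
using SensitivityMaps = std::array<std::array<std::array<double, N>, N>, 4>;

// Back-projected pixel value: the reading of the sensor covering pixel (x, y)
// in each of the four projections, multiplied by the corresponding sensitivity
// map value and summed over the projections.
double pixelValue(const SensitivityMaps& maps,
                  const std::array<double, 4>& sensorReading, int x, int y) {
    double value = 0.0;
    for (int p = 0; p < 4; ++p)
        value += sensorReading[p] * maps[p][x][y];
    return value;
}
```

Because each pixel value is such a weighted sum, a strong attenuation seen by any of the four sensors covering a pixel raises both ux,y(t) and dx,y(t), which in turn raises the cross-correlation peak for that pixel.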

7.4

Summary

Based on the theory of free fall motion and the air resistance force, sensor-to-sensor and pixel-to-pixel velocity measurements were carried out, and the measured results were compared with the theoretically calculated values. The sensor-to-sensor cross correlation gives the average velocity of the solid particles within the pipeline, while the pixel-to-pixel cross correlation gives the distribution of the solid particle velocities across the flow regime.

CHAPTER 8

CONCLUSION AND RECOMMENDATIONS

8.1

Conclusion

The infra-red tomography measurement system has been successfully designed and tested. The specific objectives of the thesis have been fulfilled as follows: •

An introduction to process tomography, the types of tomography, and several optical tomography studies carried out by previous researchers have been briefly described, together with the characteristics of infra-red light, in Chapter 2 (Objective one).



The data acquisition system has been discussed in Chapter 4 in which the discussion includes the characteristics and features of the selected data acquisition system. Various types of image reconstruction algorithms used to reconstruct the image have been discussed in Chapter 3 (Objective two).



The design of the measurement fixture and the preparation of the fiber optics are very important to ensure that the infra-red light emitted towards the detectors is collimated, so that the interaction between the collimated infra-red light and the targeted object could be studied using the various flow models discussed in Chapter 5 and Chapter 6 (Objective three).

•

The inverse and forward problems have been solved in Chapter 3, which applies the relationship between the spatial domain (x1, y1), the projection domain (x1', y1'), and the measurement of the infra-red light intensity (Objective four).



A description of the cross-correlation method and of its relationship with free fall motion, which were used for the velocity measurement and the velocity profiles, is given in Chapter 3 and Chapter 7 respectively (Objective five).



The infra-red tomography hardware system has been developed as discussed in Chapter 4, which focused on important aspects such as the arrangement of the infra-red sensors around the pipeline, the selection of the infra-red transmitters and receivers, the selection and preparation of the fiber optics, and the circuit designs (Objective six).



The developed infra-red tomography system is capable of utilizing a set of 254 buffers with 64 samples per buffer, resulting from the synchronized signal conditioning and data sampling processes, which are controlled using digital signals produced by a PIC controller as described in Chapter 4 (Objective seven).



The CFLBPi+s and HLBP algorithms, both at 32 × 32 pixel resolution, have been found to be the most suitable image reconstruction algorithms based on the qualitative and quantitative measurements discussed in Chapter 5 (Objective eight).



The measurement system has been implemented to obtain 30 cycles of measurement data for 10 sets of mass flow rates (in grams per second) at an optimum main sampling frequency of 88.03 Hz, as described in Chapter 6 (Objective nine).



The infra-red tomography prototype system has been tested on a gravity flow rig, and the results (Chapter 6) show that the pattern of the flow produced by the rotary valve can be classified as an intermittent flow (Objective ten).

• Suggestions for further research are discussed in Section 8.2.

8.2 Contribution to the Field of Tomography System

Several important aspects have been achieved from the hardware and software development: •

The use of infra-red fiber optics in the infra-red tomography system improves the measurement and image quality.



The concepts of mode switching, sample and hold, synchronous signal sampling, and interfacing system programming using DriverLINX had increased the total number of measurements.



The infra-red tomography measurement system was able to measure the concentration profile and the velocity of the solid particle flow in a gravity flow rig.

8.3

Recommendations for Future Work

Based on the constructed prototype and the various experiments carried out using the constructed tomography system, the following suggestions are given which can improve the system: •

The infra-red optical sensors and thermal infra-red imaging camera (infra-red CCD) can be combined to function as a thermal imaging system and a CCD standard system. Infra-red CCDs can detect a broad spectrum of near infrared light, provide heat profile of investigated object, and could provide velocity information if the frame rate is known.



The use of a CCD infra-red camera can enable more characteristics to be examined i.e. infra-red light reflection effect, infra-red light attenuation and scatter effect, and environment changing effect (temperature and pressure).



Bluetooth technology, a cable replacement technology, can be introduced. The use of Bluetooth reduces the need for cables or wires, enables the system to be combined with internet applications as well as a wireless local area network, and allows flow systems which are difficult to access, such as an underground sewage system, to be tested.



Make use of SMD (surface mount device) technology in the circuit development stage in order to gain several advantages, such as a reduced circuit size and the possibility of implementing complex circuit designs; the quality of the output signal can also be increased by combining the circuit with programmable gain arrays (PGA).



An investigation should be carried out to develop a more mobile tomography system i.e. developing a tomography system based on the Personal Digital Assistant (PDA) system.

•

Further work is required to optimize the lens curvature of fiber optic. A lens production fixture should be designed so that the lens surface is smoother, providing a more collimated beam.



An adjustable sampling frequency system can be used, which enables the effects of various sampling frequencies to be investigated.



Multi modality tomography should be investigated in which the optical tomography system can be combined with other types of sensing system with the aim of comparing the accuracy of the measurements and increasing the understanding on the flow process.



The tomography system can be tested using other two-phase flows, such as gas-liquid flow or solid-liquid flow, to observe its limitations as well as its accuracy and precision in measuring concentration and velocity.



The sampling frequency limitation as stated in sub-section 4.1.4.3 can be solved by designing a data acquisition system that may contain a series of parallel converters to increase the sampling speed and to avoid the use of a sample and hold unit.



The use of other image reconstruction algorithms should be investigated such as Inverse Radon transform, Inverse Fourier transform, and modified Newton Raphson algorithms.



The use of artificial intelligence, such as neural networks or fuzzy logic, should be explored for image reconstruction, or an auto-adjusting sampling rate system could be proposed.



The longer term aim of this project is to determine the volumetric flow rate of the conveyed phase. This can be achieved by combining the concentration profiles with the velocity profiles.

•

In this project, measurements were made on concentration and velocity profiles. Further investigation should be carried out to characterize the particles in terms of shape, size, and thermal properties by modifying the transducer circuit such as combining it with an infra-red CCD camera or a pyroelectric transducer and using optical fiber with smaller diameters.



A dedicated sensor auto calibration system should be designed. This could be based on a voltage controlled light source or a laser light source, followed by a mechanical or electrical light chopper. The dedicated system should be under the control of a microcontroller or a PC.


REFERENCES

Abdul Rahim, Ruzairi (1996). A tomography imaging system for pneumatic conveyors using optical fibres. Sheffield Hallam University: Ph.D. Thesis.

Abdul Rahim, R., Green R. G., Horbury, N., Dickin, F. J., Naylor, B. D., and Pridmore, T. P. (1993). Further development of a tomographic imaging system using optical fibres for pneumatic conveyors. Meas. Sci. Technol. Vol(7): 419-422.

Alias, Azrita (2002). Mass Flow Visualization of Solid Particles in Pneumatic Pipelines Using Electrodynamics Tomography System. Universiti Teknologi Malaysia.: M.Eng. Thesis.

Andrew H, John D.P., and Robert B.W. (2003). A Novel Tomographic Flow Analysis System. 3rd World Congress on Industrial Tomography, Banff, Canada.

Badea, Cristian (2000). Volume Imaging Using a Combined Cone Beam CT-DTS Approach. University of Patras: Ph.D. Thesis.

Beck, M.S. (1995). Selection of sensing techniques. In: Williams, R. A & Beck, M. S (ed) Process Tomography – Principles, Technique and Application. Oxford: Butterworth-Heinemann Ltd. 41-48.

Beck, M. S. and Williams, R. A. (1996). Process tomography: a European innovation and its applications. Meas. Sci. Technol. Vol(7): 215–224.


Beck, M. S., Plaskowski, A. B., and Green, R. G. (1986). Imaging for measurement of two-phase flow. Proc. Flow Visualization IV. Paris: 585-8.

Bell, David A. (1990). Operational Amplifier, Applications, Troubleshooting, and Design. Canada: Prentice Hall.

Bolomey, J. Ch. (1995). Microwave sensors. . In: Williams, R. A & Beck, M. S (ed) Process Tomography – Principles, Technique and Application. Oxford: Butterworth-Heinemann Ltd. 151-165.

Bolomey, J. Ch. and Pichot, C. (1991). Microwave tomography: from theory to practical imaging systems. Int. J. Imaging System and Technology. 2: 144156.

Brigham, E. (1974). The Fast Fourier Transform. Englewood Cliffs, NJ: Prentice Hall.

Bronson, G. J. (1999). C++ for Engineers and Scientists. Kansas: Brook Publishing Company.

Brown, G. J., Reilly, D., and Mills, D. (1996). Development of an ultrasonic tomography system for application in pneumatic conveying. Meas. Sci. Technol. Vol(7): 396-405.

Brown, B., Smallwood, R., Barber, D., Lawford, P., and Hose, D. (1999). Medical Physics and Biomedical Engineering. Bristol: Institute of Physics Publishing.

Chan, Kok San. (2003). Real Time Image Reconstruction for Fan Beam Optical Tomography System. Universiti Teknologi Malaysia: M.Eng. Thesis.

Daniels, A. R. (1996). Dual Modality Tomography for the Monitoring of Constituent Volumes in Multi-Component Flows. Sheffield Hallam University: Ph.D. Thesis.

Dickin, F. J. and Wang, M. (1995). Impedance sensors – conducting systems. In: Williams, R.A & Beck, M.S (ed) Process Tomography – Principles, Technique and Application. Oxford: Butterworth-Heinemann Ltd. 63-83.

Dugdale, W. P. (1994). An optical instrumentation system for the imaging of twocomponent flow. University of Manchester: Ph.D. Thesis.

Dyakowski, T. (1996). Process Tomography Applied to multi-phase measurement. Meas. Sci. Technol. Vol(7): 343-353.

Ellenberger, U., Soom, B., and Balmer, J. E. (1993). Characterization of an x-ray streak camera at 18.2 nm. Meas. Sci. Technol. Vol(4): 874-880.

Embree, P. M. and Daniele, Damon. (1999). C++ Algorithm for Digital Signal Processing. Upper Saddle River, New Jersey: Prentice Hall.

Feng, J., Okamoto, K., Tsuru, D., Madarame, H., and Fumizawa, M. (2002). Visualization of 3D gas density distribution using optical tomography. Chemical Engineering Journal. 86: 243-250.

Gibbs, S. J. and Hall, L. D. (1996). What roles are there for magnetic resonance imaging in process tomography?. Meas. Sci. Technol.Instrum. Vol(7): 827837.

Graeme, Jerald.G. (1995). Photodiode amplifiers – Op amp solutions. New York: McGraw-Hill.

Graham, Deryn. and Barret, Anthony. (1997) .Knowledge-Based Image Processing Systems. Brunnel, Middlesex: Springer.

Grassler, T. and Wirth, K.E. (1999). X-Ray Computer Tomography-Potential and Limitation for the Measurement of Local Solids Distribution in Circulating Fluidized Beds. Proc. 1st World Congress on Industrial Process Tomography. Buxton. 202-209.

Green, R. G., Abdul Rahim, R., Evans, K., Dickin, F. J., Naylor, B. D., and Pridmore, T. P. (1996). Concentration profiles in a pneumatic conveyor by optical tomography measurement. Meas. Sci. Technol. Vol(7): 419–22.

Halow, J. S. and Nicoletti, P. (1992) Observations of fluidized bed coalescence using capacitance imaging. Powder Technol. Vol(69): 255–77.

Hammer, E.A. and Green, R.G. (1983). The Spatial Filtering Effect of Capacitance Transducer Electrodes. Journal Physics Science Instrument. 16:483-443.

Hauptmann, P., Hoppe, N. and Puttmer, A. (2002) . Application of ultrasonic sensors in the process industry. Meas. Sci. Technol. Vol(13): 73-83.

Hartley, A. J., Dugdale, P., Green, R. G., Jackson, R. G., and Landauro, J. (1995). Design of an optical tomography System. In: Williams, R.A & Beck, M.S (ed) Process Tomography – Principles, Technique and Application. Oxford: Butterworth-Heinemann Ltd. 183-197.

Hawkesworth M. R., O’Dwyer M. A., Walker J., Fowles P., Heritage J., Stewart P. A. E., Witcomb R. C., Bateman J. E., Connolly J. F. and Stephenson R. (1986). A positron camera for industrial application. Nucl. Instrum. Methods. 253:145–157

Herman, Gabor T. (1980). Image Reconstruction from Projection. New York: Academic Press.

Hoyle, B. S. and Xu, L. A. (1995). Ultrasonic Sensors. In: Williams, R. A. & Beck, M. S. (ed) Process Tomography – Principles, Techniques and Application. Oxford: Butterworth-Heinemann Ltd. 120-149.

Huang, S. M., Plaskowski, A. B., Xie, C. G. and Beck, M. S. (1986). Tomographic imaging of two-component flow using capacitance sensors. J. Phys. E: Sci. Instrum. Vol(22): 173-177.

Ibrahim, Sallehuddin. (2000). Measurement of Gas Bubble in a Vertical Water Column Using Optical Tomography. Sheffield Hallam University: Ph.D. Thesis.

Ibrahim, S., Green, R. G., Dutton, K., Abdul Rahim, R., Evans, K. and Goude, A. (1999). Optical Fibres for Process Tomography: A Design Study. In: 1st World Congress on Industrial Process Tomography, Buxton, Greater Manchester, April 14-17.

Jaworski, A. J., and Dyakowski, T. (2001). Application of Electrical Capacitance Tomography for Measurement of Gas-Solids Flow Characteristic in Pneumatic Conveying System. Meas. Sci. Technol. Vol(12): 1109-1119.

Jung, Walter G. (1973). IC Op-Amp Cookbook. Indiana U. S. A.: Howard W. Sams & Co., Inc.

Jordana, J., Gasulla, M., and Pallas-Arenny, R. (2001). Application Electrical resistance tomography to detect leaks from buried pipes. Meas. Sci. Technol. Vol(12): 1061-1068.

Kak, Avinash C. and Slaney Malcom. (1999). Principle of Computerized Tomography Imaging. New York.: IEEE Press. The Institute of Electrical and Electronics Engineers, Inc.

Kaplan, Herbert. (1993). Practical Application of Infrared Thermal Sensor and Imaging Equipment. SPIE, The International Society for Optical Engineering: Bellingham, Washington. 360-365.

Keithley Instrument Inc (2001). Technical Reference Manual. Third Printing. Ohio: Scientific Software Tool Inc.

Kugelstadt, Thomas (2001). “Op Amps for Everyone”. Texas: Texas Instruments Inc.

Maher, A., and Sid, Ahmed. (1995). Image Processing (Theory, Algorithms, and Architectures). New York: McGraw-Hill, Inc.

Mass, J. S. (1972). Basic Infra-red Spectroscopy. London: Heyden & Son Ltd.

McKee, S. L. (1995). Applications of Nuclear Magnetic Resonance Tomography. In: Williams, R.A & Beck, M.S (ed) Process Tomography – Principles, Technique and Application. Oxford: Butterworth-Heinemann Ltd. 539-548.

Menke, William. (1984). Geophysical Data Analysis: Discrete Inverse Theory. Academic Press Inc.,pg 11-14 and pg 182-186.

Mohd. Hafiz Fazalul Rahiman (2002). Image Reconstruction Using Transmissionmode of Ultrasonic Tomography In Water-Particles Flow. Universiti Teknologi Malaysia: B.Eng. Thesis.

Neuffer, D., Alvarez, A., Owens, D. H., Ostrowski, K. L., Luke, S. P., and Williams, R. A. (1999). Control of pneumatic conveying using ECT. In: Proceedings of 1st World Congress on Industrial Process Tomography. Buxton, UK. 71–77.

Nyfor, E. and Vainikainen, P. (1989). Industrial Microwave Sensors. Norwood: Artech House.

Isaksen, O. (1996). A review of reconstruction techniques for capacitance tomography. Meas. Sci. Technol. Vol(7): 325-337.

Pang, Jon Fea. (2004). Real-Time Velocity and Mass Flow Rate Measurement Using Optical Tomography. Universiti Teknologi Malaysia: M.Eng. Thesis.

Parker,D. J. and McNeil, P. A. (1996). Positron emission tomography for process applications Meas. Sci. Technol.Instrum. Vol(7): 287-296.

Peyton, A. J. (1995). Mutual inductance tomography. In: Williams, R.A & Beck, M.S (ed) Process Tomography – Principles, Technique and Application. Oxford: Butterworth-Heinemann Ltd. 85-100.

Philips Semiconductor. (1994). NE/SE5537. Data Sheet. Philips Semiconductor Inc.

Plaskowski, A., Beck, M. S., Thorn, R., and Yakowski, T. (1997). Imaging Industrial Flows. Bristol and Philadelphia: Institute of Physics Publishing.

Ramli, N., Green, R. G., Abdul Rahim, R., and Naylor, B. (1999). Fibre Optic Lens Modelling for Optical Tomography. In: Proceedings of 1st World Congress on Industrial Process Tomography. Buxton, UK. 517–521.

Rzasa, M. R. and Plaskowski, A. (2003). Application of optical tomography for measurements of aeration parameters in large water tanks. Meas. Sci. Technol. Vol(14): 199-204.

Syed Salim, Syed Najib. (2003). Concentration Profiles and Velocity Measurement Using Ultrasonic Tomography. Universiti Teknologi Malaysia.: M.Eng. Thesis.

Shung, Smith, K. M., and Tsui, B. (1995). Principle of Medical Imaging. San Diego: Academic Press.

Tan, Chee Siong. (2002). Velocity Measurement of Solid Particle in Pneumatic Conveyor using Optical and Ultrasonic Sensor via Cross Correlation Method. Universiti Teknologi Malaysia: B. Eng. Thesis.

Tapp, H. S., Kemsley, E. K., Wilson, R. H., and Holley, M. L. (1998). Image improvement in soft-field tomography through the use of chemometrics. Meas. Sci. Technol. Vol(9): 592-598.

Tapp, H. S., Peyton, A. J., Kemsley, E. K., and Wilson, R. H. (2003). Chemical engineering applications of electrical process tomography. Sensors and Actuators B. Vol(92): 17–24

Taroni, P., Pifferi, A., Toricelli, A., Spinelli, L., Danesini, G. M., and Cubeddu, R. (2004). Do shorter wavelengths improve contrast in optical mammography?. Phys. Med. Biol. Vol(49): 1203–1215.

Uchiyama, H., Nakajma, M., and Yuta, S. (1985). Measurement of flame temperature distribution by IR emission computed tomography. Appl.Opt, 24: 4111-4116

Vanzetti, Riccardo. (1972). Practical Application of Infrared Techniques. New York: John Wiley & Sons.

Wang, M., Dorward, A., Vlaey, D., and Mann, R. (1999). Measurements of gasliquid mixing in a stirred vessel using electrical resistance tomography (ERT). In: Proceedings of 1st World Congress on Industrial Process Tomography. Buxton, UK. 78–83.

Wang, M., Yin W., and Holliday, N. (2002). A highly adaptive electrical impedance sensing system for flow measurement. Meas. Sci. Technol. Vol(13): 18841889.

Webb, S. (1992). The Physics of Medical Imaging. Bristol: Institute of Physics Publishing.

Williams, R. A. and Beck, M. S. (1995). Introduction into process tomography. In:Williams, R. A. & Beck, M. S. ed. Process Tomography – Principles, Techniques and Application . Oxford : Butterworth-Hienemenn Ltd. 3-12.

Williams, R. A. and Beck, M. S. (1997). Process Tomography, Principles, Techniques and Applications. Oxford, London: Butterworth-Heinemann.

Xie, C.G. (1995). Image Reconstruction. In :- Williams R. A. and Beck, M. S. ed. Process Tomography – Principles. Techniques and Application. Oxford : Butterworth-Heinemann Ltd. 1995. 281-323.

Xu, Z. Z., Peyton, A. J., and Beck, M. S. (1993). A feasibility study of electromagnetic tomography. In: Process Tomography – A Strategy for Industrial Exploitation. Karlsruhe. 218-212.

Yan, Y., Byrne, B., Woodhead, S., and Coulthard, J. (1995). Velocity measurement of pneumatically conveyed solids using electrodynamic sensors. Meas. Sci. Technol. Vol(6): 515-537.

Yan, G., Zhong, J., Liao, Y., Lai, S., Zhang, M., and Gao, D. (2005). Design of an applied optical fiber process tomography system. Sensors and Actuators. B 104: 324–331.

Yang, W. Q., and Szuster, K. (1996). A long-distance high-speed serial link for process tomography systems. Meas. Sci. Technol. Vol(7): 853-858.

Yang, W.Q., and Liu, S. (2000). Role of tomography in gas/solids flow measurement. Flow Measurement and Instrumentation. Vol(11): 237–244.

Zeni, L., Bernini, R., and Pierri, R. (2000). Optical tomography for dielectric profiling in processing electronic materials. Chemical Engineering Journal. 77: 137–142.

( )

Appendix A.I x’ = -8, Ru,dx16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0 -1 -2 -3 -4 -5 -6 -7 -8 -9 -10 -11 -12 -13 -14 -15 -16

-16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 91 133 0 0 0 0 0 0 0 0 0 0

-15 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 91 133 0 0 0 0 0 0 0 0 0

-14 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 91 133 0 0 0 0 0 0 0 0

-13 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 91 133 0 0 0 0 0 0 0

x’ = 0, Ru,dx24 -15 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-14 175 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-13 28 175 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-12 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-11 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 91 133 0 0 0 0 0

V− 8, 45 x , y -10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 91 133 0 0 0 0

V0, 45 ( x, y ) -11 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-10 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-9 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 91 133 0 0 0

-8 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 91 133 0 0

-7 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 91 133 0

-6 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-6 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 91 133

-5 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 91

-4 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-3 0 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-2 0 0 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-1 0 0 0 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0

3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0 0 0

5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0 0

6 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0

8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

6 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0

9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0

10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0

11 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

12 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0

13 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0

14 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

11 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0

15 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

12 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0

13 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0

14 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 28 175 21 0 0

15 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 28 175 21 0


15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0 -1 -2 -3 -4 -5 -6 -7 -8 -9 -10 -11 -12 -13 -14 -15 -16

-16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-12 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 91 133 0 0 0 0 0 0

Appendix A.II x’ = -8, Ru,dx16 M-8,45°(x,y) 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0 -1 -2 -3 -4 -5 -6 -7 -8 -9 -10 -11 -12 -13 -14 -15 -16

-16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 15 0 0 0 0 0 0 0 0 0 0

-15 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 84 126 0 0 0 0 0 0 0 0 0

-14 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 91 133 0 0 0 0 0 0 0 0

-13 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 91 133 0 0 0 0 0 0 0

-12 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 91 133 0 0 0 0 0 0

-11 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 91 133 0 0 0 0 0

-10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 91 133 0 0 0 0

-9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 91 133 0 0 0

-8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 91 133 0 0

-7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 91 126 0

-6 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 84 15

-5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1

-10 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-9 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-8 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-7 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-6 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-5 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-4 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

6 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

11 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

12 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

13 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

14 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

15 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

x’ = 0, Ru,dx24 M0,45°(x,y) -16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-15 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-14 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-13 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-12 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-11 0 0 0 46 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-3 0 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-2 0 0 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

-1 0 0 0 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0

3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0 0 0

4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0 0

5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0 0

6 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0 0

7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0 0

8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0 0

9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0 0

10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0 0

11 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 28 175 21 0 0 0 0 0

12 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 28 46 0 0 0 0 0

13 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

14 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

15 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0


15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0 -1 -2 -3 -4 -5 -6 -7 -8 -9 -10 -11 -12 -13 -14 -15 -16

Appendix A.III

Resolution: 32x32 pixels a) Sensitivity map for 45º projection ( S φ ( x, y ) ) S 45 (x, y ) 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0 -1 -2 -3 -4 -5 -6 -7 -8 -9 -10 -11 -12 -13 -14 -15 -16

-16 0 0 0 0 0 0 0 0 0 0 18 1 146 90 73 235 55 94 199 28 18 18 0 0 0 0 0 0 0 0 0 0

-15 0 0 0 0 0 0 0 0 42 10 206 136 45 232 91 78 235 55 105 219 29 144 163 15 0 0 0 0 0 0 0 0

-14 0 0 0 0 0 0 16 16 176 165 24 211 136 45 232 91 78 235 55 105 219 29 151 201 15 0 0 0 0 0 0 0

-13 0 0 0 0 0 2 127 201 21 178 165 24 211 136 45 232 91 78 235 55 105 219 29 151 201 15 0 0 0 0 0 0

-12 0 0 0 0 25 215 37 151 201 21 178 165 24 211 136 45 232 91 78 235 55 105 219 29 151 201 15 0 0 0 0 0

-11 0 0 0 51 66 105 227 37 151 201 21 178 165 24 211 136 45 232 91 78 235 55 105 219 29 151 201 15 0 0 0 0

-10 0 0 2 65 236 66 105 227 37 151 201 21 178 165 24 211 136 45 232 91 78 235 55 105 219 29 151 201 15 0 0 0

-9 0 0 146 91 66 236 66 105 227 37 151 201 21 178 165 24 211 136 45 232 91 78 235 55 105 219 29 151 201 15 0 0

-8 0 21 36 225 91 66 236 66 105 227 37 151 201 21 178 165 24 211 136 45 232 91 78 235 55 105 219 29 151 201 15 0

-7 0 157 136 36 225 91 66 236 66 105 227 37 151 201 21 178 165 24 211 136 45 232 91 78 235 55 105 219 29 151 163 0

-6 18 27 211 136 36 225 91 66 236 66 105 227 37 151 201 21 178 165 24 211 136 45 232 91 78 235 55 105 219 29 144 18

-5 99 178 27 211 136 36 225 91 66 236 66 105 227 37 151 201 21 178 165 24 211 136 45 232 91 78 235 55 105 219 29 18

-4 11 178 178 27 211 136 36 225 91 66 236 66 105 227 37 151 201 21 178 165 24 211 136 45 232 91 78 235 55 105 219 28

-3 183 27 178 178 27 211 136 36 225 91 66 236 66 105 227 37 151 201 21 178 165 24 211 136 45 232 91 78 235 55 105 199

-2 135 211 27 178 178 27 211 136 36 225 91 66 236 66 105 227 37 151 201 21 178 165 24 211 136 45 232 91 78 235 55 94

-1 36 136 211 27 178 178 27 211 136 36 225 91 66 236 66 105 227 37 151 201 21 178 165 24 211 136 45 232 91 78 235 55

0 225 36 136 211 27 178 178 27 211 136 36 225 91 66 236 66 105 227 37 151 201 21 178 165 24 211 136 45 232 91 78 235

1 91 225 36 136 211 27 178 178 27 211 136 36 225 91 66 236 66 105 227 37 151 201 21 178 165 24 211 136 45 232 91 73

2 45 91 225 36 136 211 27 178 178 27 211 136 36 225 91 66 236 66 105 227 37 151 201 21 178 165 24 211 136 45 232 90

3 173 66 91 225 36 136 211 27 178 178 27 211 136 36 225 91 66 236 66 105 227 37 151 201 21 178 165 24 211 136 45 146

4 63 236 66 91 225 36 136 211 27 178 178 27 211 136 36 225 91 66 236 66 105 227 37 151 201 21 178 165 24 211 136 1

5 0 66 236 66 91 225 36 136 211 27 178 178 27 211 136 36 225 91 66 236 66 105 227 37 151 201 21 178 165 24 206 18

6 0 40 66 236 66 91 225 36 136 211 27 178 178 27 211 136 36 225 91 66 236 66 105 227 37 151 201 21 178 165 10 0

7 0 62 105 66 236 66 91 225 36 136 211 27 178 178 27 211 136 36 225 91 66 236 66 105 227 37 151 201 21 176 42 0

8 0 0 172 105 66 236 66 91 225 36 136 211 27 178 178 27 211 136 36 225 91 66 236 66 105 227 37 151 201 16 0 0

9 0 0 23 227 105 66 236 66 91 225 36 136 211 27 178 178 27 211 136 36 225 91 66 236 66 105 227 37 127 16 0 0

10 0 0 0 36 227 105 66 236 66 91 225 36 136 211 27 178 178 27 211 136 36 225 91 66 236 66 105 215 2 0 0 0

11 0 0 0 0 36 227 105 66 236 66 91 225 36 136 211 27 178 178 27 211 136 36 225 91 66 236 66 25 0 0 0 0

12 0 0 0 0 0 36 227 105 66 236 66 91 225 36 136 211 27 178 178 27 211 136 36 225 91 65 51 0 0 0 0 0

13 0 0 0 0 0 0 23 172 105 66 236 66 91 225 36 136 211 27 178 178 27 211 136 36 146 2 0 0 0 0 0 0

14 0 0 0 0 0 0 0 0 62 40 66 236 66 91 225 36 136 211 27 178 178 27 157 21 0 0 0 0 0 0 0 0

15 0 0 0 0 0 0 0 0 0 0 0 63 173 45 91 225 36 135 183 11 99 18 0 0 0 0 0 0 0 0 0 0


Resolution: 32x32 pixels b) Total distribution of the infra-red light beam in a specific rectangle ( S max (x, y ) ), Top = 856

15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0 -1 -2 -3 -4 -5 -6 -7 -8 -9 -10 -11 -12 -13 -14 -15 -16

-16 0 0 0 0 0 0 0 0 0 0 78 242 408 375 559 569 389 566 456 344 222 53 0 0 0 0 0 0 0 0 0 0

-15 0 0 0 0 0 0 0 0 82 154 677 512 356 723 425 412 720 356 481 695 316 553 375 52 0 0 0 0 0 0 0 0

-14 0 0 0 0 0 0 64 247 460 641 400 522 627 379 566 576 379 611 531 392 640 475 432 654 259 24 0 0 0 0 0 0

-13 0 0 0 0 0 56 467 486 497 554 476 515 545 470 530 533 467 554 522 476 551 500 486 558 486 410 13 0 0 0 0 0

-12 0 0 0 0 211 598 322 627 577 332 669 499 358 696 437 421 708 378 499 681 336 562 626 314 626 562 309 159 0 0 0 0

-11 0 0 0 106 451 390 703 413 462 692 355 512 650 325 587 612 332 653 537 359 692 462 390 694 390 462 692 331 10 0 0 0

-10 0 0 56 412 521 542 481 538 528 485 535 506 479 541 500 498 557 491 513 548 485 520 530 466 530 520 485 548 421 24 0 0

-9 0 0 386 376 542 612 377 596 561 371 636 502 397 654 452 445 657 417 502 639 376 553 596 366 596 553 376 639 502 251 0 0

-8 0 65 320 701 467 377 727 400 439 712 338 527 677 308 599 611 305 668 543 330 707 452 389 726 389 452 707 330 543 664 53 0

-7 0 313 612 412 536 582 400 570 551 406 603 513 438 622 467 459 622 431 496 611 406 543 582 412 582 543 406 611 496 431 358 0

-6 93 501 587 447 527 559 425 551 537 442 581 514 458 597 482 478 585 450 499 572 447 536 566 438 566 536 447 572 499 450 565 53

-5 367 554 338 702 470 370 710 392 442 712 353 526 673 318 608 608 306 653 526 335 702 470 392 720 392 470 702 335 526 653 306 212

-4 287 489 669 361 545 621 337 601 567 353 657 512 386 684 444 436 676 382 489 656 358 558 624 346 624 558 358 656 489 382 676 363

-3 475 518 512 512 512 512 512 512 512 512 512 517 523 512 512 512 512 512 512 512 512 512 512 528 512 512 512 512 512 512 512 450

-2 626 545 361 663 479 403 687 423 457 671 372 523 643 351 580 588 348 642 535 368 666 466 416 678 416 466 666 368 535 642 348 564

-1 370 470 696 328 554 654 314 632 582 317 682 498 351 711 427 416 718 371 498 689 322 570 632 304 632 570 322 689 498 371 718 416

0 559 521 437 587 503 465 599 473 492 593 443 510 566 427 547 557 439 574 525 452 593 488 458 586 458 488 593 452 525 574 439 557

1 576 526 412 612 498 448 624 459 484 618 421 511 586 402 557 570 413 593 528 429 618 481 442 612 442 481 618 429 528 593 413 559

2 319 467 701 323 557 657 308 635 585 312 686 497 347 716 425 413 724 367 497 694 317 572 635 298 635 572 317 694 497 367 724 386

3 506 542 378 646 482 417 668 434 463 653 388 522 627 370 572 579 367 628 533 385 648 471 428 658 428 471 648 385 533 628 367 389

4 309 523 487 537 506 493 543 496 502 539 489 518 545 483 524 526 483 533 516 487 539 504 494 558 494 504 539 487 516 533 483 239

5 62 485 682 347 548 632 321 611 572 338 669 512 374 699 437 428 692 371 487 670 343 562 634 330 634 562 343 670 487 371 682 80

6 0 304 347 693 473 376 700 397 447 702 361 525 666 328 603 603 316 646 525 343 693 473 398 710 398 473 693 343 525 646 162 0

7 0 114 562 473 521 541 452 536 527 470 558 515 479 570 494 491 557 470 502 548 473 529 549 466 549 529 473 548 502 468 82 0

8 0 0 454 390 541 597 377 582 559 383 624 512 419 645 458 448 645 413 493 632 384 549 597 388 597 549 384 632 493 233 0 0

9 0 0 69 654 466 377 727 400 438 713 337 528 678 307 599 612 304 668 543 329 708 452 388 728 388 452 708 329 452 64 0 0

10 0 0 0 65 531 596 400 583 554 392 617 503 416 632 461 455 635 434 504 619 397 547 583 388 583 547 397 583 56 0 0 0

11 0 0 0 0 249 554 452 554 537 458 558 505 457 570 488 484 585 471 510 572 458 528 547 438 547 528 436 211 0 0 0 0

12 0 0 0 0 0 61 667 406 458 703 346 512 659 313 593 618 320 661 539 349 703 458 383 706 383 397 106 0 0 0 0 0

13 0 0 0 0 0 0 69 445 572 346 657 500 368 682 443 429 694 388 500 670 349 558 617 328 371 56 0 0 0 0 0 0

14 0 0 0 0 0 0 0 0 116 285 498 513 523 498 518 519 497 533 519 500 525 506 321 65 0 0 0 0 0 0 0 0

15 0 0 0 0 0 0 0 0 0 0 62 297 527 312 574 586 358 627 486 266 364 96 0 0 0 0 0 0 0 0 0 0


c) Normalized sensitivity map for 45º projection ( S φ ( x, y ) )

S 45 (x, y ) 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0 -1 -2 -3 -4 -5 -6 -7 -8 -9 -10 -11 -12 -13 -14 -15 -16

-16 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.23 0.00 0.36 0.24 0.13 0.41 0.14 0.17 0.44 0.08 0.08 0.34 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00

-15 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.51 0.06 0.30 0.27 0.13 0.32 0.21 0.19 0.33 0.15 0.22 0.32 0.09 0.26 0.43 0.29 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00

-14 0.00 0.00 0.00 0.00 0.00 0.00 0.25 0.06 0.38 0.26 0.06 0.40 0.22 0.12 0.41 0.16 0.21 0.38 0.10 0.27 0.34 0.06 0.35 0.31 0.06 0.00 0.00 0.00 0.00 0.00 0.00 0.00

-13 0.00 0.00 0.00 0.00 0.00 0.04 0.27 0.41 0.04 0.32 0.35 0.05 0.39 0.29 0.08 0.44 0.19 0.14 0.45 0.12 0.19 0.44 0.06 0.27 0.41 0.04 0.00 0.00 0.00 0.00 0.00 0.00

-12 0.00 0.00 0.00 0.00 0.12 0.36 0.11 0.24 0.35 0.06 0.27 0.33 0.07 0.30 0.31 0.11 0.33 0.24 0.16 0.35 0.16 0.19 0.35 0.09 0.24 0.36 0.05 0.00 0.00 0.00 0.00 0.00

-11 0.00 0.00 0.00 0.48 0.15 0.27 0.32 0.09 0.33 0.29 0.06 0.35 0.25 0.07 0.36 0.22 0.14 0.36 0.17 0.22 0.34 0.12 0.27 0.32 0.07 0.33 0.29 0.05 0.00 0.00 0.00 0.00

-10 0.00 0.00 0.04 0.16 0.45 0.12 0.22 0.42 0.07 0.31 0.38 0.04 0.37 0.30 0.05 0.42 0.24 0.09 0.45 0.17 0.16 0.45 0.10 0.23 0.41 0.06 0.31 0.37 0.04 0.00 0.00 0.00

-9 0.00 0.00 0.38 0.24 0.12 0.39 0.18 0.18 0.40 0.10 0.24 0.40 0.05 0.27 0.37 0.05 0.32 0.33 0.09 0.36 0.24 0.14 0.39 0.15 0.18 0.40 0.08 0.24 0.40 0.06 0.00 0.00

-8 0.00 0.32 0.11 0.32 0.19 0.18 0.32 0.17 0.24 0.32 0.11 0.29 0.30 0.07 0.30 0.27 0.08 0.32 0.25 0.14 0.33 0.20 0.20 0.32 0.14 0.23 0.31 0.09 0.28 0.30 0.28 0.00

-7 0.00 0.50 0.22 0.09 0.42 0.16 0.17 0.41 0.12 0.26 0.38 0.07 0.34 0.32 0.04 0.39 0.27 0.06 0.43 0.22 0.11 0.43 0.16 0.19 0.40 0.10 0.26 0.36 0.06 0.35 0.46 0.00

-6 0.19 0.05 0.36 0.30 0.07 0.40 0.21 0.12 0.44 0.15 0.18 0.44 0.08 0.25 0.42 0.04 0.30 0.37 0.05 0.37 0.30 0.08 0.41 0.21 0.14 0.44 0.12 0.18 0.44 0.06 0.25 0.34

-5 0.27 0.32 0.08 0.30 0.29 0.10 0.32 0.23 0.15 0.33 0.19 0.20 0.34 0.12 0.25 0.33 0.07 0.27 0.31 0.07 0.30 0.29 0.11 0.32 0.23 0.17 0.33 0.16 0.20 0.34 0.09 0.08

-4 0.04 0.36 0.27 0.07 0.39 0.22 0.11 0.37 0.16 0.19 0.36 0.13 0.27 0.33 0.08 0.35 0.30 0.05 0.36 0.25 0.07 0.38 0.22 0.13 0.37 0.16 0.22 0.36 0.11 0.27 0.32 0.08

-3 0.39 0.05 0.35 0.35 0.05 0.41 0.27 0.07 0.44 0.18 0.13 0.46 0.13 0.21 0.44 0.07 0.29 0.39 0.04 0.35 0.32 0.05 0.41 0.26 0.09 0.45 0.18 0.15 0.46 0.11 0.21 0.44

-2 0.22 0.39 0.07 0.27 0.37 0.07 0.31 0.32 0.08 0.34 0.24 0.13 0.37 0.19 0.18 0.39 0.11 0.24 0.38 0.06 0.27 0.35 0.06 0.31 0.33 0.10 0.35 0.25 0.15 0.37 0.16 0.17

-1 0.10 0.29 0.30 0.08 0.32 0.27 0.09 0.33 0.23 0.11 0.33 0.18 0.19 0.33 0.15 0.25 0.32 0.10 0.30 0.29 0.07 0.31 0.26 0.08 0.33 0.24 0.14 0.34 0.18 0.21 0.33 0.13

0 0.40 0.07 0.31 0.36 0.05 0.38 0.30 0.06 0.43 0.23 0.08 0.44 0.16 0.15 0.43 0.12 0.24 0.40 0.07 0.33 0.34 0.04 0.39 0.28 0.05 0.43 0.23 0.10 0.44 0.16 0.18 0.42

1 0.16 0.43 0.09 0.22 0.42 0.06 0.29 0.39 0.06 0.34 0.32 0.07 0.38 0.23 0.12 0.41 0.16 0.18 0.43 0.09 0.24 0.42 0.05 0.29 0.37 0.05 0.34 0.32 0.09 0.39 0.22 0.13

2 0.14 0.19 0.32 0.11 0.24 0.32 0.09 0.28 0.30 0.09 0.31 0.27 0.10 0.31 0.21 0.16 0.33 0.18 0.21 0.33 0.12 0.26 0.32 0.07 0.28 0.29 0.08 0.30 0.27 0.12 0.32 0.23

3 0.34 0.12 0.24 0.35 0.07 0.33 0.32 0.06 0.38 0.27 0.07 0.40 0.22 0.10 0.39 0.16 0.18 0.38 0.12 0.27 0.35 0.08 0.35 0.31 0.05 0.38 0.25 0.06 0.40 0.22 0.12 0.38

4 0.20 0.45 0.14 0.17 0.44 0.07 0.25 0.43 0.05 0.33 0.36 0.05 0.39 0.28 0.07 0.43 0.19 0.12 0.46 0.14 0.19 0.45 0.07 0.27 0.41 0.04 0.33 0.34 0.05 0.40 0.28 0.00

5 0.00 0.14 0.35 0.19 0.17 0.36 0.11 0.22 0.37 0.08 0.27 0.35 0.07 0.30 0.31 0.08 0.33 0.25 0.14 0.35 0.19 0.19 0.36 0.11 0.24 0.36 0.06 0.27 0.34 0.06 0.30 0.23

6 0.00 0.13 0.19 0.34 0.14 0.24 0.32 0.09 0.30 0.30 0.07 0.34 0.27 0.08 0.35 0.23 0.11 0.35 0.17 0.19 0.34 0.14 0.26 0.32 0.09 0.32 0.29 0.06 0.34 0.26 0.06 0.00

7 0.00 0.54 0.19 0.14 0.45 0.12 0.20 0.42 0.07 0.29 0.38 0.05 0.37 0.31 0.05 0.43 0.24 0.08 0.45 0.17 0.14 0.45 0.12 0.23 0.41 0.07 0.32 0.37 0.04 0.38 0.51 0.00

8 0.00 0.00 0.38 0.27 0.12 0.40 0.18 0.16 0.40 0.09 0.22 0.41 0.06 0.28 0.39 0.06 0.33 0.33 0.07 0.36 0.24 0.12 0.40 0.17 0.18 0.41 0.10 0.24 0.41 0.07 0.00 0.00

9 0.00 0.00 0.33 0.35 0.23 0.18 0.32 0.17 0.21 0.32 0.11 0.26 0.31 0.09 0.30 0.29 0.09 0.32 0.25 0.11 0.32 0.20 0.17 0.32 0.17 0.23 0.32 0.11 0.28 0.25 0.00 0.00

10 0.00 0.00 0.00 0.55 0.43 0.18 0.17 0.40 0.12 0.23 0.36 0.07 0.33 0.33 0.06 0.39 0.28 0.06 0.42 0.22 0.09 0.41 0.16 0.17 0.40 0.12 0.26 0.37 0.04 0.00 0.00 0.00

11 0.00 0.00 0.00 0.00 0.14 0.41 0.23 0.12 0.44 0.14 0.16 0.45 0.08 0.24 0.43 0.06 0.30 0.38 0.05 0.37 0.30 0.07 0.41 0.21 0.12 0.45 0.15 0.12 0.00 0.00 0.00 0.00

12 0.00 0.00 0.00 0.00 0.00 0.59 0.34 0.26 0.14 0.34 0.19 0.18 0.34 0.12 0.23 0.34 0.08 0.27 0.33 0.08 0.30 0.30 0.09 0.32 0.24 0.16 0.48 0.00 0.00 0.00 0.00 0.00

13 0.00 0.00 0.00 0.00 0.00 0.00 0.33 0.39 0.18 0.19 0.36 0.13 0.25 0.33 0.08 0.32 0.30 0.07 0.36 0.27 0.08 0.38 0.22 0.11 0.39 0.04 0.00 0.00 0.00 0.00 0.00 0.00

14 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.53 0.14 0.13 0.46 0.13 0.18 0.43 0.07 0.27 0.40 0.05 0.36 0.34 0.05 0.49 0.32 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00

15 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.21 0.33 0.14 0.16 0.38 0.10 0.22 0.38 0.04 0.27 0.19 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00


Appendix B

Output and controller signal waveforms

a) Final output of receiver Rx03 (downstream and upstream)

b) Controller output signals: (a) Transmit, (b) Trig, (c) Sampleup, (d) Muxup, (e) Sampledown, and (f) Muxdown


Appendix C

FVR-C9S controller

a) FVR part names

b) Keypad panel


Appendix D.I

Results of flow models image reconstruction

a) Results of the image reconstruction algorithms for single pixel flow models at a resolution of 8x8 pixels (scale: 0 Volts to 5 Volts)

i. LBP: NMSE: 1127%, C: 33.78%, PT: 0.666 ms, PSNR: 7.54 dB, Vmax: 5.00 V
ii. SFLBP: NMSE: 898.0%, C: 32.95%, PT: 0.737 ms, PSNR: 5.98 dB, Vmax: 3.73 V
iii. FFLBPi: NMSE: 583.6%, C: 26.25%, PT: 0.934 ms, PSNR: 10.40 dB, Vmax: 5.00 V
iv. FFLBPh: NMSE: 326.6%, C: 19.48%, PT: 0.93 ms, PSNR: 12.74 dB, Vmax: 4.87 V
v. CFLBPi+s: NMSE: 518.5%, C: 25.33%, PT: 1.019 ms, PSNR: 6.79 dB, Vmax: 3.11 V
vi. CFLBPh+s: NMSE: 317.5%, C: 18.90%, PT: 1.019 ms, PSNR: 6.70 dB, Vmax: 2.41 V
vii. HLBP: NMSE: 0%, C: 1.67%, PT: 0.012 ms, PSNR: 60.00 dB, Vmax: 5.00 V
viii. HSFLBP: NMSE: 88.89%, C: 1.67%, PT: 0.202 ms, PSNR: 0.51 dB, Vmax: 0.56 V
ix. HFFLBPi: NMSE: 0%, C: 1.67%, PT: 0.354 ms, PSNR: 60.00 dB, Vmax: 5.00 V
x. HFFLBPh: NMSE: 0.04%, C: 1.67%, PT: 0.354 ms, PSNR: 51.53 dB, Vmax: 4.896 V
xi. HCFLBPi+s: NMSE: 88.89%, C: 1.67%, PT: 0.53 ms, PSNR: 0.51 dB, Vmax: 0.56 V
xii. HCFLBPh+s: NMSE: 88.89%, C: 1.67%, PT: 0.53 ms, PSNR: 16.73 dB, Vmax: 0.54 V
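The error figures quoted throughout this appendix can be reproduced from a reconstructed image and its flow model along the lines sketched below. This is only an illustration and assumes the usual definitions (NMSE as the sum of squared errors over the sum of squared model values, PSNR referenced to the 5 V full scale); the exact expressions used in the main text may differ in detail, and the function and variable names are introduced here for the example.

    #include <cmath>
    #include <vector>

    // Assumed metric definitions; model and image are flattened pixel arrays of voltages (0-5 V).
    double nmsePercent(const std::vector<double>& model, const std::vector<double>& image)
    {
        double err = 0.0, ref = 0.0;
        for (std::size_t i = 0; i < model.size(); ++i) {
            err += (image[i] - model[i]) * (image[i] - model[i]);
            ref += model[i] * model[i];
        }
        return 100.0 * err / ref;                    // normalised mean square error, in percent
    }

    double psnrDb(const std::vector<double>& model, const std::vector<double>& image,
                  double fullScale = 5.0)            // 5 V full scale assumed
    {
        double mse = 0.0;
        for (std::size_t i = 0; i < model.size(); ++i)
            mse += (image[i] - model[i]) * (image[i] - model[i]);
        mse /= model.size();
        if (mse == 0.0) return 60.0;                 // the tables appear to cap error-free images at 60 dB
        return 10.0 * std::log10(fullScale * fullScale / mse);
    }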

b) Results of the image reconstruction algorithms for multiple pixel flow models at a resolution of 8x8 pixels (scale: 0 Volts to 5 Volts)

i. LBP: NMSE: 753.3%, C: 70.56%, PT: 0.666 ms, PSNR: 3.27 dB, Vmax: 5.00 V
ii. SFLBP: NMSE: 712.3%, C: 68.89%, PT: 0.737 ms, PSNR: 5.98 dB, Vmax: 3.73 V
iii. FFLBPi: NMSE: 477.6%, C: 58.71%, PT: 0.934 ms, PSNR: 5.25 dB, Vmax: 5.00 V
iv. FFLBPh: NMSE: 244.7%, C: 43.17%, PT: 0.934 ms, PSNR: 7.89 dB, Vmax: 4.85 V
v. CFLBPi+s: NMSE: 477.4%, C: 56.71%, PT: 1.019 ms, PSNR: 2.69 dB, Vmax: 3.72 V
vi. CFLBPh+s: NMSE: 281.1%, C: 41.87%, PT: 1.019 ms, PSNR: 2.44 dB, Vmax: 2.78 V
vii. HLBP: NMSE: 0.00%, C: 6.67%, PT: 0.065 ms, PSNR: 60.0 dB, Vmax: 5.00 V
viii. HSFLBP: NMSE: 88.89%, C: 6.67%, PT: 0.47 ms, PSNR: 6.53 dB, Vmax: 0.56 V
ix. HFFLBPi: NMSE: 0.00%, C: 6.67%, PT: 0.347 ms, PSNR: 60.0 dB, Vmax: 5.00 V
x. HFFLBPh: NMSE: 0.23%, C: 6.54%, PT: 0.347 ms, PSNR: 38.16 dB, Vmax: 4.85 V
xi. HCFLBPi+s: NMSE: 88.89%, C: 6.67%, PT: 0.756 ms, PSNR: 6.53 dB, Vmax: 0.55 V
xii. HCFLBPh+s: NMSE: 88.91%, C: 6.54%, PT: 0.756 ms, PSNR: 6.79 dB, Vmax: 0.55 V

c) Results of the image reconstruction algorithms for half flow models at a resolution of 8x8 pixels (scale: 0 Volts to 5 Volts)

i. LBP: NMSE: 95.64%, C: 98.71%, PT: 0.666 ms, PSNR: 3.48 dB, Vmax: 5.00 V
ii. SFLBP: NMSE: 93.66%, C: 97.86%, PT: 0.737 ms, PSNR: 3.58 dB, Vmax: 5.00 V
iii. FFLBPi: NMSE: 85.93%, C: 96.10%, PT: 0.934 ms, PSNR: 3.95 dB, Vmax: 5.00 V
iv. FFLBPh: NMSE: 47.02%, C: 83.83%, PT: 0.934 ms, PSNR: 6.57 dB, Vmax: 5.00 V
v. CFLBPi+s: NMSE: 83.90%, C: 95.18%, PT: 1.019 ms, PSNR: 4.05 dB, Vmax: 5.00 V
vi. CFLBPh+s: NMSE: 47.64%, C: 82.83%, PT: 1.019 ms, PSNR: 6.51 dB, Vmax: 5.00 V
vii. HLBP: NMSE: 0%, C: 50.01%, PT: 0.25 ms, PSNR: 60.0 dB, Vmax: 5.00 V
viii. HSFLBP: NMSE: 4.53%, C: 50.01%, PT: 0.50 ms, PSNR: 16.73 dB, Vmax: 5.00 V
ix. HFFLBPi: NMSE: 0%, C: 50.01%, PT: 0.54 ms, PSNR: 60.0 dB, Vmax: 5.00 V
x. HFFLBPh: NMSE: 0%, C: 50.01%, PT: 0.54 ms, PSNR: 60.00 dB, Vmax: 5.00 V
xi. HCFLBPi+s: NMSE: 4.53%, C: 49.64%, PT: 0.76 ms, PSNR: 16.73 dB, Vmax: 5.00 V
xii. HCFLBPh+s: NMSE: 4.53%, C: 49.64%, PT: 0.76 ms, PSNR: 16.73 dB, Vmax: 5.00 V

d) Results of the image reconstruction algorithms for full flow models at a resolution of 8x8 pixels (scale: 0 Volts to 5 Volts)

i. LBP: NMSE: 0.00%, C: 100%, PT: 0.666 ms, PSNR: 60.0 dB, Vmax: 5.00 V
ii. SFLBP: NMSE: 0.08%, C: 99.27%, PT: 0.737 ms, PSNR: 31.1 dB, Vmax: 5.00 V
iii. FFLBPi: NMSE: 0.00%, C: 100%, PT: 0.934 ms, PSNR: 60.0 dB, Vmax: 5.00 V
iv. FFLBPh: NMSE: 0.00%, C: 100%, PT: 0.934 ms, PSNR: 60.0 dB, Vmax: 5.00 V
v. CFLBPi+s: NMSE: 0.08%, C: 99.27%, PT: 1.019 ms, PSNR: 31.1 dB, Vmax: 5.00 V
vi. CFLBPh+s: NMSE: 0.08%, C: 100%, PT: 1.019 ms, PSNR: 31.1 dB, Vmax: 5.00 V
vii. HLBP: NMSE: 0.00%, C: 100%, PT: 0.666 ms, PSNR: 60.0 dB, Vmax: 5.00 V
viii. HSFLBP: NMSE: 0.08%, C: 99.27%, PT: 0.737 ms, PSNR: 31.1 dB, Vmax: 5.00 V
ix. HFFLBPi: NMSE: 0.00%, C: 100%, PT: 0.934 ms, PSNR: 60.0 dB, Vmax: 5.00 V
x. HFFLBPh: NMSE: 0.00%, C: 100%, PT: 0.934 ms, PSNR: 60.0 dB, Vmax: 5.00 V
xi. HCFLBPi+s: NMSE: 0.08%, C: 99.27%, PT: 1.019 ms, PSNR: 31.1 dB, Vmax: 5.00 V
xii. HCFLBPh+s: NMSE: 0.08%, C: 100%, PT: 1.019 ms, PSNR: 31.1 dB, Vmax: 5.00 V

e) Results of the image reconstruction algorithms for single pixel flow models at a resolution of 16x16 pixels (scale: 0 Volts to 5 Volts)

i. LBP: NMSE: 448.6%, C: 8.11%, PT: 1.71 ms, PSNR: 17.56 dB, Vmax: 4.99 V
ii. SFLBP: NMSE: 332.3%, C: 8.13%, PT: 1.80 ms, PSNR: 11.46 dB, Vmax: 2.13 V
iii. FFLBPi: NMSE: 191.3%, C: 5.99%, PT: 2.075 ms, PSNR: 13.45 dB, Vmax: 2.30 V
iv. FFLBPh: NMSE: 139.3%, C: 4.55%, PT: 2.075 ms, PSNR: 13.3 dB, Vmax: 1.71 V
v. CFLBPi+s: NMSE: 181.1%, C: 5.99%, PT: 2.33 ms, PSNR: 10.43 dB, Vmax: 1.40 V
vi. CFLBPh+s: NMSE: 135.8%, C: 4.50%, PT: 2.33 ms, PSNR: 9.95 dB, Vmax: 1.146 V
vii. HLBP: NMSE: 0.00%, C: 0.45%, PT: 0.137 ms, PSNR: 60.0 dB, Vmax: 4.99 V
viii. HSFLBP: NMSE: 88.89%, C: 0.45%, PT: 0.475 ms, PSNR: 5.51 dB, Vmax: 0.56 V
ix. HFFLBPi: NMSE: 35.19%, C: 0.18%, PT: 0.46 ms, PSNR: 20.1 dB, Vmax: 2.03 V
x. HFFLBPh: NMSE: 43.41%, C: 0.17%, PT: 0.46 ms, PSNR: 18.37 dB, Vmax: 1.71 V
xi. HCFLBPi+s: NMSE: 92.80%, C: 0.18%, PT: 0.92 ms, PSNR: 2.49 dB, Vmax: 0.226 V
xii. HCFLBPh+s: NMSE: 93.71%, C: 0.17%, PT: 0.92 ms, PSNR: 4.06 dB, Vmax: 2.697 V

f) Results of the image reconstruction algorithms for multiple pixel flow models at a resolution of 16x16 pixels (scale: 0 Volts to 5 Volts)

i. LBP: NMSE: 295.9%, C: 19.44%, PT: 1.71 ms, PSNR: 13.35 dB, Vmax: 4.99 V
ii. SFLBP: NMSE: 278.5%, C: 19.46%, PT: 1.802 ms, PSNR: 7.46 dB, Vmax: 2.46 V
iii. FFLBPi: NMSE: 162.1%, C: 10.63%, PT: 2.075 ms, PSNR: 8.26 dB, Vmax: 2.060 V
iv. FFLBPh: NMSE: 117.5%, C: 10.63%, PT: 2.075 ms, PSNR: 8.03 dB, Vmax: 1.71 V
v. CFLBPi+s: NMSE: 169.0%, C: 14.38%, PT: 2.33 ms, PSNR: 4.71 dB, Vmax: 1.398 V
vi. CFLBPh+s: NMSE: 124.4%, C: 10.56%, PT: 2.33 ms, PSNR: 4.32 dB, Vmax: 1.15 V
vii. HLBP: NMSE: 0%, C: 1.82%, PT: 0.16 ms, PSNR: 60.0 dB, Vmax: 4.99 V
viii. HSFLBP: NMSE: 88.89%, C: 1.82%, PT: 0.584 ms, PSNR: 0.51 dB, Vmax: 0.56 V
ix. HFFLBPi: NMSE: 34.76%, C: 0.76%, PT: 0.49 ms, PSNR: 14.95 dB, Vmax: 2.06 V
x. HFFLBPh: NMSE: 43.43%, C: 0.68%, PT: 0.49 ms, PSNR: 12.36 dB, Vmax: 2.71 V
xi. HCFLBPi+s: NMSE: 92.75%, C: 0.75%, PT: 1.06 ms, PSNR: 8.40 dB, Vmax: 0.229 V
xii. HCFLBPh+s: NMSE: 93.71%, C: 0.68%, PT: 1.06 ms, PSNR: 10.07 dB, Vmax: 0.19 V

g) Results of the image reconstruction algorithms for half flow models at a resolution of 16x16 pixels (scale: 0 Volts to 5 Volts)

i. LBP: NMSE: 44.08%, C: 80.99%, PT: 1.708 ms, PSNR: 7.19 dB, Vmax: 4.99 V
ii. SFLBP: NMSE: 43.52%, C: 79.76%, PT: 1.802 ms, PSNR: 7.24 dB, Vmax: 4.99 V
iii. FFLBPi: NMSE: 34.67%, C: 60.54%, PT: 2.075 ms, PSNR: 5.74 dB, Vmax: 3.755 V
iv. FFLBPh: NMSE: 36.69%, C: 43.91%, PT: 2.075 ms, PSNR: 2.90 dB, Vmax: 2.82 V
v. CFLBPi+s: NMSE: 35.16%, C: 59.57%, PT: 2.33 ms, PSNR: 5.67 dB, Vmax: 3.749 V
vi. CFLBPh+s: NMSE: 37.86%, C: 43.22%, PT: 2.33 ms, PSNR: 2.61 dB, Vmax: 2.736 V
vii. HLBP: NMSE: 3.39%, C: 49.09%, PT: 0.76 ms, PSNR: 18.33 dB, Vmax: 4.99 V
viii. HSFLBP: NMSE: 5.61%, C: 48.48%, PT: 1.19 ms, PSNR: 16.14 dB, Vmax: 4.99 V
ix. HFFLBPi: NMSE: 10.60%, C: 35.79%, PT: 1.13 ms, PSNR: 10.89 dB, Vmax: 3.76 V
x. HFFLBPh: NMSE: 24.46%, C: 26.35%, PT: 1.125 ms, PSNR: 4.66 dB, Vmax: 2.79 V
xi. HCFLBPi+s: NMSE: 14.23%, C: 35.25%, PT: 1.74 ms, PSNR: 9.60 dB, Vmax: 3.75 V
xii. HCFLBPh+s: NMSE: 28.18%, C: 25.96%, PT: 1.74 ms, PSNR: 3.89 dB, Vmax: 2.736 V

h) Results of the image reconstruction algorithms for full flow models at a resolution of 16x16 pixels (scale: 0 Volts to 5 Volts)

i. LBP: NMSE: 3.49%, C: 99.55%, PT: 1.71 ms, PSNR: 15.17 dB, Vmax: 4.99 V
ii. SFLBP: NMSE: 2.56%, C: 97.85%, PT: 1.802 ms, PSNR: 16.5 dB, Vmax: 4.99 V
iii. FFLBPi: NMSE: 9.57%, C: 74.66%, PT: 2.075 ms, PSNR: 8.29 dB, Vmax: 3.737 V
iv. FFLBPh: NMSE: 23.87%, C: 54.03%, PT: 2.075 ms, PSNR: 1.48 dB, Vmax: 2.71 V
v. CFLBPi+s: NMSE: 9.68%, C: 73.46%, PT: 2.33 ms, PSNR: 8.24 dB, Vmax: 3.749 V
vi. CFLBPh+s: NMSE: 24.45%, C: 53.12%, PT: 2.33 ms, PSNR: 1.38 dB, Vmax: 2.75 V
vii. HLBP: NMSE: 3.49%, C: 99.55%, PT: 0.76 ms, PSNR: 18.33 dB, Vmax: 4.99 V
viii. HSFLBP: NMSE: 2.56%, C: 97.85%, PT: 1.802 ms, PSNR: 16.5 dB, Vmax: 4.99 V
ix. HFFLBPi: NMSE: 9.57%, C: 74.66%, PT: 2.075 ms, PSNR: 8.29 dB, Vmax: 3.737 V
x. HFFLBPh: NMSE: 23.87%, C: 54.03%, PT: 2.075 ms, PSNR: 1.48 dB, Vmax: 2.71 V
xi. HCFLBPi+s: NMSE: 9.68%, C: 73.46%, PT: 2.33 ms, PSNR: 8.24 dB, Vmax: 3.749 V
xii. HCFLBPh+s: NMSE: 24.45%, C: 53.12%, PT: 2.33 ms, PSNR: 1.38 dB, Vmax: 2.75 V

Appendix D.II

Quantity and quality data tabulation

In each row of figures below, the 36 values correspond to the algorithms LBP, SFLBP, FFLBPi, FFLBPh, CFLBPi+s, CFLBPh+s, HLBP, HSFLBP, HFFLBPi, HFFLBPh, HCFLBPi+s and HCFLBPh+s, first at a resolution of 8x8, then 16x16, then 32x32 pixels. The eight rows that follow give C (%) for the single pixel, multiple pixel, half and full flow models, and then NMSE (%) for the single pixel, multiple pixel, half and full flow models.

33.78 32.95 26.25 19.48 25.33 18.90 1.67 1.67 1.67 1.67 1.67 1.67 8.11 8.13 5.99 4.55 5.99 4.50 0.45 0.45 0.18 0.17 0.18 0.17 4.22 4.17 3.17 2.43 3.14 2.41 0.14 0.14 0 0 0 0

70.56 68.89 58.71 43.17 56.71 41.87 6.67 6.67 6.67 6.54 6.67 6.54 19.44 19.46 10.63 10.63 14.38 10.56 1.82 1.82 0.75 0.68 0.75 0.68 18.98 18.80 14.19 10.33 14.19 10.26 1.38 1.38 0.52 0.52 0.52 0.52

98.71 97.86 96.10 83.83 95.18 82.83 50.01 50.01 50.01 50.01 49.64 49.64 80.99 79.76 60.54 43.91 59.57 43.22 49.09 48.48 35.79 26.35 35.25 25.96 80.53 80.11 60.28 43.72 59.96 43.49 49.22 48.94 35.96 26.45 35.75 26.31

100 99.27 100 100 99.27 99.27 100 99.27 100 100 99.27 99.27 99.55 97.85 74.66 54.03 73.46 53.12 99.55 97.85 74.66 54.03 73.46 53.12 98.99 98.48 74.24 53.73 73.86 53.46 98.99 98.48 74.24 53.73 73.86 53.46

1127 898 583.6 326.6 518.5 317.5 0 88.89 0 0.04 88.09 88.09 448.6 332.3 191.3 139.3 181.1 135.8 0 88.89 35.19 42.41 92.86 93.71 801.9 491.6 272.2 201.1 240.6 179.0 173.1 86.67 0 0 0 0

753.3 712.3 477.6 244.7 477.4 281.1 0 88.89 0 0.23 88.89 88.89 295.9 278.5 162.1 117.5 169.0 124.4 0 88.89 34.76 43.43 92.75 93.71 319.3 247.0 161.4 118.1 157.7 115.9 25.08 52.15 51.30 57.79 75.84 79.50

95.64 93.66 85.93 47.02 83.90 47.64 0 4.53 0 0 4.53 4.53 44.08 43.52 34.67 36.69 35.16 37.86 3.39 5.61 10.60 24.46 14.23 28.18 42.56 42.29 33.11 35.21 33.49 36.00 1.35 3.23 8.69 22.86 11.27 25.35

0 0.08 0 0 0.08 0.08 0 0.08 0 0 0.08 0.08 3.49 2.56 9.57 23.87 9.68 24.45 3.49 2.56 9.57 23.87 9.68 24.45 0.85 1.40 7.11 21.81 8.10 22.88 0.85 1.40 7.11 21.81 8.10 22.88

The next eight rows give PSNR (dB) for the single pixel, multiple pixel, half and full flow models, followed by Vmax (Volt) for the single pixel, multiple pixel, half and full flow models, with the algorithms and resolutions ordered as above.

7.54 5.98 10.40 12.74 6.79 6.70 60.00 0.51 55.00 51.53 0.51 16.73 17.56 11.46 13.45 13.30 10.43 9.95 60.00 5.51 20.10 18.37 2.49 4.06 16.90 16.13 13.37 13.18 12.21 11.93 23.05 13.29 0 0 0 0

3.27 2.47 5.25 7.89 2.69 2.44 60.00 6.53 55.00 38.16 6.53 6.79 13.35 7.46 8.26 8.03 4.71 4.23 60.00 0.51 14.95 12.36 8.40 10.07 13.02 11.56 8.53 8.26 7.10 6.80 24.07 11.25 13.51 11.36 1.97 0.13

3.48 3.58 3.95 6.57 4.05 6.51 60.00 16.73 55.00 38.16 16.73 16.73 7.19 7.24 5.74 2.90 5.67 2.61 18.33 16.14 10.89 4.66 9.60 3.89 7.50 7.52 6.11 3.35 6.05 3.13 22.49 18.69 11.92 5.23 10.78 4.65

60.00 31.10 55.00 38.16 31.10 31.10 60.00 31.10 55.00 38.16 31.10 31.10 15.17 16.50 8.29 1.48 8.24 1.38 15.17 16.5 8.29 1.48 8.24 1.38 21.49 19.31 9.76 2.05 9.19 1.85 21.49 19.31 9.76 2.05 9.19 1.85

5.00 3.73 5.00 4.87 3.11 2.41 5.00 0.56 5.00 4.896 0.56 0.54 4.99 2.13 2.30 1.71 1.40 1.146 4.99 0.56 2.03 1.71 0.226 2.697 3.097 2.22 1.20 1.00 0.99 0.825 4.99 0.67 0 0 0 0

5.00 4.43 5.00 4.85 3.72 2.78 5.00 0.56 5.00 4.85 0.55 0.55 4.99 2.46 2.060 1.71 1.398 1.151 4.99 0.56 2.06 2.71 0.229 0.19 4.99 3.72 2.12 1.757 1.78 1.47 4.99 1.65 1.76 1.76 0.683 0.57

5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 4.99 4.99 3.755 2.82 3.749 2.736 4.99 4.99 3.76 2.79 3.75 2.736 4.99 4.99 3.760 2.82 3.75 2.781 4.99 4.99 3.76 2.82 3.74 2.786

5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 4.99 4.99 3.74 2.71 3.75 2.75 4.99 4.99 3.74 2.71 3.65 2.75 4.99 4.99 3.749 2.71 3.75 2.71 4.99 4.99 3.75 2.71 3.75 2.71

The final four rows give the processing time, PT (ms), for the single pixel, multiple pixel, half and full flow models, with the algorithms and resolutions ordered as above.

0.667 0.737 0.934 0.934 1.019 1.019 0.012 0.202 0.354 0.354 0.530 0.530 1.710 1.800 2.075 2.075 2.330 2.330 0.137 0.175 0.460 0.460 0.920 0.920 5.300 6.090 6.141 6.141 7.670 7.670 0.360 1.440 0.560 0.560 1.280 1.280

0.667 0.737 0.934 0.934 1.019 1.019 0.065 0.470 0.347 0.347 0.756 0.756 1.710 1.800 2.075 2.075 2.330 2.330 0.160 0.584 0.490 0.490 1.060 1.060 5.300 6.090 6.141 6.141 7.670 7.670 0.420 1.640 0.750 0.750 2.810 2.810

0.667 0.737 0.934 0.934 1.019 1.019 0.250 0.500 0.540 0.540 0.760 0.760 1.710 1.800 2.075 2.075 2.330 2.330 0.760 1.190 1.125 1.125 1.740 1.740 5.300 6.090 6.141 6.141 7.670 7.670 2.450 3.620 2.750 2.750 4.760 4.760

0.667 0.737 0.934 0.934 1.019 1.019 0.667 0.737 0.934 0.934 1.019 1.019 1.710 1.800 2.075 2.075 2.330 2.330 1.710 1.800 2.075 2.075 2.335 2.335 5.300 6.090 6.141 6.141 7.670 7.670 5.300 6.090 6.141 6.141 7.670 7.670


Appendix E

List of Published Papers

Journal

1. Preliminary Result for Infra-red Tomography, Elektrika - Journal of Electrical Engineering, Volume VI, Number 1, December 2004.

Conferences

1. Tomography System Based on Infra-red Sensors, Malaysia Science and Technology Congress 2003, Kuala Lumpur, 2-5 September 2003.
2. Imaging Concentration Profile using Infra-red Sensor Tomography System, Malaysia Science and Technology Congress 2004, Kuala Lumpur, 5-7 October 2004.
3. Infra-red Tomography System, Seminar Penyelidikan dan Pembangunan IPTA 2003, Kuala Lumpur, 9-10 October 2003.
4. Real Time Flow Imaging using Infra-red Tomography, Seminar Elektronik Industri ke-6, Surabaya, Indonesia, 12 October 2004.
5. Optical Tomography System using Infra-red Sensors, Persidangan Antarabangsa Bahan dan Pempakejan Elektronik ke-6, Pulau Pinang, 5-7 December 2004.
6. Development of an Infra-red Sensor Tomography System, Simposium Pasca Siswazah Teknikal Kebangsaan ke-3, Kuala Lumpur, 16 December 2004.
7. Sensor Modelling for Infra-red Fibre Optic Process Tomography System using Parallel Projection, 3rd International Conference on Electrical and Computer Engineering, Dhaka, Bangladesh, 28-30 December 2004.
8. Visualizing Concentration Profile using Infra-red Tomography, International Conference on Communication, Computer and Power, Muscat, Oman, 4-16 February 2005.

Appendix F.I

DriverLINX programming


a) Initialization

void CTomographyDlg::TabDasInit()
{
    m_pSR = NULL;
    m_driverInstance = NULL;
    m_logicalDevice = 0;    // set for the DriverLINX logical device being used
    m_logicalChannel = 0;   // use channel 0 by default
    m_samples = 64;         // set the number of samples to acquire here
                            // the buffer size MUST be a multiple of the number of channels
    m_DLmsg = RegisterWindowMessage(DL_MESSAGE);            // register DriverLINX messages to detect the buffer-filled message
    // pass in the driver name to avoid the Open DriverLINX dialog
    m_driverInstance = OpenDriverLINX(m_hWnd, "KPCI1800");  // open the DriverLINX driver
    m_pSR = (DL_ServiceRequest*) new (DL_ServiceRequest);   // get a pointer to the service request
    memset(m_pSR, 0, sizeof(DL_ServiceRequest));            // initialize the members of the service request
    DL_SetServiceRequestSize(*m_pSR);                       // the service request size member must be set
    m_pSR->device = m_logicalDevice;   // set the device number to the device being used
    m_pSR->operation = INITIALIZE;     // the device must be initialized before it can be used
    m_pSR->subsystem = DEVICE;         // the initialize function is part of the DEVICE subsystem
    m_pSR->mode = OTHER;               // initialize is not a polled, interrupt, or DMA operation, so OTHER is used
    m_pSR->hWnd = m_hWnd;              // set the hWnd member to the window handle of the application
    if (DriverLINX(m_pSR) == NoErr)
    {   // success: initialize the DriverLINX variables
        m_samples = 64;                // how many samples
        m_logicalChannel = 0;          // starting channel
    }
    else
    {   // a problem has occurred
        showMessage(m_pSR);            // display the error message box
    }
}

b) Calibration

void CTomographyDlg::TabDasCalibrate()
{
    ((CButton*)((CDasConfig*)m_oaTab1[0])->GetDlgItem(IDC_BCALIBDAS))->EnableWindow(FALSE);
    ((CButton*)((CDasConfig*)m_oaTab1[0])->GetDlgItem(IDC_BSTARTDAS))->EnableWindow(FALSE);
    ((CButton*)((CDasConfig*)m_oaTab1[0])->GetDlgItem(IDC_CANCEL))->EnableWindow(FALSE);
    ((CButton*)((CDasConfig*)m_oaTab1[0])->GetDlgItem(IDC_RONLINE))->EnableWindow(TRUE);
    ((CButton*)((CDasConfig*)m_oaTab1[0])->GetDlgItem(IDC_ROFFLINE))->EnableWindow(TRUE);
    ((CButton*)((CDasConfig*)m_oaTab1[0])->GetDlgItem(IDC_RPDNORMAL))->EnableWindow(TRUE);
    ((CButton*)((CDasConfig*)m_oaTab1[0])->GetDlgItem(IDC_RPDIDEAL))->EnableWindow(TRUE);
    if (Calibrate == 1)
    {
        ((CButton*)((CDasConfig*)m_oaTab1[0])->GetDlgItem(IDC_RFLOW1))->EnableWindow(TRUE);
        ((CButton*)((CDasConfig*)m_oaTab1[0])->GetDlgItem(IDC_RFLOW2))->EnableWindow(TRUE);
        ((CButton*)((CDasConfig*)m_oaTab1[0])->GetDlgItem(IDC_RFLOW3))->EnableWindow(TRUE);
        ((CButton*)((CDasConfig*)m_oaTab1[0])->GetDlgItem(IDC_RFLOW4))->EnableWindow(TRUE);
        ((CButton*)((CDasConfig*)m_oaTab1[0])->GetDlgItem(IDC_RFLOW5))->EnableWindow(TRUE);
        ((CButton*)((CDasConfig*)m_oaTab1[0])->GetDlgItem(IDC_RFLOW6))->EnableWindow(TRUE);
        ((CButton*)((CDasConfig*)m_oaTab1[0])->GetDlgItem(IDC_RFLOW7))->EnableWindow(TRUE);
        ((CButton*)((CDasConfig*)m_oaTab1[0])->GetDlgItem(IDC_RFLOW8))->EnableWindow(TRUE);
        ((CButton*)((CDasConfig*)m_oaTab1[0])->GetDlgItem(IDC_RFLOW9))->EnableWindow(TRUE);
        ((CButton*)((CDasConfig*)m_oaTab1[0])->GetDlgItem(IDC_RFLOW10))->EnableWindow(TRUE);
        Calibrate = 0;
    }
    ((CButton*)((CDasConfig*)m_oaTab1[0])->GetDlgItem(IDC_RCONTRATE))->EnableWindow(TRUE);
    ((CButton*)((CDasConfig*)m_oaTab1[0])->GetDlgItem(IDC_RVELOCITY))->EnableWindow(TRUE);
    ((CButton*)((CDasConfig*)m_oaTab1[0])->GetDlgItem(IDC_RMFR))->EnableWindow(TRUE);
    ((CButton*)((CDasConfig*)m_oaTab1[0])->GetDlgItem(IDC_BSTARTDAS))->EnableWindow(TRUE);
    ofs20.close();                     // close again, reference up
    ofs21.close();                     // close again, reference down
    ofs20.open("referenceup.dat");     // open again
    // ...
    m_pSR->start.u.diEvent.pattern = 1;       // falling edge
    m_pSR->timing.typeEvent = RATEEVENT;      // timing will be determined by the rate generator
    m_pSR->stop.typeEvent = TCEVENT;          // stop on terminal count
    m_pSR->channels.nChannels = 2;            // a start/stop channel range will be used
    m_pSR->channels.chanGain[0].channel = m_logicalChannel;   // starting channel defined by m_logicalChannel
    m_pSR->channels.chanGain[0].gainOrRange = Gain2Code(m_logicalDevice, AI, 1.0);   // gain 1 = unipolar 0-9.999 V, gain 2 = unipolar 0-4.999 V
    m_pSR->channels.chanGain[1].channel = m_logicalChannel + 63;   // stop on channel 64
    m_pSR->channels.chanGain[1].gainOrRange = Gain2Code(m_logicalDevice, AI, 1.0);
    m_pSR->channels.numberFormat = tNATIVE;   // use the native format (integer counts)
    // declare two hundred and fifty-four buffers
    m_pSR->lpBuffers = (DL_BUFFERLIST*) new BYTE[DL_BufferListBytes(254)];   // create a buffer list pointer for 254 buffers
    m_pSR->lpBuffers->notify = NOTIFY;        // enable the buffer-filled message
    m_pSR->lpBuffers->nBuffers = 254;         // use 254 buffers
    m_pSR->lpBuffers->bufferSize = Samples2Bytes(m_logicalDevice, AI, m_logicalChannel, m_samples);   // set the buffer size (in bytes) to hold the number of samples
    for (int j = 0; j < m_pSR->lpBuffers->nBuffers; j++)
    {
        m_pSR->lpBuffers->BufferAddr[j] = BufAlloc(GBUF_INT, m_pSR->lpBuffers->bufferSize);
    }
    // the rate information for the DI_EXTCLK typeEvent
    m_pSR->timing.u.rateEvent.channel = DEFAULTTIMER;   // DEFAULTTIMER is a symbol representing the default counter/timer channel used for pacing
    m_pSR->timing.u.rateEvent.mode = BURSTGEN;          // set the counter/timer mode to rate generation
    m_pSR->timing.u.rateEvent.clock = EXTERNAL;
    m_pSR->timing.u.rateEvent.gate = DISABLED;          // no gating will be used
    m_pSR->timing.u.rateEvent.period = Sec2Tics(m_logicalDevice, AI, DEFAULTTIMER, (float)0.01248);   // 80.13 Hz
    m_pSR->timing.u.rateEvent.period = Sec2Tics(m_logicalDevice, AI, DEFAULTTIMER, (float)0.005);     // 200 Hz
    m_pSR->timing.u.rateEvent.onCount = Sec2Tics(m_logicalDevice, AI, DEFAULTTIMER, (float)0.000015); // time interval between samples
    m_pSR->timing.u.rateEvent.pulses = m_samples;   // number of rate pulses to generate (setting this to 0 means generate continuous rate pulses)
    DriverLINX(m_pSR);        // execute the service request to start the acquisition
    showMessage(m_pSR);       // show any errors
    m_task = m_pSR->taskId;   // save the task ID of the AI task so it can be stopped later
}

c) Start Data Acquisition

void CTomographyDlg::TomoDasStart()
{
    clearBuffers();                           // de-allocate any existing buffers in the service request
    m_pSR->operation = START;                 // start the acquisition
    m_pSR->subsystem = AI;                    // using the AI subsystem
    m_pSR->mode = DMA;                        // use interrupt mode... could be DMA if the board supports DMA mode
    m_pSR->start.typeEvent = DIEVENT;         // start on a digital trigger
    m_pSR->start.u.diEvent.channel = DI_EXTTRG;
    m_pSR->start.u.diEvent.mask = 1;
    m_pSR->start.u.diEvent.match = 0;
    m_pSR->start.u.diEvent.pattern = 1;       // falling edge
    m_pSR->timing.typeEvent = RATEEVENT;      // timing will be determined by the rate generator
    m_pSR->stop.typeEvent = TCEVENT;          // stop on count event
    m_pSR->channels.nChannels = 2;            // a start/stop channel range will be used
    m_pSR->channels.chanGain[0].channel = m_logicalChannel;   // starting channel defined by m_logicalChannel
    m_pSR->channels.chanGain[0].gainOrRange = Gain2Code(m_logicalDevice, AI, 1.0);   // gain 1 = unipolar 0-9.999 V, gain 2 = unipolar 0-4.999 V
    m_pSR->channels.chanGain[1].channel = m_logicalChannel + 63;   // stop on channel 64
    m_pSR->channels.chanGain[1].gainOrRange = Gain2Code(m_logicalDevice, AI, 1.0);
    m_pSR->channels.numberFormat = tNATIVE;   // use the native format (integer counts)
    // declare 254 buffers
    m_pSR->lpBuffers = (DL_BUFFERLIST*) new BYTE[DL_BufferListBytes(254)];   // create 254 buffers
    m_pSR->lpBuffers->notify = NOTIFY;        // enable the buffer-filled message
    m_pSR->lpBuffers->nBuffers = 254;
    m_pSR->lpBuffers->bufferSize = Samples2Bytes(m_logicalDevice, AI, m_logicalChannel, m_samples);   // set the buffer size (in bytes) to hold the number of samples
    for (int j = 0; j < m_pSR->lpBuffers->nBuffers; j++)
    {
        m_pSR->lpBuffers->BufferAddr[j] = BufAlloc(GBUF_INT, m_pSR->lpBuffers->bufferSize);
    }
    // the rate information for the DI_EXTCLK typeEvent
    m_pSR->timing.u.rateEvent.channel = DEFAULTTIMER;   // DEFAULTTIMER is a symbol representing the default counter/timer channel used for pacing
    m_pSR->timing.u.rateEvent.mode = BURSTGEN;          // set the counter/timer mode to rate generation
    m_pSR->timing.u.rateEvent.clock = EXTERNAL;
    m_pSR->timing.u.rateEvent.gate = DISABLED;          // no gating will be used
    m_pSR->timing.u.rateEvent.period = Sec2Tics(m_logicalDevice, AI, DEFAULTTIMER, (float)0.01248);   // 80.13 Hz
    m_pSR->timing.u.rateEvent.onCount = Sec2Tics(m_logicalDevice, AI, DEFAULTTIMER, (float)0.000015); // interval between samples
    m_pSR->timing.u.rateEvent.pulses = m_samples;   // number of rate pulses to generate (setting this to 0 means generate continuous rate pulses)
    DriverLINX(m_pSR);        // execute the service request to start the acquisition
    showMessage(m_pSR);       // show any errors
    m_task = m_pSR->taskId;   // save the task ID of the AI task so it can be stopped later
}

d) Stop Data Acquisition

void CTomographyDlg::TabDasStop()
{
    int i_TabSel1 = ((CButton*)((CDasConfig*)m_oaTab1[0])->GetDlgItem(IDC_RONLINE))->GetCheck();
    if (i_TabSel1 == 1)
    {
        m_pSR->taskId = m_task;          // restore the task ID
        m_pSR->operation = STOP;
        m_pSR->start.typeEvent = NULLEVENT;
        m_pSR->mode = DMA;
        DriverLINX(m_pSR);
        showMessage(m_pSR);
        cycle = 0;
    }
    else
    {
        KillTimer(ID_TIMER2);            // kill ID_TIMER2 (offline mode)
        UpdateData(FALSE);
    }
}

e) WinProc

LRESULT CTomographyDlg::WindowProc(UINT message, WPARAM wParam, LPARAM lParam)
{
    // TODO: Add your specialized code here and/or call the base class
    if (message == m_DLmsg)   // did DriverLINX post a message? Only the DriverLINX buffer-filled message is acted on
    {
        switch (wParam)
        {
        case DL_BUFFERFILLED:                 // was the DriverLINX message the buffer-filled message?
            m_bufnum = getBufIndex(lParam);   // determine the index of the buffer that is full
            done(m_bufnum);                   // if so, call the done() function to process the results
            break;
        case DL_STOPEVENT:
            // place the code to process the STOPEVENT here
            break;
        case DL_DATALOST:
            // call a handler for this condition
            MessageBox("Data lost!!");
            ((CButton*)((CDasConfig*)m_oaTab1[0])->GetDlgItem(IDC_ROFFLINE))->SetCheck(1);
            ((CButton*)((CDasConfig*)m_oaTab1[0])->GetDlgItem(IDC_RONLINE))->SetCheck(0);
            break;
        }
    }
    return CDialog::WindowProc(message, wParam, lParam);
}

f) Data conversion

void CTomographyDlg::done(int bufNum)
{
    int i_TabSel2 = ((CButton*)((CDasConfig*)m_oaTab1[0])->GetDlgItem(IDC_RONLINE))->GetCheck();
    float *readings;
    readings = new float[m_samples];    // temporary array to hold the even converted readings
    float *readings1;
    readings1 = new float[m_samples];   // temporary array to hold the odd converted readings
    int index;
    int indexvel;
    int ofst;
    double tempup;
    double tempdown;
    m_pSR->operation = CONVERT;                          // use the convert operation to convert the raw counts in the buffer to voltages
    m_pSR->mode = OTHER;                                 // convert is not a polled, interrupt, or DMA operation
    m_pSR->start.typeEvent = DATACONVERT;                // set the start type to convert the data
    m_pSR->start.u.dataConvert.startIndex = 0;           // start at index 0 of the buffer
    m_pSR->start.u.dataConvert.nSamples = m_samples;     // set the number of samples to convert
    m_pSR->start.u.dataConvert.numberFormat = tSINGLE;   // convert the counts to tSINGLE (float)
    m_pSR->start.u.dataConvert.scaling = 0.0f;           // no scaling will be used
    m_pSR->start.u.dataConvert.offset = 0.0f;            // no offset will be applied
    m_pSR->start.u.dataConvert.wBuffer = bufNum;
    if (bufNum % 2 == 0)
    {
        m_pSR->start.u.dataConvert.lpBuffer = readings;    // put the converted readings in the even temporary buffer
    }
    else
    {
        m_pSR->start.u.dataConvert.lpBuffer = readings1;   // put the converted readings in the odd temporary buffer
    }
    DriverLINX(m_pSR);    // execute the conversion
    showMessage(m_pSR);   // show any errors
    if (bufNum % 2 == 0)  // even
    {
        if (m_bDasOption == 0)   // calibration
        {
            if (bufNum == 252)
            {
                for (index = 0; index