Ludwig-Maximilians-Universität München
Department "Institut für Informatik"
Lehr- und Forschungseinheit Medieninformatik
Prof. Dr. Heinrich Hußmann
Student research project
Touch Sensing with Time Domain Reflectometry
Markus Zimmermann
[email protected]
Handling period: 1st March 2010 to 31st May 2010
Advisor: Raphael Wimmer
Professor: Prof. Dr. Heinrich Hußmann
Abstract
The advancing miniaturisation of devices means that arbitrarily shaped prototypes keep getting smaller, while touch sensing remains indispensable. A new approach for this purpose is touch sensing with Time Domain Reflectometry (TDR). By covering arbitrarily shaped surfaces with a pair of conductors (e.g. wrapping them with cable or coating them with conductive paths), touches can be recognised through TDR measurements. This student research project first covers the basics of TDR and then goes into detail on the digitisation and signal processing of the signals of a Tektronix 1502 TDR analyser. The work closes with the conception of prototype applications in two-dimensional space and the demonstration of their feasibility. Owing to the short development time and the flexible adaptability, though offset by the high acquisition cost of the TDR analyser, the technology is outstandingly suited to the development of prototypes.
Zusammenfassung
The advancing miniaturisation of devices means that arbitrarily shaped prototypes keep getting smaller, while touch recognition should not be given up. Touch recognition by means of time domain reflectometry (TDR) offers a new approach for this. By covering arbitrarily shaped surfaces with a pair of conductors (e.g. by wrapping them with cable or depositing conductive traces), touches on them can be recognised through TDR measurements. This project report first deals with the basics of TDR and then covers in detail the digitisation and signal processing of the signals of a Tektronix 1502 instrument. Finally, prototypes of use cases in two-dimensional space are presented and their feasibility is demonstrated. Owing to the short development time and flexible adaptation, offset however by the high acquisition cost of the instrument, the technology is outstandingly suited to building prototypes.
Assignment of tasks
The goal of this student research project is to demonstrate the feasibility of touch sensing using time domain reflectometry in three steps:
1. Examination of the theoretical background
2. Signal capturing and analysis using the Tektronix 1502
3. Development of applications for touch sensing via TDR
With this, I declare that I have written this paper on my own, marked all citations, and used no sources or aids other than those named. Munich, 23rd July 2010
Markus Zimmermann
Contents

1 Introduction 1
  1.1 Time Domain Reflectometry 1
  1.2 Present Usage 1
  1.3 Reflections 2
2 Hardware 5
  2.1 Tektronix 1502 TDR Analyser 5
  2.2 Point Grey Firefly Camera 6
3 Signal Analysis 7
  3.1 Acquisition 7
  3.2 Moving Average Filter 9
  3.3 Adaptive Average Smoothing 11
  3.4 Calibration 12
  3.5 Detection 13
  3.6 Mapping 14
4 Applications 15
  4.1 »Wire« 15
  4.2 »Snake« 16
  4.3 »Hilbert« 16
  4.4 Use Cases: Outlook 18
5 Discussion and Conclusion 19
  5.1 Limitations of the Technology 19
  5.2 Future Work 19
Bibliography 21
A CD-Index 25
B Python Prototype 26
C Creating Printed Circuit Boards with the Direct Toner Transfer Method 32
D Open CV 33
  D.1 Open CV Patch for the Firefly Camera 33
  D.2 Open CV on Mac OS X 33
1 Introduction
The technological trend of the last few years is the continuing miniaturisation of devices. This results in the requirement for small, arbitrarily shaped, tangible prototypes with the possibility of touch recognition [14]. Especially the demand for arbitrary shapes combined with touch recognition calls for a novel and flexible touch recognition technology¹. Time Domain Reflectometry is a promising new approach for such a technology. The first section of this work describes the technical background of Time Domain Reflectometry and shows how its signals are interpreted; moreover, some present use cases of the technology are highlighted. The second section describes the hardware used in this project, a Tektronix 1502 Time Domain Reflectometer and a Firefly camera used for data capturing. In the third section the whole data processing chain is established (the main part of this work), from signal acquisition through an introduction to digital signal processing up to the mapping of data onto two-dimensional objects. In the fourth section, tangible prototypes of applications and use-case scenarios are demonstrated and evaluated. In the final chapter, the work is concluded and future work is motivated.
1.1 Time Domain Reflectometry
Time Domain Reflectometry (TDR), also known as "Cable Radar" or "Echometer", is a technique to determine the run lengths of electromagnetic waves from their reflections. The technique was first described by Smith-Rose in 1933, when he measured the moisture content of soils using a radar beam [26]. Since then, TDR has been refined, mainly to detect line faults in electrical conductors.
[Figure: (a) transmission of pulse; (b) pulse at discontinuity; (c) reception of reflected pulse]
Figure 1.1: The initial pulse (a) travels (horizontally) along a wire. When it reaches the discontinuity (represented by the vertical line) (b), the signal is partially reflected back and partially transmitted further (c) (original image in [2]).
The idea of TDR is depicted in Figure 1.1: a short pulse (a) is sent down a wire until it reaches a discontinuity (such as a short circuit, a wire break or an insulation fault). A fraction of the pulse bounces off the discontinuity (b) and travels back. This reflection can be detected (c) by the TDR analyser; based on the signal propagation delay and the characteristics of the medium, the distance between the analyser and the discontinuity can be calculated.
1.2 Present Usage
TDR is a well-established technology used in different fields: for wiring maintenance, for agricultural and geotechnical purposes, as well as for circuit analysis.

Agricultural and Geotechnical
The basics that Smith-Rose described in 1933 have been advanced. Today TDR is used for measuring the water content of soil for whole plants by inserting probes into the terrain [16, pp. 195-199].

¹ Technologies for malleable touch interfaces are being developed (see the work of Vogt et al. [31]), but are for the time being rather inflexible.
By attaching conductors to infrastructural buildings, their movement can be detected using TDR. This is adopted today for real-time monitoring of streets and tracks (O'Connor and Dowding [19]), bridge scour detection (U.S. Army Engineer Research and Development Center [30]) and even at mining sites for examining the deformation of rocks (Dowding et al. [9]). The Federal Institute for Materials Research and Testing is working on a technique for testing building materials via TDR [12].

Wiring Maintenance
An obvious area of application for TDR is wiring maintenance in telecommunications (as described in Rako [22], Mullett [18], Schmitt [23]). Using TDR, it is easy to locate an interruption in a cable (e.g. caused by a digger). For aviation wiring maintenance (the Airbus A380 contains about 500 km of wire with over 40,000 connections), TDR is a very helpful tool. Using the new approach of Spread Spectrum TDR, it is even possible to monitor operating wires that are transmitting other signals at the same time [24].

Circuit Analysis
The digitisation of measurement instruments meant a leap in technology for TDR. As the measurable resolution dramatically increased, it is now possible not only to measure wires, but also to analyse entire (populated) circuits [27, 28]. Quality checks can be automated this way. The current state of research is the ultra-wideband analysis of compact chip packages (Han et al. [13]).

New Approach: Touch Sensing
An unusual and rather unexplored approach is the use of TDR for touch sensing. The basic idea is that the approach of a finger towards a conductor is measurable as a discontinuity with TDR. This idea was tested by Huang and Hung in 2009 [15]. The present work is a further exploration of this approach.
1.3 Reflections
To understand TDR for touch recognition purposes, TDR reflections and their origins are discussed in the following.

[Figure: transmission line with source ES, source impedance ZS, distributed series L and R, shunt C and G, and load ZL]
Figure 1.2: Model of a transmission line (original image in [1]).
Signal propagation on a transmission line
The electrotechnical model of a two-wire transmission line is shown in fig. 1.2 and consists of a continuous set of series and shunt inductors (L), capacitors (C), conductances (G) and resistors (R). These L, C, G and R values define the characteristic impedance (Z0) of the cable. A signal pulse introduced at the beginning of the line (ES) will travel down the line with a velocity (vρ) that approximates² the speed of light (c0). This pulse is attenuated by each set of resistance and conductance. A transmission line can be terminated with a load (ZL). If somewhere along the line the load differs from Z0 (i.e. different L, C, G, R), a new wave is originated and reflected back to the source (and forward again, resulting in a standing wave). The resulting waveform that can be measured and displayed by a TDR device (see fig. 1.3) is a combination of the initial pulse and all reflections. [1, 6, 7]

² For coaxial cables vρ is typically 0.6·c0, for FR4 printed circuit boards typically 0.5·c0, and for microstrip typically 2/3·c0 ≈ 2·10⁸ m/s.

[Figure: TDR waveform with the incident step at impedance Z0 and reflections from discontinuities Z1 and Z2 arriving at times t1 and t2]
Figure 1.3: Example of a TDR waveform (original image in [29]).

Reflection Interpretation
The step generator of a TDR device produces a short positive incident pulse that travels down the tested line (see fig. 1.3). As long as the load of the line equals the characteristic impedance, no signal is reflected and the TDR's oscilloscope displays only the voltage of the pulse. If a discontinuity is reached, the initial wave is partly reflected. This reflection appears as an addition to the incident wave (e.g. Z1 or Z2 in fig. 1.3). [1, pg. 6] The distance (D) to the discontinuity can be calculated if the velocity of propagation (vρ), which is defined by the conductor's dielectric constant (er)³, and the temporal distance between the incident pulse and the discontinuity (propagation delay TP)⁴ are known [1, pg. 7]:

vρ = c0 / √er
D = vρ · TP / 2

Not only the distance and the strength of the discontinuity, but also the kind of mismatch is detectable with a TDR analyser. Figure 1.4 shows some possible mismatches of the termination. Compared to a matched load termination ZL = Z0 in (a), the short circuit termination in (b) shows a negative and the open circuit termination in (c) a positive deflection. In addition, the amount of capacitance (d) and inductance (e) at the line's termination can be identified optically. Furthermore, discontinuities along a line are detectable and can be characterised; the basic discontinuities (see fig. 1.5) are capacitive (a) and inductive (b) ones. They result in positive/negative deflections showing the magnitude of the mismatch, with a duration corresponding to the length of the discontinuity. Any combinations (c)/(d) are possible; with some experience, the

³ er can be determined with an experiment using a known length of the same cable type.
⁴ TP can be identified on the TDR's oscilloscope.
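The two formulas above fold into a small helper. A minimal sketch (the dielectric constant used in the example is an assumed, FR4-like value, not a measured one):

```python
import math

C0 = 299_792_458  # speed of light in vacuum [m/s]

def distance_to_discontinuity(t_p, e_r):
    """One-way distance D = v_rho * T_P / 2, with v_rho = c0 / sqrt(e_r).

    t_p: round-trip propagation delay in seconds (read off the oscilloscope)
    e_r: dielectric constant of the conductor's insulation
    """
    v_rho = C0 / math.sqrt(e_r)
    return v_rho * t_p / 2

# assumed FR4-like dielectric (e_r ~ 4): a 10 ns round trip places the
# discontinuity roughly 0.75 m down the line
d = distance_to_discontinuity(10e-9, 4.0)
```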
[Figure: (a) matched load termination ZL = Z0: no reflection after the incident step; (b) short circuit termination ZL = 0: negative step after 2TP; (c) open circuit termination: positive step after 2TP; (d) capacitor load termination ZL = C; (e) inductor load termination ZL = L]
Figure 1.4: Possible terminations of the transmission line (original image in [29]).
discontinuities can be identified, for example, as splice/joint, wet splice, high resistance splice, transceiver, tap, water ingress, split/re-split, cross and many more (see [18, pg. 66]).

Finger on a transmission line
Important for this work is the fact that a finger close to a transmission line causes a capacitive change of the characteristic impedance at the position of the finger. This capacitive discontinuity (as in fig. 1.5 (a)) can be identified, and the position of the finger can be calculated when the length and the course of the transmission line are known. This procedure is even capable of detecting multiple simultaneous touches.

[Figure: (a) shunt capacitance discontinuity; (b) series inductance discontinuity; (c) series inductance, shunt capacitance discontinuity; (d) shunt capacitance, series inductance discontinuity]
Figure 1.5: Possible (combinations of) discontinuities on the transmission line (original image in [29]).
2 Hardware

Figure 2.1: The Tektronix 1502 Time Domain Reflectometer (own photograph).
The whole hardware test setup consists of a Tektronix 1502 TDR analyser, a Point Grey Firefly camera and an arbitrary Linux PC.
2.1 Tektronix 1502 TDR Analyser
The Tektronix 1502 analyser was developed in 1975 for the measurement of telecommunication cables. It is analogue and consists of a pulse generator and an integrated oscilloscope [5]. The pulse is generated by a tunnel diode, which enables an incident pulse with a rise time of 140 ps. The faster the rise time of the incident pulse, the better the resolution for detecting discontinuities. This relationship is described by Tektronix [27] in fig. 2.2. The TDR analyser's resolution to separate two different discontinuities (a) is 1/2 of its rise time. If the discontinuities are closer together, they are not ignored, but melt together into one discontinuity. The detection and positioning of a single discontinuity (b) is much better: it is detected down to 1/10 of the rise time's length.

[Figure: (a) to resolve a1 and a2 as separate discontinuities: t_separate > t_risetime / 2; (b) to observe a1: t_single > t_risetime / 10]
Figure 2.2: Minimum peak rise time to separate two discontinuities (a) and to observe a single discontinuity (b) (adapted from [27]).
This means that the resolution of the Tektronix 1502, with its 140 ps rise time, is high enough to separate discontinuities with a distance of about 10 mm (see tab. 2.1) and is also sufficiently
accurate to locate the discontinuities with a precision of about 2 mm on printed circuit boards. The distance between two close-fitting fingertips is about 10 mm (which enables multi-touch abilities), and thus a precision of 2 mm is adequate for a prototype.

TDR Risetime [ps] | Resolution [ps] | Resolution in air [mm] | Resolution in PCB* [mm]
10**              | 5               | 1.50                   | 0.67
15                | 7.5             | 2.25                   | 1.00
20                | 10              | 3.00                   | 1.34
40                | 20              | 6.00                   | 2.68
140               | 70              | 21.0                   | 9.37
150               | 75              | 22.5                   | 10.04
200               | 100             | 30.0                   | 13.39

* Printed circuit board, FR4, vρ = 0.446 · c0
** State-of-the-art rise time in 2010

Table 2.1: Resolution of TDR systems (adapted from [28]).
2.2 Point Grey Firefly Camera
A Point Grey Research Firefly MV FFMV-03M2M black-and-white camera with a configured resolution of 640x480 pixels (VGA) is mounted onto the sun shield of the Tektronix 1502's display. The whole screen is captured and transmitted to the PC via IEEE-1394 [21].
Figure 2.3: The Point Grey Firefly Camera (original image in [21]).
The camera is set to a 30 fps trigger and a corresponding shutter value of 0.03 s (475). These values match the refresh rate of the oscilloscope. A captured image can be found in figure 3.1. There are some flares around the trace and rapid oscillations are underexposed, but these are inherent characteristics of a cathode ray screen, altogether resulting in a usable image.
3 Signal Analysis

Figure 3.1: The captured and analysed waveform on the left and the corresponding mapping onto a two-dimensional object on the right (screenshot from prototype).
Signal analysis was the main part of this work. Based on the waveform of the Tektronix 1502 (see fig. 3.1 left), the goal is to obtain the exact position of a touch on an object containing the waveguide (see fig. 3.1 right). The analysis is done in six stages:

1. Acquisition: capture and digitise the waveform
2. Filtering: remove high-frequency interference
3. Smoothing: reduce noise and jitter
4. Calibration: extract only the difference between signals
5. Detection: find capacitive discontinuities
6. Mapping: allocate a corresponding position on the object

A prototype has been realised in Python. Important parts are shown in place; the whole code listing can be found in appendix B. General information about digital signal processing can be found in Smith [25].
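The six stages can be exercised end to end on synthetic data. This is a minimal sketch, not the prototype's code: stages 2 and 3 are collapsed into one NumPy moving average, the camera input is replaced by a generated waveform, and all names and values are illustrative:

```python
import numpy as np

def moving_average(x, n=5):
    # stages 2/3 stand-in: n-point moving average via a unit-area kernel
    return np.convolve(x, np.ones(n) / n, mode="same")

rng = np.random.default_rng(0)
idle = rng.normal(0.0, 0.5, 640)        # stage 1 stand-in: idle waveform samples
frame = idle.copy()
frame[300:320] += 8.0                   # a capacitive "touch" bump

baseband = moving_average(frame) - moving_average(idle)  # stage 4: calibration delta
touched = np.flatnonzero(baseband > 3.0)                 # stage 5: threshold detection
position = touched.mean() / 640                          # stage 6: relative position
```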
3.1 Acquisition

The first step is the signal acquisition. Because of the absence of other (digital) interfaces, the data has to be gathered optically. The camera mounted in front of the Tektronix 1502 delivers an image as shown in figure 3.2 (a). The length of the conductor can be read off the image between the incident pulse and the short circuit termination. The image contains a capacitive discontinuity that nearly disappears in the noise. The image has to be processed to provide a digitised array of values representing the waveform (see fig. 3.2 (b)). The system has to guarantee fast response times, and plain Python does not satisfy this requirement for image processing. Hence the highly optimised computer vision library OpenCV has been used (see Bradski and Kaehler [3]).
[Figure: (a) captured image with noise, discontinuity and measured wire marked; (b) analysed data]
Figure 3.2: Example of a captured image and the corresponding image analysis. The length of the wire is shown and a capacitive discontinuity is marked; the rest of the signal has to be treated as noise (captured data).
Listing 1: Analysis Function
 1  def analyzeImage(image, mask):
 2      """
 3      By detecting the highlight point column by column utilising OpenCV,
 4      this function will return an array of samples.
 5      mask: will filter dead pixels.
 6      """
 7      trace = []
 8      xv = []
 9      # analyze trace
10      for x in range(0, 640):
11          column = cv.GetCol(image, x)
12          (minVal, maxVal, minLoc, (maxLocX, maxLocY)) = cv.MinMaxLoc(column)
13          if maxVal > 100 and maxLocY > 1:
14              if not ((x, maxLocY) in mask):
15                  trace.append(maxLocY)
16                  xv.append(x)
17      return array(xv), array(trace)
Listing 1 shows the utilisation of OpenCV for the image analysis. Line 10 iterates over each of the 640 horizontal image pixels, cv.GetCol() extracts the corresponding column (line 11) and cv.MinMaxLoc() (line 12) locates the brightest point in this column. The y-value of this point is stored in an array. After the 640 iterations, the trace basically contains the digitised waveform of the image. There are two problems with this approach. First, the camera contains dead white pixels, which causes the cv.MinMaxLoc() function to select them as a highlight. By passing an array containing the affected columns, analyzeImage() will ignore those columns. Second, some columns will not contain any highlights because of the grid on the TDR's screen or the rapid movement of the cathode ray. Those blackouts have to be interpolated in the next step.

Listing 2: Image Analysis and Value Interpolation
 1  # ANALYZE
 2  x, trace = analyzeImage(img, pixelmask)
 3  # interpolating function f
 4  f = interpolate.interp1d(x, trace, bounds_error=0, fill_value=0)
 5  # interpolate with f in range(0, 640)
 6  xa = arange(0, 640)
 7  interpolated = f(xa)
The whole image analysis and value interpolation can be done in a few steps using the optimised maths library SciPy, as shown in listing 2. After analyzeImage() is called, an interpolation function is generated with SciPy's interp1d() function. The blackouts are interpolated by simply calling the interpolation function on the domain of definition (see line 7). The acquisition stage is finished, resulting in a digitised, continuous waveform over the whole measurement range.
Figure 3.3: The original signal and the signal with applied moving average filter (captured data).

3.2 Moving Average Filter
The second stage, which removes high-frequency interference, consists of a moving average filter. The moving average filter works as a lowpass filter. A data sample is shown in figure 3.3, where the HF (high-frequency) peaks are suppressed. A moving average filter is an optimal and performant filter for noise removal (for the following explanations see Smith [25]). Every (discrete) value is averaged with a certain number of its neighbours. For a five-point symmetrical moving average filter, the n-th filtered value is calculated as follows:

filtered_n = (signal_{n-2} + signal_{n-1} + signal_n + signal_{n+1} + signal_{n+2}) / 5

This has to be repeated for each signal value, which means that the frame borders cannot be cleanly averaged (e.g. the first value has no left predecessor). This filter is a convolution filter. To understand this, digital signal processing (DSP) basics have to be introduced. Convolution is about changing a system's input signal (x) into an output signal (y). The input signal can be split into a set of impulses (n). The output resulting from each impulse is always a shifted and scaled version of the impulse response, also called filter kernel or convolution kernel (h). The impulse response basically means the response of the filter system to a single pulse. All the output pulses can be composed into the output signal. Convolution is the operation (∗) of joining the input signal and the impulse response: x_n ∗ h_n = y_n. An example of convolution in DSP is shown in figure 3.4.
Figure 3.4: Example of a low-pass filtering using convolution (original image in [25]).
A signal convolved with a kernel results in a signal. The convolution operation y = x ∗ h is an easy algorithm (the convolution sum) to calculate each output point based on the input points and the kernel:

y_n = Σ_{i=0}^{|h|-1} h_i · x_{n-i}
This can be visualised by a convolution machine that moves along every output point, collects values from the input signal and applies the (flipped⁵) convolution kernel (see fig. 3.5):
Figure 3.5: The convolution machine in action, demonstrating an example. Output samples y0 , y3 , y8 and y11 are composed of different input samples and the impulse response (original image in [25]).
The corresponding iterative algorithm shown in listing 3 is very simple:

Listing 3: Convolution Algorithm (adapted from [25])

x = list()  # input signal
h = list()  # impulse response
y = list()  # output signal

for n in range(0, len(y)):              # loop for each point in y
    y[n] = 0                            # zero the output sample
    for i in range(0, len(h)):          # loop for each point in h
        if (n - i < 0): continue        # check left boundary
        if (n - i >= len(x)): continue  # check right boundary
        y[n] = y[n] + h[i] * x[n - i]   # calculate convolution sum
As mentioned before, the moving average filter is a convolution filter. Its filter kernel is quite trivial: a rectangular pulse with an area of one. A five-point moving average filter kernel, for example, is [1/5, 1/5, 1/5, 1/5, 1/5]. The resulting implementation of a moving average filter is the iterative convolution algorithm (see lst. 3) together with such a filter kernel; this was the basic filtering in the TDR signal processing prototype. The length of the filter kernel determines the amount of HF filtering. As mentioned, the OpenCV and SciPy packages are used in the prototype for performance reasons. That includes the usage of native fixed (SciPy/NumPy) C-style arrays (not the Python-managed dynamic lists). OpenCV and SciPy routines are optimised for this kind of fast array access. This is not the case for plain Python, which has to access the arrays via some kind of wrapper. Because of that, the iterative convolution algorithm is not performant, since it accesses every single array element via the Python wrapper. In fact, all the moving average filtering is done with SciPy and an FFT (fast Fourier transform) in the signal's frequency space (see listing 4) with the signal.fftconvolve() function. kernel(5) creates the five-point kernel with the area of one mentioned above.

Listing 4: Moving Average Filtering FFT Implementation

# MOVING AVERAGE FILTER (fft)
avg = signal.fftconvolve(interpolated, kernel(5), mode='same')
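The kernel() helper itself is not shown in the excerpts; presumably it just builds the unit-area rectangular kernel described above. A minimal sketch, using np.convolve in place of SciPy's fftconvolve (both compute the same moving average):

```python
import numpy as np

def kernel(n):
    """Rectangular pulse with an area of one, e.g. kernel(5) -> five 0.2 values
    (assumed shape of the prototype's helper, which is not in the excerpt)."""
    return np.ones(n) / n

x = np.array([0.0, 0.0, 5.0, 0.0, 0.0, 0.0, 5.0, 5.0, 0.0, 0.0])
avg = np.convolve(x, kernel(5), mode="same")
# avg[2] is the mean of x[0:5], avg[6] the mean of x[4:9]
```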
As a result, the TDR signals are HF filtered and strong peaks are suppressed. The peaks' extrema are better locatable, but the image-wise jitter between frames remains and has to be removed in the next step.

⁵ The kernel has to be flipped because of the reverse calculation based on the output signal.
3.3 Adaptive Average Smoothing
To remove the image-wise jitter, the signal has to be smoothed further. The moving average filter (described in the previous section 3.2) worked in the two dimensions time (representing the one-dimensional space in a conductor) and signal value (voltage representing the discontinuity's magnitude). An additional dimension is necessary to smooth the signal over time: the frame (finally representing a discrete time for the two-dimensional discontinuity space/magnitude). The average smoothing (illustrated in fig. 3.6) is again a customised (25-point asymmetric) average filter. For each signal impulse, the average over 25 frames is calculated, resulting in a very stable and calm signal. The problem of this method is the slow propagation of a rapid change in the discontinuity's magnitude: the response time for a touch event when capturing at 30 fps is almost a second in the worst case, which is an inadmissible interaction time. To correct this behaviour, the average smoothing has been designed to be adaptive. If the acclivity (the speed of alteration) is greater than a certain threshold, the concerned part of the signal is adopted immediately after smoothing with the Moving Average Filter (MAF). As a result, the signal stays stable and calm while still being highly dynamic.

[Figure axes: signal value over time, averaged across frames]
Figure 3.6: Functionality of the Adaptive Average Smoothing based on inter-frame smoothing (own illustration).
The implementation of the Adaptive Average Smoothing is shown in listing 5. The whole data is stored in the SciPy/NumPy array history with a size of 25x640, containing 25 waveforms of 640 sampled data values each. For each capturing cycle the data in the array is rotated one frame backwards, then the latest waveform is stored at the last position of the array (lines 2 and 3). Next, the acclivity between the current and the past waveform is calculated via the discrete derivative (∆y). After that, the derivative is smoothed (30 pt MAF) because of the utilisation of quantised data (lines 6 and 7). Afterwards two masks are created. The dynamic mask is a waveform with ones where the acclivity is greater than the threshold and zeros otherwise; the static mask is its inverse. For a soft crossover between static and dynamic areas, both masks are smoothed with a 15 pt MAF (lines 8 to 11). In line 12, all 25 waveforms in the history array are either left unchanged (for the static parts of the signal) or overwritten with the averaged signal (for the dynamic parts of the signal). So far, the adaptive filtering part has been discussed. The averaging part is the average() command in line 14. This particular order, partially overwriting the whole waveform history before averaging it, is important: the averaging is computationally intensive and has to be done in one single and unconditional SciPy call.
Listing 5: Adaptive Average Smoothing Implementation
 1  # ADAPTIVE AVERAGE SMOOTHING
 2  history = roll(history, shift=-1, axis=0)          # roll back samples
 3  history[-1] = interpolated                         # apply new interpolated signal
 4
 5  # adaptive by acclivity
 6  alterationSpeed = abs(history[-1] - history[-2])   # determine alteration
 7  alterationSpeed = signal.fftconvolve(alterationSpeed, kernel(30), mode='same')  # smooth alteration
 8  maskDynamic = where(alterationSpeed > threshold[0] / 3, 1, 0)  # mask fast changes
 9  maskStatic = where(maskDynamic, 0, 1)              # mask statics
10  maskDynamic = signal.fftconvolve(maskDynamic, kernel(15), mode='same')  # smooth filter mask
11  maskStatic = signal.fftconvolve(maskStatic, kernel(15), mode='same')    # smooth static mask
12  history = history * maskStatic + avg * maskDynamic  # apply masks and smoothing
13
14  filtered = average(history, axis=0)                # apply average filter
3.4 Calibration
By combining the adaptive average smoothing with multiple moving average stages, a reliable signal is achieved. This is the best precondition for the signal calibration and further processing of the data. At any time, with no load on the measured conductor, the signal can be calibrated as the idle signal (manually done by pressing a key). Afterwards, the delta between the current waveform and the idle-state signal yields the baseband signal. In the Python prototype, the calibration delta is computed in a single line (see lst. 6). A resulting baseband signal is shown in figure 3.7. Noise and signal can be separated by a threshold.

Listing 6: The Calibration Stage

# CALIBRATION
calibrated = filtered - idle
[Figure annotations: range, threshold]
Figure 3.7: The calibrated version of the signal in fig. 3.2. A discontinuity originated by a gentle finger press can be differentiated from the noise (own illustration).
The idle signal is the first parameter that has to be calibrated. The second one is the threshold, which bounds the signal-to-noise ratio. On the one hand it determines the acclivity limit of the Adaptive Average Smoothing, on the other hand the threshold is necessary for the detection stage (see section 3.5). The third parameter is the area of interest. This range defines the length of the touchable region of the conductor (connectors and the feed cable are excluded).
There are some limitations to this calibration approach. As the TDR's incident pulse travels along the conductor, the signal is attenuated all the way down (see section 1.3). The farther away a discontinuity is located from the beginning of the conductor, the more the voltage of the reflected signal is reduced and the smaller the discontinuity's peak becomes. Because of that, the threshold should be inclined. Another effect (at least on the Tektronix 1502) is a seeming elongation of the conductor, growing with the number and size of the discontinuities. The conductor's termination then shifts to the right, causing an incorrect position for all following discontinuities, while the first one remains at its correct position. Because of that, a simple floating range is not sufficient. Within this work, a correlation with the size of the discontinuity's area was assumed, but this could not be confirmed. For this work's applications these two effects are insignificant for a few fingers with normal pressure. In any case, the problem is hardly solvable because of the poor resolution of the Tektronix 1502, which does not allow a precise calculation of the discontinuity's magnitude after the filtering and smoothing stages (see section 5.1).
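An inclined, distance-dependent threshold could compensate the attenuation described above. A possible sketch follows; the exponential decay constant is a hypothetical value that would have to be fitted to the measured line, and this approach is not part of the prototype.

```python
import numpy as np


def inclined_threshold(base, decay, n=640):
    """Per-sample threshold that falls off with distance from the feed
    point, so that attenuated far-away peaks still exceed it.
    (decay is a hypothetical fitted constant)"""
    return base * np.exp(-decay * np.arange(n))


th = inclined_threshold(30.0, 0.001)
print(th[0], th[-1])  # full threshold at the feed point, lower at the far end
```

A detection stage would then compare each sample against `th[i]` instead of the constant `threshold[0]`.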
3.5
Detection
In the next stage, the detection has to recognise the capacitive discontinuities. It uses the threshold (defined in sec. 3.4) and the first derivative of the signal. The conditions for a recognised discontinuity are very simple:
Signal > Threshold  ∧  Signal′ ↘ 0
If the signal exceeds the threshold and the derivative declines to zero, a touch is recognised. An example is shown in figure 3.8:
Figure 3.8: A capacitive discontinuity is recognised at the vertical line based on the signal, the first derivative and the exceeding of the threshold (own illustration).
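The two conditions can also be evaluated in one vectorized pass. A minimal sketch on an artificial peak follows; the Gaussian test signal and all values are illustrative, and the backward difference mirrors the `discreteDerivative` function of listing 7.

```python
import numpy as np

x = np.arange(640)
calibrated = 50.0 * np.exp(-((x - 310) ** 2) / 50.0)  # artificial peak at sample 310
derivative = calibrated - np.roll(calibrated, 1)      # backward difference (listing 7)

threshold = 30.0
hits = np.flatnonzero(
    (calibrated[:-1] > threshold)       # signal above threshold
    & (derivative[:-1] > 0)             # still rising here ...
    & (derivative[1:] <= 0)             # ... falling at the next sample
)
print(hits)  # -> [310]
```

Each hit marks a local maximum of the calibrated signal that exceeds the threshold, exactly the condition the iterative prototype checks sample by sample.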
In the prototype, the detection is implemented as follows. With SciPy, the discrete calculation of the derivative is shown in listing 7. The detection itself is done iteratively: for each signal value, the conditions are checked (see lst. 8).
Listing 7: Calculation of the Derivative
def discreteDerivative(trace):
    return trace - roll(trace, 1, 0)
Listing 8: The Detection Stage
derivative = signal.fftconvolve(discreteDerivative(calibrated) * 10, kernel(10), mode='same')
detected = []
for i in range(area[0], area[1]):
    if ((calibrated[i] > threshold[0]) and
            (derivative[i] > 0 and derivative[i + 1] <= 0)):
        detected.append(i)

            > 100 and maxLocY > 1:
                if not ((x, maxLocY) in mask):
                    trace.append(maxLocY)
                    xv.append(x)
    return array(xv), array(trace)

def printTrace(trace, image, color, visible, shift=0, scale=1):
    """
    Function prints the given array of samples.
    trace:   array of samples
    image:   image to print on
    color:   color of line
    visible: array of displayed area [start, end]
    shift:   pixels to shift
    scale:   scale factor
    """
    pts = []
    first, last = visible
    for x in range(first, last):
        pts.append((x, int((trace[x] * scale) + shift)))
    cv.PolyLine(image, [pts], 0, color)

def on_mouse(event, x, y, flags, param):
    """OpenCV Mouse Callback Handler"""
    if event == cv.CV_EVENT_LBUTTONDOWN:
        area[0] = x
    if flags & cv.CV_EVENT_FLAG_LBUTTON > 0:
        if x > area[0]:
            area[1] = x
    if event == cv.CV_EVENT_LBUTTONUP:
        if x > area[0]:
            area[1] = x
        rearrangeMapping()
    if event == cv.CV_EVENT_RBUTTONDOWN:
        threshold[0] = y - 240
        print "threshold " + str(threshold)

def sinc(i, f):
    """SinC Function"""
    return sin(2 * math.pi * f * i) / i * math.pi

def kernel(i):
    """Moving Average Filter Kernel"""
    return ones(i) / i

def autorange(calibrated):
    """Autorange-Detection (beta)"""
    first = 0
    last = 640
    for i in range(0, 640):          # find first silence
        if calibrated[i] > threshold[0] / 5:
            first = i
        if i > first + 100:
            break
    first = first + 10
    for i in range(first, 640):      # find last silence
        if calibrated[i] > threshold[0] / 5:
            last = i
            break
    last = last - 10
    area[0] = first
    area[1] = last

def readMappingFile(filename):
    """
    Reads a turtle mapping-file.
    Parameter: Path to mapping file
    Return:    parsed parameters (unit, size, start pt[,], path[,...])
    """
    file = open(filename, 'r')
    unit = 0
    size = []
    start = []
    path = []
    for line in file:
        split = line.split()
        if split[0] == 'unit':
            unit = int(split[1])
APPENDIX
        elif split[0] == 'size':
            w, h = split[1].split(',')
            size = [int(w), int(h)]
        elif split[0] == 'start':
            x, y = split[1].split(',')
            start = [int(x), int(y)]
        else:
            for c in line:
                if c == "u" or c == "d" or c == "l" or c == "r":
                    path.append(c)
    return (unit, size, start, path)

def drawMappingPath(image, unit, start, path):
    """Draws the mapping path on an image."""
    pts = []
    last = (start[0] * unit, start[1] * unit)
    pts.append(last)
    # turtle
    for t in path:
        next = last
        if t == "u": next = (last[0], last[1] - unit)
        if t == "d": next = (last[0], last[1] + unit)
        if t == "l": next = (last[0] - unit, last[1])
        if t == "r": next = (last[0] + unit, last[1])
        last = next
        pts.append(next)
    cv.PolyLine(image, [pts], 0, (240, 240, 240))
    # cv.Line(image, (30, 0), (30, 480), (75, 75, 150))

def assignMapping(unit, start, path, offset, width):
    """
    Creates mapper by assigning n-samples to xy-points
    unit:   multiplier for path
    start:  start of path
    path:   turtle path
    width:  range of analyzed window
    offset: start of analyzed window
    """
    #INFO
    pathlen = unit * len(path)                      # length of path in px
    nPixelWidth = double(width) / double(pathlen)   # n:xy ratio
    xyPixelWidth = double(pathlen) / double(width)  # xy:n ratio

    #TRACE XY, MAP N to XY
    xyPixelTrace = []
    mapNtoXY = {}
    last = (start[0] * unit, start[1] * unit)
    xyPixelTrace.append(last)
    # turtle
    lastN = double(offset)
    for t in path:
        dx = 0
        dy = 0
        if t == "u": dy = -1
        if t == "d": dy = 1
        if t == "l": dx = -1
        if t == "r": dx = +1
        # foreach pixel
        for i in range(0, unit):
            last = (last[0] + dx, last[1] + dy)
            xyPixelTrace.append(last)
            if int(lastN + nPixelWidth) > int(lastN):  # new discrete n
                mapNtoXY[str(int(lastN))] = last
            lastN = lastN + nPixelWidth
    return mapNtoXY  # and xyPixelTrace

def rearrangeMapping():
    """rearrange mapper after calibration"""
    global unit, start, mappingPath, mapNXY, area
    mapNXY = assignMapping(unit, start, mappingPath, area[0], area[1] - area[0])

#GUI
window = cv.NamedWindow("TDR", cv.CV_WINDOW_AUTOSIZE)
mappingWindow = cv.NamedWindow("Mapping", cv.CV_WINDOW_AUTOSIZE)
cv.SetMouseCallback("TDR", on_mouse, 0)
app = QtGui.QApplication(sys.argv)

# Capturing
cap = cv.CaptureFromCAM(0)
cv.SetCaptureProperty(cap, cv.CV_CAP_PROP_FPS, 30)

# Imaging
imageColor = cv.CreateImage([640, 480], cv.IPL_DEPTH_8U, 3)

# Global
area = [0, 639]
threshold = [30]
shot = 0
pause = 0
grid = 1
traces = 0
detection = 0
pixelmask = [(98, 187), (106, 240), (120, 425), (133, 238), (519, 199)]
detected = []
markers = []
mappingPath = []
mapNXY = []
topicName = ""
calibrationTimer = 0
history = zeros([25, 640])
calibrated = zeros(640)
idle = zeros(640)
corrSample = zeros(50)
sc = abs(sinc(arange(0 - 320, 640 - 210), 0.03))

#INITIALIZE MAPPING
unit, size, start, mappingPath = readMappingFile('schlange4.model')
mappingImage = cv.CreateImage([unit * size[0], unit * size[1]], cv.IPL_DEPTH_8U, 3)
mappingImage8 = cv.CreateImage([8 * unit * size[0], 8 * unit * size[1]], cv.IPL_DEPTH_8U, 3)
# assignMapping(unit, start, mappingPath)
# mapN = range(0, 640)

while True:
    #CAPTURE
    img = cv.QueryFrame(cap)
    if pause == 0:  # not paused
        cv.CvtColor(img, imageColor, cv.CV_GRAY2RGB)  # convert to grayscale

        #GRID
        if grid != 0:
            drawGrid(imageColor)

        #ANALYZE
        x, trace = analyzeImage(img, pixelmask)
        if x.size < 2:
            x = array([0, 640])
            trace = zeros(2)
        # interpolating function
        f = interpolate.interp1d(x, trace, bounds_error=0, fill_value=0)
        # interpolate with f in range(0, 640)
        xa = arange(0, 640)
        interpolated = f(xa)

        #MOVING AVERAGE FILTER (fft)
        avg = signal.fftconvolve(interpolated, kernel(5), mode='same')

        #ADAPTIVE AVERAGE SMOOTHING
        history = roll(history, shift=-1, axis=0)  # roll back samples
        history[-1] = interpolated                 # apply new interpolated signal
        # adaptive by acclivity
        alterationSpeed = abs(history[-1] - history[-2])  # determine alteration
        alterationSpeed = signal.fftconvolve(alterationSpeed, kernel(30), mode='same')  # smooth alteration
        maskDynamic = where(alterationSpeed > threshold[0] / 3, 1, 0)  # mask fast changes
        maskStatic = where(maskDynamic, 0, 1)      # mask statics
        maskDynamic = signal.fftconvolve(maskDynamic, kernel(15), mode='same')  # smooth dynamic mask
        maskStatic = signal.fftconvolve(maskStatic, kernel(15), mode='same')    # smooth static mask
        history = history * maskStatic + avg * maskDynamic  # apply masks and smoothing
        filtered = average(history, axis=0)        # apply average filter

        #CALIBRATION
        calibrated = filtered - idle

        #DERIVATIVE
        # derivative = misc.derivative(f, xa)
        # derivative = signal.fftconvolve(discreteDerivative(filtered), mav3kernel, mode='same')
        #DISCRETE DERIVATIVE
        derivative = signal.fftconvolve(discreteDerivative(calibrated) * 10, kernel(10), mode='same')

        #DETECT FINGER PRESS
        detected = []
        detectionTrace = zeros(640)
        if detection == 1:
            for i in range(area[0], area[1]):
                if ((calibrated[i] > threshold[0]) and
                        (derivative[i] > 0 and derivative[i + 1] <= 0)):
                    detected.append(i)

        calibrationTimer += 1
        if calibrationTimer == 30:
            calibration = array(filtered)
        if calibrationTimer == 31:
            autorange(calibrated)
            traces = 1
            detection = 1
            rearrangeMapping()

    #SIGNALS
    key = cv.WaitKey(7)
    key &= 1048575  # handle opencv signed bug...
    if key == 27:  # esc
        break
    if key == ord('c'):  # c
        print "Calibrating..."
        idle = array(filtered)
        detection = 1
    if key == ord('a'):
        print "Autorange"
        idle = array(filtered)
        autorange(calibrated)
    if key == ord('m'):
        print "Mask dead pixels"
        for i in range(x.size):
            pt = (x[i], trace[i])
            if not pt in pixelmask:
                pixelmask.append(pt)
        print pixelmask
    if key == ord('f'):
        corrSample = calibrated[area[0]:area[1]]
        print "Catched signal for correlation."
        print corrSample
    if key == ord('2'):
        area[0] = area[0] + 1
        rearrangeMapping()
    if key == ord('1'):
        area[0] = area[0] - 1
        rearrangeMapping()
    if key == ord('4'):
        area[1] = area[1] + 1
        rearrangeMapping()
    if key == ord('3'):
        area[1] = area[1] - 1
        rearrangeMapping()
    if key == ord(','):
        markers.extend(detected)
        print "Added to markers."
    if key == ord('.'):
        markers = []
        print "New markers."
    if key == ord('t'):
        if traces == 1: traces = 0
        else: traces = 1
        print "Traces: " + str(traces)
    if key == ord('g'):
        if grid == 1: grid = 0
        else: grid = 1
        print "Display grid: " + str(grid)
    if key == ord('d'):
        if detection == 1: detection = 0
        else: detection = 1
        print "Detection: " + str(detection)
    if key == 32:  # space
        if pause == 1: pause = 0
        else: pause = 1
        print "Pause: " + str(pause)
    if key == 9:  # tab
        # get topic name
        tn, ok = QtGui.QInputDialog.getText(None, "Name fuer Screenshots", "Thema")
        if ok:
            topicName = str(tn)
        dt = datetime.datetime.now()
        if not os.path.exists("shots/" + dt.strftime("%Y-%m-%d")):
            os.mkdir("shots/" + dt.strftime("%Y-%m-%d"))
        shot += 1
        filename = ("shots/" + dt.strftime("%Y-%m-%d") + "/" + dt.strftime("%Y-%m-%d %H:%M:%S")
                    + " Shot " + str(shot) + " (" + topicName + ").png")
        print "Saving image: '" + filename + "'"
        cv.SaveImage(filename, imageColor)

# Release & Destroy
cv.DestroyAllWindows()
C
Creating Printed Circuit Boards with the Direct Toner Transfer Method
With the direct toner transfer method, PCBs (printed circuit boards) can be produced with a laser printer and a thermal transfer, without the need for UV exposure. Toner transferred onto the board is acid-resistant and shields the copper beneath it from being etched. The method results in excellent PCBs. Dr. Lex7 and Thomas Pfeifer8 provide helpful guidance; this document adds further information for a successful transfer.
Printing the Layout for Transfer
The first step is to print the (mirrored) board layout on a very thin but coated paper that acts as the transfer medium. The transfer works best on a sheet of the "Reichelt" catalogue. To avoid a paper jam in the printer, the upper edge of the paper sheet has to be glued onto an A4-sized sheet of paper. After drying, the transfer medium is ready for printing. In the printer settings, the maximum toner density and the highest fixation level have to be chosen (an HP LaserJet P3010 was used for this project). When using the layout software Eagle, do not use the menu entry "print"; execute the CAM processor instead and write the layout to a PS file, which affords the best possible resolution.
Toner Transfer
The next step transfers the toner to the board. The board has to be cleaned very carefully with acetone. If any fatty residues (like fingerprints) remain on the board, the transfer will not work. Transfers with an electric iron did not work well; instead, an unmodified laminator (UnitedOffice KH 4418) achieved good results. After aligning the printed and trimmed transfer paper (without the A4 sheet) on the board, it has to pass through the laminator. In this case, ten cycles at the "125 mic" setting worked well (too much heat results in a smeared transfer).
Removing the Paper
The paper on the board then has to soak for about five minutes in water with bath cleanser (the alkaline cleanser dissolves the paper slightly). After that the paper can be rubbed off very carefully with a fingertip.
The paper should be kept constantly moist.
Etching the Board
The final step is the etching process. Small scratches in the toner lines can be repaired with a permanent overhead marker. A few teaspoons of sodium persulfate (Na2 S2 O8 ) have to be dissolved in boiling water (use only plastic basins and spoons, only add acid into water, never water into acid, wear protective goggles, and wash your hands after the etching process). The solution should be kept at 60-80 ◦ C; use a water bath (or, very carefully, a hot air gun).
7 http://www.dr-lex.be/hardware/tonertransfer.html 8 http://thomaspfeifer.net/platinen_aetzen.htm
D
OpenCV
D.1
OpenCV Patch for the Firefly Camera
The capturing in the present project was done with a Point Grey Research Firefly MV FFMV03M2M FireWire camera. Using OpenCV 2.1, there is a bug in the libdc1394 bindings: a 16 bit mode is selected for the Firefly that cannot be processed. The following fix has been applied (see lst. 14) after line 288 of the OpenCV source file OpenCV-2.0.0/src/highgui/cvcap_dc1394_v2.cpp in the function CvCaptureCAM_DC1394_v2_CPP::startCapture(). Next, the selection of the 16 bit color coding in line 321 (|| (colorCoding == DC1394_COLOR_CODING_MONO16 && pref < 0)) has to be removed, resulting in the code shown in listing 15.
Listing 14: Fix 1, OpenCV-2.0.0/src/highgui/cvcap_dc1394_v2.cpp
/* ugly fix to avoid libdc1394 complaining that Format7 has no defined framerates */
if (mode >= DC1394_VIDEO_MODE_FORMAT7_MIN && mode

To verify the installation, start Python and try:
> import cv
• No error: congratulations, the installation worked!
• No module named cv: cv.so was not placed at /opt/local/lib/python2.6/site-packages/cv.so during make install.
• Fatal Python error: Interpreter not initialized (version mismatch?): the bindings were not linked against the MacPorts Python but against the system Python (PYTHON_LIBRARY: =-framework= parameter in ccmake).
Capturing test (e.g. with the integrated iSight camera):
> cv.NamedWindow("hallo")
> cam = cv.CaptureFromCAM(0)
> img = cv.QueryFrame(cam)
> cv.ShowImage("hallo", img)
13 http://www.tsd.net.au/blog/opencv-python-bindings-macports