2009 IEEE International Conference on Control and Automation Christchurch, New Zealand, December 9-11, 2009

FrMT1.4

Human Tracking Using Ceiling Pyroelectric Infrared Sensors Xiaomu Luo, Baihua Shen, Xuemei Guo, Guocai Luo, and Guoli Wang

Abstract— This paper presents a novel human tracking scheme using ceiling-mounted pyroelectric infrared sensors. The scheme covers the visibility modulation of each sensor detector, the layout of the system, and the localization and tracking algorithms. Results from 3D simulations combining Webots and Matlab validate the scheme, and comparisons with related schemes show that the proposed approach achieves higher tracking accuracy.

I. INTRODUCTION

Human motion tracking has been an important research theme for many years. One significant application is monitoring elderly residents who live alone [1][2][3]. In general, computer vision-based human tracking involves the following stages: modeling of environments, detection of motion, classification of moving objects, tracking, understanding and description of behavior, human identification, and the fusion of data from multiple cameras [4]. Continuous surveillance by video cameras therefore has several disadvantages: 1) it is susceptible to illumination, which changes between day and night [5]; 2) the computational load of continuous visual surveillance, such as background subtraction, is heavy; 3) cameras may violate privacy, because they give the resident an uncomfortable feeling of being observed [6].

To overcome these weaknesses of video-based tracking, some researchers employ pyroelectric infrared (PIR) sensors [7][8][9]. The PIR sensor is low-cost and can detect the radiation of the human body without ambient light. In addition, the position and velocity of the target can be represented by only a few bits. Most importantly, it has no adverse psychological impact and so raises no privacy concerns. The biggest challenge for PIR sensors, however, is tracking accuracy, and cooperation among multiple PIR sensors is indispensable to improve it. Target tracking with wireless sensor networks (WSNs) is an intrinsic multi-sensor data fusion process, which will remove inconsistencies, combine

Xiaomu Luo, Baihua Shen and Xuemei Guo are with the School of Information Science & Technology, Sun Yat-Sen University, Guangzhou 510275, P.R. China. Guocai Luo is with the Optics Department, Institute of Technology, Sun Yat-Sen University, Guangzhou 510275, P.R. China.
Guoli Wang, corresponding author, is with the School of Information Science & Technology, Sun Yat-Sen University, Guangzhou 510275, P.R. China.

[email protected]

978-1-4244-4707-7/09/$25.00 ©2009 IEEE

the measurements from many sensor nodes, and eventually extract useful information. Various data fusion schemes and techniques have been proposed; one important approach is to control the field of view (FOV) of the sensors with low-cost Fresnel lens arrays [7][8]. We assert, however, that the tracking accuracy can be improved further.

In this paper, we propose a new scheme for a ceiling-mounted infrared sensor system. Four sensor modules, each consisting of five sensor detectors, are mounted on the ceiling of the monitored field. The FOV of each detector is modulated by a Fresnel lens array to implement the desired spatial segmentation. The scheme also covers the layout of the system, the selection of the localization and tracking algorithms, and the tracking process. Taking the height of the human target into consideration, a novel three-dimensional (3D) hybrid simulation using Webots and Matlab together is employed to validate the scheme. Simulations show that the proposed approach is feasible and achieves higher tracking accuracy than other related schemes [7][8].

The remainder of this paper is organized as follows. Section II describes the whole system, including the PIR sensor detector, the FOV modulation, and the layout of the system. Section III presents the localization algorithm. Section IV describes the system dynamic state and observation models and gives the mathematical description of the Kalman tracking algorithm. Section V shows the simulation process and results, and compares them with other schemes in the discussion subsection. Section VI concludes the paper and discusses future work.

II. SYSTEM MODEL

A. Sensor Detector and Fresnel Lens Array

The sensor detector employed in our scheme is a standard PIR sensor equipped with a built-in amplifier and comparator, as shown in Fig. 1. Its output signal is proportional to the temperature change of the crystal.
Therefore, it can only capture a moving human target. The Fresnel lens array, composed of a number of small Fresnel lenses, is made of lightweight, low-cost plastic with good infrared transmission. The array is located one focal length away from the detector and is used to modulate the FOV of the detector.

B. Visibility Modulation

One sensor module consists of five pyroelectric detectors with Fresnel lens arrays. The radius of each Fresnel lens array is 0.02 m, as shown in Fig. 2. The detectors are arranged in a grid, as listed in Table I. Throughout this study, all position information is given in the X-Y plane of a Cartesian coordinate system whose origin is located at the center of detector 0. The default unit of length is the meter.

TABLE I
SPECIFICATION OF DETECTORS

Detector   Center          Horizontal Detection   Vertical Detection
Index      Position        Range φ                Range β
0          (0.00, 0.00)      0° ∼ 360°            -10° ∼ 10°
1          (0.04, 0.04)    -45° ∼ 90°             -45° ∼ 45°
2          (0.04, -0.04)  -135° ∼ 0°              -45° ∼ 45°
3          (-0.04, -0.04)  135° ∼ 270°            -45° ∼ 45°
4          (-0.04, 0.04)    45° ∼ 180°            -45° ∼ 45°

TABLE III
POSITION OF VISIBLE REGION INDICATORS

Visible   Indicator        Horizontal   Vertical
Region    Position         Angle α      Angle γ
01        (0.61, 1.47)      22.5°        28°
02        (1.47, 0.61)      67.5°        28°
03        (1.47, -0.61)    112.5°        28°
04        (0.61, -1.47)    157.5°        28°
05        (-0.61, -1.47)   202.5°        28°
06        (-1.47, -0.61)   247.5°        28°
07        (-1.47, 0.61)    292.5°        28°
08        (-0.61, 1.47)    337.5°        28°
00        (0.00, 0.00)       0°           0°

We use masks to limit the FOV of each Fresnel lens array. The horizontal detection range φ and vertical detection range β of each detector are shown in Fig. 2 and Fig. 3, respectively, and their values are listed in Table I. The visibility coding scheme uses 5 bits, each representing whether the corresponding detector senses a signal (marked as '1') or not (marked as '0'), as listed in Table II. Note that if detector 0 is triggered, the outputs of the other four detectors are ignored.

TABLE II
VISIBILITY-CODING SCHEME OF PYROELECTRIC DETECTORS
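As an illustrative sketch (not code from the paper), the 5-bit visibility code can be decoded into a visible region with a simple lookup; the codebook below is transcribed from Table II, with the detector-0 override applied first:

```python
# Sketch: decode a 5-bit visibility code (detectors 4..1, then 0) into a
# visible region index, following Table II. If detector 0 fires, the other
# four detectors are ignored and the center region 00 is reported.
CODEBOOK = {            # (d4, d3, d2, d1) -> region
    (1, 0, 0, 1): "01",
    (0, 0, 0, 1): "02",
    (0, 0, 1, 1): "03",
    (0, 0, 1, 0): "04",
    (0, 1, 1, 0): "05",
    (0, 1, 0, 0): "06",
    (1, 1, 0, 0): "07",
    (1, 0, 0, 0): "08",
}

def decode(d4, d3, d2, d1, d0):
    """Return the triggered visible region for one sensor module."""
    if d0 == 1:                            # detector 0 covers the center
        return "00"
    return CODEBOOK.get((d4, d3, d2, d1))  # None if no region matches

print(decode(0, 0, 1, 1, 0))  # region "03"
print(decode(1, 0, 0, 1, 1))  # detector 0 wins -> "00"
```

A code outside the table (e.g. three adjacent detectors firing) decodes to `None`, which a caller could treat as an invalid or multi-target observation.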

Fig. 1. Sensor detector (thermal energy → Fresnel lens → dual-element pyroelectric detector → amplifier → comparator → output).

Fig. 3. Visible regions and region indicators.

C. Layout of the System

Four sensor modules are attached to the ceiling of the monitored field. To cover as much area as possible, the detection areas of the four sensor modules overlap only at a single point. As a result, the monitored field is divided into 36 visible regions (9 per sensor module), some of which overlap, as shown in Fig. 4. The positions of the 36 region indicators are obtained from the values listed in Table III plus the offset of each module.

Visible   Detector   Detector   Detector   Detector   Detector
Region    4          3          2          1          0
01        1          0          0          1          0
02        0          0          0          1          0
03        0          0          1          1          0
04        0          0          1          0          0
05        0          1          1          0          0
06        0          1          0          0          0
07        1          1          0          0          0
08        1          0          0          0          0
00        0 or 1     0 or 1     0 or 1     0 or 1     1

III. LOCALIZATION ALGORITHM

Define zk as the target's estimated position (xk, yk) at measurement time tk. The choice of localization algorithm depends on the number of visible regions triggered simultaneously:
1) One region: the position of the region indicator is taken as zk;
2) Two regions: the midpoint of the two region indicators is taken as zk, as shown in Fig. 5(a);
3) Three regions: a trilateration algorithm, which assumes the distances between zk and the three region indicators are equal, is employed to calculate zk, as shown in Fig. 5(b);
4) Four regions: a Maximum Likelihood Estimation algorithm is used to estimate zk, as shown in Fig. 5(c).
The localization result zk is used as the measurement of the target's position in the tracking algorithm.
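The four cases above can be sketched in code. The paper does not spell out its Maximum Likelihood Estimation step for the four-region case, so a least-squares solution of the linearized equal-distance equations is used here as a stand-in (an assumption, not the authors' exact method); for three regions the same equations yield the exact equidistant point (the circumcenter):

```python
# Sketch of the localization rules in Section III.
def localize(indicators):
    """Estimate z_k = (x_k, y_k) from the triggered region indicators."""
    p = [(float(x), float(y)) for x, y in indicators]
    n = len(p)
    if n == 1:                               # one region: the indicator itself
        return p[0]
    if n == 2:                               # two regions: midpoint
        return ((p[0][0] + p[1][0]) / 2, (p[0][1] + p[1][1]) / 2)
    # Three or more regions: the point equidistant from all indicators.
    # |z-p_1|^2 = |z-p_i|^2 linearizes to 2(p_i - p_1).z = |p_i|^2 - |p_1|^2.
    x1, y1 = p[0]
    rows = [(2 * (xi - x1), 2 * (yi - y1), xi**2 + yi**2 - x1**2 - y1**2)
            for xi, yi in p[1:]]
    # Solve the normal equations (A^T A) z = A^T b with Cramer's rule:
    # exact circumcenter for 3 points, least-squares estimate for 4.
    saa = sum(a * a for a, _, _ in rows); sab = sum(a * b for a, b, _ in rows)
    sbb = sum(b * b for _, b, _ in rows)
    sac = sum(a * c for a, _, c in rows); sbc = sum(b * c for _, b, c in rows)
    det = saa * sbb - sab * sab
    return ((sac * sbb - sbc * sab) / det, (saa * sbc - sab * sac) / det)
```

For example, `localize([(0, 0), (2, 0), (0, 2)])` returns (1.0, 1.0), the point equidistant from all three indicators.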

It can be seen that the angular resolution of the visibility coding scheme is π/4, and the whole detection area of the sensor module is divided into 9 visible regions. Each region is assigned an indicator, located near the center of the region. The sensor module is assumed to be installed on the ceiling, 3 m above the ground. The position and angle of each region indicator are illustrated in Fig. 3, and their values are listed in Table III.
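The indicator positions in Table III are consistent with this geometry: with the module h = 3 m above the floor, an indicator at vertical angle γ lies at ground range r = h·tan γ, and the horizontal angle α appears to be measured clockwise from the +Y axis. The following check reconstructs the table values; this derivation is inferred from the numbers, not stated in the paper:

```python
# Reconstruct the region-indicator positions of Table III from the geometry:
# ground range r = h * tan(gamma), with alpha measured from the +Y axis,
# so x = r * sin(alpha) and y = r * cos(alpha). Inferred, not from the paper.
from math import radians, sin, cos, tan

H = 3.0  # assumed ceiling height in meters (stated in Section II)

def indicator_position(alpha_deg, gamma_deg, h=H):
    r = h * tan(radians(gamma_deg))   # ground distance from the module center
    a = radians(alpha_deg)
    return (round(r * sin(a), 2), round(r * cos(a), 2))

# Regions 01..08 all use gamma = 28 deg and alpha = 22.5 + k * 45 deg.
for k in range(8):
    print(f"0{k + 1}:", indicator_position(22.5 + 45 * k, 28.0))
```

Running this reproduces every row of Table III, e.g. region 01 at (0.61, 1.47) and region 05 at (-0.61, -1.47).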


Fig. 2. Sensor module (3D view and top view), showing the horizontal detection range φ and the 0.02 m radius of each Fresnel lens array.