Sensors (Article)

Hazardous Object Detection by Using Kinect Sensor in a Handle-Type Electric Wheelchair

Jeyeon Kim 1,*, Takaaki Hasegawa 2 and Yuta Sakamoto 3

1 Department of Creative Engineering, National Institute of Technology, Tsuruoka College, Tsuruoka, Yamagata 997-8511, Japan
2 Division of Mathematics, Electronics and Informatics, Graduate School of Science and Engineering, Saitama University, Saitama 338-8570, Japan; [email protected]
3 Takaoka Toko Co., Ltd., Tokyo 110-0005, Japan; [email protected]
* Correspondence: [email protected]; Tel.: +81-235-25-9038

Academic Editor: Yasufumi Enami

Received: 8 October 2017; Accepted: 13 December 2017; Published: 18 December 2017

Abstract: To ensure the safety of a handle-type electric wheelchair (hereinafter, electric wheelchair), this paper examines the applicability of a Kinect sensor. Ensuring the mobility of elderly people is a particularly important issue to be resolved, and an electric wheelchair is useful as a means of transportation for elderly people. Considering that the users of electric wheelchairs are elderly people, it is important to ensure the safety of electric wheelchairs at night. To this end, we constructed a hazardous object detection system using commercially available and inexpensive Kinect sensors and examined the applicability of the system. We examined the warning timing with consideration of the cognition, judgment, and operation time of elderly people, and determined a hazardous object detection area based on it. Furthermore, detection of static and dynamic hazardous objects was carried out at night, and the results showed that the system can detect them with high accuracy. We also conducted experiments on dynamic hazardous object detection during the daytime. These results show that the system is applicable to ensuring the safety of the handle-type electric wheelchair.

Keywords: handle-type electric wheelchair; mobility; elderly people; Kinect

1. Introduction

In a rapidly aging society, various social problems have arisen related to medical care, welfare, pensions, and ensuring the mobility of elderly people [1,2]. The latter is a particularly important issue to be resolved, and studies on ensuring the mobility of elderly people are actively conducted [3,4]. From the perspective of ensuring the vitality of an aging society, as shown in Figure 1, it is important to ensure the autonomous (self-supportable) mobility of elderly people [5]. A handle-type electric wheelchair (hereinafter, electric wheelchair) is useful as a means of transportation for elderly people [6,7]. About 15,000 electric wheelchairs are sold each year in Japan [8]. The user of an electric wheelchair (maximum speed 6 km/h) is a pedestrian under the Road Traffic Law in Japan. Elderly people use electric wheelchairs for shopping, hospital visits, participation in communities, walks, and so on. Indeed, Reference [9] indicated that elderly people can achieve self-supportable mobility using electric wheelchairs, which underpins improvement in their quality of life (QoL). Nevertheless, falling accidents involving electric wheelchairs occur frequently on stairs, curbs, irrigation canals, and so on [10,11]. Many accidents occur during the daytime, when elderly people move the most, but many accidents also occur at night, when they can easily result in severe injury and death [11]. When elderly people go out during the day and come home late, night travelling

Sensors 2017, 17, 2936; doi:10.3390/s17122936

www.mdpi.com/journal/sensors


is dangerous because the visibility of elderly people becomes worse at night. From these facts, it is an important task to ensure the safety of electric wheelchairs for use by elderly people.


Figure 1. Ensuring the self-supportable mobility of elderly people and the vitality of society.

To prevent falling accidents of electric wheelchairs, studies of hazardous object detection have been actively conducted [12–21]. Hazardous object detecting sensors in electric wheelchairs are classifiable into active type, passive type, and combined type. The active type includes laser range finders, ultrasonic waves, and range image sensors (Time-Of-Flight type and Structured-Light type). The passive type sensors include stereovision and monocular cameras. Combined type sensors include a laser range finder and stereovision, monocular cameras and ultrasonic waves, and so on. Considering that electric wheelchair users are elderly people, safe driving support from evening to night (awareness enhancement) is an important subject. However, few reports describe hazardous object detection using commercially available and inexpensive equipment in electric wheelchairs.

To prevent falling accidents involving handle-type electric wheelchairs, we studied the applicability of commercially available and inexpensive Kinect sensors. Specifically, we constructed a hazardous object detection system using Kinect sensors, detected hazardous objects outdoors, and demonstrated the system's effectiveness. The composition of this paper is as follows. Section 2 presents related research on hazardous object detection in electric wheelchairs. A hazardous object detection system using Kinect is proposed in Section 3. The performance of static and dynamic hazardous object detection during nighttime is evaluated in Section 4, and the performance of dynamic hazardous object detection in the daytime is shown in Section 5. Finally, Section 6 presents conclusions.

2. Related Works

We describe related studies of hazardous object detection. Furthermore, 'resolution' in this paper means the ability to detect hazardous objects (height ±0.05 m or more) within the hazardous object detection area described in Section 3. As explained earlier, hazardous object detecting sensors are classifiable into active type, passive type, and combined type.

First, the active type will be described. The active type includes laser range finders, ultrasonic waves, and range image sensors (Time-Of-Flight type and Structured-Light type). The laser range finder can acquire two-dimensional information by scanning with a laser. It is possible to detect a hazardous object with high accuracy during day or night on the scanning plane. In an earlier report of the relevant literature [12], the authors used a laser range finder to detect curbstones and to navigate. However, it is difficult to detect obstacles above and below the scanning plane. Moreover, these sensors are expensive. Time-Of-Flight (TOF) type and structured-light type sensors are range image sensors.


The TOF type range image sensor irradiates light from a light source onto an object, estimates the time until the reflected light returns for every pixel, and acquires three-dimensional information using the estimated time. Another report [13] describes obstacle detection by an indoor rescue robot. The TOF type range image sensor has resolution capable of detecting hazardous objects, but it is expensive because it estimates the time for each pixel. Additionally, it is difficult to use outdoors in the daytime, when there is much infrared ray noise.

The structured-light type sensor projects a specifically patterned light from the projector onto the object. Then the pattern is photographed by a pair of cameras, and the irradiated light pattern and the photographed light pattern are correlated [14]. Furthermore, three-dimensional information up to the object is obtained by triangulation, as in stereovision. Nevertheless, as with the TOF type, such sensors are difficult to use outdoors during daytime, when there is much infrared ray noise.

Ultrasonic sensors estimate the distance using the time until the ultrasonic wave emitted from the sensor is reflected by the object. In one report from the relevant literature [15], navigation such as passing through a door and running along a wall was performed using an ultrasonic sensor. Because ultrasonic sensors are inexpensive and compact, they are used frequently, and they are useful day and night. However, because their resolution is low, it is difficult to detect small steps, which are hazardous objects for electric wheelchairs.

Next, passive type sensors include stereovision and monocular cameras. Stereovision acquires three-dimensional information by photographing the same object from two viewpoints. In an earlier report of the literature [16], obstacles and steps were detected using stereovision. Stereovision is inexpensive; moreover, the obtained image has high resolution.
However, it is difficult to detect objects at night when the illumination environment is bad. Ulrich et al. used a monocular camera to detect obstacles indoors. The system converts color images to HSI (Hue, Saturation, and Intensity), creates histograms of reference regions, compares the surroundings with the reference region, and detects obstacles [17]. In a method using a monocular camera, calibration is easy because one camera is used, and mounting is easy because the size is small. However, it is difficult to use at night when the lighting environment is bad.

Finally, we describe a combined type using both active type and passive type for detecting hazardous objects. Murarka et al. used a laser range finder and stereovision to detect obstacles [18]. The laser range finder detects obstacles on the 2D plane while stereovision detects obstacles in 3D space, and a safety map is built. By combining the laser range finder and stereovision, the difficulty in detecting obstacles above and below the plane of the laser beam, which is a shortcoming of the laser range finder, was resolved. However, the laser range finder is expensive. As described in an earlier report of the relevant literature [19], 12 ultrasonic sensors and 2 monocular cameras were installed in an electric wheelchair to detect hazardous objects. Although these sensors are inexpensive, it is difficult to use monocular cameras when the illumination environment is bad. Moreover, it is difficult to detect hazardous objects in electric wheelchairs because the ultrasonic sensor resolution is low. Furthermore, the necessity of using many sensors requires much time for installation and calibration.

Next, hazardous object detection using the Kinect game sensor, which has multiple sensors in one device, will be described. In an earlier report from a study [20], the Kinect sensor was installed on a white cane for visually impaired people.
Stairs and chairs are detected using range image data obtained from the range image sensor. Then, in another report [21], navigation is performed indoors while avoiding obstacles using a laser scanner, ultrasound, and Kinect. However, it takes time to install and calibrate the sensors because multiple sensors are used. Reference [22] examined obstacle detection (convex portion only) using Kinect v2 during the daytime and presented the possibility of obstacle detection by Kinect v2. However, no detailed explanation is available for the detectable range or the detection accuracy of obstacles by Kinect v2 during the daytime. Additionally, it is difficult to detect hazardous objects that are concave areas because the installation position (height) of Kinect v2 is low. When the installation position of the Kinect is low, the object might be estimated as higher than its actual height, and the system might then not judge it as a hazardous object. It is also difficult to detect hazardous objects without approaching them.

To ensure the safety of electric wheelchairs using Kinect outdoors, it is necessary to detect static and dynamic hazardous objects using a commercially available and inexpensive sensor. However, few reports describe the performance evaluation of static (convex and concave portion) and dynamic (pedestrian) object detection using Kinect. As described in this paper, we assess the applicability of the commercially available and inexpensive Kinect sensor as a sensor for detecting hazardous objects to prevent falling accidents of electric wheelchairs.

3. Hazardous Object Detection System by Using Kinect Sensor

3.1. Hazardous Object Detection System
A handle-type handle-typeelectric electricwheelchair, wheelchair, often used in Japan, an effective candidate as a A often used in Japan, is aniseffective candidate for usefor as use a means means of transportation for elderly people. To prevent falling accidents of handle-type electric of transportation for elderly people. To prevent falling accidents of handle-type electric wheelchairs on wheelchairs on stairs, curbstones, canals, etc., the in system presented instatic this paper detects stairs, curbstones, irrigation canals, irrigation etc., the system presented this paper detects and dynamic static and objects dynamic hazardous objects using Theansystem not control anpresents electric hazardous using Kinect. The system does Kinect. not control electricdoes wheelchair, but only wheelchair, only presents warnings to usersin(elderly as presented inobjects, Figure 2. Tosystem detect warnings to but users (elderly people) as presented Figure people) 2. To detect hazardous this hazardous objects, this system a Kinect rangeand image and uses an RGB uses a Kinect range image sensoruses during nighttime usessensor an RGBduring Kinectnighttime camera during the daytime. Kinect camera during the daytime. Then, the Kinect sensor is installed at a height (0.84 m) that does Then, the Kinect sensor is installed at a height (0.84 m) that does not disturb the user’s field of view. not disturb the user’s field of view. Five elderly people actually rode in the electric wheelchair. The Five elderly people actually rode in the electric wheelchair. The heights of the five elderly people were heights of elderly people were cm. Ato questionnaire surveyasking was administered to the 155–173 cm.the A five questionnaire survey was155–173 administered the elderly person whether the Kinect elderly person asking whether the Kinect sensor disturbs the front view or not. Based on the result, sensor disturbs the front view or not. 
Based on the result, we obtained an answer that the view was we obtained not disturbed.an answer that the view was not disturbed.

Figure 2. Image of the system.

Hazardous objects for electric wheelchairs include static and dynamic hazardous objects. Static hazardous objects, which do not move with respect to the surrounding environment, include curbstones, grooves, and irrigation canals. The static hazardous object size used in this study is decided based on Japan Industrial Standards (JIS) [5] and ISO 7176 [6] for the handle-type electric wheelchair (Figure 3). The convex portion of the static hazardous object is defined as an object with height of 0.05 m or more above the surroundings, as shown in Figure 3a. Then, the concave area of the static hazardous object is defined as an object for which all three of the following conditions are satisfied: height of −0.05 m or less, width of 0.10 m or more, and depth of 0.10 m or more, as portrayed in Figure 3b. In the case of the convex portion, the system judges it as a hazardous object when the difference between adjacent estimated values is +0.05 m or more. Moreover, in the case of a concave area, the system judges a hazardous object as one with object size satisfying all three conditions above (height −0.05 m or less, width 0.1 m or more, and depth 0.1 m or more). Furthermore, we assume that a dynamic hazardous object moves with respect to the surrounding environment and that it is in contact with the ground. Pedestrians, for example, are dynamic hazardous objects.
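As an illustration, the size conditions above map directly onto a small classification rule. The following is a sketch of our own (the function and constant names are ours, not from the paper's implementation), with all quantities in metres:

```python
# Thresholds from Section 3.1, based on the JIS / ISO 7176 hazard sizes (metres).
CONVEX_HEIGHT = 0.05      # convex: at least 0.05 m above the surroundings
CONCAVE_HEIGHT = -0.05    # concave: at least 0.05 m below the surroundings
MIN_CONCAVE_WIDTH = 0.10  # minimum width of a hazardous concave region
MIN_CONCAVE_DEPTH = 0.10  # minimum depth (extent along travel) of the region

def classify_static_hazard(height_diff, width=None, depth=None):
    """Return 'convex', 'concave', or None for a measured region.

    height_diff: estimated height relative to the surroundings (m);
    width/depth: horizontal extent of a candidate concave region (m).
    """
    if height_diff >= CONVEX_HEIGHT:
        return "convex"
    if (height_diff <= CONCAVE_HEIGHT
            and width is not None and width >= MIN_CONCAVE_WIDTH
            and depth is not None and depth >= MIN_CONCAVE_DEPTH):
        return "concave"
    return None
```

Note that a drop of 0.05 m or more that is too narrow to satisfy the width and depth conditions is not judged hazardous, matching the three-condition definition above.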


Figure 3. Static hazardous object: (a) convex hazardous object; (b) concave hazardous object.

3.2. Hazardous Object Detection Area

Here, the warning timing for presenting a warning to the user (elderly person) is determined along with the hazardous object detection area. Time-to-Collision (TTC) [23,24] is used to assess the severity of traffic conflicts in this study. TTC is the time until a collision with an object (hazardous object) or an accident occurs when traveling at the current speed without changing the traveling direction. Because the hazardous object detection system is aimed at preventing falling accidents, TTC is set as the severity measure of traffic conflicts in this study. Furthermore, the warning timing and the hazardous object detection area are decided using TTC.

The warning timing is a time at which the system can give a warning to the user (elderly person) such that the user can stop safely. It is decided based on the recognition, judgment, and operation time of elderly people. The selection reaction time is used for the recognition and judgment time of elderly people. According to a report from an earlier study [25], because the average selection reaction time of an elderly person is 0.7 s and the standard deviation is 0.13 s, 0.83 s is taken as the recognition and judgment time. When traveling for 0.83 s (recognition and judgment time) at the maximum speed (6 km/h) of the electric wheelchair, the traveling distance is about 1.38 m. The operation time is found based on the braking distance of the electric wheelchair, as defined in the JIS guideline [5]. According to JIS, the electric wheelchair must be able to come to a stop within 1.5 m on a flat road; the braking distance of many commercially available electric wheelchairs is within 1.2–1.3 m [10]. Traveling 1.5 m at the maximum speed (6 km/h) takes 0.9 s, so the operation time is 0.9 s. The warning timing shall therefore be 1.73 s, from the recognition, judgment, and operation time of elderly people. Setting the warning timing at 1.73 s when using the inexpensive Kinect can reduce the risk posed by electric wheelchairs.
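The timing arithmetic above can be checked with a short script. This is our own illustrative sketch (the variable names are not from the paper); it reproduces the 0.83 s + 0.9 s = 1.73 s warning timing and the 2.88 m distance that underlies the detection area:

```python
# Warning-timing arithmetic of Section 3.2 (illustrative sketch).
V_MAX = 6 / 3.6              # maximum wheelchair speed: 6 km/h in m/s
REACTION_TIME = 0.7 + 0.13   # mean selection reaction time + 1 s.d. = 0.83 s
BRAKING_DIST = 1.5           # m, JIS stopping requirement on a flat road

reaction_dist = V_MAX * REACTION_TIME            # distance covered while reacting, ~1.38 m
operation_time = BRAKING_DIST / V_MAX            # time to cover the braking distance, 0.9 s
warning_timing = REACTION_TIME + operation_time  # TTC threshold, 1.73 s
detection_depth = reaction_dist + BRAKING_DIST   # ~2.88 m of forward clearance needed
```

Adding the 0.12 m margin to `detection_depth` gives the 3 m front side of the static hazardous object detection area.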
Based on the results described above, the front side of the static hazardous object detection area is set to 3 m (2.88 m + a margin of 0.12 m). The width shall be 0.7 m, which is the electric wheelchair width according to JIS. The detection range of dynamic hazardous objects takes into account the relative speed of the electric wheelchair and dynamic hazardous objects.

4. Performance Evaluation in Nighttime

4.1. Hazardous Object Detection Method

We will describe the relative position acquisition method from the electric wheelchair to the object, measures against error caused by changes in road gradient, and the process flow of the hazardous object detection.

Figure 4 presents the estimation method of the relative position from the electric wheelchair to the object. Here, the height of the Kinect (Hk) is known. The system obtains range image data (D) from the Kinect and calculates the distance (Dk) from the Kinect to the object using Equation (1). Then, the system calculates Dy, Dz, and Dx using Equations (2)–(4), respectively. Here, Dx and Dy denote the distance in the x direction and the distance in the y direction from the electric wheelchair to the object, respectively. Moreover, Dz is the object height. When the object size meets the size of a hazardous object, as described in Section 3.1, it is judged as a hazardous object.

Dk = D / cos θ1 (1)

Dy = Dk sin θ2 (2)

Dz = Hk − Dk cos θ2 (3)

Dx = Dy tan θ3 (4)

Figure 4. Method of estimating the relative position from the electric wheelchair to the hazardous object: (a) method of estimating Dy and Dz; (b) method of estimating Dx.
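As a concrete sketch, Equations (1)–(4) can be implemented as follows. This is our own illustrative code, not the authors' implementation; in particular, we read Equation (2) as Dy = Dk sin θ2 so that the horizontal component (Dk sin θ2) and the vertical component (Dk cos θ2) used in Equation (3) are geometrically consistent, and we assume the angle conventions of Figure 4 with angles in radians:

```python
import math

def relative_position(D, theta1, theta2, theta3, Hk=0.84):
    """Estimate (Dx, Dy, Dz) from one Kinect range-image reading.

    D:  depth value from the range image
    Hk: Kinect mounting height (0.84 m in this system)
    theta1..theta3: angles of Figure 4, in radians
    """
    Dk = D / math.cos(theta1)        # (1) distance from Kinect to object
    Dy = Dk * math.sin(theta2)       # (2) forward distance on the ground
    Dz = Hk - Dk * math.cos(theta2)  # (3) object height; negative = concave
    Dx = Dy * math.tan(theta3)       # (4) lateral distance
    return Dx, Dy, Dz
```

A useful sanity check: for a point on flat ground, Dk cos θ2 equals Hk, so Dz evaluates to 0.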

We now describe measures taken against error caused by changes in road gradient. Here, the reference point (height 0 m) is just under the Kinect. Assume that there is a slight slope that the electric wheelchair can pass through, as shown in Figure 5. When the system judges hazardous objects using the difference between the height of the reference point and the height of an estimation point, the greater the distance between the reference point and the estimation point, the larger the difference between them becomes, as shown in Figure 5a. In spite of the slope being one through which the electric wheelchair can pass, the system judges it as a hazardous object (concave area). To resolve this difficulty, the system judges hazardous objects using the height difference between adjacent estimation points, as shown in Figure 5b.

Figure 5. Measures against error caused by a change of road gradient conversion. (a) Before measurement; (b) After measurement.

The process flow of hazardous object detection at night occurs as explained below. First, the system acquires the height information of estimation points in the hazardous object detection area using the range image obtained from the Kinect. Then the height difference between adjacent estimation points in each column direction of the range image is acquired. Next, when an estimation point with a height difference of +0.05 m or more is detected, the system judges the object as a convex hazardous object. Furthermore, when an estimation point with a height difference of −0.05 m or less is detected, a region where the depth of the concave portion is 0.1 m or more and the width is 0.1 m or more is detected. If all three conditions are satisfied, the system judges the object as a concave hazardous object. Finally, the position of the estimation point closest to the sensor in this region is inferred as the position of the hazardous object.

Figure 6 shows the installation and the hazardous object detection area. The installed Kinect sensor height is 0.84 m. The Pan, Swing and Tilt of the installation angle are, respectively, 0 degrees, 0 degrees, and −26 degrees. The detection area of a static hazardous object, determined by the installation angle and the angle of view of the Kinect sensor, extends from 1.2 m to 3.0 m in front of the sensor, as shown in Figure 6.
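The judgment steps above can be sketched for a single column of estimation points. This is an illustrative reconstruction, not the authors' implementation: `judge_column` is a hypothetical helper name, and the 0.1 m width check across neighbouring columns described in the text is omitted for brevity.

```python
import numpy as np

STEP_UP = 0.05      # convex judgment threshold [m]
STEP_DOWN = -0.05   # concave judgment threshold [m]
MIN_DEPTH = 0.10    # required depth of a concave region [m]

def judge_column(heights):
    """Night-time judgment sketch for one range-image column.

    `heights` holds estimation-point heights ordered from near to far.
    Returns (index, kind) of the first hazardous estimation point, i.e.
    the one closest to the sensor, or (None, None) if the column is safe.
    """
    diffs = np.diff(heights)
    for i, d in enumerate(diffs):
        if d >= STEP_UP:            # step up: convex hazardous object
            return i + 1, "convex"
        if d <= STEP_DOWN:          # step down: candidate concave region
            # Depth relative to the surface just before the drop.
            depth = heights[i] - heights[i + 1:].min()
            if depth >= MIN_DEPTH:
                return i + 1, "concave"
    return None, None

# A 0.12 m deep gutter starting three points from the sensor:
idx, kind = judge_column(np.array([0.0, 0.0, 0.0, -0.12, -0.12, 0.0]))
print(idx, kind)   # -> 3 concave
```

Because the scan runs from near to far and returns at the first hit, the reported position is the estimation point closest to the sensor, matching the final step of the process flow.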

Figure 6. Installation of the Kinect, and the static hazardous object detection area.

4.2. Experimental Method

The electric wheelchair used for this study is an ET4E (Suzuki) [26]. The Kinect sensor (version 1) specifications are presented in Table 1. The static hazardous objects are formed as convex portions and concave portions of an interlocking block (Figure 7). The dynamic hazardous objects are pedestrians in this study.

Table 1. Specifications of Kinect v1.

Model: Kinect Version 1
Resolution of RGB camera: 640 × 480
Resolution of range image sensor: 320 × 240
Range of range image sensor: 0.8~4.0 m
Angle of view (horizontal): 57 degrees
Angle of view (vertical): 43 degrees

Figure 7. Static hazardous objects in experiments. (a) Convex hazardous object; (b) Concave hazardous object.

The experimental method is shown in Figure 8. In the static hazardous object detection experiment, the electric wheelchair runs straight at 4 km/h toward the hazardous object (convex and concave hazardous objects) installed 4 m ahead of the start point. Additionally, a pedestrian, a dynamic hazardous object, approached from a position 6.0 m away (front and diagonal 45 degrees). Then, the Kinect equipped on the electric wheelchair records range image data while moving. The system does not distinguish between a static hazardous object and a dynamic hazardous object, and judges only whether or not it is a hazardous object in this paper. The performance evaluation is conducted by classifying cases of static and dynamic hazardous objects separately.

Figure 8. Experimental methodology.

The method of obtaining the true value (position) of the electric wheelchair and the dynamic hazardous object (pedestrian) is now explained. For accurate performance evaluation of the hazardous object detection system, it is necessary to acquire the precise positions of the electric wheelchair and the dynamic hazardous object. Because the static hazardous objects are fixed, the position of the static hazardous objects is not acquired. It is assumed that the electric wheelchair and the dynamic hazardous object move in a straight line. Furthermore, the true value of the dynamic hazardous object is defined as the torso position of the pedestrian: in cases of collisions between an electric wheelchair and a pedestrian, the torso position is the criterion for determining whether or not the two will come into contact. Figure 9 shows the method of acquiring the relative positions of the static and dynamic hazardous objects from the electric wheelchair. Camera A and Camera B are the true value acquisition cameras for the performance evaluation experiments: Camera A is for the true value acquisition of the electric wheelchair, while Camera B is for the true value acquisition of the dynamic hazardous object. The Kinect is for the performance evaluation experiments. Furthermore, Camera A and Camera B are time-synchronized with the Kinect. Camera A records a measuring tape on the road surface, and the moving distance (de) from the Start Point to the electric wheelchair in each frame of the video is obtained by visual observation. Then, the relative position from the electric wheelchair to the fixed static hazardous object is obtained. Figure 10 depicts the installation of Camera A. Next, the method of acquiring the relative position of the dynamic hazardous object (pedestrian) from the electric wheelchair is described. Acquisition of the position of the electric wheelchair is the same as described above. Camera B photographs the dynamic hazardous object, and the angle (θ) is calculated from the coordinates of the dynamic hazardous object on the image inputted from Camera B. The position (dp) of the dynamic hazardous object from the Center Point is calculated using Equation (5). Then, the relative position (dr) from the electric wheelchair to the dynamic hazardous object is calculated using Equation (6).

dp = D tan θ (5)

dr = 6 − de + dp (6)

Here, D is the distance from the Center Point to Camera B.
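Equations (5) and (6) translate directly into code. The sketch below uses a hypothetical function name; the constant 6 is the pedestrian's 6.0 m starting distance from the experimental setup.

```python
import math

def pedestrian_relative_position(theta_deg, D, d_e):
    """Relative distance from the electric wheelchair to the pedestrian.

    theta_deg : angle θ measured from Camera B's image [degrees]
    D         : distance from the Center Point to Camera B [m]
    d_e       : distance the wheelchair has travelled from the Start Point [m]
    """
    d_p = D * math.tan(math.radians(theta_deg))  # Equation (5)
    d_r = 6 - d_e + d_p                          # Equation (6)
    return d_p, d_r

# Pedestrian on the Center Point line (θ = 0) after the wheelchair moved 2 m:
print(pedestrian_relative_position(0.0, 3.0, 2.0))  # -> (0.0, 4.0)
```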

Figure 9. Method of acquiring the relative position of static and dynamic hazardous objects from the electric wheelchair.

Figure 10. Installation of Camera A.

The acquisition range of the position of the static hazardous object from the electric wheelchair is within 1.2 m to 4.0 m in front of the electric wheelchair. In the case of the dynamic hazardous object (pedestrian), it is from 1.2 m ahead to 6.0 m. In addition, the results obtained when the system estimates the relative position of the hazardous objects from the electric wheelchair are taken as estimated values. Five trials were used for each hazardous object experiment. The experiment location is Saitama University, and experimental scenes during nighttime are shown in Figure 11.

The performance criteria are the detection rate and the estimation error in the hazardous object detection. Judgment of hazardous objects in the hazardous object detection area is done for each frame. In the case of the detection of a static hazardous object, if it is judged that there is no hazardous object because there are no hazardous objects within 3 m from the Kinect sensor during the first 1 m after the departure of the electric wheelchair, we call it a True Negative. Moreover, if it is judged that there is a hazardous object in the remaining 3 m, as shown in Figure 8, we call it a True Positive. The detection rate (Accuracy) is the proportion of the sum of the number of frames of True Positive and True Negative to the total number of frames. Here, the Actual Positive is a case in which there is a hazardous object in the hazardous object detection area and the Actual Negative is a case in which there is no hazardous object in the hazardous object detection area. The static hazardous object detection area is 1.2 m to 3.0 m, and the dynamic hazardous object detection area is 1.2 m to 4.0 m. The estimation error is the difference between the above-mentioned true value and the estimated value for each frame in the hazardous object detection area.

Figure 11. Experimental scene during nighttime. (a) Static hazardous object; (b) Dynamic hazardous object.

4.3. Results

First, we examined the validity of the warning timing before evaluating the system performance. Driving experiments were conducted with students because driving experiments with the participation of elderly people were difficult. In these experiments, subjects stopped when an alarm sounded while driving at 6 km/h. Results of 20 trials demonstrated that all subjects were able to stop the electric wheelchair within 1.5 m.

Next, static hazardous object detection will be described. Tables 2 and 3 present the detection rates of convex hazardous objects and concave hazardous objects. The detection rate of convex hazardous objects was about 95% (516/541). The detection rate of the concave hazardous objects was also about 95% (514/540). The cause of false detection (False Negative and False Positive) was the estimation error that results from a change in the road surface slope and from vibration. These are the cases where the system judged that the hazardous object was not within 3 m although it was, and where the system judged that it was within 3 m although it was not. It is noteworthy that all such false detections occurred when hazardous objects were more than 2.7 m distant from the electric wheelchair.

Table 2. Detection rate of convex (nighttime).

       CP *   CN *
AP *    334     15
AN *     11    180

* AP, Actual Positive; AN, Actual Negative; CP, Classified Positive; CN, Classified Negative.

Table 3. Detection rate of concave (nighttime).

       CP *   CN *
AP *    324     25
AN *      0    192

* AP, Actual Positive; AN, Actual Negative; CP, Classified Positive; CN, Classified Negative.

Figure 12 presents the distribution of estimation errors of convex and concave objects. The standard deviation was about 0.02–0.03 m. The cause of the error is estimation error due to the change of inclination of the road surface and vibration. In addition, the electric wheelchair cannot run perfectly straight while moving. The average error in the y direction for convex objects is about 0.03 m, but the average error in the y direction for concave objects is +0.13 m. Figure 13 depicts the reason for the average error at the concave hazardous object. In the case of a convex portion, the estimated value is, in principle, at the same place as the true value (the red solid arrow) because the estimated value is used when the system judges it to be a hazardous object, as shown in Figure 13a. However, in the case of a concave portion, a difference between the estimated value and the true value occurs because the estimated value (the blue dotted line arrow) is used when the system judges it to be a hazardous object, as shown in Figure 13b. This is the main reason for the average error of the concave portion detection in the y direction.


Figure 12. Distribution of estimation error of static hazardous object. (a) Convex; (b) Concave.


Figure 13. Reason for the average error in the static hazardous object. (a) Convex; (b) Concave.

Finally, we describe experimentally obtained results of dynamic hazardous object detection. Tables 4 and 5 present detection rates of dynamic hazardous objects in the front direction and the diagonal direction. The detection rates (Accuracy) in the front direction and diagonal direction were, respectively, 96% (356/370) and 97% (477/490). Figure 14 presents an example of detection of a static hazardous object. Figure 14a is the range image data and Figure 14b is the detection result of a static hazardous object with a height of −0.07 m. A white circle in Figure 14b marks the place closest to the Kinect sensor. Here, sensor vibration is the main cause of False Positive and False Negative results: because of estimation error, the system determined that a hazardous object exists in the hazardous object detection area even when there is none.

Table 4. Detection rate of dynamic hazardous objects in the front direction (nighttime).

      CP    CN
AP    188     0
AN     14   168
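Accuracy as defined in Section 4.2 is the proportion of True Positive and True Negative frames among all frames. A quick check against the Table 4 counts (an illustrative sketch, not part of the original analysis):

```python
# Frame counts read from Table 4 (front direction, nighttime).
tp = 188   # Actual Positive classified Positive
fn = 0     # Actual Positive classified Negative
fp = 14    # Actual Negative classified Positive
tn = 168   # Actual Negative classified Negative

accuracy = (tp + tn) / (tp + fn + fp + tn)
print(f"{accuracy:.3f}")  # -> 0.962, the 96% (356/370) reported in the text
```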

Table 5. Detection rate of dynamic hazardous objects in the diagonal direction (nighttime).

      CP    CN
AP    224     5
AN      8   253

Figure 14. Example of the static hazardous object detection. (a) Range image data; (b) A detection result of a static hazardous object with a height of −0.07 m.


Figure 15 presents the distribution of estimation errors of a dynamic hazardous object in the front direction and the diagonal direction. The standard deviations were about 0.09–0.30 m. Figure 16 presents an example of the detection results of the left foot and the right foot. The distribution of the left foot and the distribution of the right foot are separated. The average error in the y direction is about −0.20 m, which means that the dynamic hazardous object was detected before the true position: the true position (Xt, Yt) of the dynamic hazardous object is immediately under the torso in this experiment, but the actual detected place (Xe, Ye) is the feet, as shown in Figure 17. It also includes errors due to vibration during running.


Figure 15. Estimation error in the dynamic hazardous object detection. (a) Front; (b) Diagonal.


Figure 16. Example of the dynamic hazardous object detection.

Figure 17. Reason for the average error in the dynamic hazardous object.

Travelling experiments were conducted on a slope. One result is presented in Figure 18. Figure 18a portrays the scenery of the slope and the travel direction given by the arrow. The left of Figure 18b depicts the inputted depth data from the Kinect. The right of Figure 18b displays the results of the hazardous object detection. The system distinguishes hazardous objects from the travelable slope. These results show that this system is effective for securing the safety of the electric wheelchair at night. However, it is necessary to perform the evaluation on detection of various hazardous objects under various environments.

(a) Scene of the slope

(b) Experiment results

Figure 18. Results of the hazardous object detection in driving experiments on a slope.



5. Hazardous Object Detection in Daylight

5.1. Hazardous Object Detection Method

Dynamic hazardous object detection by optical flow during the daytime is now described. It is assumed that the dynamic hazardous object (a pedestrian) is standing on the ground (0 m). The system obtains images from the Kinect RGB camera. Next, optical flow [27] is used to distinguish between the background and the dynamic hazardous object. The optical flow of the Point Of Interest (POI) is calculated using the gradient information of a total of 75 pixels: 5 pixels in the x-axis direction, 5 pixels in the y-axis direction, and 3 pixels in the time-axis direction of the image input from the Kinect. A dilation–erosion process is used to remove noise. The area containing the dynamic hazardous object is cut out from the input image using the optical flow. After binarization of the cut-out area, the edge of the dynamic hazardous object is extracted, and the image coordinates of the lowest end of the extracted edge are found. Figure 19 presents an example of dynamic hazardous object detection by optical flow: Figure 19a is an input image and Figure 19b is the detection result. Finally, the obtained image coordinates are projected onto the 0 m height plane (x–y plane) to estimate the position in real-world coordinates. The position (Dx, Dy) of the dynamic hazardous object is then estimated as shown in Figure 20. Here, Hk is the Kinect height, which is known.
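The POI flow computation described above can be sketched as follows. This is a minimal pure-NumPy illustration of the gradient (Lucas–Kanade [27]) method over a 5 × 5 × 3 window, not the authors' implementation; the function name, window half-width, and synthetic test pattern are our assumptions.

```python
import numpy as np

def lucas_kanade_poi(frames, x, y, half=2):
    # Estimate optical flow (u, v) at a single point of interest (POI)
    # from a 5x5 spatial window across 3 consecutive frames, mirroring
    # the 5 x 5 x 3 = 75-pixel gradient window described in the text.
    f = np.asarray(frames, dtype=float)      # shape (3, H, W)
    Iy, Ix = np.gradient(f[1])               # spatial gradients (rows, cols)
    It = (f[2] - f[0]) / 2.0                 # central temporal gradient
    ys = slice(y - half, y + half + 1)
    xs = slice(x - half, x + half + 1)
    A = np.stack([Ix[ys, xs].ravel(), Iy[ys, xs].ravel()], axis=1)
    b = -It[ys, xs].ravel()
    # Least-squares solve of the brightness-constancy system A @ (u, v) = b.
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic check: a pattern translating +1 pixel per frame along x.
cols = np.sin(0.5 * np.arange(32))
base = np.tile(cols, (32, 1))
frames = np.stack([np.roll(base, t, axis=1) for t in range(3)])
u, v = lucas_kanade_poi(frames, 16, 16)      # u ≈ 1.0, v ≈ 0.0
```

In practice the same solve is repeated at every POI of the input image; the least-squares formulation tolerates the rank-deficient case (no vertical texture) by returning the minimum-norm flow.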

(a) Inputted image (b) Result of the detection

Figure 19. Example of the dynamic hazardous object detection by optical flow.
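The dilation–erosion noise cleanup mentioned in Section 5.1 can be sketched in plain NumPy. This is an illustrative morphological closing on a binary flow mask; the 3 × 3 structuring element and the function names are our assumptions, not details from the paper.

```python
import numpy as np

def dilate(mask, it=1):
    # 3x3 binary dilation; outside the image is treated as background.
    m = mask.astype(bool)
    for _ in range(it):
        p = np.pad(m, 1)
        m = (p[:-2, :-2] | p[:-2, 1:-1] | p[:-2, 2:]
             | p[1:-1, :-2] | p[1:-1, 1:-1] | p[1:-1, 2:]
             | p[2:, :-2] | p[2:, 1:-1] | p[2:, 2:])
    return m

def erode(mask, it=1):
    # 3x3 binary erosion (all nine neighbours must be set).
    m = mask.astype(bool)
    for _ in range(it):
        p = np.pad(m, 1)
        m = (p[:-2, :-2] & p[:-2, 1:-1] & p[:-2, 2:]
             & p[1:-1, :-2] & p[1:-1, 1:-1] & p[1:-1, 2:]
             & p[2:, :-2] & p[2:, 1:-1] & p[2:, 2:])
    return m

def dilation_erosion(mask, it=1):
    # Closing: dilation fills small holes/gaps in the optical-flow mask,
    # and the following erosion restores the object's original extent.
    return erode(dilate(mask, it), it)

# A one-pixel hole inside a 5x5 blob is filled; the outline is unchanged.
m = np.zeros((9, 9), dtype=bool)
m[2:7, 2:7] = True
m[4, 4] = False
cleaned = dilation_erosion(m)
```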

Dx = Dy tan ϕ,
Dy = HK tan θ.

Figure 20. Method of estimating the position (Dx, Dy) of the dynamic hazardous object.
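The geometry of Figure 20 reduces to two tangent relations. The sketch below is an illustration under our reading of the figure (the function and parameter names are ours): θ is the angle from the vertical at which the foot pixel is seen, ϕ is the horizontal bearing, and the Kinect height HK is known.

```python
import math

def pedestrian_position(h_k, theta, phi):
    # Project the lowest edge pixel onto the 0 m ground plane:
    #   Dy = HK * tan(theta)   (forward distance)
    #   Dx = Dy * tan(phi)     (lateral offset)
    d_y = h_k * math.tan(theta)
    d_x = d_y * math.tan(phi)
    return d_x, d_y

# A foot seen straight ahead (phi = 0) at theta = atan(5.5) from a
# 1 m high sensor (an assumed height) lies 5.5 m in front.
dx, dy = pedestrian_position(1.0, math.atan(5.5), 0.0)
```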

5.2. Experimental Method

For each of five trials in the experiment, the pedestrian approaches from a position 6 m away (front and diagonal 45 degrees), as shown in Figure 8. The detection area of the dynamic hazardous object is 5.5 m ahead, considering the relative speed of the electric wheelchair (maximum speed 6 km/h) and the pedestrian (4 km/h). The Kinect mounted on the electric wheelchair then records RGB images (daytime) while moving. Figure 21 portrays the scene of the dynamic hazardous object detection experiment during the daytime.
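As a sanity check on the 5.5 m figure, the head-on closing speed and the implied warning margin can be computed directly. The helper names below are ours, and reading the detection distance as a time margin is an illustrative interpretation, not a statement from the paper.

```python
def closing_speed_ms(wheelchair_kmh, pedestrian_kmh):
    # Head-on approach: speeds add; convert km/h to m/s.
    return (wheelchair_kmh + pedestrian_kmh) / 3.6

def warning_margin_s(distance_m, wheelchair_kmh, pedestrian_kmh):
    # Time until the wheelchair and pedestrian would meet once the
    # pedestrian enters the detection area, if both hold their speeds.
    return distance_m / closing_speed_ms(wheelchair_kmh, pedestrian_kmh)

# 6 km/h + 4 km/h gives a closing speed of about 2.78 m/s, so the
# 5.5 m detection area leaves roughly 2 s before the two would meet.
margin = warning_margin_s(5.5, 6.0, 4.0)
```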


Figure 21. Experimental scene in daylight.

5.3. Results

The detection of dynamic hazardous objects during the daytime will now be described. The detection rates of dynamic hazardous objects are shown in Tables 6 and 7. The detection rate in the front direction was 83% (354/424), and the detection rate in the diagonal direction was 81% (403/498). The many False Negatives show that it is difficult to distinguish between the background and a dynamic object at a location distant from the electric wheelchair, because the difference between the optical flow of the background and that of the dynamic object is small as a result of image blurring.

Table 6. Detection rate of dynamic hazardous objects in the front direction (daylight).

      AP    AN
CP   316     7
CN    63    38

Table 7. Detection rate of dynamic hazardous objects in the diagonal direction (daylight).

      AP    AN
CP   353    12
CN    83    50
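Tables 6 and 7 use a CP/CN versus AP/AN confusion layout, and the detection rates quoted in the text are the correctly handled fraction of all samples. A small sketch (the argument names and our reading of the table cells are assumptions) reproduces the 83% and 81% figures:

```python
def detection_rate(ap_cp, an_cp, ap_cn, an_cn):
    # Correctly handled samples (AP counted as CP plus AN counted as CN)
    # divided by all samples in the table.
    correct = ap_cp + an_cn
    total = ap_cp + an_cp + ap_cn + an_cn
    return correct / total

front = detection_rate(316, 7, 63, 38)       # Table 6: 354/424 ≈ 0.83
diagonal = detection_rate(353, 12, 83, 50)   # Table 7: 403/498 ≈ 0.81
```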

Estimation error related to dynamic hazardous object detection will be described next. Figure 22 depicts the distribution of the estimation error in the front direction and the diagonal direction. The standard deviations were about 0.09–0.30 m. One cause of the error is that the true position of the dynamic hazardous object is immediately under the torso, as explained in Figure 17, whereas the place actually detected is the foot. Furthermore, it is difficult to detect objects using the RGB camera of the Kinect during the daytime because the image quality of the camera is inadequate. The maximum error was 1.55 m. Especially large estimation errors result from the detection of parts other than the feet, as shown in Figure 23 (circles in the figure). From these facts, it is necessary to improve the performance of dynamic hazardous object detection during the daytime.

(a) Front (b) Diagonal

Figure 22. Distribution of measurement error in the dynamic hazardous object.

Figure 23. An example of dynamic hazardous object detection in daylight.

6. Conclusions

This report has described the applicability of a commercially available Kinect sensor to ensuring the safety of a handle-type electric wheelchair. Considering that the actual users of electric wheelchairs are elderly people, it is important to ensure the safety of electric wheelchairs at night. To this end, we examined the warning timing and the hazardous object detection area while considering the cognition, judgment, and operation time of elderly people, and constructed a hazardous object detection system using Kinect sensors. The results of detecting static and dynamic hazardous objects outdoors demonstrated that this system can detect them with high accuracy. We also conducted experiments on dynamic hazardous object detection during the daytime. Together, these results show that this system is applicable to ensuring the safety of the handle-type electric wheelchair.

The results described above demonstrated that the system is useful for hazardous object detection with an electric wheelchair using a commercially available and inexpensive Kinect sensor. Furthermore, the system can reduce the risk associated with the use of electric wheelchairs and contribute to securing the autonomous mobility of elderly people. As future tasks, further performance evaluations on the detection of various hazardous objects under various environments should be undertaken, because only typical examples of convex and concave portions were evaluated. Additionally, the detection of static hazardous objects during the daytime and the improvement of dynamic hazardous object detection by tracking should be studied. Furthermore, a comparison with other methods of static hazardous object detection should be carried out.


Author Contributions: Jeyeon Kim and Takaaki Hasegawa conceived and designed the system; Jeyeon Kim and Yuta Sakamoto performed the experiments and contributed analysis tools; Jeyeon Kim, Takaaki Hasegawa and Yuta Sakamoto analyzed the data; Jeyeon Kim wrote the paper.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Philips. Healthcare Strategies for an Ageing Society; The Fourth Report in a Series of Four from the Economist Intelligence Unit; The Economist Intelligence Unit Limited: London, UK, 2009.
2. Cabinet Office, Government of Japan. White Paper on Aging Society. Available online: http://www8.cao.go.jp/kourei/whitepaper/index-w.html (accessed on 4 July 2017).
3. Kamata, M.; Shino, M. Mobility Devices for the Elderly: "Silver Vehicle" Feasibility. IATSS Res. 2006, 30, 52–59. [CrossRef]
4. Somenahalli, S.; Hayashi, Y.; Taylor, M.; Akiyama, T.; Adair, T.; Sawada, D. Accessible transportation and mobility issues of elderly. J. Sustain. Urban. Plan. Prog. 2016, 1, 1–13. [CrossRef]
5. Hasegawa, T. Sharing Economy and Approaches to Securing Mobility in Ageing Society with Fewer Children; IEICE Technical Report, ITS2017-10; IEICE: Toyama, Japan, 2017; pp. 49–52. (In Japanese)
6. Japanese Industrial Standards Committee. Electrically Powered Scooters; Japan Industrial Standard, JIS T9208:2009; Japanese Industrial Standards Committee: Tokyo, Japan, 2009. (In Japanese)
7. The International Organization for Standardization. Wheelchairs. ISO/TC 173/SC 1 Wheelchairs; ISO 7176-1:2014; The International Organization for Standardization: Geneva, Switzerland, 2014.
8. Masuzawa, T.; Minami, S. Current status and the future of electric wheel chairs in Japan. J. Hum. Environ. Stud. 2010, 8, 45–53. [CrossRef]
9. Metz, D.H. Mobility of older people and their quality of life. J. Transp. Policy 2000, 7, 149–152. [CrossRef]
10. National Institute of Technology and Evaluation. Accident Prevention by Handle-Type Electric Wheelchair. Available online: http://www.nite.go.jp/jiko/chuikanki/press/2010fy/100722.html (accessed on 6 July 2017). (In Japanese)
11. National Police Agency. Occurrence of Traffic Accident of Electric Wheelchair. Available online: https://www.npa.go.jp/koutsuu/kikaku12/ri_05jiko.pdf (accessed on 6 July 2017). (In Japanese)
12. Kim, S.-H.; Roh, C.-W.; Kang, S.-C.; Park, M.-Y. Outdoor Navigation of a Mobile Robot Using Differential GPS and Curb Detection. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Roma, Italy, 10–14 April 2007; pp. 3414–3419. [CrossRef]
13. Bostelman, R.; Hong, T.; Madhavan, R.; Weiss, B. 3D Range Imaging for Urban Search and Rescue Robotics Research. In Proceedings of the 2005 IEEE International Safety, Security and Rescue Robotics, Kobe, Japan, 6–9 June 2005; pp. 164–169. [CrossRef]
14. Rocchini, C.; Cignoni, P.; Montani, C.; Pingi, P.; Scopigno, R. A low cost 3D scanner based on structured light. Eurographics 2001, 20, 299–308. [CrossRef]
15. Levine, S.P.; Bell, D.A.; Jaros, L.A.; Simpson, R.C.; Koren, Y.; Borenstein, J. The NavChair assistive wheelchair navigation system. IEEE Trans. Rehabilit. Eng. 1999, 7, 443–451. [CrossRef]
16. Murarka, A.; Sridharan, M.; Kuipers, B. Detecting Obstacles and Drop-offs using Stereo and Motion Cues for Safe Local Motion. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Nice, France, 22–26 September 2008; pp. 702–708. [CrossRef]
17. Ulrich, W.; Nourbakhsh, I. Appearance-Based Obstacle Detection with Monocular Color Vision. In Proceedings of the AAAI National Conference on Artificial Intelligence, Austin, TX, USA, 30 July–3 August 2000; pp. 866–871.
18. Murarka, A.; Modayil, J.; Kuipers, B. Building Local Safety Maps for a Wheelchair Robot using Vision and Lasers. In Proceedings of the 3rd Canadian Conference on Computer and Robot Vision, Quebec City, QC, Canada, 7–9 June 2006; pp. 25–32.
19. Brooks, R.A. A robust layered control system for a mobile robot. IEEE J. Robot. Autom. 2003, 2, 14–23. [CrossRef]
20. Takizawa, H.; Yamaguchi, S.; Aoyagi, M.; Ezaki, N.; Mizuno, S. Kinect cane: An assistive system for the visually impaired based on the concept of object recognition aid. Pers. Ubiquitous Comput. 2015, 19, 955–965. [CrossRef]
21. Rockey, C.A.; Perko, E.M.; Newman, W.S. An evaluation of low-cost sensors for smart wheelchairs. In Proceedings of the IEEE International Conference on Automation Science and Engineering, Madison, WI, USA, 17–20 August 2013; pp. 249–254. [CrossRef]
22. Hernández-Aceituno, J.; Arnay, R.; Toledo, J.; Acosta, L. Using Kinect on an Autonomous Vehicle for Outdoors Obstacle Detection. IEEE Sens. J. 2016, 16, 3603–3610. [CrossRef]
23. Hayward, J.C. Near-miss determination through use of a scale of danger. In Proceedings of the 51st Annual Meeting of the Highway Research Board, Washington, DC, USA, 17–21 January 1972; pp. 24–34.
24. van der Horst, R.; Hogema, J. Time-to-Collision and Collision Avoidance Systems. In Proceedings of the 6th ICTCT Workshop, Salzburg, Austria, 27–29 October 1994; pp. 109–121.
25. Masataka, I. Research on Characteristic of Mental and Physical Functions of Elderly Driving; Report of Investigation Research in Japan Safe Driving Center; Japan Safe Driving Center: Tokyo, Japan, 1986. (In Japanese)
26. Suzuki ET4E. Available online: http://www.suzuki.co.jp/welfare/et4e/index.html (accessed on 6 July 2017).
27. Lucas, B.D.; Kanade, T. An iterative image registration technique with an application to stereo vision. In Proceedings of the 7th International Joint Conference on Artificial Intelligence, Vancouver, BC, Canada, 24–28 August 1981; pp. 674–679.

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
