
Three-Dimensional VLC Positioning Based on Angle Difference of Arrival With Arbitrary Tilting Angle of Receiver

Bingcheng Zhu, Julian Cheng, Senior Member, IEEE, Yongjin Wang, Jun Yan, and Jinyuan Wang

Abstract—Existing indoor positioning methods for visible light communication systems require a large database, powerful signal processing units, or additional sensors such as gyroscopes, or they require the receiver to be placed toward a certain angle. These assumptions limit the application of indoor positioning systems in low-cost scenarios. In this work, we propose a novel positioning framework based on the angle differences of arrival (ADOA) in three-dimensional coordinate systems, which can be used with receivers equipped with image sensors or photodiodes. The proposed ADOA positioning does not require the receiver to be placed toward a certain angle, and no additional sensor is required. Two positioning algorithms are proposed: one is based on the method of exhaustion (MEX), and the other is based on the least squares method (LSM). The MEX algorithm is analytically proved to be optimal, while the LSM algorithm has much lower complexity. Both upper and lower bounds are derived for the average discrepancy between the exact position and the estimated position; these performance bounds can facilitate the design of the light-emitting diode array. Experimental results show that the MEX algorithm achieves an average error of 3.20 cm with a time cost of 0.36 s, and the LSM algorithm achieves an average error of 14.66 cm with a time cost of 0.001 s.

Index Terms—Angle difference of arrival, angle of arrival, localisation, positioning, visible light communications.

Manuscript received February 22, 2017; revised July 15, 2017; accepted September 16, 2017. Bingcheng Zhu, Yongjin Wang, Jun Yan, and Jinyuan Wang are with Nanjing University of Posts and Telecommunications, Nanjing 210003, China (e-mails: {zbc, wangyj, yanj, jywang}@njupt.edu.cn). Julian Cheng is with the School of Engineering, The University of British Columbia, Kelowna, BC, Canada (e-mail: [email protected]). This work is supported by the National Natural Science Foundation of China (61322112, 61531166004, 61701254); NUPTSF (NY216008); the Young Elite Scientists Sponsorship Program by CAST (YESS20160042); the Natural Science Foundation of Jiangsu Province (BK20170901); the Funds for International Cooperation and Exchange of the National Natural Science Foundation of China (61720106003); and the open research fund of the National Mobile Communications Research Laboratory, Southeast University (2017D06).

I. INTRODUCTION

The global positioning system has been widely applied in outdoor positioning scenarios, but such a satellite-signal-based system can fail in indoor positioning due to high signal attenuation [1]. Indoor positioning systems have many applications such as location tracking, navigation in complicated scenarios, and robot movement control. Existing indoor positioning systems are based on radio-frequency (RF) electromagnetic waves [2], sound [3], [4], image sensors [5]–[7], light-emitting diodes (LED) and photodiodes (PD) [6], [8], or their combinations [9]–[13].

For low-frequency RF-based systems, such as those using the wireless local area network (WLAN) signal (2.4 GHz), the discrepancy exceeds 6 m in 50% of the time, and the positioning accuracy is highly related to the density of the routers [12]. Another RF positioning system is based on Bluetooth low energy (BLE) beacons, and it is more accurate and energy-efficient than the WLAN scheme: the average root-mean-square (RMS) error of BLE positioning is 3.8 m, while the RMS error of WLAN positioning is 5.2 m [14]. This accuracy level cannot be achieved through traditional trilateration because of multipath propagation and attenuation in indoor environments: low-frequency signals are reflected by walls and attenuated by human bodies. Instead, the most accurate positioning algorithms for BLE and WLAN are based on the generation and maintenance of a radio map, which is a time-consuming and labor-intensive task. Additionally, as the database grows larger, searching in the database becomes highly time-consuming. With ultra-wideband (UWB) signals, RF-based positioning can achieve approximately 1-meter accuracy [15]. With densely placed radio-frequency identification (RFID) tags, the positioning accuracy can be further improved to several centimeters [16]. However, both UWB and RFID positioning systems require new infrastructure and new hardware components, which limits their application on low-cost devices such as smart phones [12].

Sound-based positioning can achieve an accuracy of 10 cm using 24 microphones in an environment with few scatterers [3]. However, it requires the ambient noise to be sufficiently low, and the route of the user must be close to the microphone array, which can be impractical for public scenarios. Passive positioning and tracking systems have been realized using time-of-flight camera arrays [17], indoor radars [18], and supervisory stations [19], which achieve accuracies of tens of centimeters.

Positioning with visible light communication (VLC) infrastructure can achieve centimeter-level accuracy with much lower cost and simpler infrastructure [1], [5], [8], [20]–[23]. However, most existing VLC-positioning system prototypes have practical issues to overcome. For example, the receiver is often assumed to be placed vertically or to be calibrated to a certain direction before positioning [5], [6], [24]–[33]; otherwise, the performance can deteriorate dramatically [34]. This is generally difficult for handheld devices. For image sensors with small angles of view, the receiver can only see a ceiling region of 1-4 square meters, which requires the LEDs to be placed with high density. In order to remove this limitation, gyroscopes are equipped in the receiver in [1], [8], [20], [21], [35], [36] to measure the pitch angle, roll angle, and yaw angle. The additional device not only consumes more power, but also introduces more noise into the measurements.

A popular assumption on the radiation pattern of LEDs is the Lambertian model, which has been widely applied in received-signal-strength (RSS) VLC positioning [1], [8], [21], [24], [26], [27], [32], [36], [37]. However, when the incident angle exceeds ±60°, the Lambertian model becomes unreliable [27]. The Lambertian model also fails for LEDs with a scattering cover or a flat head, because the radiation pattern of an LED is highly related to its shape. This limits the accuracy of RSS VLC positioning in practical scenarios. Another drawback of RSS-based VLC positioning is that the transmit power is unpredictable, making the distance estimation challenging [38]. VLC positioning using time difference of arrival (TDOA) was studied in [39], but it requires perfect synchronization between the transmitters. In fact, as the transmitter-to-receiver distances in indoor applications are typically short, extremely accurate time measurement is required for time-based positioning, such as schemes using time of arrival, TDOA, phase of arrival, and phase difference of arrival [38].

In [7], an image-based positioning algorithm was proposed. It requires a map of the environment precomputed off-line and a calibrated camera, and it also requires powerful signal processing units for feature mapping. Similar works based on image matching are [40] and [41], which require a database of photos captured inside the building. The database covers thousands of positions, and at every position 16 photos with different viewing angles were taken, resulting in a database of several gigabytes. Such a system can provide meter-accurate positioning, but continuously transceiving the high-resolution photos is a challenging task. Another image-based positioning scheme was introduced in [42], which extracts the position information using geometrical models. However, the applied method involves solving a set of nonlinear equations in 11 variables, which makes the associated processing cumbersome. In [28], the authors showed that the angle difference of arrival (ADOA) can be used to obtain the three-dimensional (3D) camera position, but they did not prove the invariability of ADOAs in rotated coordinate systems; thus their solution did not exploit this invariability for positioning with an arbitrarily tilted camera. Recently, the authors in [43] studied the influence of tilting and pitching of cameras on the photos, and proposed a compensation scheme to eliminate the impact of tilting. However, the positioning scheme proposed in [43] still requires gyroscopes to measure the tilting angles, and the compensation method only works for small tilting angles.

In this work, we propose an ADOA-based VLC positioning framework. The basic principle of the scheme is to extract the ADOA information using image sensors or PD arrays, and to exploit the invariability of ADOAs in different 3D coordinate systems in order to locate the user position. Compared with existing VLC positioning schemes, ADOA VLC positioning has the following advantages:



• Arbitrary tilting and pitching of the receiver. The receiver can be placed toward an arbitrary angle as long as signals from more than three LEDs can be received.
• Low computational complexity. The ADOA positioning algorithm only requires the solution of a linear equation set when the LEDs are placed in an X-shape. With arbitrarily placed LEDs, the positioning requires a search over a finite 3D feasible region, which is computationally acceptable on small devices such as smart phones.
• Easy data fusion for multiple sensors. The ADOA positioning algorithms can easily incorporate signals from multiple image sensors and PD arrays.
• Small database. The required database only includes the 3D positions of the LEDs and their identifying information, such as the signal frequency or light color. Therefore, the size of the database is on the level of kilobytes.
• High accuracy. A simple ADOA VLC positioning platform can achieve centimeter-level accuracy.
• Low-cost transmitters. The LEDs are isolated units that do not communicate with each other, and no synchronization or joint modulation process is needed.

The major contributions of this work are as follows.

• We prove the invariability of ADOAs in different 3D coordinate systems.
• We generalize the AOA estimation using tilted PDs in [6]. The AOA estimation problem is reduced from the solution of a nonlinear equation set to the solution of a linear equation set, and the number of PDs is generalized from 3 to any integer no smaller than 3. A criterion is proposed to design the receiver with tilted PDs.
• We construct a new framework that uses ADOAs to locate a tilted receiver equipped with an image sensor or PDs. Two ADOA algorithms are proposed. The first algorithm is proved to be the most accurate under additive Gaussian angle noise, and the other has low computational complexity that only involves the solution of a linear equation set.
• Both upper and lower bounds on the discrepancy of ADOA positioning are derived, which can be used to design the LED arrays.

Both analytical results and experimental results are presented to show the feasibility and accuracy of ADOA-based VLC positioning.

This paper is organized as follows. In Section II, we introduce the system model, including the definition of ADOA and the problem statement. In Section III, we prove the invariability of ADOAs with respect to different coordinate systems. Section IV illustrates how to use a camera or PDs to obtain the ADOAs, and the ADOA positioning algorithms are introduced in Section V. Performance analysis of the ADOA positioning is provided in Section VI. Section VII presents numerical and experimental results. Finally, Section VIII draws some concluding remarks.

II. SYSTEM MODEL

As shown in Fig. 1, we assume that K LEDs are placed on the ceiling, where the coordinates of the kth LED are denoted as v_{l,k} = (x_{l,k}, y_{l,k}, z_{l,k})^T, ∀k = 1, 2, ..., K. The coordinates of the K LEDs are known to the user, and the receiver can distinguish different LEDs according to their colors, signal frequencies, or codes [44, Chap. 14]. The user position is denoted as v_u = (x_u, y_u, z_u)^T, which is to be estimated. The ADOA between the signals of the mth and the nth LEDs is denoted as

\gamma_{m,n} = \arccos \frac{\mathbf{r}_m \cdot \mathbf{r}_n}{\|\mathbf{r}_m\| \, \|\mathbf{r}_n\|}    (1)

where r_m = v_{l,m} − v_u and r_n = v_{l,n} − v_u; r_m · r_n denotes the inner product of r_m and r_n; ‖·‖ denotes the 2-norm of a vector, and ‖r_m‖ can also be denoted as r_m. In Fig. 1, the ADOA between the mth and the nth LEDs is shown as γ_{m,n} = ∠L_m U L_n.

Fig. 1. The studied scenario in the absolute coordinate system. K LEDs are placed on the ceiling, where the kth LED has coordinates v_{l,k} = (x_{l,k}, y_{l,k}, z_{l,k})^T. The user has coordinates v_u = (x_u, y_u, z_u)^T. The ADOA between the mth and the nth LEDs is γ_{m,n}. The user knows the coordinates of each LED in the coordinate system and can measure the ADOAs. The user needs to estimate its own location v_u based on the v_{l,k}'s and the measured γ_{m,n}'s.

However, a tilted receiver with unknown tilting and pitching angles cannot directly measure the γ_{m,n}'s or the angles of arrival (AOA) in the absolute coordinate system shown in Fig. 1. Instead, the tilted receiver can only measure the AOAs in the relative coordinate system shown in Fig. 2, whose associated parameters are distinguished by the superscript "′" in the later context. In the relative coordinate system, the origin is the center of the receiver, denoted as v′_u = (0, 0, 0)^T, and the z′-axis is collinear with the normal vector of the image sensor or the PD array, which is perpendicular to the x′y′-plane. Using image sensors or PD arrays, the user can measure the AOAs of the signal of the kth LED in the relative coordinates, namely the inclination angle θ′_k and the azimuth angle ϕ′_k of the vector r′_k = v′_{l,k} − v′_u = v′_{l,k}. With the AOAs in the relative coordinate system, the ADOAs can be computed, and they are quantities that do not depend on which coordinate system is chosen. Afterwards, the user can estimate its own coordinates v_u based on the v_{l,k}'s and the measured γ_{m,n}'s. In other words, the problem is to find a function G(·) that satisfies

\mathbf{v}_u = G(\gamma_{1,2}, \gamma_{1,3}, \cdots, \gamma_{K-1,K})    (2)

where the function G(·) should be insensitive to the measurement noise on the γ_{m,n}'s and be easy to compute. Approaches to measure the ADOAs using a camera or a PD array will be shown in Section IV.

Fig. 2. The absolute coordinate system and the relative coordinate system. The z′-axis is collinear with the normal vector of the receiving plane, and the origin at (x′, y′, z′) = (0, 0, 0) is the center of the receiver.

III. INVARIABILITY OF ANGLE DIFFERENCE OF ARRIVAL

The reason why we choose the ADOAs as the measured parameters is that ADOAs are invariable quantities no matter which coordinate system is chosen. The relationship between the absolute coordinates v = (x, y, z)^T and the relative coordinates v′ = (x′, y′, z′)^T is

\mathbf{v} = \mathbf{G}\mathbf{v}' + \boldsymbol{\Delta}    (3)

where G is the rotation matrix, which satisfies G^T G = I_3 with I_3 the 3×3 identity matrix, and ∆ is a 3×1 vector indicating the shift between the origins of the two coordinate systems. The relationship between the relative coordinate system and the absolute coordinate system is shown in Fig. 2. According to (1), the cosine of the ADOA between the mth LED and the nth LED in the absolute coordinate system can be calculated as

\cos\gamma_{m,n} = \frac{(\mathbf{v}_{l,m}-\mathbf{v}_u)^T(\mathbf{v}_{l,n}-\mathbf{v}_u)}{\|\mathbf{v}_{l,m}-\mathbf{v}_u\| \, \|\mathbf{v}_{l,n}-\mathbf{v}_u\|}.    (4)

Based on (3), eq. (4) can be expressed as

\cos\gamma_{m,n}
= \frac{\big(\mathbf{G}\mathbf{v}'_{l,m}+\boldsymbol{\Delta}-(\mathbf{G}\mathbf{v}'_u+\boldsymbol{\Delta})\big)^T\big(\mathbf{G}\mathbf{v}'_{l,n}+\boldsymbol{\Delta}-(\mathbf{G}\mathbf{v}'_u+\boldsymbol{\Delta})\big)}{\big\|\mathbf{G}\mathbf{v}'_{l,m}+\boldsymbol{\Delta}-(\mathbf{G}\mathbf{v}'_u+\boldsymbol{\Delta})\big\| \, \big\|\mathbf{G}\mathbf{v}'_{l,n}+\boldsymbol{\Delta}-(\mathbf{G}\mathbf{v}'_u+\boldsymbol{\Delta})\big\|}
= \frac{(\mathbf{G}\mathbf{v}'_{l,m}-\mathbf{G}\mathbf{v}'_u)^T(\mathbf{G}\mathbf{v}'_{l,n}-\mathbf{G}\mathbf{v}'_u)}{\|\mathbf{G}\mathbf{v}'_{l,m}-\mathbf{G}\mathbf{v}'_u\| \, \|\mathbf{G}\mathbf{v}'_{l,n}-\mathbf{G}\mathbf{v}'_u\|}
= \frac{(\mathbf{v}'_{l,m}-\mathbf{v}'_u)^T(\mathbf{v}'_{l,n}-\mathbf{v}'_u)}{\|\mathbf{v}'_{l,m}-\mathbf{v}'_u\| \, \|\mathbf{v}'_{l,n}-\mathbf{v}'_u\|}
= \frac{\mathbf{r}'_m \cdot \mathbf{r}'_n}{\|\mathbf{r}'_m\| \, \|\mathbf{r}'_n\|}    (5)

which reveals that the ADOA γ_{m,n} is unchanged no matter which coordinate system is chosen. In contrast, the angles of arrival are determined by the applied coordinate system. For example, the vector (1, 1, 1)^T has inclination angle arccos(1/√3) in the absolute coordinate system with orthonormal basis e_x = (1, 0, 0)^T, e_y = (0, 1, 0)^T, e_z = (0, 0, 1)^T; but with another orthonormal basis e′_x = (1/√6, 1/√6, −2/√6)^T, e′_y = (1/√2, −1/√2, 0)^T, e′_z = (1/√3, 1/√3, 1/√3)^T, the same vector has a different inclination angle of 0 because (1, 1, 1)^T is collinear with e′_z. Unfortunately, the AOAs in the absolute coordinate system are usually difficult to obtain directly because this requires the measuring equipment to be placed toward a specific angle.
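The invariance established in (5) is straightforward to check numerically. The following minimal sketch (not part of the paper; all names and values are illustrative) computes the ADOAs of (1) with NumPy and verifies that they are unchanged under a randomly chosen rotation G and shift ∆ as in (3).

```python
import numpy as np

def adoa(v_l_m, v_l_n, v_u):
    """ADOA of eq. (1): angle between r_m = v_l_m - v_u and r_n = v_l_n - v_u."""
    r_m, r_n = v_l_m - v_u, v_l_n - v_u
    c = np.dot(r_m, r_n) / (np.linalg.norm(r_m) * np.linalg.norm(r_n))
    return np.arccos(np.clip(c, -1.0, 1.0))

rng = np.random.default_rng(0)
leds = np.array([[0.0, 0.0, 3.0], [2.0, 0.0, 3.0], [0.0, 2.0, 3.0]])  # v_{l,k} on the ceiling
user = np.array([0.7, 1.1, 1.0])                                      # v_u

# A random rotation G and shift Delta define a "relative" frame, as in eq. (3).
G, _ = np.linalg.qr(rng.standard_normal((3, 3)))
G *= np.sign(np.linalg.det(G))        # make it a proper rotation (det = +1)
delta = rng.standard_normal(3)

def to_relative(v):
    # inverse of eq. (3): v' = G^T (v - Delta)
    return G.T @ (v - delta)

for m in range(len(leds)):
    for n in range(m + 1, len(leds)):
        g_abs = adoa(leds[m], leds[n], user)
        g_rel = adoa(to_relative(leds[m]), to_relative(leds[n]), to_relative(user))
        assert abs(g_abs - g_rel) < 1e-12   # eq. (5): ADOA is coordinate-independent
```

Because only angles between difference vectors enter (1), any rigid change of coordinates leaves the γ_{m,n}'s unchanged, which is exactly what the chain of equalities in (5) states.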

IV. MEASURING ANGLE DIFFERENCE OF ARRIVAL

There exist several approaches to obtain the AOAs in the relative coordinate system, based on which one can obtain the ADOAs. In this section, we study camera-based and tilted-PD-based receivers for measuring the ADOAs, i.e., the γ_{m,n}'s.

A. Measuring AOAs By Cameras

According to [35, eq. (1)], the relationship between the image distance d_i, the object distance d_o and the focal length f of the lens is

\frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{f}.    (6)

When d_o ≫ f, we have

d_i ≈ f.    (7)

Since the LEDs are placed on the ceiling and the focal length of the camera is on the order of millimeters, the approximation in (7) holds in the scenario considered in this work. Figure 3 shows how to use the lens and the charge-coupled device (CCD) array of a camera to measure an AOA. In the relative coordinate system, the exact inclination angle θ′ and azimuth angle ϕ′ can be obtained as

\theta' = \arctan\frac{d_{PO_c}}{f}, \qquad \varphi' = \operatorname{atan2}(-y'_P, -x'_P)    (8)

where d_{PO_c} is the length of the line segment PO_c; (x′_P, y′_P) are the coordinates of the point P, which is the center of the LED image; O_c is the center of the CCD array; and the function atan2(y, x) is defined as

\operatorname{atan2}(y, x) =
\begin{cases}
\arctan(y/x), & x > 0 \\
\arctan(y/x) + \pi, & x < 0,\ y \ge 0 \\
\arctan(y/x) - \pi, & x < 0,\ y < 0 \\
\pi/2, & x = 0,\ y > 0 \\
-\pi/2, & x = 0,\ y < 0 \\
\text{undefined}, & x = y = 0.
\end{cases}    (9)

Fig. 3. Measuring an AOA using an image sensor. The z′-axis is collinear with the normal vector of the CCD plane. The center of the lens is the origin of the relative coordinate system, denoted by O′. L is an LED and P is the center of the LED image. O′O_c is perpendicular to the CCD plane. O_cT is parallel to the x′-axis and PT ⊥ O_cT. LQ is perpendicular to the x′y′-plane. θ′ and ϕ′ are respectively the inclination angle and the azimuth angle of the LED signal in the relative coordinate system.

The estimated inclination angle θ̂′ and azimuth angle ϕ̂′ can be obtained as

\hat{\theta}' = \arctan\frac{\hat{d}_{PO_c}}{f} = \arctan\frac{\sqrt{\hat{d}_{PT}^2 + \hat{d}_{TO_c}^2}}{f} = \arctan\frac{\delta_p\sqrt{N_{P,y'}^2 + N_{P,x'}^2}}{f},
\qquad
\hat{\varphi}' = \operatorname{atan2}(-\hat{y}'_P, -\hat{x}'_P) = \operatorname{atan2}(-\delta_p N_{P,y'}, -\delta_p N_{P,x'}) = \operatorname{atan2}(-N_{P,y'}, -N_{P,x'})    (10)

where the hat symbol "ˆ" denotes an estimated quantity; δ_p is the side length of a single CCD unit; and N_{P,x′} and N_{P,y′} are integers indicating the offset of the point P from the center O_c in terms of the number of pixels. N_{P,x′} is positive when x′_P > 0 and negative when x′_P < 0; similarly, N_{P,y′} has the same sign as y′_P.

When the parameters δ_p and f are not known, one can build the simple platform shown in Fig. 4 and use a simple method to estimate θ̂′. Based on (8), we have

\theta' = \arctan\left(\frac{d_{PO_c}}{\delta_p} \Big/ \frac{f}{\delta_p}\right)    (11)

which leads to

\hat{\theta}' = \arctan\frac{\sqrt{N_{P,y'}^2 + N_{P,x'}^2}}{f/\delta_p} \approx \arctan\frac{\sqrt{N_{P,y'}^2 + N_{P,x'}^2}}{\dfrac{d_{O'_cO'}}{d_{O'_cF'}}\sqrt{N_{F,y'}^2 + N_{F,x'}^2}}    (12)

where the last equality is based on the relationship

\frac{f}{\delta_p} = \frac{f}{d_{FO_c}}\cdot\frac{d_{FO_c}}{\delta_p} = \frac{d_{O'_cO'}}{d_{O'_cF'}}\cdot\frac{d_{FO_c}}{\delta_p} \approx \frac{d_{O'_cO'}}{d_{O'_cF'}}\sqrt{N_{F,y'}^2 + N_{F,x'}^2}    (13)

which can be easily obtained from the similar triangles in Fig. 4. In (12), d_{O′_cF′} and d_{O′_cO′} can be measured using a ruler, and N_{F,x′} and N_{F,y′} can be obtained from the photo of the line segment O′_cF′. Therefore, the denominator in (12), i.e., f/δ_p ≈ (d_{O′_cO′}/d_{O′_cF′})\sqrt{N_{F,y'}^2 + N_{F,x'}^2}, can be measured using the platform shown in Fig. 4 before taking the photos of the LEDs.

Fig. 4. A platform to estimate f/δ_p. The object plane and the image plane are parallel. O′_cO′ and O_cO′ are both perpendicular to the image plane and the object plane. d_{O′_cF′} and d_{O′_cO′} can be measured using a ruler. N_{F,x′} and N_{F,y′} can be obtained from the photo.

Since the CCD array cannot be arbitrarily dense, there exist quantization errors in the x′-axis and y′-axis directions, which are defined as

n_{e,x'} \triangleq |\hat{x}'_P - x'_P|, \qquad n_{e,y'} \triangleq |\hat{y}'_P - y'_P|    (14)

which are both unknown but bounded quantities [45].
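To make the camera pipeline of (8)–(13) concrete, the sketch below is illustrative only: the function and variable names, the ruler measurements, and the pixel offsets are assumptions, not values from the paper. It converts pixel offsets of an LED image into estimated angles and applies the ruler-based estimation of f/δ_p from Fig. 4.

```python
import math

def aoa_from_pixels(N_px, N_py, f_over_delta):
    """Eq. (10): estimated inclination/azimuth from pixel offsets of the LED image center."""
    theta = math.atan(math.hypot(N_px, N_py) / f_over_delta)
    phi = math.atan2(-N_py, -N_px)          # atan2 covers all the quadrants of eq. (9)
    return theta, phi

def calibrate_f_over_delta(d_OcO, d_OcF, N_fx, N_fy):
    """Eq. (13): estimate f/delta_p with the Fig. 4 platform.
    d_OcO and d_OcF are ruler measurements on the object plane; N_fx and N_fy are
    the pixel offsets of the image of F' from the image center."""
    return (d_OcO / d_OcF) * math.hypot(N_fx, N_fy)

# Example (assumed numbers): calibrate once, then estimate the AOA of an LED image
# whose center lies 120 pixels right and 85 pixels below the image center.
f_over_delta = calibrate_f_over_delta(d_OcO=1.00, d_OcF=0.25, N_fx=300, N_fy=0)
theta_hat, phi_hat = aoa_from_pixels(120, -85, f_over_delta)
print(math.degrees(theta_hat), math.degrees(phi_hat))
```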

B. Measuring AOAs By Tilted PDs

AOAs can also be measured using an array of tilted PDs. In [35], the authors fabricated an angular receiver comprising three PDs that are perpendicular to each other, and the receiver needs to solve a two-variable nonlinear equation set to estimate the AOAs. In this subsection, we generalize the receiver structure with PDs in [35]. The generalized receiver does not require the PDs to be perpendicular to each other, the number of PDs need not be three, and the receiver only needs to solve a linear equation set for the AOAs.

Suppose there are M PDs¹ on the receiver, where M ≥ 3, and the normalized normal vector of the surface of the mth PD is v′_{PD,m} in the relative coordinate system. Then the estimate of the received power of the mth PD from the kth LED, which is proportional to the photocurrent, can be expressed as [46, eq. (6)]

\hat{p}_{r,m,k} = p_k \, \mathbf{v}'_{PD,m} \cdot \frac{\mathbf{r}'_k}{\|\mathbf{r}'_k\|} + n_{m,k}    (15)

where r′_k = (x′_{r,k}, y′_{r,k}, z′_{r,k})^T is the vector from the center of the receiver toward the kth LED, p_k is the signal power when the normal vector of the receiving plane of the PD and r′_k are collinear, and n_{m,k} ∼ N(0, σ²_{PD}) models the shot noise and the thermal noise².

¹The PDs are assumed to be bare devices, which means that no concentrators are equipped on them.
²A sum of the shot noise and the thermal noise can be approximated by a Gaussian random variable when the ambient noise is large [46, eq. (18)], [47, Chap. 2.3].

Based on (15), we can obtain

\hat{\mathbf{p}}_{r,k} = p_k \mathbf{V}'_{PD}\frac{\mathbf{r}'_k}{\|\mathbf{r}'_k\|} + \mathbf{n}_k    (16)

where p̂_{r,k} = (p̂_{r,1,k}, ..., p̂_{r,M,k})^T, n_k = (n_{1,k}, ..., n_{M,k})^T, and V′_{PD} = (v′_{PD,1}, ..., v′_{PD,M})^T. Multiplying both sides of (16) by the generalized inverse of V′_{PD} and after some manipulation, we obtain

\left(\mathbf{V}'^{T}_{PD}\mathbf{V}'_{PD}\right)^{-1}\mathbf{V}'^{T}_{PD}\hat{\mathbf{p}}_{r,k} - \left(\mathbf{V}'^{T}_{PD}\mathbf{V}'_{PD}\right)^{-1}\mathbf{V}'^{T}_{PD}\mathbf{n}_k = p_k\frac{\mathbf{r}'_k}{\|\mathbf{r}'_k\|} = \mathbf{g}'_k    (17)

where g′_k is observed to be collinear with r′_k. Based on (17), the estimated value of the vector g′_k is

\hat{\mathbf{g}}'_k \triangleq \left(\mathbf{V}'^{T}_{PD}\mathbf{V}'_{PD}\right)^{-1}\mathbf{V}'^{T}_{PD}\hat{\mathbf{p}}_{r,k}    (18)

where ĝ′_k = (x̂′_{g,k}, ŷ′_{g,k}, ẑ′_{g,k})^T, and the inclination angle θ′ and the azimuth angle ϕ′ of the incident light signal can be estimated as

\hat{\theta}' = \arccos\frac{\hat{z}'_{g,k}}{\left\|\left(\hat{x}'_{g,k}, \hat{y}'_{g,k}, \hat{z}'_{g,k}\right)\right\|}, \qquad \hat{\varphi}' = \operatorname{atan2}\left(\hat{y}'_{g,k}, \hat{x}'_{g,k}\right).    (19)

Comparing (17) and (18), we can express the introduced error as

\hat{\mathbf{g}}'_k - \mathbf{g}'_k = \left(\mathbf{V}'^{T}_{PD}\mathbf{V}'_{PD}\right)^{-1}\mathbf{V}'^{T}_{PD}\mathbf{n}_k    (20)

and the average power of the error can be calculated as

E\left[\left\|\hat{\mathbf{g}}'_k - \mathbf{g}'_k\right\|^2\right]
= E\left[\mathbf{n}_k^T\mathbf{V}'_{PD}\left(\mathbf{V}'^{T}_{PD}\mathbf{V}'_{PD}\right)^{-T}\left(\mathbf{V}'^{T}_{PD}\mathbf{V}'_{PD}\right)^{-1}\mathbf{V}'^{T}_{PD}\mathbf{n}_k\right]
= \sigma_{PD}^2\sum_{l=1}^{3}\lambda_l\!\left(\left(\mathbf{V}'^{T}_{PD}\mathbf{V}'_{PD}\right)^{-T}\mathbf{V}'^{T}_{PD}\mathbf{V}'_{PD}\left(\mathbf{V}'^{T}_{PD}\mathbf{V}'_{PD}\right)^{-1}\right)    (21)

where λ_l(M) is the lth eigenvalue of a matrix M. When σ²_{PD} is fixed, eq. (21) depends only on V′_{PD}, which contains the normal vectors of the PDs. Therefore, we can design an array of tilted PDs whose normal vectors minimize \sum_{l=1}^{3}\lambda_l\big((\mathbf{V}'^{T}_{PD}\mathbf{V}'_{PD})^{-T}\mathbf{V}'^{T}_{PD}\mathbf{V}'_{PD}(\mathbf{V}'^{T}_{PD}\mathbf{V}'_{PD})^{-1}\big) in order to ensure an accurate angle estimation.
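A minimal numerical sketch of the tilted-PD estimator in (16)–(19) and of the design metric in (21) could look as follows. The PD normals, noise level, and incident direction are assumptions chosen for illustration (they assume every PD sees the LED), and the metric is written in the trace form that equals the eigenvalue sum in (21).

```python
import numpy as np

# Assumed receiver: M = 4 bare PDs tilted symmetrically around the z'-axis.
t = np.deg2rad(35.0)
V_pd = np.array([[ np.sin(t), 0.0, np.cos(t)],
                 [-np.sin(t), 0.0, np.cos(t)],
                 [0.0,  np.sin(t), np.cos(t)],
                 [0.0, -np.sin(t), np.cos(t)]])      # rows are the normals v'_{PD,m}

def estimate_aoa(p_hat, V_pd):
    """Eqs. (18)-(19): g_hat = (V^T V)^{-1} V^T p_hat, then inclination and azimuth."""
    g_hat = np.linalg.lstsq(V_pd, p_hat, rcond=None)[0]   # same least-squares solution
    theta = np.arccos(g_hat[2] / np.linalg.norm(g_hat))
    phi = np.arctan2(g_hat[1], g_hat[0])
    return theta, phi

def design_metric(V_pd):
    """Eq. (21) up to sigma^2: trace of (V^T V)^{-1}; smaller means a more accurate array."""
    return np.trace(np.linalg.inv(V_pd.T @ V_pd))

# Simulate the received powers of eq. (16) for a known direction and recover the AOA.
rng = np.random.default_rng(1)
r_dir = np.array([0.3, -0.2, 0.93]); r_dir /= np.linalg.norm(r_dir)
p_k, sigma_pd = 1.0, 0.01
p_hat = p_k * V_pd @ r_dir + sigma_pd * rng.standard_normal(4)
print(estimate_aoa(p_hat, V_pd), design_metric(V_pd))
```

Different tilt angles can be compared simply by evaluating design_metric for each candidate set of normals, mirroring the design criterion stated after (21).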

C. Calculating the ADOAs Based on the AOAs

According to Section III, the ADOAs are identical in the absolute and relative coordinate systems. Therefore, we can obtain γ_{m,n} in the relative coordinate system and apply it in the absolute coordinate system. In Fig. 5, the LED positions L_m, L_n and the user position U form a triangle, where the relative coordinates of the points are expressed in the spherical coordinate system. An auxiliary sphere with radius r intersects the line segments UL_m and UL_n, respectively, at points S_m and S_n. Based on the law of cosines, we can express γ_{m,n} as

\cos\gamma_{m,n} = \frac{2r^2 - |S_mS_n|^2}{2r^2}    (22)

where |S_mS_n|^2 can be calculated as

|S_mS_n|^2 = (r\sin\theta'_m\cos\varphi'_m - r\sin\theta'_n\cos\varphi'_n)^2 + (r\sin\theta'_m\sin\varphi'_m - r\sin\theta'_n\sin\varphi'_n)^2 + (r\cos\theta'_m - r\cos\theta'_n)^2    (23)

where (r sin θ′_m cos ϕ′_m, r sin θ′_m sin ϕ′_m, r cos θ′_m) are the coordinates of S_m in the relative coordinate system. Substituting (23) into (22) and simplifying, we obtain

\cos\gamma_{m,n} = 1 - \tfrac{1}{2}(\sin\theta'_m\cos\varphi'_m - \sin\theta'_n\cos\varphi'_n)^2 - \tfrac{1}{2}(\sin\theta'_m\sin\varphi'_m - \sin\theta'_n\sin\varphi'_n)^2 - \tfrac{1}{2}(\cos\theta'_m - \cos\theta'_n)^2.    (24)

Equation (24) reveals the relationship between the measured AOAs θ′_m, θ′_n, ϕ′_m, ϕ′_n and the ADOA γ_{m,n}, which can be used to estimate the user position in the absolute coordinate system.

Fig. 5. Calculating γ_{m,n} based on the AOAs in the relative coordinate system. The user location is denoted as U, which is the origin of the relative coordinate system; the mth LED has spherical coordinates (r_m, θ′_m, ϕ′_m); the nth LED has spherical coordinates (r_n, θ′_n, ϕ′_n). The auxiliary sphere with radius r intersects the line segments UL_m and UL_n, respectively, at points S_m(r, θ′_m, ϕ′_m) and S_n(r, θ′_n, ϕ′_n).

Based on (16), we define p_{r,m} ≜ p_m V′_{PD} r′_m/‖r′_m‖ and p_{r,n} ≜ p_n V′_{PD} r′_n/‖r′_n‖ as the noise-free received powers from the mth and the nth LED. Thus, we have p̂_{r,m} = p_{r,m} + n_m and p̂_{r,n} = p_{r,n} + n_n. According to (18), (19) and (24), the estimated ADOA γ̂_{m,n} can be expressed as a function of the received power vectors p̂_{r,m} and p̂_{r,n}; this function is denoted as γ̂_{m,n}(p̂_{r,m}, p̂_{r,n}), and γ̂_{m,n}(p_{r,m}, p_{r,n}) is the exact ADOA value. Applying the second-order 2M-dimensional Taylor series expansion, we obtain

\hat{\gamma}_{m,n}(\hat{\mathbf{p}}_{r,m}, \hat{\mathbf{p}}_{r,n}) \approx \hat{\gamma}_{m,n}(\mathbf{p}_{r,m}, \mathbf{p}_{r,n}) + \left(\mathbf{n}_m^T, \mathbf{n}_n^T\right) D\hat{\gamma}_{m,n}(\mathbf{p}_{r,m}, \mathbf{p}_{r,n})    (25)

where Dγ̂_{m,n}(p_{r,m}, p_{r,n}) is the gradient of the function γ̂_{m,n} at the point (p_{r,m}, p_{r,n}). Eq. (25) shows that when the noise on the received power is sufficiently small, the ADOA discrepancy γ̂_{m,n} − γ_{m,n} follows a Gaussian distribution for receivers with tilted PDs.

V. POSITIONING SCHEME USING ADOAS

A. Positioning Using Three LEDs

The term cos γ_{m,n} can be expressed as the ratio between the inner product r_m · r_n and the product of the norms r_m r_n; thus we can obtain the equation set

\begin{cases}
\cos\gamma_{1,2} = \dfrac{\mathbf{r}_1\cdot\mathbf{r}_2}{r_1 r_2} = \dfrac{(x_u-x_{l,1})(x_u-x_{l,2}) + (y_u-y_{l,1})(y_u-y_{l,2}) + (z_u-z_{l,1})(z_u-z_{l,2})}{\left\|\mathbf{v}_{l,1}^T - (x_u, y_u, z_u)\right\|\left\|\mathbf{v}_{l,2}^T - (x_u, y_u, z_u)\right\|}\\[2ex]
\cos\gamma_{1,3} = \dfrac{\mathbf{r}_1\cdot\mathbf{r}_3}{r_1 r_3} = \dfrac{(x_u-x_{l,1})(x_u-x_{l,3}) + (y_u-y_{l,1})(y_u-y_{l,3}) + (z_u-z_{l,1})(z_u-z_{l,3})}{\left\|\mathbf{v}_{l,1}^T - (x_u, y_u, z_u)\right\|\left\|\mathbf{v}_{l,3}^T - (x_u, y_u, z_u)\right\|}\\[2ex]
\cos\gamma_{2,3} = \dfrac{\mathbf{r}_2\cdot\mathbf{r}_3}{r_2 r_3} = \dfrac{(x_u-x_{l,2})(x_u-x_{l,3}) + (y_u-y_{l,2})(y_u-y_{l,3}) + (z_u-z_{l,2})(z_u-z_{l,3})}{\left\|\mathbf{v}_{l,2}^T - (x_u, y_u, z_u)\right\|\left\|\mathbf{v}_{l,3}^T - (x_u, y_u, z_u)\right\|}
\end{cases}    (26)

where the position (x_u, y_u, z_u) is to be estimated, the LED coordinates (x_{l,m}, y_{l,m}, z_{l,m}) are known parameters, and the γ_{m,n}'s are measured using the methods in Section IV. The three equations in (26) correspond to three surfaces, and each surface consists of all points (x_u, y_u, z_u) satisfying the associated equation. The user position is the intersection of the three surfaces. Figure 6 shows the surface of the equation

\frac{(x_u-1)(x_u+1) + (y_u-0)(y_u+0) + (z_u-0)(z_u+0)}{\|(-1, 0, 0) - (x_u, y_u, z_u)\| \, \|(1, 0, 0) - (x_u, y_u, z_u)\|} = 0.95    (27)

which corresponds to an apple-shaped surface. Figure 7 shows the cross-sections of the surfaces with different ADOA values on the x-z plane. It can be observed that the upper and lower halves of the curves on the cross-section are arcs, which agrees with the inscribed angle theorem: an angle inscribed in a circle is half of the central angle that subtends the same arc on the circle.

Note that there might exist multiple intersections of the three surfaces in (26); thus (26) may have more than one solution. It is challenging to obtain the analytical solution of (26), so we resort to numerical methods to obtain the solution of (26), i.e., (x_u, y_u, z_u). The objective function can be defined as

f_3(x_u, y_u, z_u) = \sum_{(m,n)=(1,2),(1,3),(2,3)} \left(\hat{\gamma}_{m,n} - \gamma_{m,n}(x_u, y_u, z_u)\right)^2    (28)

where γ̂_{m,n} is the measured ADOA and

\gamma_{m,n}(x_u, y_u, z_u) \triangleq \arccos\frac{(x_u-x_{l,m})(x_u-x_{l,n}) + (y_u-y_{l,m})(y_u-y_{l,n}) + (z_u-z_{l,m})(z_u-z_{l,n})}{\left\|\mathbf{v}_{l,m}^T - (x_u, y_u, z_u)\right\|\left\|\mathbf{v}_{l,n}^T - (x_u, y_u, z_u)\right\|}.    (29)

The optimization problem can be expressed as

minimize   f_3(x_u, y_u, z_u)
subject to:  (x_u, y_u, z_u) ∈ Φ_f(x_u, y_u, z_u)    (30)

where Φ_f(x_u, y_u, z_u) is the search region, or the feasible region. If the number of tested points is small, the method of exhaustion (MEX) can be efficient because the dimension of the search space is only three.

We comment that the LEDs cannot be collinearly placed because this would make one equation in (26) redundant. When the user is not coplanar with L_1, L_2 and L_3, there are at least two solutions satisfying (26), and the two solutions are symmetric with respect to the plane of L_1L_2L_3. In most cases, it is easy to select the true position from the two candidates because the user is unlikely to be above the ceiling plane; thus the candidate with the lower z_u value is the true position. Besides, the mirrored candidate can also be excluded by defining the feasible region Φ_f(x_u, y_u, z_u) below the ceiling.
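One simple way to realize the MEX optimization in (28)–(30) is a coarse-to-fine grid search over the feasible region. The sketch below is illustrative only: the room size, LED layout, and the noise-free "measured" ADOAs are assumptions, not the experimental setup of the paper.

```python
import numpy as np

def gamma(v_l_m, v_l_n, p):
    """Eq. (29): ADOA between LEDs m and n as seen from candidate position p."""
    r_m, r_n = v_l_m - p, v_l_n - p
    c = np.dot(r_m, r_n) / (np.linalg.norm(r_m) * np.linalg.norm(r_n))
    return np.arccos(np.clip(c, -1.0, 1.0))

def objective(p, leds, gamma_meas):
    """Eq. (28): sum of squared ADOA residuals over the LED pairs."""
    return sum((gamma_meas[(m, n)] - gamma(leds[m], leds[n], p)) ** 2
               for (m, n) in gamma_meas)

def mex_search(leds, gamma_meas, lo, hi, steps=20, rounds=3):
    """Method of exhaustion over the feasible box [lo, hi], refined around the best point."""
    lo, hi = np.array(lo, float), np.array(hi, float)
    best = None
    for _ in range(rounds):
        axes = [np.linspace(lo[d], hi[d], steps) for d in range(3)]
        grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
        best = min(grid, key=lambda p: objective(p, leds, gamma_meas))
        span = (hi - lo) / steps
        lo, hi = best - span, best + span     # shrink the search box around the best point
    return best

# Assumed 3-LED layout on a 3 m ceiling and a true user position (for illustration only).
leds = {1: np.array([0.5, 0.5, 3.0]), 2: np.array([2.5, 0.5, 3.0]), 3: np.array([1.5, 2.5, 3.0])}
true_pos = np.array([1.2, 1.4, 0.9])
gamma_meas = {(m, n): gamma(leds[m], leds[n], true_pos)
              for m in leds for n in leds if m < n}
print(mex_search(leds, gamma_meas, lo=[0, 0, 0], hi=[3, 3, 2.9]))
```

Restricting the vertical search range to below the ceiling, as in the last line, is the practical way of discarding the mirrored candidate discussed above.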

[Fig. 7: cross-sections of the constant-ADOA surfaces on the x-z plane for different ADOA values; legend: "Angle Difference of Arrival".]

Fig. 6. The surface of points with identical angle difference of arrival γ_{1,2} = arccos(0.95), where LEDs L_1 and L_2 have coordinates v_{l,1} = (1, 0, 0)^T and v_{l,2} = (−1, 0, 0)^T, respectively.
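As a quick numerical check of (27) and of the inscribed-angle observation about Fig. 7 (a sketch, not from the paper), every point on the circular arc through L_1 and L_2 lying below the chord subtends the same angle arccos(0.95); revolving that arc (and its mirror image above the chord) about the L_1L_2 axis gives the apple-shaped surface of Fig. 6.

```python
import numpy as np

L1, L2 = np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0])
gamma = np.arccos(0.95)

def adoa(p):
    r1, r2 = L1 - p, L2 - p
    c = np.dot(r1, r2) / (np.linalg.norm(r1) * np.linalg.norm(r2))
    return np.arccos(np.clip(c, -1.0, 1.0))

# Circle through L1 and L2 in the x-z plane with inscribed angle gamma:
# radius R = 1/sin(gamma), center (0, 0, -cot(gamma)).  Every point on the
# major arc (the part with z < 0) satisfies eq. (27), i.e. cos(ADOA) = 0.95.
R, z0 = 1.0 / np.sin(gamma), -1.0 / np.tan(gamma)
for t in np.linspace(gamma + 0.05, 2 * np.pi - gamma - 0.05, 7):
    p = np.array([R * np.sin(t), 0.0, z0 + R * np.cos(t)])
    assert abs(adoa(p) - gamma) < 1e-9
```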

B. Positioning Using More LEDs

In order to make sure that the algorithm leads to a single solution, we need to place more LEDs. When the number of LEDs is K > 3, we can construct the following objective function

f_K(x_u, y_u, z_u) = \sum_{m<n} \left(\hat{\gamma}_{m,n} - \gamma_{m,n}(x_u, y_u, z_u)\right)^2.
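Under the same assumptions as the earlier grid-search sketch, extending the objective from f_3 to f_K only changes the set of LED pairs that are summed over; a hedged sketch:

```python
import numpy as np
from itertools import combinations

def f_K(p, leds, gamma_meas):
    """Objective for K LEDs: sum of squared ADOA residuals over all pairs m < n."""
    total = 0.0
    for m, n in combinations(sorted(leds), 2):
        r_m, r_n = leds[m] - p, leds[n] - p
        c = np.dot(r_m, r_n) / (np.linalg.norm(r_m) * np.linalg.norm(r_n))
        total += (gamma_meas[(m, n)] - np.arccos(np.clip(c, -1.0, 1.0))) ** 2
    return total
```

The same coarse-to-fine search used for f_3 can minimize f_K unchanged; the additional pairs are what make the solution unique for typical LED layouts.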