IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, VOL. 12, NO. 9, SEPTEMBER 2013
Maximizing System Energy Efficiency by Exploiting Multiuser Diversity and Loss Tolerance of the Applications M. Majid Butt, Member, IEEE, and Eduard A. Jorswieck, Senior Member, IEEE
Abstract—We address the problem of energy efficient scheduling over fading channels for loss tolerant applications. The proposed scheduling scheme allows dropping a certain predefined proportion of data packets at the transmitter side. However, there is a hard constraint on the maximum number of successively dropped packets. The scheduler exploits the average data loss tolerance to reduce the average system energy expenditure while fulfilling the hard constraint on the number of successively dropped packets. We explore the effect of the average and successive packet loss constraints on the system energy and characterize the regions where one parameter is more critical than the other in the sense of achieving better energy efficiency. The scheme is analyzed in the asymptotically large user limit, and the optimized channel-dependent dropping thresholds are computed using combinatorial optimization techniques. The numerical results illustrate the energy efficiency of the scheme as a function of the average and successive packet drop parameters.

Index Terms—Multiuser diversity, opportunistic scheduling, packet loss–energy trade-off, large system analysis.
I. INTRODUCTION

QUALITY of service (QoS) for wireless applications is defined in terms of throughput, delay, and bit error rate requirements. QoS in terms of throughput and delay can further be extended in the dimension of average and minimum guarantees, and the respective outage probabilities. These parameters depend on the individual requirements of the applications or services. Depending on the requirements, applications are classified as delay limited, delay tolerant, lossless, and loss tolerant. If the network is not able to guarantee these QoS requirements, the end user's quality of experience (QoE) deteriorates. Therefore, the design of wireless systems
Manuscript received September 7, 2012; revised January 24 and April 15, 2013; accepted July 11, 2013. The associate editor coordinating the review of this paper and approving it for publication was C.-B. Chae. The work of Majid Butt was carried out during an ERCIM “Alain Bensoussan” Fellowship tenure. This Programme is supported by the Marie Curie Co-funding of Regional, National and International Programmes (COFUND) of the European Commission. The work of E. Jorswieck is partly supported by the German Research Foundation (DFG) in the Collaborative Research Center 912 “Highly Adaptive Energy-Efficient Computing.” M. M. Butt was with the Interdisciplinary Center for Security, Reliability and Trust, University of Luxembourg. He is now with the Department of Computer Science and Engineering, Qatar University, Al Tarfa, Doha 2713, Qatar (e-mail:
[email protected]). E. A. Jorswieck is with the Department of Electrical Engineering and Information Technology, Dresden University of Technology, Germany (e-mail:
[email protected]). Digital Object Identifier 10.1109/TWC.2013.090313.121355
to guarantee QoS has always been a major area of research in communications. More recently, the focus has shifted to energy efficient communication due to the high cost of energy in operating cellular networks and the emergence of new applications, like wireless sensor networks (WSNs), which operate with very limited power budgets. High energy consumption in mobile networks has environmental effects as well. The information and communication technology sector is estimated to be responsible for about 2 percent of global CO2 emissions, and the corresponding figure for mobile networks is 0.4 percent [1]. The recent work in [2] explains the spectral efficiency–energy efficiency and power–delay tradeoffs for cellular networks in detail. A lot of the literature focuses on energy efficient communication for delay constrained applications [3]–[5]. However, not much work focuses on exploiting the loss tolerance of the application in the scheduling process at the physical layer. In order to improve energy efficiency, it is important to perform resource allocation at the physical layer by considering the QoS constraints on the data. Some researchers have considered similar problems in different settings. Reference [6] discusses a scheduler which differentiates traffic based on the loss and delay tolerance of the user in the traditional Internet. In [7], the authors address the problem of optimal dropping of packets. They obtain an optimal dropping scheme when the size of the packet grows asymptotically large. Reference [8] proposes an algorithm for improving the energy–delay tradeoff for the case of dropping a nonzero fraction of packets. We propose a scheduling scheme which exploits the loss tolerance of the application to improve the system energy efficiency. Loss tolerance is characterized by the average packet loss probability θtar for a user and his/her ability to keep the QoE acceptable after n or fewer successively dropped packets.
We refer to this constraint on the successively dropped packets as the continuity constraint in this work. Average packet loss is a degree of freedom that allows the system to drop certain packets to save energy. However, if more than n packets are dropped successively, it becomes difficult to maintain the QoE even by using complex signal processing and error concealment techniques at the receiver. For example, a Hybrid Automatic Repeat Request (HARQ) scheme with incremental redundancy requires transmission of additional data if a packet is not decoded at the receiver in its first transmission. If the subsequent packet containing the incremental data is dropped to save energy, it makes the already transmitted
1536-1276/13/$31.00 © 2013 IEEE
packet useless as well, and the energy is wasted on the first transmission. Let us point out the relationship between our scenario and standard rate adaptation as applied, e.g., in IEEE 802.11 [9]. For standard rate adaptation, packet drops occur due to poor channel conditions (including interference from other transmitters) which are not known at the transmitters a priori. Then, the rate is adapted according to the scheduling/dropping decisions of the successive packets at the scale of 2 to (up to) 10 successive packets, combined with upper layer protocols like the Transmission Control Protocol (TCP). Sliding window ARQ algorithms used in TCP require that all the packets transmitted in a time slot t are buffered for retransmission in subsequent time slots until an acknowledgement (ACK) is received. Thus, storage capacity for a finite number of packets is a must. In our work, we assume that a packet (or multiple packets) arriving in time slot t cannot be buffered for possible transmission in subsequent time slots; allowing no buffering (due to latency requirements of the application) is a constraint imposed by the problem. Obviously, allowing buffering of packets results in a decrease in system energy, but this topic is out of the scope of this work. In our scenario, the packets are dropped intentionally such that the average energy consumption of the links per bit is minimized. We do not consider outages due to channel conditions and assume perfect channel state information (CSI) at the transmitter with perfect rate adaptation. Another different type of adaptation is the adaptation of the source rate according to the dropping probability such that the QoE is not affected by the energy-efficient scheduling. This source rate adaptation is performed at higher layers on a larger time scale (usually only once per device, constant for one session; see, e.g., [10]).
WSNs are another example of such applications, where random dropping of certain successive packets at the sensor nodes may result in inaccurate estimation of the measured data at the fusion node. Similarly, multimedia applications can tolerate loss up to a certain limit, but dropping successive packets affects the QoE severely. Thus, it is not sufficient to guarantee average packet loss alone. To the best of our knowledge, not much literature deals with the problem of energy efficient scheduling which fulfills both the continuity constraint and the average packet loss requirements. We consider a multiuser system where each user experiences an independent fading channel. The fading channels provide us an opportunity to exploit multiuser diversity [11]. The authors in [12] propose the well known Proportional Fairness Scheduling (PFS) algorithm that exploits multiuser diversity to provide high rates to the users with fairness guarantees. In this work, we exploit multiuser diversity to minimize the overall energy expenditure of the system. The optimization problem is to choose the set of packets to be dropped such that the constraints on the average and successive packet dropping are met and the average system energy is minimized by exploiting multiuser diversity. A recent work in [5] addresses a similar energy–performance tradeoff, where the energy minimization problem is considered for packets with hard deadline delay constraints. If a buffered packet is not transmitted by its delay deadline, it has to be dropped. We do not consider delay tolerant data here. A packet has to be
transmitted/dropped immediately after its arrival in the buffer. We are mainly concerned with the packet loss patterns and characteristics. On one side, we would like to drop packets with a certain probability θtar to save energy. At the same time, we are constrained by the hard limit on the maximum number of packets dropped successively to maintain QoE. The joint consideration of both factors gives us useful insight into the characterization of the system energy as a function of these parameters, as discussed in Section IV, which is one of the main contributions of this paper.

The rest of this paper is organized as follows. Section II describes the system model. We discuss the proposed scheduling scheme and its large system analysis in Section III. Special cases for extremely large n and θtar are discussed in Section IV. We treat the practical limitations of the proposed scheme in Section V. We show numerical examples in Section VI and conclude with the main contributions of the work in Section VII.

II. SYSTEM MODEL

We assume the channel model used in [5], [13]. We consider a multiple-access system with K users randomly placed within a certain area. Each user is provided a certain fraction of the total data rate available to the system. Every scheduled user requires an average rate R = Γ/K, where Γ denotes the spectral efficiency of the system. Γ is normalized by the number of channels M to get the spectral efficiency per channel C. We consider a time-slotted system. Arrivals occur at the start of the time slot; scheduling and transmission are completed before the end of the time slot. An uplink scenario is considered. The multi-access channel is described by the input (X_k) and output (Y) relation

Y(t) = Σ_{k=1}^{K} h_k(t) X_k(t) + N(t)    (1)
where N represents additive independent and identically distributed (i.i.d.) complex Gaussian noise with zero mean and unit variance. Each user k experiences a channel gain h_k(t) in slot t. The distribution of h_k(t) differs from user to user. The CSI is assumed to be known at both the transmitter and the receiver. The channel gain h_k(t) is the product of the path loss s_k and the small-scale fading f_k(t), i.e., h_k(t) = s_k f_k(t). Path loss and small-scale fading are assumed to be independent. The path loss is mainly a function of the distance between the transmitter and the receiver and remains constant within the time scale considered in this work. Small-scale fading depends on the scattering environment. It changes from slot to slot for every user and is i.i.d. across both users and slots, but remains constant within each single transmission. This model is often referred to as quasi-static block fading. For a multi-band system of M channels, the small-scale fading of the best channel is represented by

f_k(t) = max{ f_k^(1)(t), f_k^(2)(t), ..., f_k^(M)(t) }.

If only a single user is scheduled per time slot, the continuity constraint cannot be satisfied without defining a violation probability that the continuity constraint cannot be met. When
multiple users have already dropped n packets, only one user gets scheduled while all others have to drop their packets (no buffering allowed), and a nonzero violation probability is required to characterize the scheme. Thus, a continuity constraint with zero violation probability requires us to allow scheduling of multiple users simultaneously in the same time slot. The scheme follows the results for the asymptotic user case analysis and therefore, there is no limit on the number of users scheduled simultaneously [5]. The scheduled users are separated by superposition coding and successive interference cancellation (SIC) at the receiver [14]. Let K_m be the set of users to be scheduled in frequency band m. Let Φ_k^(m) be the permutation of the scheduled user indices for frequency band m that sorts the channel gains in increasing order. Then, the energy of the user Φ_k^(m) with rate R_{Φ_k}^(m) is given by [13], [15]

E_{Φ_k}^(m) = (N0 / h_{Φ_k}^(m)) ( 2^{Σ_{i≤k} R_{Φ_i}^(m)} − 2^{Σ_{i<k} R_{Φ_i}^(m)} )    (2)
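As a concrete illustration of this per-user energy expression, the following sketch computes the transmit energies of users sharing one band under superposition coding with SIC, decoded in order of increasing channel gain. This is a minimal single-band sketch with N0 = 1 by default; the function name and interface are ours, not the paper's.

```python
def sic_energies(h, R, N0=1.0):
    """Per-user transmit energies under superposition coding with SIC,
    a sketch of (2): users are decoded in order of increasing channel gain.

    h: channel gains of the scheduled users on one band.
    R: rates (bits/s/Hz) of those users, aligned with h.
    Returns energies ordered by increasing gain (the decoding order Phi).
    """
    pairs = sorted(zip(h, R))                 # Phi: sort gains increasingly
    energies, cum_rate = [], 0.0
    for gain, rate in pairs:
        prev = cum_rate                       # sum_{i<k} R_{Phi_i}
        cum_rate += rate                      # sum_{i<=k} R_{Phi_i}
        energies.append(N0 * (2.0 ** cum_rate - 2.0 ** prev) / gain)
    return energies
```

For a single user this reduces to the standard Shannon inversion N0(2^R − 1)/h, and the total received energy Σ_k h_k E_k telescopes to N0(2^{ΣR} − 1), which is a quick sanity check on the expression.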
All the forward transitions belong to the events of dropping the packet, and the summation over the corresponding transition probabilities α_{p(p+1)} gives the average dropping probability. The corresponding channel-dependent optimal dropping threshold vector κ can be computed from the optimized α using (4). Following the framework in [5], we evaluate the probability density function (pdf) of the channel gain p_h(x) of the scheduled users using the MDP. As the scheduling decisions are affected by the small-scale fading distribution only, the resulting pdf of the small-scale fading of the scheduled users is given by

p_f(y) = Σ_{p=0}^{n} c_p π_p p_{max{f}}(y) for y > κ_p, and 0 otherwise    (11)

where p_{max{f}}(y) and c_p denote the small-scale fading pdf and a constant to normalize the pdf, respectively. Equation (11) specifies that a packet is scheduled only if y > κ_p. The optimized dropping thresholds obey the ordering

κ_0 ≥ κ_1 ≥ ... ≥ κ_n ≥ 0.    (13)

This ordering holds as the problem is analogous to the classical optimal stopping problems, where it is necessary to have some (increasing or decreasing) order of the thresholds [19]. By contradiction, if the order were reversed in our problem, it would imply that the probability of transmission decreases as the state approaches the continuity constraint parameter, which contradicts our goal of increasing the priority of packet transmission with the states and is clearly suboptimal.
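The relation between the forward transition probabilities, the steady-state probabilities, and the average dropping probability can be sketched numerically. This is a minimal model of the chain implied by the text, in which a drop in state p moves the scheduler to state p + 1 and a transmission returns it to state 0; the helper names are ours, not the paper's.

```python
def steady_state(alpha):
    """Steady-state distribution of the dropping chain (sketch).

    alpha[p] is the forward (drop) transition probability from state p to
    p+1, for p = 0..n-1; in state n a transmission is forced (back to 0).
    Detailed balance into state p > 0 gives pi_p = pi_{p-1} * alpha[p-1];
    we then normalize so the probabilities sum to one.
    """
    pi = [1.0]
    for a in alpha:
        pi.append(pi[-1] * a)
    total = sum(pi)
    return [x / total for x in pi]

def avg_drop_probability(alpha):
    """Average dropping probability, cf. (10): sum of alpha_p * pi_p over
    the forward (drop) transitions."""
    pi = steady_state(alpha)
    return sum(a * p for a, p in zip(alpha, pi))
```

For n = 1 this gives θ_r = α01/(1 + α01); plugging in the optimized α*01 = 0.21 of Example 1 below yields θ_r ≈ 0.174, consistent up to rounding with the θ_lim = 0.18 reported in Table I.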
B. Optimization by Simulated Annealing

Optimization of the thresholds belongs to a class of stochastic optimization problems where computation of the exact solution is quite complex and time consuming, e.g., the Traveling Salesman Problem, the Knapsack Problem, etc. In the literature, many acceptable solutions to such problems have been proposed using algorithms such as genetic algorithms, random search, etc. We employ the simulated annealing (SA) algorithm for optimization of the thresholds. The advantage of this algorithm is that it accepts a solution with a small probability even if it is worse than the already computed best solution. This step is called muting and helps to avoid local minima. Muting depends on a so-called temperature term. At the start of the process, the temperature is very high and muting occurs quite frequently. The temperature decreases as the process progresses, and so does the muting. This is called cooling. There are many cooling schedules used in the literature, e.g., the Boltzmann annealing (BA) [20] and fast annealing (FA) [21] temperature cooling schedules. We employ FA in this work. In FA, it
is sufficient to decrease the temperature linearly in each step b such that

T_b = T0 / (c_sa · b + 1)    (14)

where T0 is a suitable starting temperature and c_sa is a constant adjusted according to the requirements of the problem. We skip the details of the algorithm; the interested reader is referred to [22], [23]. The system energy in (7) is the objective function for each iteration of SA. In our problem, a fixed vector α defines a specific configuration. In every iteration, one of the transition probabilities in the vector α is varied randomly and the average packet dropping constraint in (9) is checked. If the constraint is met, the configuration is evaluated using the pdf of the small-scale fading of the VU and the corresponding energy in (7). If the resulting system energy is less than the previously computed best result, the result is updated. Even if the resulting energy is greater than the previously computed best solution, there is a chance that muting occurs and this configuration is treated as the best solution for the next iterations. However, the result is not updated for the muting step. Thus, muting assumes virtual best solutions and helps to avoid local minima. At the end of a sufficiently long iterative process, we get a solution α* which is believed to be near optimal in most cases. We explore the performance of SA with the help of numerical examples in Section VI.

IV. SPECIAL CASES

We investigate the impact of extreme values of the average dropping probability and continuity constraint parameters on the system energy. The characterization of these cases is one of the main contributions of this paper.

A. Extreme Values of Continuity Constraint Parameter

We consider two trivial cases for the extreme values of the continuity constraint parameter n.

1) n = 0: For this case, the system becomes a lossless system regardless of θtar, and the MDP reduces to a single state with the dropping threshold equal to zero.
2) n = ∞: This implies that there is no limit on the number of successively dropped packets, and the average dropping probability is the only constraint to be fulfilled. This condition simplifies the model, and the MDP reduces to a single-state process where θtar corresponds to the probability of dropping a packet such that

α̃00 = P_f(κ0) = θtar.    (15)

Here α̃00 denotes the probability of dropping a packet in state zero and κ0 is the corresponding dropping threshold. If the small-scale fading is less than κ0, the packet is dropped; otherwise it is scheduled. The scheduler returns to state zero regardless of whether the packet is scheduled or dropped.

B. Extreme Values of Average Dropping Parameter

We specifically treat the cases where the target average dropping probability θtar is relatively large or small compared to the continuity constraint parameter n. We first consider the
case of large θtar compared to the continuity constraint parameter n. To gain intuition for this case, suppose we have no average dropping probability constraint but only the continuity constraint. In this case, SA provides us the unconstrained3 (without the average packet drop constraint in (9)) optimized transition probability vector α*. We evaluate (10) from α* to get the resulting average dropping probability θr*. The θr* value for this special case is termed the limiting average dropping probability and denoted by θlim. For a fixed n, we will not be able to achieve further energy efficiency by dropping more packets for any θtar > θlim. From these arguments we obtain the following lemma.

Lemma 1: For a fixed continuity constraint parameter n, there exists a finite θlim such that for all θtar > θlim, the same maximum energy efficiency is achieved as for the more restrictive θlim.

Proof: Assume that for some θtar > θlim, a larger energy efficiency can be achieved. Then, this θtar achieves better energy efficiency for the unconstrained (without average packet loss constraint) system also. However, this is a contradiction because a constrained system cannot give better performance than an unconstrained one.

Example 1: We explain this result with the help of an example. Let us assume a system with n = 1 and M = 1. The unconstrained optimization gives us an optimized transition probability α*01 = 0.21, and the corresponding value for θlim = θr* is obtained from (10). The threshold κ0 corresponding to α*01 determines the minimum channel conditions where it is better to drop a packet and move to state one. In other words, the cost of transmitting a packet in the current channel conditions is larger than the expected cost of transmitting a packet in state one. Remember that a packet must be transmitted in state one due to the continuity constraint, and there is an associated risk of a forced transmission on a potentially bad channel. Thus, the optimized α*01 corresponds to the optimized threshold which allows the user to take this risk. For this specific example, the intuitive question is why allowing θtar > θlim does not contribute to the energy efficiency when a user could benefit by dropping more packets. This is explained as follows. A larger θtar means that the user can drop more packets; but it can drop only in state zero, as it is prohibited to do so in state one (because n = 1). When a user drops more packets in state zero and moves into state one frequently, she increases the risk that the channel in state one will be poor. The increased probability of forced transmissions on (potentially) bad channels in state one increases the average energy expenditure instead, and the energy efficiency is affected adversely. Therefore, when we optimize the thresholds for θtar > θlim and a fixed n, the optimizer always provides us the set of thresholds obtained for θtar = θlim and rejects all α which, after evaluation, result in θr > θlim. The energy consumption is a function of the continuity parameter n and the average dropping probability constraint θtar, i.e., Eb/N0(θtar, n). A larger n and a lower θtar result in

3 The term "unconstrained" is only a relative one, referring to the absence of any constraint on the average packet drop when n is modeled by the number of states in a finite-state MDP. In the true mathematical sense, the problem is still constrained due to the presence of the continuity constraint.
smaller Eb/N0. However, at small θtar the average dropping probability becomes the more critical constraint as compared to the continuity constraint. The relationship between the target dropping probability θtar and the continuity constraint parameter n is characterized in the following lemma.

Lemma 2: The reduction of the energy consumption Eb/N0 achieved by increasing the continuity constraint parameter n decreases with decreasing target dropping probability θtar and becomes negligible at small θtar.

Proof: We assume n = ∞, i.e., no continuity constraint. As explained in Section III via (13), the transition probability vector α is ordered, which implies

α_{p(p+1)} ≤ α_{(p−1)p}  ∀p > 0.    (16)

The steady-state probability π_p for a state p > 0 depends on the state transition probability α_{(p−1)p} only, while π_0 contains additional transitions from all the states. Without loss of generality, this implies from (16) that

π_p ≤ π_{p−1}  ∀p > 0.    (17)

Combining (16) and (17) yields

α_{p(p+1)} π_p ≤ α_{(p−1)p} π_{p−1}  ∀p > 0.    (18)

For an infinite-size MDP, we deduce from (10) when n → ∞

Σ_{p=0}^{∞} α_{p(p+1)} π_p = θr.    (19)

Note that from (18) and (19), it follows that the term α_{p(p+1)} π_p is monotonically decreasing with p and converges to zero for n → ∞. The nonzero forward transition probability in the product term represents the fact that the packet is dropped in a state p and thus requires n to be larger than p to achieve unconstrained packet dropping. From (18), we know that the product term gets smaller with increasing p. For a small θtar, the first few product terms are very dominant and the higher order terms become negligible very fast, thereby requiring a very small n for unconstrained packet dropping. This proves the diminishing energy gain in Lemma 2 for small θtar and increasing n. This result is motivated by the fact that for a small dropping probability θtar, it is improbable for a user to drop more than one packet successively even if the continuity constraint parameter n allows it. For small average dropping probabilities, dropping more successive packets requires that no (or few) packets can be dropped any more within the time scale over which the averaging is performed.

Example 2: Let us assume an averaging window size of 100 and θtar = 0.02. This implies that a user can drop packets (on average) in 2 time slots out of a window of 100 time slots. For n = 1, the user can drop packets in any two time slots with the worst channels, but they cannot be adjacent. For n = 2, he/she is allowed to drop the packets even if the time slots with the worst channels happen to be adjacent. But the probability of this event is extremely small. If we do not allow n = 2, the negative effect on the system energy is negligible compared to the system with n = 1. A further increase in the value of the parameter n would not improve the system performance.

In the following, we compute a bound to quantify the effect of the continuity constraint parameter on the energy-optimal dropping decisions for a given θtar, as explained in Lemma 2. We denote a sequence of exactly μ successively dropped packets by τμ, and Pr(d ∈ τμ) represents the probability of the event that a packet d is part of the sequence τμ. For an average dropping probability θtar, we assume there is no continuity constraint (n = ∞). Then, the probability Pr(d ∈ τμ) is computed by

Pr(d ∈ τμ) = Pr( (d ∈ τμ) | (d ∈ Q) ) Pr(d ∈ Q)    (20)

where Q is the set of all dropped packets. Similar to (19) for the case n = ∞, it holds that

Σ_{μ=1}^{∞} Pr(d ∈ τμ) = θtar    (21)

as every dropped packet must belong to some sequence. For a finite Nlim, (21) can be split as

Σ_{μ=1}^{Nlim} Pr(d ∈ τμ) + Σ_{μ=Nlim+1}^{∞} Pr(d ∈ τμ) = θtar.    (22)

The second term is not allowed to occur if the continuity constraint parameter n = Nlim is defined. Assume that the second term can be upper bounded by a small constant ε,

Σ_{μ=Nlim+1}^{∞} Pr(d ∈ τμ) ≤ ε.    (23)

Then,

Σ_{μ=1}^{Nlim} Pr(d ∈ τμ) + ε ≥ θtar.    (24)

The factor ε upper bounds the sum of the probabilities of occurrence of the sequences with μ > Nlim. For a small ε and a given θtar, this formulation quantifies that the continuity constraint parameter n must be at least equal to Nlim to achieve continuity-constrained packet dropping with bounded factor ε. A smaller ε results in a larger Nlim. For a given θtar, Pr(d ∈ τμ) ∀μ is computed in closed form in Appendix B, which helps in computing Nlim numerically. Based on ε and n, we can identify two cases. 1) Ideally, for ε = 0 and n = Nlim, the continuity constraint parameter will have no effect on the energy-optimal behavior of the scheduler, and the scheduling decisions will be determined solely by the average packet loss probability. 2) For ε > 0 and n = Nlim, the continuity constraint does affect the scheduling decisions, but the effect is controlled through ε. For a fixed θtar, the larger the ε, the larger the effect of the continuity constraint on the system energy. We explore and justify the claims in Lemma 1 and Lemma 2 with the help of numerical examples in Section VI.
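To make the role of ε and Nlim concrete, the sketch below uses an illustrative i.i.d. dropping model in which each packet is dropped independently with probability q = θtar; under that surrogate, a dropped packet lies in a maximal run of exactly μ drops with probability μ(1 − q)²qᵘ. This stands in for the closed form of Appendix B, which is not reproduced in this excerpt.

```python
def p_run(mu, q):
    """Pr(d in tau_mu) under an illustrative i.i.d. dropping model:
    each packet is dropped independently with probability q = theta_tar,
    and a given packet lies in a maximal run of exactly mu drops with
    probability mu * (1-q)**2 * q**mu (the run must be fenced by two
    non-drops and the packet can sit in any of its mu positions).
    The probabilities sum to q over all mu, matching (21)."""
    return mu * (1.0 - q) ** 2 * q ** mu

def n_lim(q, eps, max_mu=10_000):
    """Smallest N_lim whose tail sum_{mu > N_lim} Pr(d in tau_mu) <= eps,
    cf. (23): subtract run probabilities from the total q until the
    remaining tail drops below eps."""
    tail = q  # total over all mu equals theta_tar = q, as in (21)
    for mu in range(1, max_mu + 1):
        tail -= p_run(mu, q)
        if tail <= eps:
            return mu
    return max_mu
```

Under this surrogate model, q = 0.02 already gives Nlim = 1 for ε = 10⁻³, while q = 0.2 needs Nlim = 5, which matches the qualitative observations drawn from Fig. 3 in Section VI.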
V. DISCUSSION ON PRACTICAL CONSIDERATIONS

In this work, we investigated the problem of joint consideration of energy efficient physical and media access (MAC) layers (we do not, however, discuss protocol-level details explicitly). Although avoiding successive dropping of packets is treated largely at upper layers, while interleaving is another option at the physical layer, these schemes do not consider the underlying physical channel. We provided a theoretical framework to make the radio resource allocation energy efficient by combining this information. We have not treated the overhead associated with the physical–MAC layer interaction, which can increase the processing requirements at the transmitter side. Although we show the decentralized nature of the scheduling decisions courtesy of the decoupling principle, the required transmit power for the allocated rate of the scheduled users needs channel information about the other users. This can be accomplished by having a central entity which orders the channel gains of the scheduled users (for superposition coding) and informs every user about the required transmission power. The proposed framework exploits multiuser diversity as long as the continuity constraint allows the dropping of a packet. However, the main source of energy loss in this scheme is the high energy expenditure on transmission in the nth time slot, where a packet has to be transmitted on a potentially bad channel. Practically, it can be difficult to transmit every packet in the nth state due to the power limitation of the transmitter and the possibility of an extremely poor fading channel. In practice, we cannot guarantee the continuity constraint with probability 1, and we need to allow dropping of a packet in state n in case of poor channel conditions. This results in a violation probability γ that a user does not satisfy the continuity constraint n.

Note that single-user scheduling (instead of multiuser scheduling) would increase the violation probability; this is avoided here by using superposition coding for multiple scheduled users, as explained in Section II. All the thresholds need to be optimized again by taking γ into consideration. It should be noted that the optimized solutions cannot be computed for all combinations of θtar and γ, as some combinations would not be feasible.

VI. NUMERICAL RESULTS

We consider a multi-access channel with M bands and assume statistically independent fading on these channels. Every user senses M channels and selects his/her best channel as a candidate for transmission. The users are placed uniformly at random in a cell, except for a forbidden region of radius δ = 0.01 around the access point. The path loss exponent α equals 2, and the path loss distribution follows the model described in [13]. All the users experience small-scale fading with exponential distribution with mean one on each of the M channels. Table I shows the limiting dropping probability and the associated system energy for fixed n when we perform optimization without the average packet drop constraint in (9). The system parameters are M = 1 and C = 0.5 bits/s/Hz. We show in the following numerical example that energy
TABLE I
UNCONSTRAINED DROPPING PROBABILITY AND SYSTEM ENERGY

n   θr = θlim   Eb/N0
1   0.18        -1.42 dB
2   0.34        -3.05 dB
3   0.45        -4.07 dB
4   0.53        -4.80 dB
[Fig. 2 plot omitted: Eb/N0 [dB] versus θtar for n = 1, 2, 3, ∞, with the small-θtar region and θlim for n = 1 marked.]
Fig. 2. The system energy as a function of average dropping probability and continuity constraint. The system parameters are M = 1 and C = 0.5 bits/s/Hz.
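Curves of this type can be approximated by a direct Monte Carlo sketch of the threshold scheduler for a single user. The setup below is deliberately simplified and not the paper's full multiuser optimization: Exp(1) fading with best-of-M band selection (M = 2), unit path loss, a fixed rate R = 0.5, and an illustrative threshold vector.

```python
import random

def simulate(thresholds, M=2, slots=200_000, seed=7):
    """Monte Carlo sketch of the threshold scheduler for a single user.

    Illustrative assumptions (not the paper's full multiuser setup):
    Exp(1) small-scale fading on M bands with best-band selection, unit
    path loss, fixed rate R = 0.5. In state p a packet is dropped if the
    best-band fading f < thresholds[p] (move to state p+1); otherwise it
    is transmitted (return to state 0). In the last state transmission
    is forced. Returns (empirical drop probability, average energy per
    transmitted packet).
    """
    rng = random.Random(seed)
    R, state, drops, energy, sent = 0.5, 0, 0, 0.0, 0
    n = len(thresholds)                      # states 0..n
    for _ in range(slots):
        f = max(rng.expovariate(1.0) for _ in range(M))
        if state < n and f < thresholds[state]:
            drops += 1                       # drop, move one state forward
            state += 1
        else:
            energy += (2.0 ** R - 1.0) / f   # single-user energy, cf. (2)
            sent += 1
            state = 0
    return drops / slots, energy / sent

theta_r, avg_e = simulate([0.3])             # n = 1, illustrative kappa_0
```

Sweeping the threshold vector trades a larger empirical θr against a lower average energy, which is the qualitative trade-off shown in Fig. 2; forced transmissions in the last state on deep fades are exactly the energy-loss mechanism discussed in Section V.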
efficiency cannot be improved by allowing θtar > θlim for a fixed n. Fig. 2 shows the optimized system energy for different target average dropping probabilities and continuity constraint parameters. We plot the results for the special case n = ∞ for reference. For small θtar, the energy expenditures are almost identical for different values of n, and the effect of the continuity constraint is negligible. This confirms Lemma 2 that the continuity constraint parameter n does not have a big impact on energy efficiency in this region. As θtar increases, an increase in n does contribute to increasing the energy efficiency of the system. Another important observation is the optimized energy at θtar > θlim. From Table I, θlim equals 0.18 for n = 1 and the corresponding energy expenditure is -1.42 dB. We observe in Fig. 2 that the optimization solution provided by SA and the optimized energy remain the same for θtar = 0.2 and θtar = 0.25. This justifies Lemma 1 that allowing θtar > θlim does not benefit the energy efficiency of the system for a fixed n. Therefore, QoS should always be defined by taking both parameters into account. In Fig. 3, we justify the results for Lemma 2 shown in Fig. 2. We plot Pr(d ∈ τμ) for different values of θtar. There is no continuity constraint (n = ∞). For a small θtar ≤ 0.05, we observe that Nlim = 1 is sufficient for any small ε (even zero), as Σ_{μ=2}^{∞} Pr(d ∈ τμ) → 0. For θtar > 0.2, Pr(d ∈ τ5) is already extremely small, and for a small ε > 0, Nlim can be taken as 5. We observe the effects of Pr(d ∈ τμ) on the system energy in Fig. 2; the behavior is quite analogous. For small θtar, n > 1 has a negligible effect on the system energy as Σ_{μ=2}^{∞} Pr(d ∈ τμ) is extremely small and allowing
n > 1 does not contribute much to the energy efficiency. At large θtar, the system energy is affected significantly by the value of n, as ∑_{μ=2}^{∞} Pr(d ∈ τμ) is sufficiently large.

[Fig. 3 plot: Pr(d ∈ τμ) versus θtar, curves for μ = 1, …, 5.]

Fig. 3. Pr(d ∈ τμ) is computed as a function of θtar and μ when n = ∞. Note that ∑_{μ=1}^{5} Pr(d ∈ τμ) ≈ θtar for every θtar.

Fig. 4 quantifies the effect of the continuity parameter n through the bounding factor ε and the resulting Nlim. For smaller values of ε, Nlim is larger; this implies that it is important to allow a large n to achieve maximum energy efficiency. When ε increases, Nlim decreases: the scheduling decisions are severely affected by the continuity constraint parameter and the energy efficiency decreases. Thus, ε works as a tradeoff parameter that qualitatively controls the effect of the continuity constraint on the energy efficiency of the scheduler. The results confirm those in Fig. 3. For example, for θtar = 0.25 and ε = 0.05, Nlim equals 2 in Fig. 4, which is confirmed by ∑_{μ=3}^{∞} Pr(d ∈ τμ) < 0.05 in Fig. 3.

[Fig. 4 plot: Nlim versus ε, curves for different values of θtar.]

Fig. 4. The minimum value for the continuity constraint parameter n as a function of different ε constants, quantifying the impact of the continuity constraint on the system energy.

As we explained in Section III-B, SA is believed to provide a solution that is acceptable in most situations. The SA algorithm uses the FA temperature schedule, where we simulate 100 temperature values. At each temperature, we evaluate 10^4 random configurations of every transition probability in α. For θtar < θlim, the system is likely to be more energy efficient if the θr* of the solution closely approaches θtar, so as to benefit from dropping more packets. This does not imply that every configuration resulting in a larger θr is more energy efficient; some such configurations are not energy optimal and are rejected by the optimizer. Therefore, we define a relative measure of accuracy for the computed solution in terms of θr, denoted by Δ and expressed as

Δ = 1 − θr / θtar.    (25)

Δ is expressed as a percentage.

[Fig. 5 plot: Δ [%] versus n, curves for θtar = 0.05, 0.1, 0.2.]

Fig. 5. The accuracy measure Δ as a function of average dropping probability and continuity constraint. The system parameters are M = 1 and C = 0.5 bits/s/Hz.

We observe in Fig. 5 that Δ is quite small in all cases and the results are acceptable. However, more random configurations (transition probabilities) are involved in the combinatorial optimization at large n, which increases the complexity of achieving the required accuracy. For example, Δ is about 13 percent for n = 4 and θtar = 0.1. When we increase the temperature iterations from 100 to 200, Δ reduces to 4 percent and the optimal system energy decreases correspondingly.

The proposed scheme is derived and analyzed in the large-user limit, where it always holds. However, it is important to show the convergence of the scheme for a finite number of users in order to claim that it is practical. If the number of users in the system is not large enough, the scheduled sum rate varies greatly from slot to slot and the system energy does not converge to the average system energy. Moreover, the decoupling principle for the users does not hold for a small number of users, as explained in Section III. Fig. 6 illustrates the accuracy of the large-user approximation using Monte Carlo simulations. For a fixed number of users and a fixed path loss, we compute the variance of the system energy over 200 independent small-scale fading realizations. The energy requirement for every scheduled user is computed by (2). Fig. 6 shows that the variance is quite small for a few hundred users and decreases further as the number of users increases. The convergence is faster for small spectral efficiencies. For large spectral efficiency values, it requires more scheduled
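The SA search with the fast-annealing (FA) temperature schedule, together with the accuracy measure Δ of (25), can be sketched as follows. This is a generic illustration, not the authors' implementation: the energy function, neighbor move, and all parameter values below are placeholder assumptions.

```python
import math
import random

def fast_annealing(energy, neighbor, x0, t0=1.0, n_temps=100, n_configs=100, seed=0):
    """Minimize `energy` by simulated annealing with the fast-annealing
    (Cauchy) schedule T_k = T0 / (1 + k)."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    for k in range(n_temps):                      # temperature values
        t = t0 / (1.0 + k)                        # FA temperature schedule
        for _ in range(n_configs):                # random configurations per temperature
            cand = neighbor(x, rng)
            e_cand = energy(cand)
            # always accept improvements; accept worse moves with Boltzmann probability
            if e_cand < e or rng.random() < math.exp(-(e_cand - e) / t):
                x, e = cand, e_cand
                if e < best_e:
                    best_x, best_e = x, e
    return best_x, best_e

def accuracy_delta(theta_r, theta_tar):
    """Accuracy measure of Eq. (25), Delta = 1 - theta_r / theta_tar, in percent."""
    return 100.0 * (1.0 - theta_r / theta_tar)

# Toy usage: minimize a convex stand-in "energy" over a scalar parameter.
x_opt, e_opt = fast_annealing(lambda v: (v - 2.0) ** 2,
                              lambda v, rng: v + rng.gauss(0.0, 0.5),
                              x0=10.0)
```

With 100 temperatures and 100 configurations per temperature, the annealer reliably lands near the minimizer of this toy objective; raising either count improves the achievable accuracy at the cost of complexity, mirroring the behavior reported for Fig. 5.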
[Fig. 6 plot: variance of the system energy, Var[(Eb/N0)] (×10⁻⁵), versus the number of users, for C = 0.5 and C = 1 bits/s/Hz.]
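The convergence experiment behind Fig. 6 can be imitated with a small Monte Carlo sketch. The per-slot energy model below is a deliberately simplified stand-in and is not Eq. (2) of the paper: i.i.d. exponential channel gains, the better half of the users scheduled, and an inverse-gain energy cost scaled by 2^C − 1. Only the qualitative effect matters: the variance of the system energy shrinks as the number of users grows.

```python
import random
import statistics

def system_energy(n_users, c_bits, rng):
    """One fading realization of a toy system energy: schedule the better half
    of the users and average an inverse-channel-gain cost (stand-in model)."""
    gains = sorted((rng.expovariate(1.0) for _ in range(n_users)), reverse=True)
    scheduled = gains[: n_users // 2]           # opportunistic: better half only
    rate_term = 2.0 ** c_bits - 1.0             # energy cost per unit gain at rate C
    return sum(rate_term / g for g in scheduled) / len(scheduled)

def energy_variance(n_users, c_bits=0.5, n_realizations=200, seed=1):
    """Variance of the system energy over independent small-scale fading draws."""
    rng = random.Random(seed)
    samples = [system_energy(n_users, c_bits, rng) for _ in range(n_realizations)]
    return statistics.pvariance(samples)
```

For instance, `energy_variance(1000)` falls far below `energy_variance(10)`, matching the convergence behavior reported in Fig. 6.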
Fs(x) =
  0,                                x < 1,
  1 − (x^(−2/α) − δ²) / (1 − δ²),   1 ≤ x ≤ δ^(−α),
  1,                                x > δ^(−α),
following the same line of arguments.
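The piecewise distribution above can be checked numerically. A minimal sketch, assuming the annulus geometry implied by the expression (user distance d uniform in area between inner radius δ and outer radius 1, path loss s = d^(−α)) and case boundaries x = 1 and x = δ^(−α) obtained by continuity:

```python
def path_loss_cdf(x, alpha=2.0, delta=0.1):
    """CDF F_s(x) of the path loss s = d**(-alpha), with the user distance d
    uniform in area over an annulus of radii delta < d < 1 (assumed geometry)."""
    if x < 1.0:
        return 0.0
    if x > delta ** (-alpha):
        return 1.0
    return 1.0 - (x ** (-2.0 / alpha) - delta ** 2) / (1.0 - delta ** 2)
```

The function equals 0 at x = 1, equals 1 at x = δ^(−α), and increases in between, as a CDF must.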
M. Majid Butt (S'07-M'10) received his Ph.D. in wireless communications from the Norwegian University of Science and Technology (NTNU), Trondheim, Norway, in 2011, and his M.Sc. in digital communications from Christian Albrechts University (CAU), Kiel, Germany, in 2005. He received his B.Sc. in electrical engineering from the University of Engineering and Technology (UET), Lahore, Pakistan, in 2002. He is working as a post-doctoral researcher at Qatar University. He was a faculty member at the National University of Computer and Emerging Sciences, Lahore, Pakistan, in 2006. Dr. Majid was awarded the Alain Bensoussan post-doctoral fellowship from the European Research Consortium for Informatics and Mathematics (ERCIM) in 2011. He held ERCIM post-doctoral fellow positions at the Fraunhofer Heinrich Hertz Institute (HHI), Berlin, Germany, and the Interdisciplinary Center for Research in Security, Reliability and Trust (SnT) at the University of Luxembourg. Dr.
Majid's areas of interest include radio resource allocation schemes, cross-layer design, cooperative communication, cognitive radio, and energy-efficient communication techniques.

Eduard A. Jorswieck was born in 1975 in Berlin, Germany. He received his Diplom-Ingenieur (M.S.) and Doktor-Ingenieur (Ph.D.) degrees, both in electrical engineering and computer science, from the Technische Universität Berlin, Germany, in 2000 and 2004, respectively. He was with the Fraunhofer Institute for Telecommunications, Heinrich-Hertz-Institut (HHI), Berlin, in the Broadband Mobile Communication Networks Department from December 2000 to January 2008. Since April 2005 he has been a lecturer at the Technische Universität Berlin. In February 2006, he joined the Department of Signals, Sensors and Systems at the Royal Institute of Technology (KTH) as a post-doc and became an Assistant Professor in 2007. Since February 2008, he has been the head of the Chair of Communications Theory and a Full Professor at Dresden University of Technology (TUD), Germany. Eduard's main research interests are in the areas of signal processing for communications and networks, applied information theory, and communications theory. He has published more than 55 journal papers and some 170 conference papers on these topics. Dr. Jorswieck is a Senior Member of the IEEE. He is a member of the IEEE SPCOM Technical Committee (2008-2013). Since 2011, he has served as an Associate Editor for the IEEE Transactions on Signal Processing. From 2008 until 2011, he served as an Associate Editor for IEEE Signal Processing Letters, and since 2012 he has been a Senior Associate Editor for IEEE Signal Processing Letters. Since 2013, he has served as an Associate Editor for the IEEE Transactions on Wireless Communications. In 2006, he received the IEEE Signal Processing Society Best Paper Award.