Cellular Radio Channel Assignment using a Modified Hopfield Network
Jae-Soo Kim1, Sahng H. Park2, Patrick W. Dowd3 and Nasser M. Nasrabadi3
Abstract
The channel assignment problem is important in mobile telephone communication. Since the usable range of the frequency spectrum is limited, optimal channel assignment has become increasingly important. In this paper, a new channel assignment algorithm using a modified Hopfield neural network is proposed. The channel assignment problem is formulated as an energy minimization problem that is implemented by a modified discrete Hopfield network, and a new technique to escape local minima is introduced. In this algorithm, an energy function is derived and the appropriate interconnection weights between the neurons are specified. The interconnection weights are designed in such a way that each neuron receives inhibitory support if the constraint conditions are violated and excitatory support if they are satisfied. To escape local minima, if the number of assigned channels is less than the required number of channels, one or more channels are assigned in addition to the already assigned channels, so that the total number of assigned channels equals the required number of channels in the cell even though the energy is increased. Various initialization and updating techniques that exploit specific characteristics of the frequency assignment problem in cellular radio networks, such as the co-site constraint, adjacent channel constraint, and co-channel constraint, are investigated. In the previously proposed neural network approach, some frequencies are fixed to accelerate the convergence time; in our algorithm, no frequency is fixed before the frequency assignment procedure. This new algorithm, together with the proposed initialization and updating techniques and without fixing frequencies in any cell, yields better performance than the previously reported results that utilize fixed frequencies in certain cells.
Index Terms: cellular mobile network, channel assignment, wireless communication, neural network.
This paper was presented in part at the 45th IEEE Vehicular Technology Conference (VTC'95), Rosemont, IL, July 26-28, 1995.
1 Jae-Soo Kim is with Wireless Design Center, Lucent Technologies, Whippany, NJ 07981
2 Sahng H. Park is with School of Electronics and Information Engineering, Andong National University, Andong, Korea
3 Patrick W. Dowd and Nasser M. Nasrabadi are with Department of Electrical and Computer Engineering, State University of New York at Buffalo, Buffalo, NY 14260
1 Introduction
There is an increasing demand for mobile telephone communication, but the usable range of the frequency spectrum is very limited. The channel assignment problem discussed in this paper is that of assigning a number of required channels to each cell. Optimal frequency channel assignment is an increasingly important problem in order to use the available frequency spectrum with optimum efficiency. This problem is often solved by a graph coloring algorithm [1]. Various algorithms [1-8] have been proposed to solve the channel assignment problem. Most of these algorithms are based on sequential assignment in order of the degree of difficulty of the assignment; for example, the order of cells for channel assignment is decided according to the descending order of the required channel numbers of the cells. They have to satisfy the required interference constraints and are dependent on the sequence of the channel assignment. Recent works [9-12] have explored the feasibility of applying neural network techniques to the channel assignment problem. Kunz [9] used the continuous Hopfield network where the output of each neuron V_i was a fixed function f of the internal state u_i, i.e., V_i = f(u_i), where f(x) := (1/2)(1 + tanh(λx)). Kunz's neural network model required a large number of iterations to reach the final solution, and there were also difficulties in finding the proper values for the gain λ and the parameters in the interconnection weights and the energy function. Kunz considered co-channel and co-site interference in his neural network model [9]. He also investigated a practical channel assignment scheme with three constraints, including adjacent channel interference, in [10]. Funabiki and Takefuji [11] proposed a neural network parallel algorithm for the channel assignment problem. All input values are sequentially updated while all output values are fixed; then all output values are sequentially updated while all input values are fixed. Their neural network model is composed of hysteresis McCulloch-Pitts neurons. In the Funabiki and Takefuji model, four heuristics were used to improve the convergence rate of channel assignment. They also fixed the channel assignment in one or more cells in order to accelerate the convergence time. The results were favorable in some cases but not in others. Chan et al. [12] used a feedforward neural network which had a learning process prior to the actual channel assignment. For the learning
CSC: co-site constraint
CCC: co-channel constraint
ACC: adjacent channel constraint
n: number of cells in the mobile network
m: number of available channels in the mobile network
R: required channel number (RCN) matrix, R = (r_i)
r_i: required number of channels for cell i, 1 ≤ i ≤ n
r_max: maximum entry in the required channel matrix R
TF_min: minimum required number of total frequencies
C: compatibility matrix, C = (c_ij)
c_ij: minimum frequency separation between frequencies in cells i and j, 1 ≤ i, j ≤ n
f_ik: frequency assigned to the kth call in cell i, 1 ≤ i ≤ n, 1 ≤ k ≤ r_i
d_csc: minimal distance between frequencies in the same cell for CSC
BW: block width, i.e., the number of frequencies in a block
LB: lower bound on the number of required frequencies
B: total number of blocks in the frequency domain
Crmax: cell with the maximum required channel number r_max
CNB: channel number in the block
THD: threshold for the output of the neuron
Table 1: Notations used in the paper.
process they used training data that was obtained by other assignment methods; the performance of their algorithm depends entirely on the training data used. Also, in [12] only the co-channel constraint was considered. In this paper, a modified discrete Hopfield neural network algorithm for the channel assignment problem is proposed in order to improve the convergence rate and to reduce the number of iterations. Our experimental results are also compared with those of Funabiki and Takefuji [11]. The channel assignment problem is formulated as an energy minimization problem such that the energy is at its minimum when all the constraints are satisfied and the number of assigned frequencies equals the required channel numbers (RCNs) in each cell. Three constraints are considered in this paper, as in references [3] and [11]:
1. co-site constraint (CSC): any pair of frequencies assigned to a cell should have a minimal distance between them;
2. co-channel constraint (CCC): for certain pairs of radio cells, the same frequency cannot be used simultaneously;
3. adjacent channel constraint (ACC): adjacent frequencies in the frequency domain cannot be assigned to adjacent radio cells simultaneously.
The goal of the channel assignment problem in a cellular radio network is the assignment of the RCN to each radio cell such that the above constraints are satisfied. Gamst [3] defined the compatibility matrix C = (c_ij), which is an n × n symmetric matrix where n is the number of cells in the mobile network, and c_ij is the minimum frequency separation between a frequency in cell
i and one in cell j. For example, c_ij = 0 indicates that the same frequency can be used in cells i and j. CCC and ACC can be represented in matrix C by c_ij = 1 and c_ij = 2, respectively; CSC is represented by c_ii. The number of channels needed for each cell i is given by the RCN matrix R = (r_i), where 1 ≤ i ≤ n and n is the number of cells in the network. Let f_ik denote the frequency assigned to the kth call in cell i, where 1 ≤ i ≤ n and 1 ≤ k ≤ r_i. The condition imposed by the compatibility constraints between f_ik and f_jl is given by

|f_ik - f_jl| ≥ c_ij    (1)

where 1 ≤ i, j ≤ n, 1 ≤ k ≤ r_i and 1 ≤ l ≤ r_j. The channel assignment problem is to find f_ik's that satisfy the constraint conditions when the number of cells n in the mobile network is given along with the compatibility matrix C and the RCN matrix R. The terms channel and frequency will be used interchangeably throughout this paper.

In the channel assignment problem, an energy function is derived which represents the constraints that should be satisfied in order to find the best assignment. The appropriate interconnection weights are designed such that the constraints of the channel assignment problem are expressed in terms of inhibitory connections between neurons. A 2-D discrete Hopfield network is implemented to minimize the energy function. The state of each neuron in the Hopfield network represents the possibility of assigning a certain channel to a cell. To escape local minima during the solution search, the assigned channel numbers (ACNs) in a cell are checked, and if the ACN is less than the RCN then a forced frequency assignment method is used. The forced frequency assignment method is an additional assignment of one or more channels, even though the energy is increased by the additional assignment, since the extra assignment violates the constraints. This simple technique dramatically increases the convergence rate of assignment, since most of the local minima are due to a lack of assigned frequencies satisfying all three constraints for each cell. Various initialization and updating techniques which use the specific characteristics of the channel assignment problem (e.g., CSC, CCC and ACC) in cellular radio networks are investigated to reduce the number of iterations and to increase the convergence rate of channel assignment. The proposed frequency assignment technique uses a modified discrete Hopfield network to achieve a solution that simultaneously satisfies all the constraints by using interconnection weights and external inputs. The algorithm presented in this paper does not fix the frequency of any cell before the channel assignment procedure. The algorithm is simple to implement in VLSI since it requires only simple functions such as adders, inverters and comparators; it also has a modular structure, since each neuron can be constructed as a module. The paper is organized as follows: Section 2 describes the modified discrete Hopfield neural network, which is developed to increase the convergence rate and to decrease the number of iterations for the channel assignment problem. In Section 3, our algorithm is applied to the channel assignment problem, and initialization and updating techniques for the network are explored. Section 4 presents and discusses the simulation results.
2 Modi ed Hop eld Network for Channel Assignment Each processing element (neuron) is fully interconnected in the Hop eld network. The ith neuron is described by its state which is denoted by Vi. Each neuron has two possible states. The value of each state is determined by the total input from other neurons followed by a thresholding operation. The input of the ith neuron is derived from two sources, the outputs of other neurons scaled by the connection weights and an appropriate external input. The total input to neuron i is denoted
by S_i:

S_i = Σ_j T_ij V_j + I_i    (2)

where T_ij is the connection weight from neuron j to neuron i and I_i is the external input. Each neuron updates its own state according to a thresholding rule with threshold U_i, as shown in Eq. (3):

V_i = 1 if S_i > U_i, and V_i = 0 otherwise    (3)
The thresholding rule can be applied asynchronously (in series) or synchronously (in parallel). In the asynchronous mode the rule is applied sequentially to each neuron, and the state of each neuron is updated individually. In the synchronous mode the thresholding operation is applied simultaneously to every neuron, and the states of all neurons are updated at the same time. The updating operation is terminated when the states are unchanged or the energy has reached a minimum value. The energy function E is defined as in [13,14]:

E = -(1/2) Σ_i Σ_j T_ij V_i V_j - Σ_i V_i I_i    (4)
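Equations (2)-(4) translate into a small asynchronous update loop. The sketch below is ours, with a generic weight matrix T, bias vector I and threshold U_i = 0 chosen only for illustration:

```python
import numpy as np

def energy(T, I, V):
    # E = -1/2 * sum_ij T_ij V_i V_j - sum_i V_i I_i   (Eq. 4)
    return -0.5 * V @ T @ V - V @ I

def async_update(T, I, V, U=0.0, sweeps=10):
    """Apply the threshold rule of Eq. (3) neuron by neuron
    until no state changes (or the sweep budget runs out)."""
    V = V.copy()
    for _ in range(sweeps):
        changed = False
        for i in range(len(V)):
            S = T[i] @ V + I[i]          # Eq. (2)
            new = 1.0 if S > U else 0.0  # Eq. (3)
            if new != V[i]:
                V[i], changed = new, True
        if not changed:
            break
    return V

# Two mutually inhibiting neurons with a positive bias: a stable
# state keeps exactly one of them active, and the energy decreases.
T = np.array([[0., -2.], [-2., 0.]])
I = np.array([1., 1.])
V = async_update(T, I, np.array([1., 1.]))
```

Starting from [1, 1] (energy 0), the loop settles on [0, 1] with energy -1, illustrating the monotone energy descent described in the text.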
This energy function is minimized by the Hopfield updating procedure. Successive application of the updating procedure forces the network to converge such that the energy of the network becomes smaller during updating. When the network reaches a stable state, it has fallen into a minimum energy state, which could be a local or a global minimum. T_ij and I_i should be set appropriately for the application so that E represents the function to be minimized in the combinatorial optimization problem; the energy function should represent all the constraints of the problem. In our algorithm, a discrete Hopfield network is implemented which is updated until the energy is equal to zero or a predefined maximum iteration number has been reached.

In the simulation we considered a mobile radio network that has n cells (or base stations) and m available frequency channels. The value of m is set to the lower bound LB on the required number of channels for a given channel assignment problem. Fig. 1 shows a two-dimensional Hopfield network for the channel assignment problem. Each cell i can carry any r_i of the m frequencies, provided that carrying those frequencies does not violate the imposed constraints of the channel assignment problem. The value of the processing unit V_ij, 1 ≤ i ≤ n and 1 ≤ j ≤ m, indicates whether frequency j is assigned to cell i: V_ij = 1 means that frequency j is assigned to cell i, and V_ij = 0 means that it is not. In Fig. 1, i and p represent cell numbers, and j and q indicate frequency numbers. The state of the current processing neuron to be updated is represented by V_ij.

An energy function is derived to represent the three constraints of the channel assignment problem. In CSC, a frequency f_iq cannot be assigned to cell i if the distance between f_iq and any assigned frequency f_ij, 1 ≤ j ≤ m, is less than d_csc (the minimum frequency distance for CSC). The energy function for cell i (for CSC) can be defined as:

E_CSC^i = Σ_{j=1}^{m} Σ_{q=1}^{m} V_ij V_iq X_ijq    (5)

where

X_ijq = 1 if q ≠ j and j - (d_csc - 1) ≤ q ≤ j + (d_csc - 1), and X_ijq = 0 otherwise    (6)
For both ACC and CCC, the frequency f_ij cannot be assigned to cell i if the distance between f_ij and any other assigned frequency f_pq is less than c_ip (the minimum frequency separation between frequencies in cell i and cell p). The energy function of the ith cell for ACC and CCC can be defined as

E_(ACC,CCC)^i = Σ_{j=1}^{m} Σ_{p=1}^{n} Σ_{q=1}^{m} V_ij V_pq Y_ijpq    (7)

where

Y_ijpq = 1 if p ≠ i, c_ip > 0 and j - (c_ip - 1) ≤ q ≤ j + (c_ip - 1), and Y_ijpq = 0 otherwise    (8)
In addition to the three constraint conditions, the total ACN in the ith cell must be the same as the RCN for cell i. The energy function for the ACN can be defined as

E_ACN^i = (r_i - Σ_{j=1}^{m} V_ij)^2    (9)
From Eqs. (5), (7), and (9), the energy function for the ith cell can be defined as

E^i = (r_i - Σ_{j=1}^{m} V_ij)^2 + Σ_{j=1}^{m} Σ_{q=1}^{m} V_ij V_iq X_ijq + Σ_{j=1}^{m} Σ_{p=1}^{n} Σ_{q=1}^{m} V_ij V_pq Y_ijpq    (10)
The total energy function for the channel assignment problem is as follows:

E = Σ_{i=1}^{n} ( (r_i - Σ_{j=1}^{m} V_ij)^2 + Σ_{j=1}^{m} Σ_{q=1}^{m} V_ij V_iq X_ijq + Σ_{j=1}^{m} Σ_{p=1}^{n} Σ_{q=1}^{m} V_ij V_pq Y_ijpq )    (11)
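The total energy of Eq. (11) can be evaluated directly from an assignment matrix V. The helper below is a sketch with our own names, implementing the X and Y conditions of Eqs. (6) and (8); the toy problem data are assumptions for illustration:

```python
def assignment_energy(V, C, r, d_csc):
    """E of Eq. (11): traffic-demand term + CSC term + CCC/ACC term.
    V[i][j] = 1 iff frequency j is assigned to cell i."""
    n, m = len(V), len(V[0])
    E = 0
    for i in range(n):
        acn = sum(V[i])
        E += (r[i] - acn) ** 2                        # Eq. (9)
        for j in range(m):
            for q in range(m):                        # CSC, Eqs. (5)-(6)
                if j != q and abs(j - q) <= d_csc - 1:
                    E += V[i][j] * V[i][q]
            for p in range(n):                        # CCC/ACC, Eqs. (7)-(8)
                if p == i:
                    continue
                for q in range(m):
                    if abs(j - q) <= C[i][p] - 1:
                        E += V[i][j] * V[p][q]
    return E

# Toy check: 2 cells, m = 5 channels, d_csc = 2, c_12 = 1 (CCC), r = [2, 1].
C = [[2, 1], [1, 2]]
V = [[1, 0, 1, 0, 0],   # cell 1 uses channels 0 and 2
     [0, 0, 0, 1, 0]]   # cell 2 uses channel 3
print(assignment_energy(V, C, r=[2, 1], d_csc=2))  # 0: all constraints met
```

A zero value corresponds to a valid assignment that meets the traffic demand; any violated constraint contributes a positive term.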
The channel assignment problem is now formulated as an energy minimization problem. This energy function can be minimized by an equivalent Hopfield neural network with the appropriate interconnection weights and external inputs. The interconnection weights must represent the constraints of the optimization problem, namely the RCN for each cell and the three constraints of the channel assignment problem. Each of the constraints is invoked by inhibitory and excitatory support. If neuron V_ij takes a value of unity, which means that frequency j can be used at cell i, then any neuron V_iq within the interference range must be inhibited by the CSC condition. This constraint can be specified in the form

T_CSC = -δ_ip δ_jq(d_csc)    (12)

where δ is the Kronecker delta function and δ_ij(x) is its parameterized form, defined as follows:

δ_ij = 1 if i = j, and δ_ij = 0 otherwise    (13)

δ_ij(x) = 1 if |i - j| < x, and δ_ij(x) = 0 otherwise    (14)
In both ACC and CCC, if neuron V_ij takes a value of unity, then a neuron V_pq within the interference range must be inhibited. This constraint can be specified in the form:

T_(ACC,CCC) = -(1 - δ_ip) δ_jq(c_ip)    (15)
To make the ACN the same as the RCN, inhibitory support proportional to the number of assigned frequencies must be fed to the neuron. If the RCN is less than or equal to the ACN, additional frequencies cannot be assigned to the cell. This constraint can be expressed as

T_ACN = -δ_ip (1 - δ_jq)    (16)

Eq. (16) shows that self-inhibition is not allowed. The purpose of T_ACN is to give inhibitory support to the neurons in the same cell in order to avoid the case ACN > RCN. From Eqs. (12), (15), and (16), the total interconnection weight is

T_ijpq = -δ_ip δ_jq(d_csc) - (1 - δ_ip) δ_jq(c_ip) - δ_ip (1 - δ_jq)    (17)

The interconnection weight T_ijpq between V_ij and V_pq is symmetric, i.e., T_ijpq = T_pqij for 1 ≤ i, p ≤ n and 1 ≤ j, q ≤ m. Self-feedback is not allowed, i.e., T_ijij = 0.
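Since Eq. (17) gives the weights in closed form, the full tensor can be precomputed once before updating. A sketch (names ours; the parameterized delta follows Eq. (14), and the toy dimensions are an assumption):

```python
import numpy as np

def delta(a, b):
    return 1 if a == b else 0

def delta_x(a, b, x):
    # parameterized delta of Eq. (14): 1 if |a - b| < x
    return 1 if abs(a - b) < x else 0

def weights(n, m, C, d_csc):
    """T[i][j][p][q] of Eq. (17); self-feedback T_ijij = 0."""
    T = np.zeros((n, m, n, m))
    for i in range(n):
        for j in range(m):
            for p in range(n):
                for q in range(m):
                    if (i, j) == (p, q):
                        continue  # no self-feedback
                    T[i, j, p, q] = (
                        -delta(i, p) * delta_x(j, q, d_csc)            # CSC
                        - (1 - delta(i, p)) * delta_x(j, q, C[i][p])   # CCC/ACC
                        - delta(i, p) * (1 - delta(j, q)))             # ACN
    return T

T = weights(n=2, m=4, C=[[2, 1], [1, 2]], d_csc=2)
# Symmetry check per the text: T_ijpq = T_pqij
assert np.allclose(T, T.transpose(2, 3, 0, 1))
```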
The external input I_i is defined as

I_i = r_i - 1    (18)

The external input I_i gives excitatory support to the neurons in the same cell so that they satisfy the traffic demand constraint. If a frequency assignment satisfies all three constraints (CSC, CCC and ACC) for a cell and the ACN is less than the RCN, then that cell must receive excitatory support to reinforce the assignment. The summation of the inputs from all neurons to the currently updating neuron is -(r_i - 1), which should be compensated by an excitatory bias.
The input to each neuron (i, j) in the original Hopfield neural net is defined as

S_ij = Σ_{p=1}^{n} Σ_{q=1}^{m} T_ijpq V_pq + I_i    (19)
When a neuron receives an input, only frequencies that satisfy the three channel assignment constraints are selected as usable frequencies for the cell. A local minimum can be reached for the channel assignment given in Eq. (19) by using the Hopfield updating procedure. If all the assigned frequencies satisfy the three channel assignment constraints, but one or more cells have fewer channels than the RCN, the frequency assignment will never change and the energy value cannot reach the global minimum even if more iterations are performed. To prevent this from occurring, we incorporate a forced assignment method which allows a frequency to be assigned to a cell by an additional excitatory input even though the channel assignment constraints are violated and the energy is increased. The forced assigned frequency can change the frequency assignments of other cells too. The algorithm will then search another part of the solution space, attempting to reach the global minimum when the currently assigned channels do not satisfy the traffic demand constraint. The algorithm compares the ACN, i.e., Σ_{q=1}^{m} V_iq, with the value of the RCN r_i. The difference between the RCN and the ACN is used as an additional excitatory input, which is given by:

IE_ij = r_i - Σ_{q=1}^{m} V_iq    (20)
The values of T_ACN and I_i are constant for the neurons in the same cell. However, the value of IE_ij varies with the current neuron states. In IE_ij, the difference (RCN - ACN) is fed into the neurons in cell i; to count the ACN, V_ij is included in the ACN of Eq. (20). Examples of the forced assignment are shown in Fig. 2. Fig. 2 (a) shows the case of RCN = 4 with 10 available channels. Three channels are currently assigned, indicated by the dark slots; then IE_ij = RCN - ACN = 4 - 3 = 1, and every neuron is excited by IE_ij in order to obtain more channels. In Fig. 2 (b), four channels are assigned; then IE_ij = RCN - ACN = 4 - 4 = 0, and there is no excitatory support for any neuron.
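The Fig. 2 numbers can be reproduced by evaluating Eq. (20) directly (the function name is ours):

```python
def extra_excitation(V_row, r_i):
    # IE_ij = r_i - sum_q V_iq  (Eq. 20)
    return r_i - sum(V_row)

# Fig. 2 (a): RCN = 4, 10 channels, 3 currently assigned -> IE = 1
print(extra_excitation([1, 0, 1, 0, 0, 0, 1, 0, 0, 0], 4))  # 1
# Fig. 2 (b): 4 assigned -> IE = 0, no extra excitation
print(extra_excitation([1, 0, 1, 0, 1, 0, 1, 0, 0, 0], 4))  # 0
```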
It is possible that the forced reassignment fails, i.e., ACN < RCN, although the additional term IE_ij is introduced. If ACN < RCN even after the forced reassignment is applied, the assignment of some calls has failed, causing those calls to be dropped. The number of callers who could not obtain channels, and whose calls are therefore dropped, is (RCN - ACN). This case is considered a non-convergence case. On the other hand, when all calls have assigned frequencies without any call dropping, it is considered a convergence case. In the simulation results, the convergence rate is the fraction of runs in which all calls obtain assigned frequencies, without any call dropping, before the maximum number of iterations is reached, over 100 simulation runs. The input to each neuron of the modified network is defined as
S_ij = Σ_{p=1}^{n} Σ_{q=1}^{m} T_ijpq V_pq + I_i + IE_ij    (21)
The schematic description of the input to a neuron is shown in Fig. 3. The output function consists of a threshold operation. The final output state of the neuron is

V_ij = f_out(S_ij)    (22)

where

f_out(x) = 1 if x > THD, and f_out(x) = 0 otherwise    (23)

and THD is the threshold value; THD = 0 in this paper.
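Putting Eqs. (20)-(23) together, the update of a single neuron (i, j) of the modified network can be sketched as follows (names are ours; the tiny ACN-only weight tensor in the example is an illustrative assumption):

```python
import numpy as np

def update_neuron(V, T, I, i, j, r, THD=0.0):
    """New state of neuron (i, j): the threshold of Eq. (23) applied to
    the input of Eq. (21), including the forced-assignment term IE_ij."""
    IE = r[i] - V[i].sum()                    # Eq. (20)
    S = np.tensordot(T[i, j], V) + I[i] + IE  # Eq. (21)
    return 1.0 if S > THD else 0.0            # Eqs. (22)-(23)

n, m = 1, 3
# Illustrative T with only the ACN term of Eq. (16): inhibit the
# other neurons of the same cell.
T = np.zeros((n, m, n, m))
for j in range(m):
    for q in range(m):
        if j != q:
            T[0, j, 0, q] = -1.0
I = np.array([1.0])        # I_i = r_i - 1 with r = [2]  (Eq. 18)
V = np.zeros((n, m))
assert update_neuron(V, T, I, 0, 0, r=[2]) == 1.0  # empty cell: assign
V[0, 0] = V[0, 1] = 1.0
# cell already holds its 2 required channels: neuron (0, 2) stays off
assert update_neuron(V, T, I, 0, 2, r=[2]) == 0.0
```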
3 Applying the Modified Hopfield Network to Channel Assignment
In this section, the implementation of the algorithm is discussed. Section 3.1 explains the proposed initialization methods, which use specific characteristics of the problem. Section 3.2 presents the updating procedures applied after the initialization. The overall algorithm is summarized in the following steps.
1. The initial states of the neurons are set to 1 or 0 according to the chosen initialization method.
2. Repeat the following steps until all neurons are picked.
(a) Pick neuron V_ij according to the chosen updating method.
(b) Calculate the input to this neuron by Eq. (21).
(c) Decide the new state of this neuron by Eq. (22) and Eq. (23).
3. Compute the energy E of the current assignment. If E = 0, stop and go to step 4; otherwise repeat from step 2.
4. The output states of the neurons V_ij give the final assignment based on the compatibility matrix C and the RCN matrix R.
3.1 Initialization
In general, random initialization is used for the initial states of the neurons in a Hopfield network. In our algorithm, various initialization techniques based on the frequency assignment constraints are investigated in order to increase the convergence rate and to decrease the iteration number; the results are also compared with random initialization. For the initialization, the total frequency spectrum is divided into a certain number of blocks with consideration of the constraints. Fig. 4 (a) shows the frequency block structure. LB is the lower bound on the total number of frequencies, and the total frequency spectrum consists of B blocks. Every frequency block except the last one has the same number of frequency slots; the last block might have fewer slots, depending on LB and the block width (BW), which is the number of channels in a block. In Fig. 4 (a), for example, all blocks have BW_i = 5, 1 ≤ i ≤ (B - 1), except the last block B, which has BW_B = 2. Frequency j refers to the number of each frequency slot, each slot identified by f_ij, 1 ≤ i ≤ n, 1 ≤ j ≤ m. The Channel Number in Block (CNB) is the frequency number within the block, and the sequence repeats for each block.
Experiment No. | Crmax     | Other cells
I1             | method A1 | method A1
I2             | method A1 | method A2
I3             | method A2 | method A2
Table 2: Initialization method for each experiment.

In our algorithm, the initial state of the neurons, V_ij = 0 or V_ij = 1 where 1 ≤ i ≤ n and 1 ≤ j ≤ m, indicates whether or not frequency slot j is initially assigned to cell i. Fig. 4 (b) and (c) show the fixed and random interval initialization methods respectively, which are explained later in detail. In Fig. 4 (b) and (c), the frequency slots with the initial state value
V_ij = 1 are indicated by the dark slots. In the frequency assignment network, the minimum required number of total frequencies TF_min can be obtained from the RCN matrix R and the compatibility matrix C for the cell Crmax. The distance between the frequencies for a given cell should be at least d_csc. TF_min for the network is defined by

TF_min = (r_max - 1) d_csc + 1    (24)
The block width BW is defined by

BW = d_csc if LB = TF_min, and BW = ⌊LB / (r_max - 1)⌋ otherwise    (25)

where ⌊a⌋ is the largest integer equal to or less than a. LB is obtained using the bounds in [5], and LB ≥ TF_min. The number of blocks B is defined by

B = ⌈LB / BW⌉    (26)

where ⌈a⌉ is the smallest integer equal to or greater than a. When LB = TF_min, BW is equal to d_csc, as found from Eq. (25). In Crmax the frequencies are then assigned with a distance d_csc between them; this is the only assignment possible for the cell Crmax, since BW is equal to d_csc and the distance
where dae is the smallest integer which is equal or greater than a. When LB = TFmin , BW is equal to dcsc as found in Eq.(25). In Crmax , the frequencies are assigned with a distance dcsc between them, it is the only assignment possible for the cell Crmax since BW is equal to dcsc and the distance 13
between the frequencies should be at least dcsc . Therefore, the assignment frequencies for the cell
Crmax are the same during the whole updating procedure. When LB 6= TFmin , the frequencies for Crmax are assigned with an irregular distance BW for the neural network algorithm. Interconnection weights and external bias input are computed before updating. During updating, they are not calculated again in order to reduce the computation time since these values are not changed. Only the excitatory input for the forced assignment is computed during this procedure. In this paper, two methods are identi ed for the initialization of the neurons: (1) the xed interval initialization method (A1), (2) the random interval initialization method (A2).
Fixed interval initialization method (A1)
1. For cell i with r_i calls, randomly choose a frequency block and a channel number in the chosen block (CNB) k, 1 ≤ k ≤ BW, such that the choice does not violate CCC and ACC with the previously assigned frequencies of other cells. Assign a "1" to the chosen frequency slot, V_ik = 1. If cell i is Crmax, k = 1.
2. For the remaining calls of the cell, assign a "1" to the frequency slots at distance BW from the previously assigned slots, V_ik' = 1 where k' = (k + d_f BW) mod LB for d_f = 1, 2, ..., r_i - 1.
Random interval initialization method (A2)
1. Same as step 1 in method A1.
2. For the remaining calls of the cell, assign a "1" to the frequency slots at a random multiple of BW from the previously assigned slots, V_ik' = 1 where k' = (k + d_r BW) mod LB and d_r is a random integer within the possible block range.

Three experiments are performed according to the above two initialization methods for the cell Crmax and the other cells. Table 2 shows the initialization methods for each experiment. When LB = TF_min, method A2 for Crmax is the same as method A1, since the distance between the frequencies should be at least d_csc and only one initial frequency assignment is possible. Hence, experiment I2 is the same as experiment I3 when LB = TF_min.
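The block parameters of Eqs. (24)-(26) and the two initialization strides can be sketched together (helper names and example numbers are ours; the CCC/ACC screening of step 1 and the Crmax special case are omitted for brevity):

```python
import random
from math import floor, ceil

def block_parameters(r_max, d_csc, LB):
    TF_min = (r_max - 1) * d_csc + 1                         # Eq. (24)
    BW = d_csc if LB == TF_min else floor(LB / (r_max - 1))  # Eq. (25)
    B = ceil(LB / BW)                                        # Eq. (26)
    return TF_min, BW, B

def init_cell(r_i, LB, BW, B, method="A1"):
    """Initial slots for one cell: a random start, then r_i - 1 further
    slots at stride BW (method A1) or at random multiples of BW (A2)."""
    k = random.randrange(LB)
    slots = [k]
    for _ in range(r_i - 1):
        d = 1 if method == "A1" else random.randrange(1, B + 1)
        k = (k + d * BW) % LB
        slots.append(k)
    return slots

# When LB = TF_min, the block width collapses to d_csc:
TF_min, BW, B = block_parameters(r_max=5, d_csc=7, LB=29)  # (29, 7, 5)
slots = init_cell(r_i=3, LB=29, BW=BW, B=B, method="A1")
# A1 places consecutive slots exactly BW apart (mod LB)
assert all((b - a) % 29 == BW for a, b in zip(slots, slots[1:]))
```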
3.2 Updating procedure
In an asynchronous Hopfield network, the neurons are selected randomly, or sequentially in a certain order, for updating. In our algorithm, both random and sequential selection techniques are used along with three updating methods, B1, B2 and B3. The procedures for updating methods B1 and B2 and the iteration subroutine are shown in Figs. 5, 6 and 7, respectively. The procedural steps for each updating method are as follows:
Updating method B1.
1. Make a list of cells according to the descending order of the RCN of each cell.
2. Execute the iteration subroutine until the counter a has reached a prespecified maximum iteration number (500) or E = 0.
3. Repeat step 2 for the next cell in the list.
Updating method B2.
1. Make a list of cells according to the descending order of the RCN of each cell.
2. Execute the iteration subroutine until the counter a has reached a prespecified maximum iteration number (500) or E = 0.
3. If the energy E remains below the threshold energy Eth until the threshold energy counter reaches bth, change the list to the ascending order of the RCN of each cell. This technique is used to avoid falling into a local minimum area.
4. Repeat steps 2-3 for the next cell in the list.

Updating method B3 is similar to method B1 except that the cell list is made differently. In method B3, the list alternates the cells with the largest and smallest RCN;
for example, if the RCNs of 6 cells, cells #1 to #6, are 3, 6, 1, 30, 23, 17, respectively, then the updating order will be cells #4, #3, #5, #1, #6, and #2. This end-packing technique [2] is used for maximum utilization of the frequency spectrum. The iteration subroutine of the three updating methods is shown in Fig. 7, and the procedure for the subroutine is as follows:
1. Choose cell i according to the order of the cell list.
2. Randomly choose one neuron j in cell i (neuron (i, j)) and update that neuron.
3. For the next neuron to update, randomly choose the direction, either the left-side neuron (i, j - 1) or the right-side neuron (i, j + 1).
4. After the initial direction is decided in step 3, the following neurons are updated sequentially.
5. Repeat steps 1-4 until all frequencies for all the cells are assigned.

In the proposed algorithm, a saving in memory is also achieved by the following implementation. The interconnection weights T_ijpq, which are the weights from neuron (i, j) to all other neurons (p, q), 1 ≤ p ≤ n, 1 ≤ q ≤ m, need a total memory size of

Y = n m n m    (27)
The total memory for the interconnection weights required in our implementation is

Y' = n m m + (2 max(c_ij) - 1) n n    (28)

where n m m is the memory size for the interconnection weights between neurons (i, j) and (i, q), and (2 max(c_ij) - 1) n n is the memory needed to store the compatibility pattern for each cell. The ratio of the memory required by the conventional implementation to the memory required by
Problem | Cell # | ACC | c_ii | LB  | Comp. Matrix C | RCN Matrix R
1       | 25     | 1   | 2    | 73  | C1             | R1
2       | 21     | 1   | 5    | 381 | C2             | R2
3       | 21     | 1   | 7    | 533 | C3             | R2
4       | 21     | 2   | 7    | 533 | C4             | R2
5       | 21     | 1   | 5    | 221 | C2             | R3
6       | 21     | 1   | 7    | 309 | C3             | R3
7       | 21     | 2   | 7    | 309 | C4             | R3
Table 3: Simulated problem specifications.

        | Experiment I1      | Experiment I2      | Experiment I3      | Random Initialization
Problem | Avg. Iter. / Conv. | Avg. Iter. / Conv. | Avg. Iter. / Conv. | Avg. Iter. / Conv.
1       | 263.4 / 61%        | 264.9 / 68%        | 273.7 / 64%        | 255.9 / 60%
2       | 67.1 / 100%        | 85.5 / 99%         | 85.5 / 99%         | 292.6 / 82%
3       | 85.9 / 98%         | 126.3 / 100%       | 126.3 / 100%       | 397.2 / 25%
4       | 129.5 / 99%        | 136.8 / 98%        | 136.8 / 98%        | 359.1 / 56%
5       | 65.6 / 95%         | 63.9 / 82%         | 63.9 / 82%         | 104.8 / 6%
6       | 118.2 / 96%        | 147.2 / 98%        | 147.2 / 98%        | 336.2 / 40%
7       | 119.1 / 40%        | 156.8 / 24%        | 156.8 / 24%        | NA / 0%
Table 4: Simulation results of the initialization methods.

the implementation used in this paper for the interconnection weights is
Y'/Y ≈ 1/n    (29)

For the practical example of 21 cells, a memory size of only about 12 Mbytes is used rather than about 250 Mbytes.
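As a rough check on the quoted figures, and assuming 2 bytes of storage per weight (our assumption) together with the Problem 3 dimensions n = 21, m = 533, max c_ij = 7, Eqs. (27)-(29) reproduce approximately the numbers above:

```python
n, m, c_max = 21, 533, 7          # Problem 3 dimensions
bytes_per_weight = 2              # assumed storage per entry

Y = (n * m) ** 2                           # Eq. (27): full weight tensor
Yp = n * m * m + (2 * c_max - 1) * n * n   # Eq. (28): compact storage

print(Y * bytes_per_weight / 1e6)   # about 250.6 Mbytes
print(Yp * bytes_per_weight / 1e6)  # about 11.9 Mbytes
print(Yp / Y, 1 / n)                # about 0.0477 vs 0.0476 (Eq. 29)
```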
4 Simulation results and discussion
A mobile system consisting of 21 or 25 cells is used in our simulation; the 21-cell system is depicted in Fig. 8. The output threshold (THD) in Eq. (23) is fixed at THD = 0. In our algorithm, there are no parameters to be determined for the energy function and the interconnection weights. The simulation results for the initialization methods and the updating procedures are discussed in this section, and the results are compared with other previously reported techniques [11].
4.1 Simulation results with various initialization methods

Table 3 shows the specification of the problems with their compatibility and RCN matrices, which are also used in [6, 9]. A "1" or "2" in the ACC column indicates the absence or presence of the ACC, respectively. The CSC is indicated by the value in column cii. LB is the lower bound on the number of frequencies required for each problem. Table 4 shows the simulation results for the various initialization methods. For each problem, the three experiments from Table 2 are performed according to these methods, and the results are compared with the random initialization method. Updating method B1, previously discussed in Section 3.2, is used for all problems in Table 4. The average iteration number and the convergence rate to the solution are also shown in Table 4. The average iteration number is the average number of iterations required until E = 0. The convergence rate is the probability that the network reaches E = 0 before the maximum number of iterations is reached. In these simulations, the maximum number of iterations is fixed at 500. To investigate the number of iterations and the convergence rates, 100 simulation runs were performed for each of the seven problems, using different initial seed values for the random number generator. The results in Table 4 show a smaller average iteration number and a higher convergence rate than the usual random initialization method. In Problems 2-7, the results for experiment I2 are the same as those for experiment I3, since LB = TFmin for each of these problems. In all cases, when the maximum number of iterations is raised to 1000, the convergence rate increases, but so does the average iteration number. The results demonstrate that experiment I1 generally has better performance (i.e., smaller iteration numbers and higher convergence rates) than experiment I2. They also show that our proposed initialization methods outperform the commonly used random initialization method.
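The two statistics reported above (average iteration number over converged runs, and convergence rate over all runs) can be computed from per-run iteration counts. A small helper sketch, where the function and parameter names are ours and a non-convergent run is assumed to be recorded as None:

```python
def summarize(runs, max_iter=500):
    """runs: list of iteration counts at which E reached 0, or None if
    the run hit the iteration cap without converging. Returns the
    average iteration number (over converged runs) and the convergence
    rate in percent (over all runs)."""
    converged = [r for r in runs if r is not None and r <= max_iter]
    rate = 100.0 * len(converged) / len(runs)
    avg = sum(converged) / len(converged) if converged else float("nan")
    return avg, rate

# e.g. 3 of 4 runs converge:
avg, rate = summarize([120, 80, None, 100])
assert rate == 75.0 and avg == 100.0
```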
4.2 Simulation results with various updating methods
             Method B1               Method B2               Method B3
Problem  Avg. Iter.  Conv. Rate  Avg. Iter.  Conv. Rate  Avg. Iter.  Conv. Rate
   1       263.4        61%        274.1        72%        279.9        62%
   2        67.1       100%         72.9       100%         67.4        99%
   3        85.9        98%         78.1       100%         64.2       100%
   4       129.5        99%         52.0       100%        126.8        98%
   5        65.6        95%         66.7        95%         62.4        97%
   6       118.2        96%        119.8        97%        127.7        99%
   7       119.1        40%        125.0        40%        151.9        52%

Table 5: Simulation results of the various updating methods.

Problem   Method I1 with B3   Result from [11]
   1            62%                  9%
   2            99%                 93%
   3           100%                100%
   4            98%                100%
   5            97%                 79%
   6            99%                100%
   7            52%                 24%

Table 6: Convergence rate comparison with a recently reported result.

Table 5 shows the simulation results for the various updating techniques with initialization method I1. In all cases except Problems 1 and 7, the average iteration numbers and the convergence rates are similar. In updating method B1, the cell list is made in descending order of the RCN of each cell. Since this method is simple, the iteration number can be smaller than for the more complex methods, but the possibility of falling into a local minimum is increased. In updating method B2, the cell lists are made in both descending and ascending order. The convergence rate can be increased with this technique, but the performance can differ depending on the threshold counter bth: since the neural network searches a new region of the solution space by changing the order of the cell list whenever the counter reaches bth, even when the algorithm is approaching the global minimum, the iteration number might increase. In updating method B3, the alternating ordering technique is used: the neuron states can change at each iteration in order to search for the global minimum. The convergence rates increase in some cases, but so do the iteration numbers. Table 5 shows that the performance on the channel assignment problem can be improved by using updating methods B2 and B3.

In Table 6, the results of initialization method I1 with updating method B3 are compared with the results in [11]. The continuous Hopfield network model is used in [11]; the continuous model may run systematically into sub-optima [15]. In our algorithm, the discrete model is used, since it is simpler to implement than the continuous model. Heuristic initialization and updating techniques exploiting the specific characteristics of the channel assignment problem are also used in our model. Note that in [11], a few frequencies in certain cells are fixed in order to accelerate convergence; the iteration numbers and convergence rates will differ substantially depending on which frequencies are fixed, and how. Table 6 shows that the algorithm with the proposed initialization and updating methods achieves similar or better results than those in [11], although our algorithm does not fix frequencies in any cell. Fig. 9 shows the distributions of the number of iterations needed to converge to a solution for Problems 2 and 5, using initialization method I1 and updating method B3; the solid line represents the mean iteration number for each problem. Table 7 shows three different frequency assignments for each cell, produced by updating methods B1, B2 and B3 for Problem 1; in all three cases, experiment I1 is used as the initialization method.
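The cell-list orderings behind updating methods B1 and B2 described above can be sketched as follows; the class, the RCN values and the handling of the stall counter bth are illustrative assumptions, not the paper's implementation:

```python
# Sketch of the cell-list construction for updating methods B1 and B2.
# rcn maps cell index -> required channel number; bth is the threshold
# counter after which B2 flips the ordering.

def cell_list(rcn, descending=True):
    """Order cells by RCN (method B1 uses descending order only)."""
    return sorted(rcn, key=lambda cell: rcn[cell], reverse=descending)

class MethodB2:
    """Alternate between descending and ascending RCN order whenever
    the stall counter b exceeds bth (a simplified reading of Fig. 6)."""
    def __init__(self, rcn, bth):
        self.rcn, self.bth = rcn, bth
        self.descending, self.b = True, 0

    def next_list(self):
        self.b += 1
        if self.b > self.bth:          # stalled: search a new ordering
            self.descending = not self.descending
            self.b = 0
        return cell_list(self.rcn, self.descending)

rcn = {1: 10, 2: 11, 3: 9}
assert cell_list(rcn) == [2, 1, 3]   # descending RCN, as in method B1
```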
Table 7: Assigned frequencies by the various updating methods for Problem 1. [The full table lists, for each of the 25 cells, the number of required frequencies and the specific frequencies assigned by updating methods B1, B2 and B3.]

5 Conclusion

In this paper, an improved Hopfield neural network algorithm is proposed to obtain an optimal solution for the channel assignment problem in a mobile cellular environment. The improved algorithm drives itself to the solution by using new interconnection weights and external inputs to the neurons, together with the initialization and updating methods. The interconnection weights are set initially to take into account the required channel number in each cell and the three channel assignment constraints: the co-channel constraint, the adjacent channel constraint and the co-site constraint. The compatibility matrix, the required channel number matrix and the lower bound on the total number of frequencies are given as inputs. In the simulation results, the average iteration numbers and the convergence rates are shown and compared with previously reported results. The proposed algorithm, using the suggested initialization and updating methods, has similar or better performance than the previously reported algorithms, although our algorithm does not fix frequencies in any cell.
References

[1] W. K. Hale, "Frequency assignment: Theory and applications," Proceedings of the IEEE, vol. 68, pp. 1497-1514, Dec. 1980.
[2] F. Box, "A heuristic technique for assigning frequencies to mobile radio nets," IEEE Transactions on Vehicular Technology, vol. VT-27, pp. 57-64, May 1978.
[3] A. Gamst and W. Rave, "On frequency assignment in mobile automatic telephone systems," in Proc. IEEE GLOBECOM '82, pp. 309-315, 1982.
[4] A. Gamst, "Homogeneous distribution of frequencies in a regular hexagonal cell system," IEEE Transactions on Vehicular Technology, vol. VT-31, pp. 132-144, Aug. 1982.
[5] A. Gamst, "Some lower bounds for a class of frequency assignment problems," IEEE Transactions on Vehicular Technology, vol. VT-35, pp. 8-14, Feb. 1986.
[6] K. N. Sivarajan, R. J. McEliece, and J. W. Ketchum, "Channel assignment in cellular radio," in Proc. 39th IEEE Vehicular Technology Conference, pp. 846-850, May 1989.
[7] J.-S. Kim, S. H. Park, P. W. Dowd, and N. M. Nasrabadi, "Comparison of two optimization techniques for channel assignment in cellular radio network," in Proc. IEEE International Conference on Communications, pp. 1850-1854, Jun. 18-22, 1995.
[8] J.-S. Kim, S. H. Park, P. W. Dowd, and N. M. Nasrabadi, "Channel assignment in cellular radio using genetic algorithm," Wireless Personal Communications, vol. 3, no. 3, pp. 273-286, 1996.
[9] D. Kunz, "Channel assignment for cellular radio using neural networks," IEEE Transactions on Vehicular Technology, vol. VT-40, pp. 188-193, Feb. 1991.
[10] D. Kunz, "Practical channel assignment using neural networks," in Proc. 40th IEEE Vehicular Technology Conference, pp. 652-655, 1990.
[11] N. Funabiki and Y. Takefuji, "A neural network parallel algorithm for channel assignment problems in cellular radio networks," IEEE Transactions on Vehicular Technology, vol. VT-41, pp. 430-436, Nov. 1992.
[12] P. T. H. Chan, M. Palaniswami, and D. Everitt, "Neural network-based dynamic channel assignment for cellular mobile communication systems," IEEE Transactions on Vehicular Technology, vol. VT-43, pp. 279-288, May 1994.
[13] J. J. Hopfield and D. W. Tank, "Computing with neural circuits: A model," Science, vol. 233, pp. 625-633, Aug. 1986.
[14] J. J. Hopfield and D. W. Tank, "Neural computation of decisions in optimization problems," Biological Cybernetics, vol. 52, pp. 141-152, 1985.
[15] D. Kunz, "Suboptimum solutions obtained by the Hopfield-Tank neural network algorithm," Biological Cybernetics, vol. 65, pp. 129-133, 1991.
Figure 1: A two-dimensional Hopfield network for the channel assignment problem.
Figure 2: Examples of forced assignment with RCN = 4: (a) ACN = 3 and (b) ACN = 4.
Figure 3: Schematic description of our neuron model.
Figure 4: Structure of the frequency domain and initialization: (a) frequency and block structure, (b) fixed-interval initialization method (method A1) and (c) random-interval initialization method (method A2).
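The two interval schemes of Fig. 4 might be sketched as follows; the band width BW, the number of available frequencies, and the range of the random multiplier d_r are illustrative assumptions, not values from the paper:

```python
import random

def fixed_interval(j, BW, n_freq):
    """Method A1: start at frequency j and step by the fixed interval
    BW (j, j+BW, j+2BW, ...) within the n_freq available frequencies,
    as suggested by Fig. 4(b)."""
    return list(range(j, n_freq + 1, BW))

def random_interval(j, BW, n_freq, rng=random):
    """Method A2: start at j and step by d_r * BW with a freshly drawn
    random multiplier d_r each time, as suggested by Fig. 4(c). The
    range of d_r here (1..3) is an assumption."""
    freqs, f = [], j
    while f <= n_freq:
        freqs.append(f)
        f += rng.randint(1, 3) * BW
    return freqs

assert fixed_interval(1, 3, 10) == [1, 4, 7, 10]
```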
Figure 5: Flow chart of updating method B1.
Figure 6: Flow chart of updating method B2.
Figure 7: Flow chart of the iteration subroutine for updating methods B1, B2 and B3.
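A rough sketch of the iteration subroutine of Fig. 7 for a single cell: a first neuron in the chosen cell is updated at a random frequency, then the remaining channels up to LB are placed by stepping randomly to the left or right. All names, and the assumption that the direction is re-drawn at each step, are ours:

```python
import random

def assign_cell(i, LB, n_freq, update, rng=random):
    """Sketch of Fig. 7's inner loop for one cell i: pick a random
    first frequency j, then extend the assignment leftward or
    rightward at random until LB channels have been updated.
    `update(i, j)` stands in for the neuron update of the paper."""
    j = rng.randint(1, n_freq)
    update(i, j)
    c = 1
    while c < LB:
        if rng.random() < 0.5:
            j = min(j + 1, n_freq)   # step to the right side
        else:
            j = max(j - 1, 1)        # step to the left side
        update(i, j)
        c += 1

visited = []
assign_cell(1, 4, 73, lambda i, j: visited.append(j), random.Random(0))
assert len(visited) == 4
```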
Figure 8: The 21-cell system used in this paper.
Figure 9: Distribution of the number of iterations to converge to solutions: (a) Problem 2 and (b) Problem 5.