Adv. Radio Sci., 9, 187–193, 2011 www.adv-radio-sci.net/9/187/2011/ doi:10.5194/ars-9-187-2011 © Author(s) 2011. CC Attribution 3.0 License.
Advances in Radio Science
Context-based user grouping for multi-casting in heterogeneous radio networks

C. Mannweiler, A. Klein, J. Schneider, and H. D. Schotten

Chair for Wireless Communications and Navigation, University of Kaiserslautern, Germany
Abstract. Along with the rise of sophisticated smartphones and smart spaces, the availability of both static and dynamic context information has steadily been increasing in recent years. Due to the popularity of social networks, these data are complemented by profile information about individual users. Making use of this information by classifying users in wireless networks enables targeted content and advertisement delivery as well as optimizing network resources, in particular bandwidth utilization, by facilitating group-based multi-casting. In this paper, we present the design and implementation of a web service for advanced user classification based on user, network, and environmental context information. The service employs simple and advanced clustering algorithms for forming classes of users. Available service functionalities include group formation, context-aware adaptation, and deletion as well as the exposure of group characteristics. Moreover, the results of a performance evaluation, where the service has been integrated in a simulator modeling user behavior in heterogeneous wireless systems, are presented.
1 Introduction
Group communication is of particular interest in wireless (access) networks, one example being the domain of multi-casting, i.e. simultaneously delivering the same content to multiple recipients. Chalmers and Almeroth (2001), Janic and Van Mieghem (2009), as well as Baumung and Zitterbart (2009), among others, have shown that multi-casting technologies can considerably contribute to more efficiently exploiting available network resources when applied under appropriate circumstances. A typical use case for exploiting multi-casting is a large-scale sport event with tens of thousands of spectators that can be grouped according to their user profile and context. Based on that classification, different groups can receive adapted content. However, a user
classification service, as required in this use case, can also be exploited for selecting people with particular characteristics. Consumers with an affinity to a certain product category could be identified for specific marketing purposes. Both use cases would require a reliable means of data privacy and security as well as the user’s approval. The remainder of this paper is organized as follows: Sect. 2 presents the algorithms used for our user classification service and Sect. 3 briefly discusses implementation aspects. Section 4 evaluates the algorithms according to a comprehensive set of criteria. Section 5 concludes the paper with a short summary and outlook.
2 Classification methods
In this section, we will briefly introduce the clustering methods used for a context-aware user classification service. This includes the algorithm of the respective method as well as relevant characteristics such as hard vs. soft mapping or variable vs. fixed number of clusters. The notation employed in this paper is defined below:
– C: set of all clusters, C = {C_1, C_2, ..., C_k}
– N: set of all cluster centers (nodes), N = {N_1, N_2, ..., N_k}
– K: context space, K ⊆ R^l
– X: set of observations (users' context) to classify, X = {x_1, x_2, ..., x_n}, x_j ∈ K
– n: number of observations (users) to classify
– k: number of clusters (nodes)
– l: dimension of the context space
– (C_i, x_j): tuple of a cluster C_i and an observation x_j
– µ_i: center of cluster C_i, µ_i ∈ R^l
– w_i: weighting vector of node N_i (cluster center), w_i ∈ R^l
– t: iteration counter
2.1 Methods with fixed number of cluster centers

Common to the methods presented in this subsection is the requirement to set the number of clusters before the start of the classification algorithm. During runtime, the number of clusters does not change.

2.1.1 K-Means algorithm
K-Means (MacQueen, 1967) is an algorithm that establishes a hard mapping between a cluster C_i and an observation x_j, i.e. an observation is unambiguously associated with exactly one cluster. Across multiple iterations, the following error function is minimized:

E = Σ_{i=1}^{k} Σ_{x_j ∈ C_i} ||x_j − µ_i||²   (1)

The iteration steps of the algorithm are:

1. (Random) initialization of k cluster centers (nodes) – cluster centers are randomly initialized in the l-dimensional context space.

2. Mapping of observations to centers – each observation is mapped to the closest node based on the selected distance metric, such as the Euclidean distance.

3. Update of cluster centers – the position of each node is recomputed based on the observations assigned to it. Go back to step 2.

The algorithm terminates as soon as a specified termination criterion is reached, e.g. the error function falls below a threshold. For further details on K-Means, the reader is referred to Steinhaus (1957).
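For illustration, a minimal Python sketch of these steps is given below. The data layout (a NumPy array holding n observations in an l-dimensional context space), the random initialization from the observations themselves, and the convergence tolerance are assumptions made for the example, not details taken from the original service.

```python
import numpy as np

def k_means(X, k, max_iter=100, tol=1e-4, rng=None):
    """Hard clustering of the observations X (n x l) into k clusters."""
    rng = np.random.default_rng(rng)
    # Step 1: random initialization of k cluster centers in the context space
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(max_iter):
        # Step 2: map each observation to the closest center (Euclidean distance)
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # Step 3: recompute each center from the observations assigned to it
        new_centers = np.array([
            X[labels == i].mean(axis=0) if np.any(labels == i) else centers[i]
            for i in range(k)
        ])
        # Terminate once the centers (and hence the error function) stop changing
        if np.linalg.norm(new_centers - centers) < tol:
            centers = new_centers
            break
        centers = new_centers
    return centers, labels
```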
2.1.2 Neural Gas

The Neural Gas algorithm establishes a graph of k nodes that, independently of each other, move through the context space during the iterations. The core idea is to present the available observations to the graph and to adjust the cluster centers accordingly. In brief, the steps of the algorithm are as follows:

1. (Random) initialization of k cluster centers (nodes) – cluster centers are randomly initialized in the l-dimensional context space.

2. Presentation of an observation – an observation x_c ∈ X is randomly chosen and presented to the graph. (An observation can be drawn multiple times.)

3. Sorting of nodes – nodes are sorted according to their Euclidean distance to x_c.

4. Adjustment of node positions – the position w_i of the graph's nodes within the context space is adjusted according to the following equation:

w_i(t+1) = w_i(t) + l(t) · h_d(i) · (x_c − w_i(t))   (2)

where l(t) is the learning rate of the current iteration (decreasing over the iterations, i.e. l(t) ≤ l(t−1) for all t) and h_d(i) the adjustment for an individual node depending on its ranking in step 3. If the termination criterion (e.g. number of iterations) has not been reached yet, the next iteration begins, starting with step 2. Otherwise, observations are assigned to the closest node. For further details on the algorithm, the reader is referred to Martinez and Schulten (1991) and Fritzke (1997).
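A compact Python sketch of this update rule follows. The exponential decay schedules for the learning rate l(t) and the neighborhood range, as well as the rank-based weighting h_d(i) = exp(−rank/λ), are common choices from the Neural Gas literature assumed here for illustration; the paper does not state the exact schedules used.

```python
import numpy as np

def neural_gas(X, k, iterations=10000, l_start=0.5, l_end=0.005,
               lam_start=None, lam_end=0.01, rng=None):
    """Neural Gas: k nodes move through the context space driven by ranked updates."""
    rng = np.random.default_rng(rng)
    lam_start = lam_start or k / 2.0
    # Step 1: random initialization of the k nodes within the bounding box of X
    low, high = X.min(axis=0), X.max(axis=0)
    W = rng.uniform(low, high, size=(k, X.shape[1]))
    for t in range(iterations):
        frac = t / max(iterations - 1, 1)
        l_t = l_start * (l_end / l_start) ** frac          # decreasing learning rate l(t)
        lam_t = lam_start * (lam_end / lam_start) ** frac  # decreasing neighborhood range
        # Step 2: present a randomly drawn observation x_c
        x_c = X[rng.integers(len(X))]
        # Step 3: rank all nodes by their Euclidean distance to x_c
        order = np.argsort(np.linalg.norm(W - x_c, axis=1))
        rank = np.argsort(order)                           # rank of each node
        # Step 4: move every node towards x_c, weighted by its rank h_d(i)
        h = np.exp(-rank / lam_t)
        W += l_t * h[:, None] * (x_c - W)
    # Finally, assign every observation to its closest node
    labels = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2).argmin(axis=1)
    return W, labels
```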
2.1.3 K-Fixed-Means

None of the presented algorithms considers fixed cluster centers. However, for some of the use cases described in Sect. 1, this is an important alternative for user classification. Therefore, we have developed the so-called K-Fixed-Means algorithm, where the position of a cluster center is fixed except for a defined tolerance interval (for each dimension of the context space) within which the cluster center can move. Moreover, the maximum distance to a center is limited, i.e. some observations may not be assigned to any node at all. Figure 1 depicts the idea of the tolerance space (purple rectangle) and the maximum distance (blue circle). The basic steps of the algorithm are:

1. Initialization – k cluster centers (nodes) are initialized at the determined (and fixed) positions.

2. Mapping of observations to centers – each observation is mapped to the closest node based on the selected distance metric, such as the Euclidean distance. Non-assigned observations (i.e. those whose distance is above the defined thresholds) are collected in a set U.

3. Adjustment of cluster centers – for each observation in U, it is checked whether there exists a node that, if moved within its tolerance region, can accommodate the given observation. If such a node exists, it is moved according to the following equation:

w_i(t+1) = w_i(t) + l(t) · d(x_c, w_i) · (x_c − w_i(t))   (3)

where l(t) is the learning rate and d(x_c, w_i) a factor that takes into account the different tolerance intervals. After all observations in U have been checked, the next iteration starts with step 2, unless the criterion for termination is met.

As for the Neural Gas algorithm, the learning rate decreases in later iterations.
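Since the paper gives only the update rule, the following Python sketch fills in plausible details as assumptions: per-dimension tolerance intervals around the fixed centers, a single maximum-distance threshold, a linearly decaying learning rate, a constant simplification of the factor d(x_c, w_i), and a clipping step as one possible reading of "moved within its tolerance region".

```python
import numpy as np

def k_fixed_means(X, fixed_centers, tolerance, max_dist,
                  iterations=50, l_start=0.5):
    """K-Fixed-Means sketch: centers may only move inside a tolerance box
    around their fixed positions; distant observations stay unassigned (-1)."""
    W = np.array(fixed_centers, dtype=float)
    low, high = W - tolerance, W + tolerance          # per-dimension tolerance box
    for t in range(iterations):
        l_t = l_start * (1.0 - t / iterations)        # decreasing learning rate l(t)
        # Step 2: assign observations to the closest node, respecting max_dist
        dist = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        labels[dist.min(axis=1) > max_dist] = -1      # unassigned observations -> set U
        # Step 3: try to accommodate each unassigned observation
        for x_c in X[labels == -1]:
            i = np.linalg.norm(W - x_c, axis=1).argmin()
            d_factor = 1.0                            # simplification of d(x_c, w_i)
            candidate = W[i] + l_t * d_factor * (x_c - W[i])
            # only move the node as far as its tolerance region allows
            W[i] = np.clip(candidate, low[i], high[i])
    # Final assignment with the adjusted centers
    dist = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)
    labels = dist.argmin(axis=1)
    labels[dist.min(axis=1) > max_dist] = -1
    return W, labels
```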
Fig. 1. Concept of K-Fixed-Means algorithm.

2.1.4 Fuzzy-C-Means

In contrast to the algorithms presented so far, Fuzzy-C-Means can assign an observation to several clusters. The degree u_ij of membership of an observation x_j to a single cluster C_i has to lie within the interval ]0,1] and the sum of all memberships of an observation must add up to 1, i.e. Σ_{i=1}^{k} u_ij = 1 for any j. The algorithm minimizes the following error function:

E = Σ_{i=1}^{k} Σ_{j=1}^{n} u_ij^m · ||x_j − µ_i||²,  1 ≤ m ≤ ∞   (4)

The exponent m is the so-called "fuzzifier". The higher its value, the fuzzier the mappings of observations to nodes. In practice, values between 1 and 2.5 have proven to generate good clustering results (Bezdek, 1981). The basic steps of the algorithm are:

1. Initialization – (random) initialization of the values u_ij.

2. (Re)calculation of cluster centers – for the current iteration step, the cluster centers are calculated according to

µ_i = (Σ_{j=1}^{n} u_ij^m · x_j) / (Σ_{j=1}^{n} u_ij^m)   (5)

3. Recalculation of the degrees of membership u_ij – the degree of membership of an observation x_j to a cluster C_i is calculated according to the following equation:

u_ij = 1 / Σ_{l=1}^{k} (||x_j − µ_i|| / ||x_j − µ_l||)^{2/(m−1)}   (6)

4. Test of termination criterion – if the termination criterion, e.g. the sum of changes of u_ij for all combinations i,j being below a defined threshold, is not satisfied yet, another iteration is performed starting with step 2.

The presented algorithm converges to a local minimum, which is not necessarily the optimal solution. Moreover, the results depend on the initialization of u_ij.
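The two update equations translate directly into NumPy; in the sketch below, the fuzzifier value m = 2 and the random initialization of the membership matrix are assumptions made for the example.

```python
import numpy as np

def fuzzy_c_means(X, k, m=2.0, max_iter=100, tol=1e-5, rng=None):
    """Fuzzy-C-Means: soft memberships u_ij in ]0,1]; memberships of each observation sum to 1."""
    rng = np.random.default_rng(rng)
    n = len(X)
    # Step 1: random initialization of the membership degrees u_ij (sum over i equals 1 for each j)
    U = rng.random((k, n))
    U /= U.sum(axis=0, keepdims=True)
    for _ in range(max_iter):
        # Step 2: recalculate cluster centers, Eq. (5)
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        # Step 3: recalculate membership degrees, Eq. (6)
        dist = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)   # shape (k, n)
        dist = np.fmax(dist, 1e-12)                                          # avoid division by zero
        ratio = (dist[:, None, :] / dist[None, :, :]) ** (2.0 / (m - 1.0))   # shape (k, k, n)
        U_new = 1.0 / ratio.sum(axis=1)
        # Step 4: terminate when the memberships barely change
        if np.abs(U_new - U).sum() < tol:
            U = U_new
            break
        U = U_new
    return centers, U
```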
2.2 Methods with variable number of cluster centers

In contrast to the algorithms presented so far, this class of algorithms is capable of adjusting the number of cluster centers during runtime. Hence, not only the assignment of observations to clusters but also the number of clusters is optimized. Using the algorithms from the previous section, this could only be achieved by several runs with different amounts of clusters.

2.2.1 Growing Neural Gas

The Growing Neural Gas algorithm is an extended version of the Neural Gas algorithm as presented in the previous section. By the insertion and aging of edges between cluster centers (nodes) according to a set of rules, the topology of the underlying data shall be reflected more precisely. For a detailed description of the algorithm, the reader is referred to Fritzke (1995). Here, only a short overview of the algorithm steps is presented to sketch the basic idea:

1. Initialization – two nodes are randomly put in the context space (without an edge between them).

2. Selection of observation and calculation of closest nodes – from the set of observations, one is picked (randomly) and the closest node (N_1) as well as the second closest (N_2) are determined based on the Euclidean distance.

3. Insertion of edges – in case there is no edge between N_1 and N_2, it is inserted. In any case, the age of the edge is set to 0.

4. Calculation of a node's statistical error value – for every node, the statistical error E_Ni is stored. It represents the total error of all observations assigned to that node. For N_1 (as determined in step 2), the value is updated by adding its Euclidean distance to the current observation x_c:

E_N1(t) = E_N1(t−1) + ||w_N1 − x_c||   (7)

5. Adaptation of node positions – the position of N_1 as well as of its direct topological neighbors is adapted as follows:

w_i(t+1) = w_i(t) + l · (x_c − w_i(t))   (8)

The learning rate l, though being constant during all iterations, is different for N_1 and its neighbors, i.e. for the neighbors, it is approximately two orders of magnitude lower than for N_1.

6. Aging of edges – for all edges originating from N_1, the age is incremented by 1. If the age has passed a defined threshold, the edge is removed. If this results in a node without any edges, it is removed as well.
7. Insertion of a new node – after a defined number of iterations has passed, the node N_Emax with the highest accumulated error E_max is determined. Among its direct topological neighbors, the one with the highest accumulated error, N_En,max, is chosen and a new node is placed midway between the selected nodes. The edge between N_Emax and N_En,max is removed and edges between the new node and N_Emax and N_En,max, respectively, are inserted. The accumulated error of the old nodes is decreased by a factor α. The accumulated error of the new node is set to the arithmetic mean of these two (updated) values.

8. Reduction of accumulated error – for all nodes, the accumulated error is decreased by a factor β.

9. Test of termination criterion – if the termination criterion (e.g. current number of nodes) is not fulfilled, the next iteration starts at step 2.
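A condensed Python sketch of steps 1-9 is given below. The parameter values (learning rates, maximum edge age, insertion interval, decay factors α and β) are typical choices from the Growing Neural Gas literature, assumed here for illustration rather than taken from the paper, and the removal of nodes left without edges is omitted for brevity.

```python
import numpy as np

def growing_neural_gas(X, max_nodes=20, iterations=5000, l_winner=0.1,
                       l_neighbor=0.001, max_age=50, insert_every=100,
                       alpha=0.5, beta=0.995, rng=None):
    """Growing Neural Gas sketch: nodes and edges adapt to the topology of X."""
    rng = np.random.default_rng(rng)
    # Step 1: two random nodes, no edge between them
    W = [X[rng.integers(len(X))].astype(float) for _ in range(2)]
    error = [0.0, 0.0]
    edges = {}                                # (i, j) with i < j  ->  age
    for step in range(1, iterations + 1):
        # Step 2: draw an observation, find closest (n1) and second closest (n2) node
        x_c = X[rng.integers(len(X))]
        dists = np.array([np.linalg.norm(w - x_c) for w in W])
        n1, n2 = np.argsort(dists)[:2]
        # Step 3: insert (or refresh) the edge between n1 and n2 with age 0
        edges[tuple(sorted((n1, n2)))] = 0
        # Step 4: accumulate the statistical error of the winner, Eq. (7)
        error[n1] += dists[n1]
        # Step 5: move winner and its topological neighbors towards x_c, Eq. (8)
        W[n1] += l_winner * (x_c - W[n1])
        for (i, j) in edges:
            if n1 in (i, j):
                nb = j if i == n1 else i
                W[nb] += l_neighbor * (x_c - W[nb])
        # Step 6: age all edges of n1 and drop edges that are too old
        # (removal of isolated nodes is omitted in this sketch)
        for e in list(edges):
            if n1 in e:
                edges[e] += 1
                if edges[e] > max_age:
                    del edges[e]
        # Step 7: periodically insert a new node between the node with the highest
        # accumulated error and its worst neighbor
        if step % insert_every == 0 and len(W) < max_nodes:
            q = int(np.argmax(error))
            neighbors = [j if i == q else i for (i, j) in edges if q in (i, j)]
            if neighbors:
                f = max(neighbors, key=lambda idx: error[idx])
                W.append((W[q] + W[f]) / 2.0)
                error[q] *= alpha
                error[f] *= alpha
                error.append((error[q] + error[f]) / 2.0)
                new = len(W) - 1
                edges.pop(tuple(sorted((q, f))), None)
                edges[tuple(sorted((q, new)))] = 0
                edges[tuple(sorted((f, new)))] = 0
        # Step 8: decay the accumulated error of all nodes
        error = [e * beta for e in error]
        # Step 9: the iteration bound and max_nodes act as termination criterion
    return np.array(W), edges
```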
A summarizing overview of the employed algorithms and their characteristics is given in Fig. 2.

Fig. 2. Overview of algorithm characteristics.

3 Implementation

Implementation of the user classification service has been guided by several objectives, most importantly:

– high availability of the service
– high scalability
– simple extensibility
– usage of widely adopted protocols and data representation formats
– adequate latency behavior

Taking these aspects into account, we designed a service creation and delivery environment utilizing the JavaEE platform in conjunction with the JBOSS Application Server (version 5.1.0.GA). Service functionality can be reached via HTTP requests and, optionally, additional transmission of XML data. The service hence implements a so-called RESTful (Tyagi, 2006) interface and can be made available to basically any hardware/software platform and (almost) independently of the kind of access network. Available functionality currently includes group formation, context-aware group adaptation and deletion, as well as the provisioning of characteristics of active groups. A hypothetical interaction with such an interface is sketched below.
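The paper does not document the concrete resource paths or payload schema of the service, so the following Python sketch is purely illustrative: the host name, endpoint, and XML structure are invented placeholders that only show how a group-formation request over HTTP with an XML body could look against a RESTful interface of this kind.

```python
import urllib.request

# Hypothetical endpoint and payload; the real paths and XML schema of the
# classification service are not specified in the paper.
SERVICE_URL = "http://example.org/classification/groups"

request_body = """<?xml version="1.0" encoding="UTF-8"?>
<groupFormationRequest>
  <algorithm>k-means</algorithm>
  <numberOfGroups>6</numberOfGroups>
  <contextDimensions>2</contextDimensions>
</groupFormationRequest>"""

req = urllib.request.Request(
    SERVICE_URL,
    data=request_body.encode("utf-8"),
    headers={"Content-Type": "application/xml"},
    method="POST",
)

# The response would describe the formed groups and their characteristics.
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode("utf-8"))
```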
4 Results and performance evaluation

For evaluating the performance of the presented algorithms, the following set of criteria has been defined:

– Quality of clustering results – quality is given by the total classification error of a given result.
– Temporal performance – this criterion measures the total time necessary for a classification.
– Stability – evaluates the similarity of outcomes for multiple runs with an identical data set.
– Flexibility – are the algorithms capable of handling other than metric data?
– Implementation – evaluates the amount of effort necessary for algorithm implementation.

4.1 Quality and temporal performance of clustering methods

For a comprehensive evaluation within the given multi-casting scenario, every algorithm has been executed 100 times for any given input parameter combination. Input parameters included the number of entities (840 and 8400), the number of groups to be formed (4, 6, 9, 13, 19, 28), and the dimension of the context space (2 and 5). We analyzed the total execution time of an algorithm and its variance as well as the total accumulated error; a simple benchmark loop of this kind is sketched below. In summary, the most important observations are (a more detailed performance analysis of the individual algorithms can be found in Appendix A):
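As an illustration of this evaluation procedure, the sketch below runs a clustering function repeatedly over the stated parameter grid and records execution time and accumulated error; the synthetic Gaussian test data, the error definition, and the choice of the k_means sketch from Sect. 2.1.1 as the algorithm under test are assumptions, since the original simulator is not publicly described.

```python
import time
import itertools
import numpy as np

def total_error(X, centers, labels):
    """Accumulated classification error: summed distance of observations to their centers."""
    assigned = labels >= 0
    return float(np.linalg.norm(X[assigned] - centers[labels[assigned]], axis=1).sum())

def evaluate(algorithm, runs=100):
    rng = np.random.default_rng(0)
    results = []
    # Parameter grid from the paper: entities, group counts, context-space dimensions
    for n, k, dim in itertools.product((840, 8400), (4, 6, 9, 13, 19, 28), (2, 5)):
        times, errors = [], []
        for _ in range(runs):
            X = rng.normal(size=(n, dim))          # synthetic stand-in for simulated users
            start = time.perf_counter()
            centers, labels = algorithm(X, k)
            times.append(time.perf_counter() - start)
            errors.append(total_error(X, centers, labels))
        results.append((n, k, dim, np.mean(times), np.var(times), np.mean(errors)))
    return results

# Example: evaluate the K-Means sketch defined earlier
# print(evaluate(k_means, runs=5))
```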
4.1.1 K-Means
K-Means is among the best performing algorithms. However, computation times significantly increase with a higher number of users and groups. In terms of the absolute classification error, it achieves the best result of all analyzed algorithms with an average of 133.52 across 100 runs of the standard scenario.
4.1.2 Neural Gas

Similar to K-Means, Neural Gas is a relatively fast algorithm. Moreover, runtime decreases significantly for a larger amount of groups because the assignment of observations, in contrast to K-Means, is done only once and not in every iteration. Moreover, with higher numbers of nodes, their final position is reached much faster. The average classification error was 133.54.

4.1.3 Growing Neural Gas

In terms of both temporal behavior and dependency on entity as well as group count, the algorithm behaves very similarly to Neural Gas. Moreover, its average classification error was only slightly higher, averaging at 137.74.

4.1.4 Fuzzy-C-Means

With regard to performance, the algorithm performs worse with increasing numbers of observations, variables per observation, and number of groups. Due to its high computational effort, the algorithm cannot handle more than ten groups in a timely manner. For the calculation of the average classification error (134.69), observations have been assigned to the node they have had the highest affiliation to.

4.1.5 K-Fixed-Means

Overall, algorithm runtime is on a satisfying level and comparable to that of K-Means. For five variables (i.e. a five-dimensional context space), the algorithm arrives in a stable state earlier since (with the given set of test observations) the cluster centers reach their final position faster. A comparison of the accumulated classification error does not make sense since some observations were not classified at all.

4.2 Overall evaluation

A brief summary of the remaining evaluation criteria is depicted in Fig. 3. Overall, the K-Means algorithm disposes of the most appropriate characteristics in the given multi-casting use case. Not only its good results in terms of quality and performance but also its capability to handle nominal data make it the default choice for classification requests. Neural Gas is the preferred algorithm for large entity and group counts; however, it should not be used for clustering observations with nominal data. In case the number of groups is not known yet, Growing Neural Gas, despite its difficult parametrization, is the best choice. Fuzzy-C-Means is especially recommended if the number of groups remains small and the structure of the observation set is rather complex. Finally, the K-Fixed-Means algorithm should be chosen if cluster centers, i.e. group characteristics, are pre-determined and should only be marginally changed during execution.

Fig. 3. Summary evaluation of classification algorithms.

5 Conclusions

This paper has presented an evaluation of different user classification algorithms for enabling group communication and multi-casting in wireless networks. Classification tests have been performed based on simulated context information about users (such as location, music taste, and age) with different numbers of both users and expected groups. K-Means and (Growing) Neural Gas as well as the newly developed K-Fixed-Means have consistently recognized the basic structure within the set of users and produced fast and stable classification results with low total errors. A major aspect of future work is the development of classification methods that can handle nominal (and ordinal) data since most of the interesting user data enabling multi-casting (such as a user's profile) fall into these categories. Moreover, the implemented service will be verified in a testbed environment where efficiency gains based on the realization of multi-casting will be analyzed quantitatively.
Appendix A

Algorithms performance results
Fig. A1. Performance of K-Fixed-Means algorithm.

Fig. A2. Performance of K-Means algorithm.

Fig. A3. Performance of Neural Gas algorithm.

Fig. A4. Performance of Growing Neural Gas algorithm.

Fig. A5. Performance of Fuzzy-C-Means algorithm.
Acknowledgements. Parts of this work were funded by the Federal Ministry of Education and Research of the Federal Republic of Germany (Foerderkennzeichen 01 BK 0808, GLab). The authors alone are responsible for the content of the paper.
References

Baumung, P. and Zitterbart, M.: MAMAS – Mobility-aware Multicast for Ad-hoc Groups in Self-organizing Networks, in: Basissoftware für drahtlose Ad-hoc- und Sensornetze, edited by: Zitterbart, M. and Baumung, P., pp. 33–48, Universitaetsverlag Karlsruhe, March 2009.
Bezdek, J. C.: Pattern Recognition with Fuzzy Objective Function Algorithms, Plenum Press, New York, 1981.
Chalmers, R. and Almeroth, K.: Modeling the Branching Characteristics and Efficiency Gains of Global Multicast Trees, Proceedings of IEEE Infocom, Anchorage (AK), 2001.
Fritzke, B.: A growing neural gas network learns topologies, in: Advances in Neural Information Processing Systems 7, edited by: Tesauro, G., Touretzky, D. S., and Leen, T. K., MIT Press, Cambridge (MA), pp. 625–632, 1995.
Fritzke, B.: Some Competitive Learning Methods, Java Paper, 1997, http://www.neuroinformatik.ruhr-uni-bochum.de/VDM/research/gsn/JavaPaper/, accessed 11 November 2010.
Janic, M. and Van Mieghem, P.: The Gain and Cost of Multicast Routing Trees, International Conference on Systems, Man and Cybernetics (IEEE SMC 2004), The Hague, 2004.
MacQueen, J.: Some Methods for Classification and Analysis of Multivariate Observations, Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, pp. 281–297, 1967.
Martinez, T. M. and Schulten, K. J.: A neural gas network learns topologies, in: Artificial Neural Networks, edited by: Kohonen, T., Mäkisara, K., Simula, O., and Kangas, J., Elsevier, North-Holland, pp. 397–402, 1991.
Steinhaus, H.: Sur la division des corps matériels en parties, Bull. Acad. Polon. Sci., 4(12), 801–804, 1956.
Tyagi, S.: RESTful Web Services, Oracle Technology Network, 2006, http://www.java.sun.com/developer/technicalArticles/WebServices/restful/, accessed 28 November 2010.