A Distributed Approach for Intelligent Reconfiguration of Wireless Mobile Networks

Kambiz Madani1, Tereska Karran1, George Justo1, Mahboubeh Lohi1, David Lund2, Ian Martin2, Bahram Honary2, Sándor Imre3, Gyula Rábai3, József Kovács3, Péter Kacsuk3, Árpád Lányi3, Thomas Gritzner4, Matthias Forster4

1 University of Westminster, London, UK
2 HW Communication Ltd, Lancaster, UK
3 MTA, Budapest, Hungary
4 Netage Solutions GmbH, Munich, Germany

ABSTRACT

This paper describes the development of a generic architecture for the intelligent reconfiguration of wireless mobile networks within the EC-funded Framework V CAST project. The CAST demonstrator, currently being developed and integrated, is intended to evaluate some of the fundamental concepts of the proposed architecture within the technological constraints presently available. Here, we present a generic distributed architecture and discuss problems associated with organic-based intelligent support, network management, resource optimisation and control, and object-oriented reconfiguration of resources.

I. DISTRIBUTED APPROACH

Figure 1 shows the proposed architecture. In this design, the distributed intelligence has the following components:

a) a Global Intelligent Reconfiguration Controller (GIRC), located in the Mobile Switching Centre (MSC);
b) a Local Intelligent Reconfiguration Controller (LIRC), located in each Base Station Controller (BSC) and each Mobile Station (MS).

The GIRC contains the global components of the Network Manager (NM) and the organic-based CODA intelligent support sub-system, which share a database. Each LIRC, on the other hand, contains the local components of the Resource Controller (RSC) and the Network Manager, both of which have access to the same database.

It is assumed that the NM is forced to reconfigure the whole or part of the network due to one of the following events:

a) a service request received from the Service Manager (SM) that requires the reconfiguration of network resources;
b) a failure message received via the Fault Manager (FM) that requires remedial action in the form of some reconfiguration.

The reconfiguration is decided by the Network Manager, in consultation with CODA, via the shared database; the reconfiguration task is then analysed and passed to

the local LIRCs in the corresponding BSC and MS modules. The Layer Controller (LC) in each layer is responsible for processing the reconfiguration procedures, normally by forming and executing the required object chains.

II. INTELLIGENT SUPPORT SYSTEM

The Complex Organic Distributed Architecture (CODA) is an architecture which leverages the data that can be stored in databases and warehouses. CODA provides intelligent information to a system by using data marts in a layered, distributed deployment. These layers are based on Beer's division of the work performed in organisations along organic lines [1]. Filters and wrappers ensure that each component receives only necessary and sufficient information, and critical success factors (CSFs) allow components to act with a degree of autonomy [2].

Operating information is collected for a preset time band, typically every three hours. The time band must be long enough to allow the higher layers to collect sufficient data on the types of device available, the required services and user locations. The Network Manager and the Reconfiguration Controller both download copies of the updated information on the best-performing configurations for the time of day and the type of calls required. This is stored in look-up tables, which are closely monitored and regularly updated.

CODA software at the monitoring operations layer constantly assesses the transactions passing through the network. If a transaction is successful, this is noted. Failed operations and reconfigurations, however, always alert the higher layers and may result in immediate changes. Otherwise, the system regularly updates the look-up tables on the basis of its analysis of the historical data.
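The look-up table mechanism described above can be sketched as follows. The class and method names, and the simple success-rate scoring, are illustrative assumptions of ours rather than the CAST implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a CODA-style look-up table: per time band and call
// type, track the success rate of each configuration and expose the best one.
class ConfigLookupTable {
    // key: timeBand + "/" + callType + "/" + configId -> {successes, attempts}
    private final Map<String, int[]> stats = new HashMap<>();

    // Record the outcome of one transaction under a given configuration.
    void record(String timeBand, String callType, String configId, boolean success) {
        int[] s = stats.computeIfAbsent(
            timeBand + "/" + callType + "/" + configId, k -> new int[2]);
        s[1]++;              // attempts
        if (success) s[0]++; // successes
    }

    // Best-performing configuration for the time band and call type,
    // or null if nothing has been recorded yet.
    String bestConfig(String timeBand, String callType) {
        String prefix = timeBand + "/" + callType + "/";
        String best = null;
        double bestRate = -1.0;
        for (Map.Entry<String, int[]> entry : stats.entrySet()) {
            if (!entry.getKey().startsWith(prefix)) continue;
            int[] s = entry.getValue();
            double rate = (double) s[0] / s[1];
            if (rate > bestRate) {
                bestRate = rate;
                best = entry.getKey().substring(prefix.length());
            }
        }
        return best;
    }
}
```

In this sketch, failed transactions lower a configuration's rate immediately, standing in for the alerting and periodic table updates described above.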
For further improvement in performance, CODA also alerts the Network Manager and Reconfiguration Controller if there is any predicted shortfall in the provision of services, and suggests possible reconfigurations based on analysis of past performance, current failures and current events. At the 'monitoring operations' and 'monitoring the monitors' layers, reconfiguration performance is improved by comparing predicted performance for the time band with current (actual) performance data. In addition, history data about the performance of the network is stored in a very large data warehouse in the monitoring the monitors layer, containing three months of past performance data. The software then outputs predicted performance data together with suggested service chain data.

III. NETWORK MANAGEMENT FOR RECONFIGURABILITY

The Network Management framework for reconfigurability is based on the idea of using a Local Network Manager (LNM) in each radio network item, including the Mobile Station (MS) and the Base Station (BS). The Global Network Manager (GNM) resides in the MSC. The Network Management has separate components for a single reconfiguration, using the Reconfiguration Manager (RM) within either MS or BS, and for a simultaneous reconfiguration of both MS and BS (a "bi-reconfiguration request") via the Bi-Reconfiguration Manager (BRM). The layered Network Management is shown in Figure 2.

The layered architecture has two dimensions. The horizontal dimension is used for Reconfiguration Management and Fault Management. The vertical dimension consists of the following five layers:

1. User Layer (integrated with the MS-LNM),
2. Mobile Station Layer,
3. Base Station Layer (including the handling of bi-reconfiguration requests),
4. Global Layer (MSC),
5. Operations Layer (addressing the Operator's viewpoint).
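The five-layer hierarchy above can be captured in a minimal sketch; the enum and method names are ours, not CAST code, and the upward order mirrors the hierarchical escalation used by the management framework:

```java
// Illustrative sketch of the five Network Management layers, ordered from
// the User Layer at the bottom to the Operations Layer at the top.
enum NmLayer {
    USER, MOBILE_STATION, BASE_STATION, GLOBAL, OPERATIONS;

    // Next layer upwards in the hierarchy, or null at the top.
    NmLayer above() {
        int i = ordinal() + 1;
        return i < values().length ? values()[i] : null;
    }
}
```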

The hierarchical Fault Management information propagation path can be described as follows:

a) Faults reported by the Mobile Station are forwarded to the LNM-BS Fault Manager. This is because, from the operations point of view, the managed entities are the Base Stations and not the Mobile Stations.

b) Faults reported from any related Mobile Stations and from the underlying Base Station are forwarded from the LNM-BS Fault Manager to the GNM layer, which provides the fault information to the Operations Layer as the final destination.

The Reconfiguration Management uses a central layer for the delegation of reconfiguration tasks as follows:

a) The User API allows user applications to issue reconfiguration requests, which arrive at the LNM-MS Reconfiguration Manager. Received reconfiguration requests are transferred to the LNM-BS Bi-Reconfiguration Manager, except where only the Mobile Station itself is involved.

b) The Operator API can initiate global reconfiguration requests, which are submitted to the GNM Reconfiguration Manager. Globally initiated reconfiguration requests are forwarded from the GNM Reconfiguration Manager to the corresponding LNM-BS Bi-Reconfiguration Manager units.

c) The LNM-BS Bi-Reconfiguration Manager deals with bi-reconfiguration requests concerning a Base Station and a Mobile Station simultaneously. Bi-reconfiguration requests are split into their MS and BS parts, which are submitted to the responsible Reconfiguration Management components of the MS and BS, and their execution is synchronised on the successful completion of both parts.

d) Finally, the success or failure is reported back to the requesting Reconfiguration Manager.

IV. RESOURCE OPTIMISATION AND CONTROL

The Resource Controller (RSC) is an application module developed to intelligently control the configuration of the different reconfigurable hardware devices in our system, such as DSPs, ASICs, ASSPs and FPGAs. The application uses object-oriented technology [3] and is designed to demonstrate the operation of the reconfiguration algorithms. The devices we deal with can be configured for different tasks and can operate simultaneously; one device can often run more than one application (function) at the same time. Constructing a correct configuration for such hardware is complicated for two reasons:

1) Different devices are suitable for different tasks. For example, programmable ASICs are more appropriate for filtering tasks, while DSPs could be used for channel modem and baseband signal processing tasks.

2) In different scenarios, different configuration algorithms should be used to satisfy the different requirements.

The requirements that should be taken into account are maximum lifetime, fault tolerance, maximum utilisation and low power consumption; other important aspects can easily be added to the resource controller. These aspects can be handled by various optimisation algorithms, such as maximum capacity, hottest first, coldest first or the blind monkey algorithm.

For mobile handsets, the preferred algorithm is hottest first. The main goal of this algorithm is to utilise a minimum number of hardware resources. If a resource is not in use it can be powered down, thus saving energy. Since a processor can often serve more than one task, when new functionality is needed it can be placed onto resources that are already powered up. Using this method,

considerably lower power consumption can be achieved.

For base stations, on the other hand, fault tolerance is a greater issue. In this case, even utilisation is the goal: if we spread our services evenly over a large field of hardware, then when part of the hardware fails, only a few services will be affected. For such cases, the two algorithms that accomplish even utilisation are coldest first and the blind monkey algorithm.

In order to use these algorithms, a well-constructed resource controller architecture is needed. This architecture is based on three abstraction layers: the Hardware Abstraction Layer (HAL), the Function Abstraction Layer (FAL) and the Chain Abstraction Layer (CAL). The role of these abstractions is to hide the irrelevant features of the underlying layers and to provide a clear, well-defined interface to the layers above. Each abstraction layer consists of a set of Java classes and some database tables. The different Java classes represent the different reconfigurable building blocks, and the records in the database tables represent the available resources in the system. When a resource is needed, a new instance of the corresponding Java class is created and installed as appropriate. The implementation of these abstraction layers provides a flexible solution for the reconfiguration of different hardware resources and their evolving features.

The flexibility can be increased by integrating the intelligent decision-making and resource management algorithms into exchangeable modules called plug-ins. These plug-ins have a well-defined interface which allows them to be replaced as circumstances change or the network evolves. Each plug-in contains a resource management algorithm; when the RSC is deployed, the appropriate algorithm can be selected and installed.
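As a sketch of the hottest first idea described above, the following assumes a simple integer capacity model for resources; the names and structure are illustrative assumptions, not the CAST RSC code:

```java
import java.util.List;

// Hottest-first allocation sketch: place a new task on the most-utilised
// powered-up resource that still has room, and power up a cold resource
// only when no powered-up one fits, so unused hardware can stay off.
class HottestFirstAllocator {
    static class Resource {
        final int capacity;        // assumed abstract capacity units
        int used = 0;
        boolean poweredUp = false;
        Resource(int capacity) { this.capacity = capacity; }
    }

    // Returns the resource chosen for a task of the given size,
    // or null if nothing in the pool can host it.
    static Resource allocate(List<Resource> pool, int taskSize) {
        Resource best = null;
        for (Resource r : pool) {
            if (r.poweredUp && r.capacity - r.used >= taskSize) {
                if (best == null || r.used > best.used) best = r; // hottest that fits
            }
        }
        if (best == null) { // no powered-up resource fits: power up a cold one
            for (Resource r : pool) {
                if (!r.poweredUp && r.capacity >= taskSize) {
                    best = r;
                    best.poweredUp = true;
                    break;
                }
            }
        }
        if (best != null) best.used += taskSize;
        return best;
    }
}
```

A coldest-first variant for base stations would simply prefer the least-utilised resource instead, spreading load for fault tolerance.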
In constructing these algorithms and the described architecture, we assumed that the resource controller software has a way of operating without interfering with the configurable hardware resources. This is achieved by placing the functionality of the RSC on a separate service processor. If such a processor is not available, then distributed resource management should be used.

V. RECONFIGURATION OF HARDWARE AND SOFTWARE RESOURCES

In our system, any FPGA or DSP can be represented by an object with attributes and methods that describe and control its configuration capabilities. Baseband, RF and other network functions, often implemented as ASICs, are similarly described as objects. In a separate paper, we described a method for the representation of these objects [4]. Once all components of the configurable system, both hardware and functions, are described as objects, a mechanism is required to organise and manipulate these objects so as to place real functions onto real processing hardware. The CAST project uses a Java-based mechanism to facilitate the configuration of hardware resources. This provides a bridge in which one side operates in terms of soft objects while the other manipulates and configures hardware as the methods in the soft objects are called.

The level of native control is highly dependent on the way the Java Machine (JM) is implemented. The most common implementation of a Java system uses a software emulation of the JM running on an existing processor; this is better known as the Java Virtual Machine (JVM). In this case the native interface to hardware is simply some assembly code which runs on the processor. This code can be made capable of toggling pins or specific ports on the processor device, so the external configuration mechanisms of configurable devices can easily be controlled. Another implementation is the byte-code processor, in which case the Java code can be written to directly toggle pins or ports of the device to facilitate configurations.

Whichever JM implementation is used, the configurable hardware must be accessible to the Java code in an organised way in order to define hardware objects. This requires a partitioning of the hardware into groups of configurable hardware [5]. Each configurable device is defined as the owner of a partition. External peripheral hardware (memory etc.) which may be attached to the configurable devices must also be considered: all peripheral hardware belongs to the partition which contains the configurable device connected to it. In the case where a particular resource is shared (e.g. a PCI bus with slave devices), the resource belongs to the partition containing the configurable device which arbitrates the use of the resource. Each partition is given an address to uniquely identify it in software.

Once the hardware is logically partitioned with regard to configuration, knowledge of this partitioning must be electronically stored. Database tables are used to store information regarding each partition's resources (address, owning configurable device, peripherals, and connections between partition devices) and the interfaces between partitions (configurable connections).
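The partitioning scheme above can be sketched as a simple registry. In CAST this information lives in database tables; here a plain map stands in for those tables, and all names (including the device identifiers) are illustrative:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of logical hardware partitioning: each partition has a unique
// address, an owning configurable device, and its attached peripherals.
class PartitionRegistry {
    static class Partition {
        final int address;           // unique partition address
        final String owningDevice;   // e.g. "FPGA0", "DSP1" (hypothetical ids)
        final List<String> peripherals;
        Partition(int address, String owningDevice, List<String> peripherals) {
            this.address = address;
            this.owningDevice = owningDevice;
            this.peripherals = peripherals;
        }
    }

    private final Map<Integer, Partition> byAddress = new HashMap<>();

    // Register a partition; addresses must uniquely identify partitions.
    void register(Partition p) {
        if (byAddress.put(p.address, p) != null)
            throw new IllegalArgumentException("duplicate partition address " + p.address);
    }

    Partition lookup(int address) { return byAddress.get(address); }
}
```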
The database information regarding each partition is also linked to a Java class which is capable of controlling its configuration. This class is implemented so as to abstract away, or hide, the functionality of the native mechanism which actually carries out the configuration.

A major problem associated with the configuration of physical hardware is the number of different types of hardware available; the major difference between them is the physical method used to configure each one. For example, different FPGAs configure in different ways: some are partially reconfigurable, while others can only be configured as a whole. DSPs, on the other hand, configure quite differently: a configuration mechanism has to liaise with the DSP's own operating system in order to send its processing code and activate it. To make the configuration interface as simple as possible, these differences in configuration mechanism must be abstracted away, leaving a common interface for all types of hardware.

Generally, all configurable hardware has a 'space' where the configuration information (or function) can be placed. A particular function will take up a specific 'size' on a device, and will also be placed at a particular 'location' on the device. These three units of space, size and location are interpreted differently by each device, so a method of understanding each device's interpretation is required. Each partition description therefore has a device-specific map which describes what configurable resources are available at the locations within the space of a configurable partition. This allows resource allocation algorithms to determine what types of resource are available for configuration.

Resources are described using the Extensible Markup Language (XML). XML tags are defined which indicate locations, and groups of locations which identify the specific resource available. For example, a group of locations in an FPGA may represent a Configurable Logic Block (CLB) or a RAM. Each location represents where in the device's configuration space to store the bits which configure that particular resource. Similarly, a DSP memory map can be described with locations indicating where to place executable code and where to specify the operating system's task list.
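A device-specific resource map of the kind described above might be read as follows. The tag and attribute names (`partition`, `resource`, `location`, `type`) are invented for this sketch; the actual CAST schema is not specified here:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

// Sketch: parse an XML resource map and count the configurable locations
// available for a given resource type (e.g. CLB or RAM) in a partition.
class ResourceMapReader {
    static final String SAMPLE =
        "<partition address='0'>" +
        "  <resource type='CLB'><location>0</location><location>1</location></resource>" +
        "  <resource type='RAM'><location>2</location></resource>" +
        "</partition>";

    static int countLocations(String xml, String type) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            NodeList resources = doc.getElementsByTagName("resource");
            int count = 0;
            for (int i = 0; i < resources.getLength(); i++) {
                org.w3c.dom.Element el = (org.w3c.dom.Element) resources.item(i);
                if (type.equals(el.getAttribute("type")))
                    count += el.getElementsByTagName("location").getLength();
            }
            return count;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

An allocation algorithm could use such counts to decide whether a partition has room for a function of a given size.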

With the hardware constructed and its construction classified and stored, all that remains is to deploy functions to the resources. Current compiler technology can limit the system to deploying only precompiled functions to their specific devices. For this reason it is assumed that precompiled functions are stored in a library and deployed where necessary. A Function object is defined as a collection of precompiled device code for different devices. For example, a turbo decoder function object may contain specific implementations targeted at a Xilinx Virtex FPGA, an Analog Devices TigerSHARC DSP, a Texas Instruments C60 DSP and a PC. Specific resource requirements are stored with each implementation, using XML in a similar way to the resource descriptions of the hardware. The whole function object is stored similarly to the hardware resource information, using database tables for information and Java classes for operational functionality. A common interface is defined to allow access to operational status during the function's lifetime while configured.

With both hardware and functions classified, it is possible for resource allocation methods, such as those implemented in the CAST RSC, to dynamically allocate functions to hardware and monitor the status of any configured functions. Datapath connections between devices are managed using a concept known as 'endpoints'. A common communication protocol is defined which manages data transfer between different devices. An endpoint is simply a device-specific function which implements the device communication protocol and distributes the data to and from the functions on the particular device. Communication between functions on the same device is also specified, but relies mainly on the operating system or other runtime support features of the particular device.

The databases which hold all of the information regarding hardware and functions are implemented simply via a Java-based Open Database Connectivity (ODBC) driver library. Scalability is ensured by decoupling the implementation from a specific database in this way; the CAST project implements an Oracle 9i database behind an ODBC connection. XML operations, similarly, are carried out using a freely available Java library. All of these separate mechanisms can be combined to show how hardware configuration can be easily managed once all the information is gathered together and abstracted. We name this framework 'Fizzware'.

Figure 3 illustrates the whole system from both hardware and software perspectives; the dotted line in Figure 3 encloses the Fizzware framework. At the bottom, the hardware itself is shown. Above the hardware are separate native drivers, each relating to a particular hardware partition. The JVM forms the centre of the system and defines the barrier between software and hardware. Resource drivers are formed from database entries and Java classes, and each implements the common interface for device configuration. All of these resource drivers, together with several supporting runtime classes, form the Fizzware API.

VI. CONCLUSION

In conclusion, a generic distributed architecture for the reconfiguration of wireless mobile networks was described. In this design, problems associated with organic-based intelligent support, network management, resource optimisation and control, and object-oriented reconfiguration of resources were investigated and discussed.

Acknowledgment

This document is based on work carried out in the EU-sponsored collaborative research project CAST (http://www.cast5.freeserve.co.uk/). Nevertheless, only the authors are responsible for the views expressed here. We also gratefully acknowledge the assistance of Analog Devices and Mathworks UK for their support throughout this research.

REFERENCES

[1] R. Espejo, W. Schuhmann, and M. Schwaninger, Organisational Transformation and Learning: A Cybernetic Approach to Management. Wiley, 1996.
[2] T. Karran, G. R. Justo, and M. J. Zemerly, "An Organic Architecture for Component Evolution in Decision Systems," SCI 2000 Conference Proceedings, July 2000.
[3] J. Mitola, Software Radio Architecture: Object-Oriented Approaches to Wireless Systems Engineering. Wireless Architectures for the 21st Century, Fairfax, Virginia, USA.
[4] D. Lund, B. Honary, and K. Madani, "Characterising Software Control of the Physical Reconfigurable Radio Subsystem," Proceedings of IST 2001, Barcelona, September 2001, pp. 357-362.
[5] D. Lund, B. Honary, and M. Darnell, "A New Development System for Reconfigurable Digital Signal Processing," IEE 3G2000, London, March 2000.

Figure 1: The Proposed Distributed Architecture

Figure 2: The Network Management Layered Architecture (FM = Fault Manager, RM = Reconfiguration Manager, BRM = Bi-Reconfiguration Manager)

Figure 3: Fizzware Architecture