Multimedia Service Provisioning in a B3G Service Creation Platform

Teodora Guenkova-Luy1, Andreas Schorr1, Andreas Kassler2, Ingo Wolf3, Juan A. Botía Blaya4, Tiziano Inzerilli5, Miguel Gómez6, and Telma Mota7

1 University of Ulm, Distributed Systems Department; 2 Karlstad University, Computer Science Department; 3 T-Systems International; 4 University of Murcia, Depto. Ing. de la Info. y las Comunicaciones; 5 University of Rome La Sapienza; 6 Agora Systems; 7 Portugal Telecom Inovacao
Abstract – Future wireless systems will be heterogeneous and highly adaptive. In this environment, it is important to discover, create and adapt (multimedia) services and content, and to integrate these processes into a platform so that pervasive systems, application services and other user-centric services can utilise them easily. In this paper, we present a Multimedia Service Provisioning Platform (MMSPP) designed for systems beyond 3G. The MMSPP orchestrates multimedia session control and content adaptation. Adaptation processes, based on MPEG-21 DIA, are coordinated via SIP/SDPng and guided by user/terminal profiles and network characteristics. The platform provides mechanisms for service discovery and interacts with accounting, charging and network QoS mechanisms.

I. INTRODUCTION

Communication systems beyond 3G will handle diverse types of services across different types of networks and access technologies. This trend is expected to become a universal characteristic of communications by the end of this decade. Adaptivity is recognized as a key issue in bridging the heterogeneity of networks and service qualities. However, adaptation mechanisms have to be coordinated across several layers and entities in order to prove useful. It is the goal of the IST project Daidalos [1] to enable the end-to-end provisioning, control and adaptation of services over diverse access technologies. The Daidalos vision is to seamlessly integrate heterogeneous network technologies, allowing network operators and service providers to offer new and profitable services and giving users access to a wide range of personalised voice, data, and multimedia services. The Multimedia Service Provisioning Platform (MMSPP) is the part of the Daidalos architecture that enables enhanced multimedia services in heterogeneous environments in the context of systems beyond 3G. The MMSPP is designed to interact with other components of this architecture to enable user authentication, service and network authorization and
charging (time, event or volume based), and resource reservation. The MMSPP targets scenarios in which several heterogeneous receivers simultaneously request QoS-controlled multimedia services. Terminal and session mobility, multicasting and broadcasting are also taken into account. The MMSPP contains four major types of components: MMSP User Agents, MMSP Broker/Proxy, Service Discovery Servers (SDS) and Content Adaptation Nodes (CANs). Interactions between these components are specified by well-defined APIs and standard protocols or extensions of standard protocols. This paper gives an overview of the MMSPP and its components. Several scenarios concerning the MMSPP are depicted, and the major requirements and functionalities of the MMSPP subsystems are derived from these scenarios.

II. USE CASES

The following use-case categories of the Daidalos architecture were selected to represent multimedia support:

- Audio and Videoconference: A user requests and acquires an audio and/or video call with one or several other users.
- Streaming Media Content: A user accesses a multimedia service provider to obtain streaming media (audio and/or video).
- Service Discovery: Clients need to search for and discover services provided within the MMSPP.
- Multimedia Messaging: The service for sending multimedia messages to single users and user groups.

For each of these use-case categories, specific application scenarios and requirements have been defined, which are described in the following sections.

A. Terminal and Session Mobility

One function of the Daidalos platform is to enable mobility
of users, devices and sessions. In particular, we consider three different types of mobility:

- Terminal Mobility (TM) – Enables terminal connectivity regardless of the terminal's current point of attachment to the network, so that its user/-s can continue to use the terminal.
- User Mobility (UM) – Enables the user to access her/his services through different terminals.
- Session Mobility (SM) – Ensures that sessions are not disrupted when the user's terminal changes its point of attachment to the network, or whenever a session is transferred from one terminal to another or even from one user to another.

Combining the three mobility types results in the following Session Mobility scenarios:

- Session Mobility and Terminal Mobility (SM + TM) – The user changes the point of attachment to the network of her/his terminal with active sessions (e.g. John is watching a real-time video on his laptop and changes to a new access network).
- Session Mobility only (SMO) – The session is transferred between terminals of different users. User A redirects a session (e.g. a phone call) to user B.
- Session Mobility and User Mobility (SM + UM) – The session is transferred between two terminals on which the same user has application access. The user changes terminal (redirect) with active sessions (e.g. John redirects an already established and running video conferencing session from his PDA to his cell phone).

The SM + TM scenario represents typical navigation or roaming scenarios of the terminals. These scenarios are associated with intra-domain handover (i.e. a handover within the responsibility area of a single provider) or inter-domain handover (i.e. a handover between the responsibility areas of multiple providers). In the case of intra-domain handovers, the Mobile IPv6 fast-handover mechanism [2] is applied to enable continuous communication. In the case of inter-domain handovers, SIP mobility [3][4] would be applied, as this case is associated with roaming between domains, where the user's re-registration within the new domain/-s of accessibility might be required. For such re-registrations the IPv6 mobility management alone is insufficient, as A4C (Authentication, Authorization, Accounting, Auditing and Charging) management has to take place when the domain changes. The SMO and SM + UM scenarios consider the transfer of the whole session or only parts of it to a new terminal. In the case of a partial session transfer, the media flows (e.g. audio, video, etc.) of the session are split between different devices, or only a subset of the involved media flows is transferred to the destination device. In both cases, the SMO procedure relies on the SIP REFER method [5].

B. Multicasting and Broadcasting

Broadcast service provisioning modules are responsible for time-, context- and technology-aware broadcast service delivery. Therefore the MMSPP broadcast modules have to interact
with other Daidalos modules, i.e. A4C modules, QoS modules and security modules for key management. Broadcast services require a separation of the forward and return channel. The main requirements derived from this separation are:

- Service Announcement implementation on the forward channel
- Mediation components enabling the cooperation between forward and return channel, e.g. on the mobile terminal
- Access Control and Content Protection in a combined usage of forward and return channel
- Carousel services for repeated component delivery over multicast and broadcast channels

Additional scenarios concerning session establishment, session handover and terminal change for broadcasting services have been examined, together with a description of how to support carousel services.

C. QoS, A4C and Security Relationships

The QoS, A4C and security components within the Daidalos architecture regulate the access to and the usage of Daidalos services. Interactions between the MMSPP and the A4C components [22] provide its client applications and users with regulated access to both network and services. This is facilitated using DIAMETER [6][7] and SIP [4]. Proper performance of multimedia services requires traffic prioritization or resource reservation in the network and resource guarantees from it. The association with the A4C provides the MMSPP and the QoS components with information about how many resources may be reserved per user class and/or client application class. Resource reservation within Daidalos follows the DiffServ architecture [8]. The security components guarantee the faultless exchange of security-sensitive data between the MMSPP, A4C and QoS components, i.e. data concerning user profiles, user identification, service identification, etc.

III. MULTIMEDIA SERVICES PROVISIONING PLATFORM AND SERVICES

This section describes the functional parts of the Multimedia Services Provisioning Platform (MMSPP). These are:

- the user agent, which connects the user terminal with the multimedia services and their provisioning logic,
- the control function, responsible for controlling the process of searching for and negotiating multimedia service delivery to the user,
- the adaptation framework, which adapts multimedia services to the user profile, access devices and underlying network types,
- the service discovery part, devoted to finding the most suitable services under the given requirements on quality of service and available adaptation services.

Figure 1 depicts the MMSPP architecture. It consists of the following components:

- MMSP User Agent (MMSP_UA): The MMSP_UA is a component located in the end-user terminal, which is
responsible for requesting/accepting the establishment of multimedia sessions, changing their performance parameters and managing SIP mobility and session redirection [4][5].
- Multimedia Service Provisioning Broker (MMSP_B): The MMSP_B receives multimedia service requests from clients and performs service orchestration, interfacing with functions such as Service Discovery, Content Adaptation, QoS-level assurance together with the QoS Broker deployed in access and core networks, and with A4C through DIAMETER [6].
- Multimedia Service Provisioning Proxy (MMSP_P): This component is the representative of the MMSP_B in the access network, managing the communication with the MMSP services (e.g. SIP communication). It cooperates with the MMSP_B to announce service management decisions to the service users (represented via the MMSP User Agents).
- Content Adaptation Nodes: Content Adaptation Nodes (CANs) act as proxy nodes for the multimedia data flow and adapt different types of content. During a multimedia session, a number of CANs can be cascaded and arranged in a content distribution tree, forming a Content Adaptation Overlay Network (CAON) which adapts real-time content according to the capabilities, preferences and constraints of the networks and destination nodes.
- Service Discovery Directory Agent (SDDA): The SDDA implements a directory service that allows the discovery of Service Providers and Content Adaptation Nodes based upon the service needs. SDDAs are situated in Service Discovery Server (SDS) nodes.
- Service Discovery User Agent (SDUA): The SDUA queries SDDAs in order to locate third-party services as well as enabling services. SDUAs are located in mobile terminal equipment.
- Service Discovery Service Agent (SDSA): The SDSAs are located in service elements and register services in the SDDA.
The following subsections describe the main components of the MMSPP in more detail.

A. MMSP User Agent

The MMSP_UA communicates with the other MMSP User Agents in the system over two logical communication channels. The media transfer between the peers (via RTP/RTCP [9]) can be either direct or indirect via one or several CANs. The application control signalling between the peers (via SIP or another application signalling protocol, e.g. RTSP [10]) can either be direct or pass through an intermediate proxy (MMSP_P and MMSP_B). In principle, direct application control signalling can always be applied between MMSP_UAs, but in this case they do not benefit from the coordinated resource reservation performed by the MMSP_B together with the QoS Broker. Direct communication between the peers may be applied, e.g., in ad-hoc networks.

Figure 2 shows the internal components of the SIP-based MMSP_UA. A SIP Stack and a SIP UA are used to process SIP messages and to manage call sequences and protocol synchronization. SIP messages for multimedia session establishment carry attachments describing the multimedia session. SIP does not enforce a particular format for these attachments; nowadays, SIP messages typically carry SDP [11] or the newer SDPng [12] attachments. We developed a new description format based on SDPng and MPEG-21 [13] that covers the more complex scenarios described in Section II. Details about this description language are provided in [14] and [15]. The generic architecture of the MMSP_UA facilitates the use of different SIP attachment formats. The Message Manager decides which format (e.g. SDPng) will be used and chooses a Description Language Translator, which creates/parses the attachment in the appropriate format. On the one hand, this keeps the agent backward compatible with signalling peers that only understand conventional SDP descriptions; on the other hand, the additional features of the newer SDPng format and of our own description format can be used in negotiations between SDPng-enabled peers.
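The selection logic can be pictured with a short sketch. This is not the Daidalos implementation; the class names, the SDPng media type and the preference order are assumptions made purely to illustrate how a Message Manager might pick a Description Language Translator from the formats a peer advertises.

```python
# Hedged sketch: per-peer choice of a Description Language Translator (DLT).
# Class names and the SDPng media type are illustrative assumptions.

class SdpTranslator:
    """Creates/parses conventional SDP (RFC 2327) session descriptions."""
    content_type = "application/sdp"

class SdpngTranslator:
    """Creates/parses SDPng/MPEG-21 DIA based session descriptions."""
    content_type = "application/sdpng+xml"   # assumed media type

class MessageManager:
    # Preference order: the richer SDPng-based format first, plain SDP as fallback.
    _translators = [SdpngTranslator(), SdpTranslator()]

    def choose_translator(self, peer_accepted_types):
        """Return the first translator whose format the peer accepts."""
        for dlt in self._translators:
            if dlt.content_type in peer_accepted_types:
                return dlt
        # Fall back to SDP to stay backward compatible with legacy peers.
        return SdpTranslator()

# Example: a peer that only understands conventional SDP.
mm = MessageManager()
dlt = mm.choose_translator({"application/sdp"})
print(type(dlt).__name__)   # -> SdpTranslator
```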
Figure 1: MMSP Components
Figure 2: MMSP_UA architecture

The Media Control Manager controls the RTP [9] traffic and its accompanying monitoring protocol (RTCP). It contains
a CA Module controlling the MMSP_UA-based content adaptation (e.g. dynamic codec switching within the session boundaries negotiated through SIP signalling), using the information provided by RTCP monitoring reports. The Media Stack is responsible for multimedia flow encoding/decoding, rendering and capturing, RTP payload formatting, and RTCP statistics generation and processing. Finally, the Coordination Engine checks the logic of sent messages at object level and takes decisions about replies and message modifications based on control interactions with the corresponding QoS entity of the QoS-enabled application. The QoS components of the Daidalos architecture provide the Coordination Engine with resource information for the signalling entity. The Coordination Engine decides what to signal (i.e. the message structure) and how to signal it (i.e. the specific signalling mechanism, e.g. SIP, SDP, SDPng, etc.).

B. Multimedia Service Provisioning

The MMSP Broker/Proxy receives multimedia service requests from clients, discovers users and forwards session creation and adaptation requests to them. It also negotiates the network QoS on behalf of the terminal, thus guaranteeing end-to-end QoS by requesting resource reservations from the QoS Broker managing access network resources (a component within another functional block of the Daidalos architecture, not depicted here). The MMSP Broker/Proxy is also responsible for deciding when content adaptation shall be applied for a requested session. It discovers and configures the Content Adaptation Nodes (CANs).
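To make this adaptation decision concrete, the following sketch intersects the codec sets of both parties and flags the session for content adaptation when no common codec exists. The codec names are examples and the function is an illustration, not the actual Broker logic; the capability sets would come from the SIP INVITE attachment and the SIP OPTIONS response described in Section IV.

```python
# Hedged sketch: deciding whether a CAN is needed for a requested session.

def adaptation_required(caller_codecs, callee_codecs):
    """Return (needs_can, usable_codecs) for a media session."""
    common = set(caller_codecs) & set(callee_codecs)
    return (len(common) == 0, common)

caller = {"H.263", "MPEG-4", "AMR"}      # from the caller's session description
callee = {"H.261", "G.711"}              # from the callee's capability response

needs_can, common = adaptation_required(caller, callee)
if needs_can:
    # No shared codec: query the SDS for a CAN able to transcode between
    # one of the caller's codecs and one of the callee's codecs.
    print("insert CAN, e.g. MPEG-4 -> H.261 transcoding")
else:
    print("direct session using", common)
```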
Figure 3: MMSP Broker/Proxy Architecture

Figure 3 shows the architecture of the MMSP Broker/Proxy. The MMSP Broker is the "brain" of the MMSPP: it comprises the logic for controlling all the other elements in the platform. It provides the session control and routing logic required for multimedia session establishment, control and adaptation, and allows the home operator to control service subscription, authentication, routing and billing. The MMSP Broker/Proxy component controls the states of the session management protocol (e.g. SIP). It validates user requests, mediates terminal capability negotiations and determines
whether content adaptation is required. The MMSP Broker/Proxy also requests resource reservations from the QoS Broker and generates charging events with the A4C server. The MMSP Proxy Engine acts as a SIP proxy and simply forwards SIP messages to their destination according to configurations and instructions provided by the MMSP Broker. The MMSP Broker/Proxy comprises the following sub-components (see Figure 3):

- Session Module / Session Status Database: keep track of the session status information in the associated Session Status Database. They allow users (or other components) to subscribe to a set of session status events and notify them when the status changes (e.g. if a session involves floor control, this component keeps track of floor granting and notifies the subscribed participants of changes in that status).
- SD extension / SD module: implement the interface to the CAN/Multimedia Service Discovery Service (SDS) and thus enable the MMSP_B to locate a set of Content Adaptation Nodes that can change the format of the compressed media or its representation to match heterogeneous codecs or capabilities. This interface is based on the Service Location Protocol (SLP) [16]; a query sketch follows this list.
- A4C extension / A4C module: implement the interface to the A4C components of the Daidalos architecture [22]. This interface is based on the Security Assertion Markup Language (SAML) and Diameter [17].
- QoS extension / QoS & Mobility module: implement the interface to the QoS Brokers in the Access Network. This interface is based on the Common Open Policy Service (COPS) protocol [18] and allows configuring the provisioning of QoS for multimedia sessions in line with the user's subscription rules. QoS Brokers [21] perform admission control and configure edge routers accordingly. QoS Brokers in different domains may negotiate inter-domain SLAs.
- CA extension / CA module: implement the logic required to communicate with the CAN whenever a content adaptation function is needed. This logic is responsible for preparing CANs to perform transcoding and/or aggregation of multimedia streams. The interface is based on media gateway control type protocols (e.g. MEGACO [19]).
- Home Registrar Server: keeps track of the current location of user terminals. Whenever someone tries to reach a specific user (respectively the user's terminal), this component provides the user's current address (based on login information).
- Presence DB: keeps track of the user's physical location, so that the best terminal and network options available at the user's current location can be offered to her/him when a multimedia session is to be established.
- Redirect Server: is responsible for redirecting messages to SIP Proxies belonging to other domains (i.e. the home or the visiting domain that is local from the point of view of the attached terminal).
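The SLP-based CAN lookup mentioned for the SD extension can be sketched as follows. The service type string, the attribute names and the toy in-memory directory are assumptions for illustration only, not the normative Daidalos vocabulary; SLPv2 itself expresses such queries with LDAPv3-style search filters.

```python
# Hedged sketch: an SLP-style CAN lookup as the SD extension might issue it.
# "service:can" and the attribute names are illustrative, not normative.

SERVICE_TYPE = "service:can"

def build_slp_filter(in_codec, out_codec, max_delay_ms):
    """Compose an LDAPv3-style search filter as carried in SLPv2 requests."""
    return ("(&(input-codec={})(output-codec={})(added-delay<={}))"
            .format(in_codec, out_codec, max_delay_ms))

def find_cans(directory, in_codec, out_codec, max_delay_ms=100):
    """Return URLs of registered CANs matching the request (toy matching)."""
    matches = []
    for url, attrs in directory.items():
        if (attrs.get("input-codec") == in_codec
                and attrs.get("output-codec") == out_codec
                and attrs.get("added-delay", 0) <= max_delay_ms):
            matches.append(url)
    return matches

# Toy directory content as the SDDA might hold it after CAN registrations.
sdda = {"service:can://can1.example.net": {"input-codec": "MPEG-4",
                                           "output-codec": "H.261",
                                           "added-delay": 40}}
print(build_slp_filter("MPEG-4", "H.261", 100))
print(find_cans(sdda, "MPEG-4", "H.261"))
```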
C. Content Adaptation Node

The introduction of an intermediate Content Adaptation Node in point-to-point conferences or streaming sessions enables the provisioning of a wide range of services, such as:

- Establishing point-to-point conferences between codec-incompatible peers, or streaming sessions (e.g. VoD) between a codec-incompatible client and server.
- Adapting media streams to the available network resources (e.g. transcoding because one of the peers has insufficient downstream bandwidth to use a certain audio or video codec).
- Providing enhanced error protection (e.g. forward error correction, RTP retransmission) or rate-control functions on the last, wireless hop between the receiver and a CAN located in the receiver's access network.
- Adapting media streams to the available hardware capabilities and/or user preferences (e.g. a user connects with a GPRS mobile and wishes to reduce the video frame size in order not to pay more for a larger video that would anyhow be resized at her/his terminal).

Other advanced multimedia services based on media transformation are also enabled, such as speech-to-text and text-to-speech conversion in support of the hearing, speaking and visually impaired, language translation services, etc. MPEG-21 DIA [13] mechanisms are applied for content adaptation.

CANs are configured and managed by external control entities. The controlling entity depends on the content adaptation scenario: in implicit adaptation scenarios, CANs are managed by the CA Module inside the MMSP Broker, whereas in explicit adaptation scenarios CANs are managed directly by the Mobile Terminal. A common control interface can be used in both scenarios. However, due to the different nature and location of the controller entities, this interface may be implemented using different underlying transport protocols in each case (e.g. SIP [4] or RTSP for explicit adaptation, MGCP/MEGACO [19] or an object-based proprietary solution for implicit adaptation). CAN controller entities (the Mobile Terminal or the MMSP Broker/Proxy CA module) rely on Service Discovery for finding the proper CANs. To enable this search process, the CANs must register their capabilities in the SDS on startup. The CANs can update their status in the SDS periodically or whenever a relevant change in their conditions occurs (e.g. new adaptation facilities become available, the CAN's resource availability for adaptation changes, or new incoming streams become available so that new adaptation sessions can be offered). The controller entities contact the SDS in order to search for CANs that can fulfil a specific adaptation request. The SDS provides information on the matched CANs along with status information that can guide the controller entity in selecting the most suitable (set of) CAN(s) among the
different possibilities included in the SDS response. Thus, the CANs can form a Content Adaptation Overlay Network (CAON) by cascading the adaptation services offered by several CANs.

The Content Adaptation Node consists of several subsystems (i.e. Managers) interworking through the Content Adaptation Coordinator (CAC), which is responsible for the joint control of, and resource distribution between, all media adaptation sessions performed by a CAN (see Figure 4). A Session Manager is responsible for setup, modification and teardown of content adaptation sessions through interaction with external controller entities (MT or MMSP Broker/Proxy CA module). The Service Manager registers the CAN's capabilities in the SDS via SLP [16] on start-up, and it creates a second capability description using the Web Ontology Language (OWL) [20], which can be accessed by the SDS via HTTP. SLP allows registering capabilities in the form of simple key-value pairs, whereas OWL can express more complex facts, such as how many resources are required for each type of adaptation operation, how much delay a specific adaptation function adds to the end-to-end session, and status information about the available CAN resources (e.g. CPU, network interface bandwidth). Thus, the controlling entity can find a CAN that is in principle able to perform a desired adaptation operation, taking into account resource availability and the delay introduced by the adaptation process. The CAN status is monitored by the Resources Manager and provided to the decision-taking components of the CAN to aid the adaptation logic. Media Managers receive, adapt and forward media data from/to clients and downstream nodes. Several Media Managers may coexist in a CAN, each dealing with a particular kind of data (streaming media, real-time contents, MM messaging, etc.) and adaptation type (data transcoding, flow aggregation, modality transformation, etc.). Media Managers support the creation of one-to-one, one-to-many, many-to-one and many-to-many sessions. Several audio and/or video sources can be mixed into a single RTP [9] flow, and input media streams can be transformed into output media messages to be sent towards Message Servers. The Media Manager Coordinator handles the diverse Media Managers present in a Content Adaptation Node. It instantiates all the Media Managers present in the CAN, collects their adaptation capabilities and routes session setup, modification and teardown messages to the appropriate Media Manager.
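A minimal sketch of the registration side follows, assuming illustrative attribute names and a placeholder OWL document URL: on start-up the Service Manager would advertise its adaptation capabilities as SLP key-value attributes and point the SDS to the richer OWL description via HTTP.

```python
# Hedged sketch: attributes a CAN Service Manager might register via SLP.
# Keys, values and the OWL document URL are placeholders, not normative.

can_service_url = "service:can://can1.example.net:5060"

can_attributes = {
    "input-codec":  "MPEG-4",
    "output-codec": "H.261",
    "adaptation":   "video-transcoding",
    "added-delay":  "40",                                   # milliseconds
    "owl-profile":  "http://can1.example.net/capabilities.owl",
}

def to_slp_attr_list(attrs):
    """Serialize attributes in the (key=value),(key=value) form used by SLPv2."""
    return ",".join("({}={})".format(k, v) for k, v in attrs.items())

print(can_service_url)
print(to_slp_attr_list(can_attributes))
```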
Figure 4: CAN Architecture
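The coordinator's routing role can be illustrated with a toy dispatch sketch; all names and capability labels are invented for illustration. The MMC keeps the Media Managers it instantiated and hands a session-setup request to the first one that supports the requested data kind and adaptation type.

```python
# Hedged sketch: Media Manager Coordinator (MMC) dispatch, illustrative only.

class MediaManager:
    def __init__(self, name, data_kinds, adaptations):
        self.name = name
        self.data_kinds = set(data_kinds)
        self.adaptations = set(adaptations)

    def setup_session(self, request):
        return "{} handles {}".format(self.name, request)

class MediaManagerCoordinator:
    def __init__(self, managers):
        self.managers = managers           # instantiated on CAN start-up

    def route_setup(self, data_kind, adaptation, request):
        """Forward the setup message to the first capable Media Manager."""
        for mm in self.managers:
            if data_kind in mm.data_kinds and adaptation in mm.adaptations:
                return mm.setup_session(request)
        raise LookupError("no Media Manager for this adaptation")

mmc = MediaManagerCoordinator([
    MediaManager("rtp-video", {"streaming", "real-time"}, {"transcoding"}),
    MediaManager("mm-messaging", {"messaging"}, {"modality-transformation"}),
])
print(mmc.route_setup("real-time", "transcoding", "MPEG-4 -> H.261"))
```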
D. Service Discovery

The Service Discovery Service (SDS) provides access to information about the availability of users, terminals and services in the system. The Service Discovery Directory can be composed of a hierarchy or a mesh network of interconnected SDS servers. The Service Discovery framework makes use of the Service Location Protocol [16] and consists of three major elements:

- Service description: a set of capabilities is associated with each entity/service to be discovered and is used to register and locate these entities/services (such as the Content Adaptation Services offered by CANs).
- Service registration: each entity that needs to be discovered is published in a directory system in compliance with the service description model and with an associated scope, which defines the group of users that can be informed of the availability of that entity.
- Service querying: when an entity has to be discovered, a client process issues a query including the capability parameters requested for that entity in order to obtain its location (typically its URL). Alternatively, a query can be made to find out the capabilities of an entity whose location is already known.

A possible scenario in which service discovery is required is, for example, multimedia session establishment. The main interactions in this scenario are:

1. The service provisioning application (e.g. a video server) registers its services in the SDS, thereby publishing information about the type/-s of the service/-s, their availability and other service-related information (e.g. business policy, security regulations, etc.).
2. A terminal initiates a session initiation request to get access to a specific type of service, but not to a specific service instance.
3. The request is forwarded to the MMSP Broker closest to the Access Network, which forms a query to the SDS based on the session initiation request.
4. Based on information about the terminal's capabilities, its location, the user preferences and the requested type of service, the SDS selects one or multiple ordered video provider candidates and returns this information to the MMSP Broker. The MMSP Broker then proceeds with the session initiation process.

As part of step 4, the SDS may communicate with its peer SDSs in adjacent operator domain/-s about the availability of services there. This communication can be done for every specific communication case or in advance, e.g. when different providers wish to exchange information about possible mutual support of services.

IV. SCENARIO AND INTERWORKING

The following scenario details the interworking between some of the components. It includes the Access Router [21], which is the first-hop wireless router providing access to the fixed network for the mobile terminal. The MMSP P/B is responsible for receiving SIP requests, interpreting the SDP part and, based on a User Profile retrieved from A4C (SVUP),
validating whether the user is authorized to use the service, the required codecs and the corresponding maximum bandwidth. The MMSP/B also maps the high-level requested service into a network service request that can be understood by a lower-level component, the QoS Broker:

1-11: Activation of the QoSBr and AR2.
11-14: Both MTs register in the MMSP using SIP REGISTER, so that it can translate the SIP addresses appearing in the "To" field of a SIP message to an IP address.
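Steps 11-14 amount to maintaining a binding between a SIP address-of-record and the terminal's current contact address. The toy mapping below uses example URIs and IPv6 addresses only; it merely shows the translation the MMSP performs when it later routes an INVITE by its "To" field.

```python
# Hedged sketch: SIP registration bindings kept by the MMSP (example data).

bindings = {}

def register(address_of_record, contact_ip):
    """Store the mapping created by a SIP REGISTER (MT1 and MT2 in steps 11-14)."""
    bindings[address_of_record] = contact_ip

def resolve(to_uri):
    """Translate the SIP 'To' URI of an INVITE into the terminal's IP address."""
    return bindings.get(to_uri)

register("sip:mt1@daidalos.example", "2001:db8::1")
register("sip:mt2@daidalos.example", "2001:db8::2")
print(resolve("sip:mt2@daidalos.example"))   # -> 2001:db8::2
```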
Figure 5 – Multimedia scenario

15-16: MT1 invites MT2 to a multimedia session using a SIP INVITE request message. When the invitation arrives at the MMSP, it sends back a SIP TRYING response indicating that the process is ongoing.
17-18: The MMSP intercepts the invitation and sends a SIP OPTIONS request message to MT2, instead of simply forwarding the INVITE, in order to learn the terminal capabilities. This behaviour is needed to detect content adaptation scenarios beforehand. When the MMSP receives the response to this query, it tries to match MT1's capabilities (received in the SDP attachment of MT1's SIP INVITE) with MT2's capabilities (received in the SDP attachment of the SIP 200 OK response to the SIP OPTIONS). In this scenario we assume compatibility between both parties for simplicity; otherwise, a suitable CAN would be identified in the following steps by querying the SDS.
19-26: After that, the MMSP requests the allocation of resources by sending a COPS request to the QoSBr with the following parameters:

- QoSB_Handle: the identification of the QoS session.
- {NSID, MIN_BW, AVG_BW, MAX_BW}: a list of requested network transport classes and requested bandwidth values; the MMSP requests an amount of bandwidth (minimum required bandwidth, average bandwidth and maximum necessary bandwidth) for a specified network service class. The list is ordered first by NSID (network service class) and then by bandwidth. It is constructed from the bandwidth required by the codecs needed to establish the multimedia session, taking into account the business model and user profiles (SVUP). This information is split into two directions, downstream and upstream, with MT1 and MT2 as the origin and destination of the chunk, respectively.

A COPS Report is sent to the QoS Broker indicating success.
27-32: The QoS resources in the network are now assured, so the MMSP sends the first SIP INVITE (the one it had received from MT1) to its destination (MT2). MT2 starts ringing and finally accepts MT1's invitation.
33-35: After receiving the SIP 200 OK response (accepting the invitation), MT1 sends one last message (SIP ACK) to MT2 before the RTP session is established.
36-39: MT1 disconnects and sends a SIP BYE message to MT2 in order to close the session. When the MMSP receives that message, it releases the QoS resources allocated for that session by sending a COPS DeleteRequestState message to the QoSBr, just before the proxy forwards the SIP BYE message that concludes the session.
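The structure of the COPS request in steps 19-26 can be sketched as follows. The NSID values, codec bandwidths and the SVUP cap are invented for illustration: each codec needed for the session maps to a {NSID, MIN_BW, AVG_BW, MAX_BW} entry, the maximum is capped by the user profile, and the entries are ordered by network service class and then by bandwidth, separately per direction.

```python
# Hedged sketch: building the per-direction {NSID, MIN_BW, AVG_BW, MAX_BW} list
# of the COPS request. NSIDs, codec rates and the profile cap are examples.

codec_bw_kbps = {                      # (min, avg, max) per negotiated codec
    "AMR":    (5, 8, 12),
    "MPEG-4": (64, 256, 384),
}
CODEC_NSID = {"AMR": 1, "MPEG-4": 2}   # assumed codec -> network service class

def build_bw_list(codecs, svup_max_kbps):
    """Return the ordered bandwidth-request list for one direction."""
    entries = []
    for codec in codecs:
        mn, avg, mx = codec_bw_kbps[codec]
        mx = min(mx, svup_max_kbps)            # cap by the user profile (SVUP)
        entries.append({"NSID": CODEC_NSID[codec],
                        "MIN_BW": mn, "AVG_BW": avg, "MAX_BW": mx})
    # Ordered by network service class first, then by bandwidth.
    return sorted(entries, key=lambda e: (e["NSID"], e["MAX_BW"]))

downstream = build_bw_list(["AMR", "MPEG-4"], svup_max_kbps=320)
upstream = build_bw_list(["AMR"], svup_max_kbps=320)
print(downstream)
print(upstream)
```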
Figure 6 – Multimedia scenario (release reservations)

V. CONCLUSIONS

This paper described the Multimedia Service Provisioning Platform (MMSPP) within the IST project Daidalos. This functional platform groups the Daidalos components that are capable of controlling multimedia sessions. At the control level, the MMSPP handles SIP sessions, interprets SIP attachments (SDP/SDPng or other session descriptions), mediates terminal/application capability negotiation, and locates and uses adaptation services as well as 3rd-party support services if needed. At the transport level, it handles media flows and adapts them to fit the requirements and the status of users, terminals and networks. User authentication, authorization and charging, as well as network resource reservations, are handled by other components of the Daidalos architecture that interact closely with the MMSPP to provide end-to-end control of service provisioning over heterogeneous terminals and networks.

ACKNOWLEDGMENT

The work described in this paper is based on results of the IST FP6 Integrated Project DAIDALOS. DAIDALOS receives research funding from the European Community's Sixth Framework Programme. Apart from this, the European Commission has no responsibility for the content of this paper. The information in this document is provided as is and no guarantee or warranty is given that the information is fit for any particular purpose. The user thereof uses the information at its sole risk and liability.

REFERENCES
[1] DAIDALOS - Designing Advanced network Interfaces for the Delivery and Administration of Location independent, Optimised personal Services (EU Framework Programme 6 IP), http://www.ist-daidalos.org/
[2] R. Koodli, "Fast Handovers for Mobile IPv6", IETF Internet-Draft draft-ietf-mobileip-fast-mipv6-08.txt, October 2003
[3] A. Johnston et al., "Session Initiation Protocol (SIP) Basic Call Flow Examples", IETF RFC 3665, December 2003
[4] J. Rosenberg et al., "SIP: Session Initiation Protocol", IETF RFC 3261, June 2002
[5] R. Sparks, "The Session Initiation Protocol (SIP) Refer Method", IETF RFC 3515, April 2003
[6] P. Calhoun et al., "Diameter Base Protocol", IETF RFC 3588, September 2003
[7] H. Hakala et al., "Diameter Credit-Control Application", IETF Internet-Draft draft-ietf-aaa-diameter-cc-06.txt, August 2004
[8] S. Blake et al., "An Architecture for Differentiated Services", IETF RFC 2475, December 1998
[9] H. Schulzrinne et al., "RTP: A Transport Protocol for Real-Time Applications", IETF RFC 3550, July 2003
[10] H. Schulzrinne et al., "Real Time Streaming Protocol (RTSP)", IETF Internet-Draft draft-ietf-mmusic-rfc2326bis-07.txt, July 2004
[11] M. Handley, V. Jacobson, "SDP: Session Description Protocol", IETF RFC 2327, April 1998, and M. Handley, V. Jacobson, C. Perkins, "SDP: Session Description Protocol", IETF Internet-Draft draft-ietf-mmusic-sdp-new-18.txt, 06/04
[12] D. Kutscher et al., "Session Description and Capability Negotiation", IETF Internet-Draft, work in progress, draft-ietf-mmusic-sdpng-08, February 2005
[13] A. Vetro, C. Timmerer, S. Devillers (eds.), ISO/IEC 21000-7:2004, "Information Technology - Multimedia Framework (MPEG-21) - Part 7: Digital Item Adaptation", October 2004
[14] T. Guenkova-Luy et al., "Harmonization of Session and Capability Descriptions between SDPng and MPEG-21 Digital Item Adaptation", IETF Internet-Draft, work in progress, draft-guenkova-mmusic-mpeg21-sdpng-00, February 2005
[15] T. Guenkova-Luy et al., ISO/IEC JTC1/SC29/WG11, "MPEG-21 DIA based content delivery using SDPng controls and RTP transport", 04/05
[16] E. Guttman, C. Perkins, J. Veizades, M. Day, "Service Location Protocol, Version 2", IETF RFC 2608, June 1999
[17] H. Tschofenig et al., "Enriching Bootstrapping with Authorization Information", IETF Internet-Draft, work in progress, draft-tschofenig-enroll-bootstrapping-saml-00, February 2005
[18] D. Durham et al., "The COPS (Common Open Policy Service) Protocol", IETF RFC 2748, January 2000
[19] F. Andreasen, B. Foster, "Media Gateway Control Protocol (MGCP) Version 1.0", IETF RFC 3435, January 2003
[20] OWL Services Coalition, "OWL-S: Semantic Markup for Web Services", http://www.daml.org/services/owl-s/1.0/owl-s.pdf
[21] R. Azevedo, A. Oliveira, F. Fontes, D. Guerra, P. Esteves, T. Mota, "End-to-end QoS implementation in a B3G network", to appear in Proc. of AICT 2005, Lisbon, Portugal, July 2005
[22] B. Stiller, J. Fernandez, H. Hasan, P. Kurtansky, W. Lu, D. Plas, B. Weyl, H. Ziemek, B. Bhushan, "Design of an Advanced A4C Framework", Daidalos whitepaper for Del. D341, www.ist-daidalos.org