A Bandwidth Management Framework for Wireless Camera Array*

Zhenyu Yang                    Klara Nahrstedt
University of Illinois at Urbana-Champaign
Department of Computer Science
SC, 201 N. Goodwin, Urbana, IL 61801
[email protected]        [email protected]
ABSTRACT
Wireless 2D cameras are becoming more widely used for applications such as video surveillance and conferencing because they are easy to deploy. These scenarios require multiple high-quality video streams that share a limited wireless channel, so a bandwidth management scheme that is sensitive to application QoS requirements, content extraction, and the specifics of a camera-array environment is essential. This paper addresses the problem of bandwidth management to coordinate multiple video flows and to support streaming from a wireless camera array. We present a bandwidth management framework that deploys a coordination scheme between the camera array and system resources. In particular, the framework explores different relations and scheduling policies between cameras and the bandwidth allocation to achieve better multi-view video delivery. The implementation uses the Linux platform and an IEEE 802.11b wireless ad hoc network. Our experimental results show that the bandwidth management framework helps achieve streaming differentiation while maintaining high-quality video delivery.
Categories and Subject Descriptors
C.2.1 [Network Architecture and Design]: Wireless Communication; C.2.3 [Network Operations]: Network Management; H.5.1 [Multimedia Information Systems]: Video

General Terms
Design, Performance

Keywords
Bandwidth Management, Wireless Camera Array, Coordination, Adaptation

*This work was supported by the Office of Naval Research under Grant NAVY CU 37515-6281.

1. INTRODUCTION

Today most video conferencing and surveillance devices such as 2D video cameras are still wired, which means that (a) fewer cameras may be deployed, and (b) installation causes considerable inconvenience and cabling cost. Wireless video cameras, on the other hand, can be easily mounted and controlled, making them attractive for video surveillance and conferencing systems. In these scenarios, multiple cameras may be deployed in the form of a camera array, which usually improves the viewing quality and provides multi-view display to users. Figure 1 illustrates one possible usage of a wireless camera array (WCA) for video conferencing.
[Figure 1: WCA Application in Video Conferencing — wireless cameras stream to a video aggregator, which forwards the videos over a LAN/WAN.]

The videos are first captured by the cameras and streamed to the video aggregator through the wireless link. The aggregator then sends them to a remote location over wired networks. Since the wireless channel resource is scarce and its condition is dynamic under a large volume of video traffic, an underlying coordination mechanism is critical to support multi-streaming (multi-camera/multi-view). In this paper, we are interested in designing a bandwidth management framework for WCA with the following features. First, it provides service differentiation based on the camera selection policy and the content of the videos. Second, it incorporates a bandwidth estimation technique developed for monitoring the wireless channel condition. Third, it takes into account application characteristics: adaptive applications usually allow more flexibility to dynamically configure the bandwidth allocation based on application-specific parameters such as the number of cameras, their location, and their functionality. Fourth, it provides easy control when cameras join or leave the WCA, based on bandwidth availability.
One feature of WCA applications is that the cameras forming an array are highly correlated. In most cases, the importance of one camera and its bandwidth requirement are not independent of the others. For example, in video conferencing the camera recording the speaking person is more important than the cameras facing quiet participants; in video surveillance, cameras tracking ongoing dynamic activities are more important. The relationship among cameras in terms of the scenes they capture provides an interesting design space for bandwidth management. Our bandwidth management exploits this relationship in the WCA and employs a bandwidth coordinator whose coordination policies are content- and camera-sensitive. Figure 2 gives an overview of the WCA Bandwidth Management (BM), in which the wireless cameras are under the administration of the bandwidth coordinator.
[Figure 2: WCA Bandwidth Management — wireless cameras send data over the wireless channel to the Video Aggregator, under the control of the Bandwidth Coordinator.]

To achieve BM for WCA, the coordination framework includes several conceptual entities. First, the bandwidth management utilizes a Content Extraction entity that extracts motion information from the video content. The motion indicates the dynamic aspect of the importance of the video. Second, the bandwidth management utilizes a Camera Selection entity that assists in bandwidth adaptation based on the relation among cameras. For example, the entity may use group-based management as one of the camera selection policies, which allows for different camera clustering and differentiation. Cameras are configured into different groups and assigned different priorities for allocating bandwidth. This policy can be applied in settings such as an auditorium, where cameras facing the stage are in one group and cameras facing the audience in another.

In summary, we envision an application scenario where the WCA bandwidth management is carried out in two dimensions (Figure 3).

[Figure 3: BM Adaptation in Two Dimensions — camera-based and content-based adaptation over time; frames are either selected or dropped.]

The user specifies the camera selection policy for the WCA based on application preferences such as the position and video quality of each camera and the overall viewing coverage. During runtime, the bandwidth is allocated based on the motion information and the camera selection policy. For example, in a video surveillance system, we can mount several black/white (b/w) fixed-perspective cameras along with a Pan/Tilt/Zoom (PTZ) color camera. The PTZ camera is given higher group priority. When motion is detected, the bandwidth of the related b/w cameras is increased to support a higher frame rate. For suspicious activity, the operator remotely controls the PTZ camera to lock the view. With the PTZ camera given the higher priority, the viewing quality of the region of interest is improved. Meanwhile, the camera selection may reduce the streaming from b/w cameras based on the overall viewing coverage. We evaluate the BM on the Linux platform and an IEEE 802.11b wireless ad hoc network. The results show the effectiveness of coordinating multiple streams while reducing the bandwidth requirement.

The remainder of the paper is organized as follows. Section 2 introduces related work. Section 3 presents the BM framework. Section 4 describes the coordination protocol. Section 5 evaluates the performance. Section 6 concludes the paper.

2. RELATED WORK
There are many adaptation schemes for transmitting real-time video streams based on coding and frame-skipping techniques. In [7], the authors propose adjusting the number of forced-update macroblocks and the quantization scale of H.263 video compression according to the current status of the wireless network. In [12], source-adaptive techniques are proposed for single-source multicast to adjust the rates of multi-layered video streams. In [1], a receiver-driven adaptive mechanism is designed for multi-layered streaming in peer-to-peer networks. Both [12] and [1] are based on the bandwidth constraints of wired channels. All these schemes ([7], [12], and [1]) are intended for a single video stream, although they could be applied to multiple video streams. We extend this line of work by addressing the relationship and coordination among different streams. In [14], a frame-skipping video aggregation technique is used for rate adaptation: under network congestion, B-frames of multiple MPEG video streams are dropped in a round-robin fashion. Again, that work does not consider differentiation among multiple streams, and the channel from the video sources to the central aggregator is wired.

Several content-based adaptive schemes have been proposed as well. In [9], the content difference between adjacent frames is used to select summary frames and their transmission order to mitigate the impact of network disruption. The scheme is based on analysis of stored video and is not intended for real-time streaming. In [4], sports videos are segmented based on event detection. In [8], lecture videos are analyzed to differentiate content frames from non-content frames for selective video adaptation. Similar to [14], both [4] and [8] assume a wired path to the aggregator. Their focus is on aggregating and broadcasting multiple video streams rather than on collecting them.

Research on multi-camera systems is also very active, e.g., AutoAuditorium [3] and Intelligent Camera Management [10]. This work has influenced us with concepts such as different camera roles and an automatic video director. Compared with these systems, our major concern is to design a lightweight wireless camera system that can be easily set up to coordinate multiple video streams under the contention of the wireless channel. More sophisticated camera controls (e.g., sound tracking) would certainly enhance the quality of our system, and we will extend those techniques to the wireless domain in future work.
3. BM FRAMEWORK OVERVIEW
The design goal of the BM framework is to dynamically coordinate the bandwidth requirements of different video flows and improve delivery quality. Figure 4 shows the framework along with its data and control loops between entities. The Wireless Camera first captures the video and sends it to the Content Extraction and Streaming Unit entities. The Content Extraction entity analyzes the video to extract motion information, while the Streaming Unit is responsible for enforcing the bandwidth allocation. A monitoring entity at the MAC layer of each wireless camera, called Bandwidth Estimation, monitors the average available bandwidth of the camera as video packets pass through the MAC layer. The bandwidth estimate is then sent back to the Streaming Unit, forming a local feedback loop. Meanwhile, the Wireless Camera sends the bandwidth estimate and motion information to the Bandwidth Coordinator. The Bandwidth Coordinator collects the data from all cameras and calculates the bandwidth allocation under the selection policy specified by the Camera Selection entity. The bandwidth allocation is distributed to the Streaming Unit of each camera, which completes the control loop.

[Figure 4: WCA BM Framework. Each Wireless Camera hosts the Streaming Unit, Content Extraction, and Bandwidth Estimation entities above the network layer and MAC 802.11; the Bandwidth Coordinator hosts the Camera Selection and Bandwidth Adaptation/Coordination entities. Control and data flows connect the entities.]

We note that adaptation is involved at three temporal levels. At the stream level, the emphasis is on selecting cameras and assigning their relative importance. At the group-of-pictures (GOP) level, consecutive frames are processed to perform content extraction and bandwidth estimation. At the frame level, the goal is to enforce the bandwidth allocation by controlling the sending rate of each individual frame. Further details about the key entities of the framework are presented in the following subsections.
3.1 Camera Selection

Under bandwidth constraints, the Camera Selection entity decides which cameras should be used for capturing video and their relative importance according to the selection policy. We introduce four camera selection policies, listed below.

1. Random Policy – Cameras are picked at random, with no preference and no situation awareness.

2. Event-based Policy – Importance is assigned to cameras based on extra monitoring channels such as sound tracking, pressure, and thermal detection. We will investigate this policy in our future work.

3. View-based Policy – This policy attempts to maximize the viewing coverage of an area under certain QoS requirements. The problem can be simplified and formalized using an approach similar to that of [6] (Figure 5). The floor plan of the target region is approximated by a polygon. For each camera c_i scattered in the region, its location and orientation are known (e.g., via UWB sensing), and its Viewing Coverage (VC_i) can be computed as well. Adding a new camera improves the overall viewing coverage, but a certain cost may also increase. The expression of the cost (cost_i) is task-specific. For example, consider the task of maximizing the coverage while maintaining the minimum frame rate of each camera. Then cost_i can be quantified using the channel time proportion [11] as cost_i = mbw_i / bw_i, where bw_i is the bandwidth estimate of camera c_i (discussed in Section 3.3) and mbw_i its minimum bandwidth requirement. Suppose A(VC_i) gives the area of the viewing coverage. The camera coverage problem is: given a set of cameras, each with coverage VC_i and cost cost_i, find a subset C of cameras satisfying (1).

    arg max_C A( ∪_{c_i ∈ C} VC_i )   subject to   Σ_{c_i ∈ C} cost_i ≤ 1        (1)

[Figure 5: The Problem of Camera Coverage — cameras, including a camera A at a preferred location, placed around a polygonal floor plan.]

The coverage problem is NP-hard, which can be shown by a reduction from the Knapsack Problem. In our future work, we plan to approach the problem with the following greedy algorithm (a C sketch of this greedy selection appears at the end of this subsection):

    C ← ∅
    total ← 0
    sort the cameras c_i in descending order of A(VC_i) / cost_i
    for each c_i in the sorted order:
        if total + cost_i ≤ 1 and A(∪_{c_j ∈ C} VC_j) < A(∪_{c_j ∈ C ∪ {c_i}} VC_j) then
            C ← C ∪ {c_i}
            total ← total + cost_i

4. Group-based Policy – Under this policy, several groups are defined by the user to differentiate cameras according to preferences such as position, function, and video quality. Each group is assigned a different priority for accessing the wireless channel.

Note that it is possible to combine different policies for more complex camera control. In one example, the user may configure the cameras into two clusters, G_s for cameras facing the speaker and G_a for cameras facing the audience, and then specify that G_s use the Event-based Policy and G_a use the Random or View-based Policy. In another example, if a certain camera location is more important (e.g., camera A in Figure 5), we may augment the View-based Policy with the Group-based Policy to improve the quality of the overall viewing coverage.
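A minimal C sketch of the greedy selection follows. It assumes a hypothetical helper union_area() that returns A(∪ VC_j) of the currently selected subset (e.g., via a polygon-union routine); the struct layout is illustrative, not the paper's actual code.

    #include <stdlib.h>

    typedef struct {
        int id;
        double cost;   /* cost_i = mbw_i / bw_i (channel time proportion) */
        double area;   /* A(VC_i), used only for the sort key */
    } Camera;

    /* Hypothetical helper: area of the union of viewing coverages of all
       cameras with selected[i] == 1 (any polygon-union routine works). */
    extern double union_area(const Camera *cams, const int *selected, int n);

    static int by_ratio_desc(const void *a, const void *b)
    {
        const Camera *x = a, *y = b;
        double rx = x->area / x->cost, ry = y->area / y->cost;
        return (ry > rx) - (ry < rx);          /* descending A(VC_i)/cost_i */
    }

    /* Greedy approximation of (1): marks selected[i] = 1 for chosen cameras. */
    void select_cameras(Camera *cams, int n, int *selected)
    {
        double total = 0.0;
        qsort(cams, n, sizeof *cams, by_ratio_desc);
        for (int i = 0; i < n; i++)
            selected[i] = 0;
        for (int i = 0; i < n; i++) {
            double before = union_area(cams, selected, n);
            selected[i] = 1;                   /* tentatively add c_i */
            if (total + cams[i].cost <= 1.0 &&
                union_area(cams, selected, n) > before)
                total += cams[i].cost;         /* keep c_i */
            else
                selected[i] = 0;               /* infeasible or no coverage gain */
        }
    }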
3.2 Content Extraction

We adopt the motion detection program [13] and the background subtraction technique [5] for extracting the motion information of the video content. To analyze the video, the current frame F_j is compared against a reference frame F_r pixel by pixel. A pixel whose intensity difference is beyond a threshold is counted as moving. The set of moving pixels is given as P_j = {p : |I_j(p) − I_r(p)| > T}, where I_j(p) and I_r(p) give the intensity value of pixel p in F_j and F_r respectively, and T is the threshold. The value motion_j is defined as the number of moving pixels in F_j (i.e., motion_j = |P_j|). For camera c_i, the Content Extraction entity calculates a running average of motion over one second and sends the value m_i = (1/n) Σ_{j=k}^{k+n−1} motion_j to the Bandwidth Coordinator, where n is the number of samples over one second and k the starting sample index.
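As an illustration, the per-frame motion metric and its one-second average might look like the following C sketch, assuming 8-bit grayscale intensity buffers (the buffer layout is an assumption; the actual implementation builds on the Motion program [13]).

    #include <stdlib.h>

    /* motion_j = |P_j|: number of pixels whose intensity differs from the
       reference frame by more than the threshold T. */
    int frame_motion(const unsigned char *cur, const unsigned char *ref,
                     int npixels, int T)
    {
        int moving = 0;
        for (int p = 0; p < npixels; p++)
            if (abs((int)cur[p] - (int)ref[p]) > T)
                moving++;
        return moving;
    }

    /* m_i = (1/n) * sum_{j=k}^{k+n-1} motion_j over one second of frames. */
    double motion_average(const int *motion, int k, int n)
    {
        long sum = 0;
        for (int j = k; j < k + n; j++)
            sum += motion[j];
        return (double)sum / n;
    }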
3.3 Bandwidth Estimation

For bandwidth estimation, we use the method discussed in [11], where each camera utilizes video frames as probing packets to estimate the total bandwidth BW = S / (t_r − t_s), where S is the MAC-layer packet size, t_s the time when the packet is given to the MAC layer, and t_r the time when a link-layer ACK is received. This individual bandwidth estimation captures the fact that the bandwidth perceived by different nodes may differ due to the location-dependent interference of the wireless network. The Bandwidth Estimation entity keeps a running average of BW over one second and sends it as bw_i = (1/n) Σ_{j=k}^{k+n−1} BW_j to the Bandwidth Coordinator.
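A minimal sketch of the per-packet estimate and its one-second average is given below. How t_s and t_r are captured at the MAC layer is platform-specific and left abstract here; the byte-to-bit conversion is our assumption about units.

    #include <stddef.h>

    /* Per-packet bandwidth sample: S bytes handed to the MAC at t_s,
       link-layer ACK received at t_r (seconds). */
    double bw_sample(size_t S, double t_s, double t_r)
    {
        return (double)S * 8.0 / (t_r - t_s);   /* bits per second */
    }

    /* bw_i = (1/n) * sum_{j=k}^{k+n-1} BW_j, reported once per second. */
    double bw_average(const double *BW, int k, int n)
    {
        double sum = 0.0;
        for (int j = k; j < k + n; j++)
            sum += BW[j];
        return sum / n;
    }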
3.4 Bandwidth Adaptation

The Bandwidth Adaptation entity is the core of the bandwidth coordination; it is responsible for the bandwidth allocation of each camera c_i according to three factors: (a) the camera selection policy, (b) the motion information m_i, and (c) the bandwidth estimate bw_i. The bandwidth allocation follows the max-min fairness rule [2]. First, for each selected camera, its minimum frame rate is guaranteed. Second, the residual bandwidth is divided among the cameras in proportion to their importance as specified by the selection policy and their motion information. For the Random and View-based Policies, all selected cameras are equally important. For the Group-based Policy, the importance of each camera is based on its group channel ratio (gcr_j). The allocation algorithm using the Group-based Policy is given below (see the C sketch after the algorithm).

1. The minimum frame rate (mfr_i) of each camera c_i is allocated first. The rate is converted to the minimum channel time proportion (mctp_i) as given below, where fs is the frame size. We use Motion JPEG for video compression, which produces a constant-bit-rate (CBR) video stream, so the frame size does not change much.

    for each camera c_i:
        mctp_i ← mfr_i × fs / bw_i

2. The residual channel time proportion (rctp_i) is calculated for each camera c_i in proportion to its group channel ratio (gcr_j) and motion information (m_i).

    R ← 1 − Σ mctp_i
    total ← 0
    for each camera c_i:
        j ← group of c_i
        quota_i ← m_i × gcr_j
        total ← total + quota_i
    for each camera c_i:
        rctp_i ← R × quota_i / total

3. Finally, rctp_i is added to mctp_i, giving the channel time proportion (ctp_i) allocated to camera c_i.

    for each camera c_i:
        ctp_i ← mctp_i + rctp_i
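The three steps map directly to code. A C sketch under the Group-based Policy follows, assuming each camera record has already been filled from the Camera and Group Tables (the field names are illustrative):

    typedef struct {
        double mfr;   /* minimum frame rate (frames/s)      */
        double fs;    /* average frame size (bits)          */
        double bw;    /* bandwidth estimate bw_i (bits/s)   */
        double m;     /* motion information m_i             */
        double gcr;   /* group channel ratio of c_i's group */
        double ctp;   /* output: channel time proportion    */
    } Cam;

    /* Admission control (Section 4.1) guarantees sum of mctp_i <= 1,
       so the residual R stays non-negative. */
    void allocate_ctp(Cam *cams, int n)
    {
        double R = 1.0, total = 0.0;

        /* Step 1: minimum rates first: mctp_i = mfr_i * fs / bw_i. */
        for (int i = 0; i < n; i++) {
            cams[i].ctp = cams[i].mfr * cams[i].fs / cams[i].bw;
            R -= cams[i].ctp;
        }
        /* Step 2: split the residual R in proportion to m_i * gcr_j. */
        for (int i = 0; i < n; i++)
            total += cams[i].m * cams[i].gcr;
        /* Step 3: ctp_i = mctp_i + rctp_i. */
        for (int i = 0; i < n; i++)
            if (total > 0.0)
                cams[i].ctp += R * (cams[i].m * cams[i].gcr) / total;
    }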
3.5 Streaming Unit

The Streaming Unit enforces the bandwidth management when sending videos over the wireless network. Based on the assigned channel time proportion and its current bandwidth estimate, the Streaming Unit dynamically computes the current frame rate (cfr_i) and skips frames accordingly:

    cfr_i = ctp_i × bw_i / fs        (2)
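A C sketch of the rate computation and one plausible frame-skipping discipline follows. The paper specifies only that frames are skipped to match cfr_i; the credit scheme below is our assumption.

    /* Current frame rate from Equation (2): cfr_i = ctp_i * bw_i / fs. */
    double current_frame_rate(double ctp, double bw, double fs)
    {
        return ctp * bw / fs;
    }

    /* Forward a captured frame only when enough "credit" has accumulated
       for one frame at rate cfr; over time this sends cfr frames/s out of
       capture_rate captured frames/s. */
    int should_send(double *credit, double cfr, double capture_rate)
    {
        *credit += cfr / capture_rate;   /* fraction of a frame earned */
        if (*credit >= 1.0) {
            *credit -= 1.0;
            return 1;                    /* send this frame */
        }
        return 0;                        /* skip this frame */
    }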
4. BM PROTOCOL

This section describes the protocol used in the interactions between the Bandwidth Coordinator and the Wireless Cameras within the bandwidth management architecture. The BM is invoked at the times of flow establishment, flow termination, and significant changes in the bandwidth estimate or motion. The protocol is illustrated in Figure 6.
[Figure 6: Bandwidth Management Architecture. The Bandwidth Coordinator maintains a Camera Table (cameraID, groupID, m_i, bw_i, min_rate) and a Group Table (groupID, ch_ratio). Messages between a Wireless Camera (Content Extraction, Bandwidth Estimation, Streaming Unit) and the Bandwidth Coordinator (Bandwidth Adaptation): 1: hello (min_rate) on flow establishment; 2: goodbye on flow termination; 3: motion information on motion change; 4: allocate (ch_proportion); 5: bandwidth estimate on bandwidth change.]
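For concreteness, the protocol messages and tables of Figure 6 might be declared as follows in C. The numeric message codes follow the figure; the struct layouts are illustrative assumptions, not the paper's wire format.

    /* Message types exchanged in Figure 6. */
    enum MsgType {
        MSG_HELLO    = 1,   /* hello: (min_rate)         */
        MSG_GOODBYE  = 2,   /* goodbye                   */
        MSG_MOTION   = 3,   /* motion information m_i    */
        MSG_ALLOCATE = 4,   /* allocate: (ch_proportion) */
        MSG_BW_EST   = 5    /* bandwidth estimate bw_i   */
    };

    typedef struct {        /* one Camera Table entry */
        int    camera_id;
        int    group_id;
        double m;           /* latest motion information     */
        double bw;          /* latest bandwidth estimate     */
        double min_rate;    /* minimum frame rate from hello */
    } CameraEntry;

    typedef struct {        /* one Group Table entry */
        int    group_id;
        double ch_ratio;    /* group channel ratio gcr_j */
    } GroupEntry;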
4.1 Coordinator

When a new camera c_i initiates, it specifies its minimum frame rate in a hello message to the Bandwidth Coordinator. The flow establishment is similar to the description in [11]. Initially, the camera has no estimate of the bandwidth because it must use video packets to probe the wireless channel; in that case, the Bandwidth Coordinator uses a hardcoded bandwidth estimate until a more accurate estimate from the camera is available. For the set F of previously accepted cameras, the Bandwidth Coordinator checks 1 − Σ_{c_j ∈ F} mbw_j ≥ mbw_i, where mbw_i is the minimum bandwidth requirement of camera c_i (mbw_i = mfr_i × fs / bw_i). If the test succeeds, the new camera is admitted (F = F ∪ {c_i}); otherwise, it is rejected. However, the Bandwidth Coordinator may choose preemptive measures depending on the camera selection policy. For example, under the View-based Policy it may perform camera replacement if the new camera improves the coverage, and under the Group-based Policy it may replace a camera of lower group priority. Once the camera is admitted, the Bandwidth Coordinator registers a new entry in its Camera Table. It then invokes the Bandwidth Adaptation to recalculate the channel time proportions for the new set of cameras using the algorithm of Section 3.4. After that, the bandwidth allocation is distributed to the cameras via the allocate message.

When a camera c_i terminates, it sends a goodbye message to the Bandwidth Coordinator, which expires its entry (F = F − {c_i}). The bandwidth allocation of the remaining cameras is updated and redistributed. The Wireless Camera also monitors changes in bandwidth and motion and periodically communicates with the Bandwidth Coordinator. Significant changes may invoke a renegotiation process that is almost identical to flow establishment. One major difference arises when the channel resource is no longer sufficient to support all previously accepted cameras, due to the inherently unreliable nature of the wireless network. In that case, some cameras have to be cut off; the choice may be based on the preemptive schemes mentioned earlier.
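The admission test at flow establishment reduces to a few lines of C. This sketch omits the hardcoded initial estimate and the preemptive replacement policies; mbw[] holds the channel time proportions of the already-admitted set F.

    /* Admit c_i iff the residual channel time covers its minimum need:
       1 - sum_{c_j in F} mbw_j >= mbw_i, with mbw_i = mfr_i * fs / bw_i. */
    int admit(const double *mbw, int nf, double mfr_i, double fs, double bw_i)
    {
        double used = 0.0, mbw_i = mfr_i * fs / bw_i;
        for (int j = 0; j < nf; j++)
            used += mbw[j];
        return 1.0 - used >= mbw_i;
    }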
4.2 Wireless Camera

The Wireless Camera performs the bandwidth estimation, content extraction, and video streaming as described in Section 3. To further save bandwidth, we include a three-state FSM in the Wireless Camera (Figure 7). Switching among the states depends on the detected motion. In high mode, the Bandwidth Coordinator assigns as much channel time as possible to the camera. When the motion is very small, the camera switches to low mode, where only enough channel time is assigned to keep its minimum frame rate. The camera turns to sleep mode for even smaller motion. In sleep mode, the camera no longer sends video, so its entry in the Camera Table can expire at any time.
[Figure 7: A Three-State FSM of Wireless Camera — high mode, low mode, and sleep mode.]
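A sketch of the mode switching in C; the motion thresholds are assumed values, since the paper does not publish the ones it uses.

    /* Three-state FSM of Figure 7, driven by the detected motion m_i. */
    enum Mode { MODE_HIGH, MODE_LOW, MODE_SLEEP };

    enum Mode next_mode(double motion)
    {
        const double T_LOW = 200.0, T_SLEEP = 20.0;  /* assumed thresholds */
        if (motion >= T_LOW)
            return MODE_HIGH;    /* as much channel time as possible */
        if (motion >= T_SLEEP)
            return MODE_LOW;     /* minimum frame rate only */
        return MODE_SLEEP;       /* stop sending video */
    }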
5. EXPERIMENT

So far, we have implemented the BM framework combining the Group-based Policy of camera selection, Content Extraction, and Bandwidth Estimation. For evaluation, we set up a wireless testbed with the parameters listed in Table 1. The performance metric is the frame loss rate. Frame loss has two sources. Since video frames are sent using UDP, they may be lost or corrupted due to unreliable communication. More dominantly, frames arriving outside the synchronization bound are dropped and counted as lost.

Table 1: Experimental Parameters
    Wireless MAC                   IEEE 802.11b
    Number of streams              3
    Number of frames per stream    4500 (4 minutes)
    Video (maximum) frame rate     18 frames/second
    Image size                     352×240 pixels

We use AXIS 2130 cameras to obtain quality videos. However, since this camera has no wireless interface and its programmability is limited, we connect each camera to a laptop (IBM T41 or T22) with an IEEE 802.11 interface. Four laptops form an ad hoc network, one serving as the Bandwidth Coordinator and the other three as Wireless Cameras. The programs of all entities are written in C. We also implemented a video viewer in Java that plays back the multiple streams received.

To facilitate the experiment, three video streams (S1, S2, S3) were recorded to simulate possible scenes of a video conference, each featuring a person with face and shoulders in the central image region. In S1, the person sits still with small movements of the eyes and head but does not talk. In S2, the person talks intermittently. In S3, the person talks continuously. Each frame is JPEG-compressed with an average MJPEG frame size of 20K bytes, so the maximum data rate of each stream is 2.8 Mbits/s. The maximum throughput at the application layer of IEEE 802.11b is ≈6 Mbits/s, not enough to carry every stream at the maximum frame rate.

The first experiment is done without the bandwidth management framework, with each stream sending at the maximum frame rate. Figure 8 shows the average frame receiving rate; the average frame loss rate is 32.8%.
[Figure 8: Video Streaming without BM — average frame receiving rate of S1, S2, and S3 over time.]

The second experiment evaluates the bandwidth management of Section 3. Figure 9 shows the average frame receiving rate of each stream. The average frame loss rate drops to 5.2%, and rate differentiation is achieved per stream: the QoS of S3 is fully maintained around the maximum frame rate, whereas S1 and S2 receive lower quality (rate) because they contain less motion.
[Figure 9: BM via Content Extraction — average frame receiving rate of S1, S2, and S3 over time.]

In the third experiment, we define two groups G1 and G2 for the Group-based Policy, where G1 = {S1, S3} and G2 = {S2}, with group channel ratios of 1 and 2 respectively. Figure 10 shows the average frame receiving rate of each stream. By assigning S2 to a group of higher priority, its frame rate is increased.
[Figure 10: BM via Content Extraction and Group-based Policy — average frame receiving rate of S1, S2, and S3 over time.]

In the final experiment, we allow streams to switch to low mode when the detected motion is below a threshold. Figure 11 shows the average frame receiving rate of each stream.

[Figure 11: Effect of Wireless Camera FSM — average frame receiving rate of S1, S2, and S3 over time.]
6. CONCLUSIONS AND FUTURE WORK

In this paper, we have developed a bandwidth management framework for a wireless camera array, under which video content, camera selection, and bandwidth estimation are integrated to provide camera coordination and service differentiation. We evaluated the scheme on a wireless testbed. The significant drop in frame loss indicates that most frames are received within the synchronization bound; the efficiency of channel resource utilization for multiple video streams is therefore improved.

There are several directions to be explored. First, it is important to further investigate the bandwidth management under more sophisticated scenarios. For example, a camera capturing high motion may not actually be important, and a camera with small viewing coverage may detect high motion. Thus, more advanced camera selection and media analysis techniques are needed and should be better integrated to improve the accuracy of bandwidth allocation. Second, rate adaptation for VBR videos is a very interesting issue, since the amount of motion will then affect the frame size and complicate the adaptation algorithm. Third, provisioning QoS under the interference and dynamics of the wireless channel is a hard problem even with resource estimation and renegotiation procedures; however, for an indoor environment like video conferencing, we assume a controlled environment for the camera array that avoids the interference of cross-traffic (e.g., by using a separate frequency band for WCA communication). Fourth, we will extend the work to other wireless MAC protocols with higher data rates and enhanced QoS support.
7. REFERENCES

[1] V. Agarwal and R. Rejaie. Adaptive multi-source streaming in heterogeneous peer-to-peer networks. In Proceedings of the 12th Annual Multimedia Computing and Networking (MMCN '05), January 2005.
[2] D. Bertsekas and R. Gallager. Data Networks (2nd Ed.), Chapter 6. Prentice-Hall, 1992.
[3] M. H. Bianchi. AutoAuditorium: A fully automatic, multi-camera system to televise auditorium presentations. In Proceedings of the Joint DARPA/NIST Smart Spaces Technology Workshop, July 1998.
[4] S. Chang, D. Zhong, and R. Kumar. Real-time content-based adaptive streaming of sports videos. In IEEE Workshop on Content-Based Access of Image and Video Libraries, 2001.
[5] R. Collins, A. Lipton, et al. A system for video surveillance and monitoring. Technical Report CMU-RI-TR-00-12, Carnegie Mellon University, May 2000.
[6] U. M. Erdem and S. Sclaroff. Automated placement of cameras in a floorplan to satisfy task-specific constraints. Technical Report BUCS-TR-2003-031, Boston University, December 2003.
[7] H. Liu and M. E. Zarki. Adaptive source rate control for real-time wireless video transmission. Mobile Networks and Applications, 3(1):49–60, 1998.
[8] T. Liu and C. Choudary. Real-time content analysis and adaptive transmission of lecture videos for mobile applications. In Proceedings of the 12th Annual ACM International Conference on Multimedia, pages 400–403. ACM Press, 2004.
[9] T. Liu and S. Nelakuditi. Disruption-tolerant content-aware video streaming. In Proceedings of ACM Multimedia 2004, October 2004.
[10] Y. Rui, L. He, A. Gupta, and Q. Liu. Building an intelligent camera management system. In MULTIMEDIA '01: Proceedings of the Ninth ACM International Conference on Multimedia, pages 2–11. ACM Press, 2001.
[11] S. H. Shah, K. Chen, and K. Nahrstedt. Dynamic bandwidth management for single-hop ad hoc wireless networks. In First IEEE International Conference on Pervasive Computing and Communications (PerCom), pages 195–203, March 2003.
[12] B. J. Vickers, C. Albuquerque, and T. Suda. Source-adaptive multilayered multicast algorithms for real-time video distribution. IEEE/ACM Transactions on Networking, 8(6):720–733, 2000.
[13] J. Vreeken. Motion 3.1.17. http://www.lavrsen.dk/twiki/bin/view/motion/webhome.
[14] A. Yeung and S. C. Liew. Multiplexing video traffic using frame-skipping aggregation technique. In Proceedings of the International Conference on Image Processing, volume 1, pages 334–337, October 1997.