A generic embedded robot platform for real time navigation and telepresence abilities

Saša A. Vukosavljev
RT-RK Computer Based System LLC, Narodnog Fronta 23a, Novi Sad, Serbia
[email protected]

Dragan Kukolj, IEEE senior member
Faculty of Technical Sciences, University of Novi Sad, Trg D. Obradovića 6, 21000 Novi Sad, Serbia
[email protected]

Ištvan I. Pap
RT-RK Computer Based System LLC, Narodnog Fronta 23a, Novi Sad, Serbia
[email protected]

Vladimir Kovačević
Faculty of Technical Sciences, University of Novi Sad, Trg D. Obradovića 6, 21000 Novi Sad, Serbia
[email protected]

Abstract— This paper presents a generic embedded robot platform concept that meets functional and real-time requirements. The described concept allows easy system extension with new components, simple configuration, and implementation of advanced features. Design and development of such a complex system pose significant challenges and require knowledge from several fields: mechanics, electronics, software, and computer engineering. The proposed hardware control units for these robots are based on modern families of microcontrollers and DSP coprocessors, which suit autonomous mobile applications in terms of interfaces, processing power, memory, and power consumption. Typical robot software includes control/navigation and communication functions; in the last decade, multimedia software technologies have also entered robotic system architectures to improve robot capabilities. The concept was formed after an analysis of spatial, temporal, and functional requirements. It involves a generic architecture using modular hardware design and a software component based approach. The concept is illustrated by a typical use case – a wireless/wired controlled robot. Besides a basic set of navigation commands, it is able to transmit audio and video information, which gives the remote operator an impression of telepresence. The presented approach offers a convenient and flexible platform for further development focused on specific areas such as intelligent navigation, sensor arrays, wireless networking, human machine interaction, and audio and video processing.

Keywords- mobile robot, embedded systems, real time navigation, component based design, telepresence

I. INTRODUCTION

In the past, robots were established mainly in industry. Industrial robots operate in well-structured environments, performing well-known, repeatable tasks that can be programmed in advance. Today's robotic community grows continually, driven by progress in computer technology and by the popularization of robotics through events such as robot football competitions or the international EUROBOT contests [1]. The number of papers published at conferences and in journals is also steadily increasing, and robots have become more and more attractive for academic and scientific exploration. In the future, robots may show up in other places: our schools, our homes, even our bodies. As the technology progresses, we are finding new ways to use robots [2, 3].

Driven by the development of new techniques and technologies, the complexity of autonomous robotic systems has grown dramatically in recent years. Performing complex tasks and working autonomously in an unfamiliar environment require sophisticated platforms and algorithms for sensor data processing, networking, communication protocols, navigation, localization, control, human-machine interaction, etc. The diversity of sensors, actuators, and advanced mechanical platforms, accompanied by the increasing processing power, communication interfaces, and memory resources of computer systems, allows continuous improvement of robot intelligence. The rapid growth of computer science and technology in the last decade has driven modern robotics closer to humans. At first, such robots were used as toys, like LEGO Mindstorms [4], or for simple repetitive tasks, for example home cleaning. Telerobotics and telepresence move robots a step further into modern life: remote robot control in telerobotics and telepresence aims to make the operator "feel like you are somewhere else". In any case, robots always interact with humans, either by taking commands or by providing services, which is why the audio-visual interface, together with the robot's appearance, is very important.

The text of the paper is organized into sections as follows. The problem definition and a description of the proposed approach are given in Section 2. Section 3 describes the targeted use case scenarios. The robot hardware and software designs are described in Sections 4 and 5, respectively. Sections 6 and 7 contain the closing conclusions, acknowledgements, and the list of references.

II. PROBLEM DEFINITION AND PROPOSED APPROACH

Four main challenges face the next generation of robotic systems:
• increased mobility,
• longer power autonomy,
• more intelligence, and
• better connectivity.
In essence, this is a multidisciplinary field comprising four fundamental technical areas: mechanics, electronics, computer and software engineering. All of the noted challenges come together in one place: embedded computer based systems. Intensive research is expected in this area, with a challenging goal: how to develop, test, and verify such a complex system from both the hardware and the software point of view.

The contemporary robotic community uses two different approaches to robot architecture: microcontroller based and PC based. Robotic platforms like Lego Mindstorms [4], or Trilobot, manufactured by Arrick Robotics [5], use 8 or 16 bit microcontrollers as the robot control unit. This approach has many limitations regarding memory, processing power, and possible extensions; on the other hand, it is very attractive for the wide robotic community, especially for beginners. PC based robot platforms are commonly used for executing complex tasks, utilizing advanced navigation algorithms and audio and video processing [6-7]. As a consequence of the increasing requirements placed on autonomous robots, control units must be based on one or more microcontrollers and/or digital signal processors. In recent years, multi-core processors have also become available for embedded system designs. The main aspects of embedded system design are:
• Processing power – complex tasks and advanced algorithms require more processing power and a high-end 32 bit microcontroller, special coprocessors, or DSPs.
• Memory resources – storage or buffering of large amounts of data, complex algorithms, and an OS require more program and data memory.
• Communication – capability of different types of wired and wireless interfacing.
• Power consumption – power management unit, smart power units, charging devices, power harvesters.
Following these major requirements, a distributed control unit for the embedded robot architecture is proposed, shown in Figure 1. The key blocks – controller nodes – are:
• Sensor and actuator controller nodes – responsible for hard real time tasks: data acquisition, primary data processing, and basic motion control. A typical choice is an 8 bit RISC microcontroller of up to 10 MIPS, with a small memory footprint, low power consumption, and simple communication interfaces.
• Main robot controller node – responsible for soft real time tasks; a high processing power processor with large memory resources (Flash, DDR/SDRAM, SD card). The controller has a set of communication interfaces responsible for different types of wired/wireless connectivity. A typical choice is a 32 bit ARM/MIPS RISC microcontroller running at up to 1 GHz, expanded with DSP instruction sets. Commonly, these processors have integrated network communication coprocessors, DSP coprocessors for audio and video processing, and a larger memory footprint using external memories for data and program storage.
• Communication interfaces:
o Robot controller/smart SA controller – low bit-rate serial wire interfaces (I2C, SPI, CAN, LIN, 10 Mb Ethernet, etc.). The main task is communication between the main controller and the smart sensor/actuator controllers.
o Robot controller/remote device – high bit-rate, parallel or serial (100/1000 Mbps Ethernet, USB, LVDS, etc.). The main task is communication between the main controller and remote devices or special coprocessors/controllers.


Figure 1: Proposed embedded robot architecture.

Autonomous robot architectures have been studied for many years, describing how software systems could be organized [8]. Nowadays, a hybrid solution combining reactive behaviors with limited knowledge of the world and higher level reasoning is widely accepted [9-10]. However, there is no standardized way of implementing this architecture to build a complete software system. As a result, software systems for autonomous robots are usually written from scratch rather than built from existing pieces of software, leading to a waste of time and resources. Indeed, students, researchers, and developers spend a large amount of their time solving software implementation and integration problems instead of focusing on their specific areas of interest, where they could make valuable contributions. Regarding time constraints, a robot application requires both soft real-time limits (for most of the multimedia and communications software) and hard real-time limits (for the acquisition and control behaviors). Three basic fields of robot software can be observed: data collection and processing, communication, and control. Corresponding to these fields, three types of components are used to build an application:







• The robot control and acquisition component requires a software platform that supports data acquisition, pre-processing, and basic control. Most of its tasks are hard real-time tasks, with typical frequencies of up to 1 kHz. This component is commonly used in smart sensor and actuator controllers, equipped with 8-bit microcontrollers with internal memory. The most frequently used programming language is C.
• The communications component requires a software platform that supports buffer, timer, and message management (a set of common communications routines – a communications kernel). The typical implementation language is C.
• The data processing component typically requires an embedded hardware-software platform completely dedicated to digital signal processing. The most frequently used programming languages are assembler and C. It is implemented as a single infinite-loop monitor task, where the algorithm complexity determines the required processor and memory footprint.

All components require routines for initialization, restart, stop-close, and a process task. From the power management point of view, each component must be power-aware, entering low power modes and adjusting the processor clock when possible. Increasing power autonomy requires intelligent software power management mechanisms based on alarms/events and synchronization procedures. Software for smart sensor and actuator controllers is commonly written without an OS, or with a small RTOS adapted for microcontrollers, like FreeRTOS [11]. On the communication side, a software component requires a set of common routines needed for communication with other entities; these routines include buffered message handling (send, receive). Software for the main controller is written to run under an OS, for example an embedded Linux distribution [12]. Embedded Linux must be ported to the main controller's processor architecture using Buildroot [13]. During configuration, different software applications, communication protocols, and device drivers are selected. After the build procedure, the kernel and root file system can easily be installed on the Linux platform via the TFTP or NFS service. Specific components, applications, libraries, and drivers can be additionally developed and deployed on the Linux based controller [14].

III. USE CASE

The concept is illustrated by a typical use case – a wireless/wired controlled robot. Besides basic navigation commands, it is also able to transmit audio and video information, which gives the remote operator an impression of telepresence. As a result of this project, the robot called ROBONIT (Robot Novi Sad Institute of Technology) was developed. It was built using commercially available hardware components. The targeted functions were:
• Remote control of robot actions;
• Advanced communication with persons near the robot using a videophone;
• Multimedia features – playback of prerecorded or streamed audio/video content;
• Microprocessor based control unit with an embedded Linux OS;
• Standard communication interfaces: RS232, I2C, USB, 10/100 Mb Ethernet;
• Extension interfaces that make it possible to add sensors and actuators;
• Autonomy – battery supply with a charger;
• Modular SW structure.

A typical usage is shown in Figure 2. After startup, the robot registers itself in the wireless network and waits for incoming connections. The remote operator then starts the remote control application on his computer. After entering the network name of the robot, the software connects to the robot over the network infrastructure and lets the operator control the robot. The operator receives audio and video content from the robot.

Figure 2: A typical use case.

During navigation, the operator relies on the real-time video feed received from the robot and on the readings of the proximity sensors installed on the robot. The complete status of the robot (sensor readings, motor speeds, subsystem status, etc.) is sent to the control software, which graphically presents the telemetry data. During navigation, the operator can start playback of prerecorded multimedia material such as music or movies; the robot plays back the presentation while the operator retains navigation control. As the camera is mounted on a movable pod, the operator can move the camera to get a better overview of the environment. When a person approaches the robot, the operator can open a bidirectional (videophone) link and start a conversation. To retain the conversation partner in the robot camera's view, the operator starts the face tracking feature: as the partner moves, the camera follows him, aiming to keep him in the center of the view.

Figure 3: Screen view of the remote control center.

During the development of the targeted functions, other valuable features were recognized that improve the basic functions: acoustic echo cancellation (AEC); face detection, recognition, and tracking; speaker localization; and semi-autonomous and autonomous navigation [15-17]. After the implementation of these improvements, ROBONIT became a robot usable for many purposes: as a robot guide in companies, museums, and airports, as a presentation platform, in telerobotics and telepresence (education), and in video conferencing.

IV. THE ROBOT HARDWARE

The system for robot remote control consists of two main components:
• the robot platform and
• a desktop or laptop PC.
The robot platform is a proprietary development, and this section describes its HW structure; the remote control station is a standard PC. The ROBONIT platform is presented in Figure 4. The main components of the remotely controlled robot are: a mechanical platform equipped with motors and wheels, a CPU and memory unit, a communication adapter, an audio and video subsystem, and a power management unit with a battery and extension connectors.

The robot is built around a mechanical body driven by 2 motors on 4 wheels: the front wheels are used for steering and the rear two wheels for drive. The DC motors are driven by the PWM outputs of the controller driver board, routed through appropriate H bridges based on MOSFET switching components. The robot control unit is based on a low power, high performance microprocessor placed on the platform. In the starter configuration, the robot is equipped with a sensor set that includes 2 wheel encoders and 5 range sensors. Capturing audio and video streams from the robot environment is enabled by a subsystem consisting of a microphone placed on the camera, a stereo speaker set, a camera pod, and a USB camera. Besides the components described above, additional boards are installed for the power management and supply unit. They are designed to provide power for all electronic systems and motors; specially designed filters eliminate the interference on the power lines caused by the motors. The robot is powered from one rechargeable battery.

The sonar based range sensors are connected to the A/D converters of a specially designed microcontroller board – the sensor actuator controller. The 5 range sensors are placed on the robot's front side, parallel with the floor, because the robot moves forward most of the time. The touch sensors and the wheel encoders are connected to the digital inputs of the sensor controller board.

Figure 4: ROBONIT platform and hardware components.

The sensor board is connected to the main computer board through an I2C connection. The central processor unit is based on a StarGate board with an Intel PXA255 RISC processor, commonly used in PDA (Personal Digital Assistant) devices. The maximal processor clock is 400 MHz, with a current consumption of 500 mA. Communication interfaces are provided via the companion coprocessor chip Intel SA1111 StrongARM. The main computer board has small weight and small dimensions (9.5 cm x 6 cm), which suits remote device applications. The memory unit consists of 32 MB of Flash program memory and 64 MB of SDRAM data memory. The control unit and companion chips offer an attractive connectivity subsystem with a large set of standard interfaces: PCMCIA, CF-II, RS232, USB host, and 10/100 Mb Ethernet. In this way the communication system accommodates the most ubiquitous devices, such as cameras, WLAN adapters, memory cards, etc. During the development phase we used LAN communication or the standard serial RS232 port for program downloading; a special JTAG interface conforming to the IEEE 1149.1 standard can, with appropriate probes, be used for advanced debugging.

The CPU is extended with an audio input/output subsystem using an AC97 codec: the microphone is connected to the

microphone input of the board, while the two front speakers (Figure 4) are connected to the on-board 2 W amplifier. The CCD camera mounted on the robot platform is responsible for capturing video information from the robot environment. The camera, a Samsung MPC-C10, is connected to the USB host controller on the processor companion chip. Its maximal power consumption is 1.5 W, and its frame rate and resolution characteristics are 35 fps at CIF (352x288) or 15 fps at VGA (640x480). A WLAN (IEEE 802.11b [20]) compact flash adapter is used for wireless communication. The robot platform uses the Netgear MA-701 CF network adapter, with the following characteristics: TCP/IP protocol, maximal power consumption up to 1 W, maximal data rate up to 11 Mb/s, WEP data security, and an integrated antenna. The typical range of the communication adapter, which depends on the use case, data rate, etc., is presented in Table 1.

Table 1. WLAN typical use/data rate/range characteristics
Use       Data rate (range)
Indoor    5.5 Mbps (100 m), 11 Mbps (50 m)
Outdoor   5.5 Mbps (350 m), 11 Mbps (150 m)

Figure 5 presents the HW system structure, its main components, and the set of connections of the remotely controlled robot.

Figure 5: HW system structure and components.

The robot is powered using one rechargeable battery and special power management hardware. This HW is designed so that it enables external system powering during the development phase; in application, battery powering and recharging on a docking station are provided. Battery status and current consumption data are provided to the CPU through a simple SMBus interface. Using a Lithium Ion "Inspired Energy NI2040A22" battery pack, weighing below 500 g, with a 6.6 Ah capacity, the robot platform has a maximal autonomy of up to six hours under maximal processing requirements for robot navigation and audio and video transfer. Different sensors, like an electronic compass, passive infrared sensors, accelerometers, tactile sensors, or weather stations, can easily be added to the robot; for this purpose the available GPIO pins can be used. Specially designed smart sensors or actuators with an appropriate microcontroller unit typically use simple communication interfaces, for example I2C, SPI, or CAN.

V. ROBOT SOFTWARE

The robot software has a modular structure. The modules are clearly separated by features and responsibilities, and higher levels rely on lower-level functions. An overview of the ROBONIT software is given in Figure 6.

Figure 6: ROBONIT software architecture.

A. Basic system software
The lowest, basic level is the system software that handles hardware devices like sensors, motors, and encoders. The system software makes it possible to initialize, configure, and run the devices, and to read their values. Every device is modeled with an appropriate software object having adequate attributes and methods.

B. Reflex routines
The responsibility of the reflex routines is to preserve robot integrity: they stop the robot in dangerous situations. They are implemented at the next, higher level, but close to the hardware because of the required response time. The most critical situation, for example, is a downward staircase in front of the robot: the routines must react very quickly, otherwise the robot could fall down.

The reflex routines are simple algorithms, relying on comparisons with pre-calculated thresholds.

C. Low-level motion functions
Digital speed and position regulators, together with speed and acceleration profiles for robot motions, are included in the low-level motion functions. The straight-drive correction algorithm is implemented as a software PID regulator. The input error is the difference between the readings of the left and the right encoder. According to the difference, the PWM fill factor of the right motor (in effect, its speed) is adjusted until the encoder readings match. This correction is made only when the command "straight forward" is issued.

D. High-level motion functions
High-level motion functions include a set of navigation algorithms that make robot control and manipulation easier in special cases, for example passing through doors. A tele-operated robot is controlled by a remote operator; the navigation is based on video feedback and telemetry. The operator controls the robot by pressing arrow keys on the keyboard. Key presses are converted to motion commands and sent to the robot through the wireless network. Delays on a wireless network can be significantly higher than delays on wired networks, so navigation through narrow passages may be tedious. High-level motion functions aim to make remote navigation easier by slightly adapting the local remote commands; the main goal is handling situations like passing through doors and navigating narrow passages. Collision-free navigation of the mobile robot has been developed [18-19]: procedures for navigation and direction enable autonomous movement in an unknown environment, avoiding obstacles while moving towards a given target.

E. Robot control layer
The robot control module is responsible for the implementation of robot-specific functions. It acts as a network server, which accepts incoming connections from the remote control software. Once a connection is created (i.e. when a command channel is formed), the robot control module accepts commands from the remote side and periodically sends the robot status back. The commands sent to the robot include:
• robot motion commands: straight forward, forward with slight left (right) steering, backward, turn left, turn right;
• videophone commands (start, stop, and configure the videophone); and
• multimedia commands (start, pause, and stop presentations).
After receiving a command, the robot control module passes it to the appropriate subsystem. Another responsibility of the robot control module is to provide wireless network roaming.

F. Videophone support
The task of the videophone subsystem is to provide a unidirectional or bidirectional audio/video link between the robot and the remote operator. During navigation, only the direction from the robot to the remote operator is needed; during a conversation with a person near the robot, a bidirectional connection is required. The videophone subsystem relies on well-known videophone technologies: the standard H.263 video codec [21] and the G.723.1 audio codec [22] are used, transmitted over the Real-time Transport Protocol [23]. The implementation of audio and video communication between the robot and the remote personal computer is based on the OpenH323 project [24], an open source project intended for developing applications that use the H.323 protocol for multimedia communications over networks; commercial implementations of H.323 are expensive to use and distribute. Figure 7 shows the generalized software structure for the robot – remote control link.

Figure 7: Software architecture of the robot – remote control unit.

The program nitphone, adapted to run on the robotic platform under the embedded Linux operating system, uses shared libraries (.so). The nitphone software was developed in C++ based on the OpenH323 software; it provides the video connection functionality, accepts commands from the operator at the remote computer, and interprets the accepted commands. Video is transmitted using the RFC 2190 logical channel of the H.263 codec. The software on the operator side, on the remote PC, is ohphone.exe, adapted to the Windows operating system, which uses the shared libraries libavcodec.dll and SDL.dll. The SDL software on the workstation is in charge of showing the video from the robot camera; the libavcodec library processes and converts audio and video signals in real time. The software structure, links, and the main videophone libraries between the robot and the remote control station are presented in Figure 8.

Figure 8: Videophone software architecture of the robot – remote control unit.

VI. CONCLUSION

In this paper, an approach to a generic embedded robot platform has been described, together with an overview of existing processing platforms from the point of view of general application requirements. Instead of building custom models for each application, we presented a system and computing architecture that is general enough to cover most robotic configurations. The described concept allows easy system extension with new components, simple configuration, and implementation of advanced features. Use case results suggest that the proposed architecture satisfies the functional requirements and can run in real time.

The approach has been successfully validated on the ROBONIT robot prototype. The designed robot platform uses a wireless network for connectivity and has multimedia and videophone capabilities; it is the result of multidisciplinary research and development. After the extension of the basic feature set with advanced functions, it became a very attractive presentation platform, usable in education, at meetings and fairs, as a robot guide in companies, museums, and airports, or in teleconference systems. Combined with a touch screen, the platform can be used as an interactive information center. The robot is wirelessly controlled by the remote operator, and besides the basic navigation command set, it is able to transmit audio and video information that gives a sense of telepresence. Due to the combination of different techniques and solutions, many additional features were identified as valuable for further improvement in applications such as autonomous motion with obstacle detection and path finding algorithms, higher level robot-human interaction using a voice command system, sound source separation, audio tracking, etc. Also, weaknesses of common approaches and solutions were recognized which would otherwise remain hidden in ordinary applications.

Finally, the presented approach offers a convenient and flexible platform for further development focused on specific robotic areas. Due to its modular software structure, either high-level or low-level algorithms/methods can easily be integrated into the existing framework, and the hardware can be extended with new sensors, actuators, and specific devices. The platform can be used for the development and comparison of a wide variety of computing algorithms in applied artificial intelligence, autonomous behavior, navigation, human interaction, audio processing, video processing, and network management.

ACKNOWLEDGMENT

This work was partially supported by the Ministry of Education and Science of the Republic of Serbia under Grant number TR-32034, by the Secretariat of Science and Technology Development of Vojvodina Province under Grant number 114-451-2434/2011-01, and by the Hungarian Development Agency under the TÁMOP-4.2.2/08/1/2008-0008 program.

REFERENCES
[1] Eurobot, http://www.eurobot.org/
[2] I. Nourbakhsh, "Robotics and education in the classroom and in the museum: On the study of robots, and robots for study", in Proceedings of the Workshop for Personal Robotics for Education, IEEE ICRA 2000.
[3] L. Greenwald, "Tools for effective low-cost robotics", AAAI Spring Symposium on Robotics and Education, pp. 58-61, Stanford, California, March 26-28, 2001.
[4] B. Bagnall, Core Lego Mindstorms Programming, Prentice Hall PTR, Upper Saddle River, NJ, 2002.
[5] Trilobot user guide, Arrick Robotics, 1998.
[6] I. Pap, D. Kukolj, Z. Marčeta, V. Đurković, M. Janev, M. Popović, N. Teslić, "Remotely controlled semi-autonomous robot with multimedia abilities", 5th International Conference on Control & Automation – ICCA2005, Budapest, Hungary, June 26-29, 2005.
[7] V. Kovačević, S. Vukosavljev, U. Grbić, B. Atlagić, "Autonomni robot kaddy" (in Serbian), ETRAN, Herceg Novi, 2003.
[8] J. L. Jones, B. A. Seiger, A. M. Flynn, Mobile Robots: Inspiration to Implementation, A K Peters, Natick, Massachusetts, 1999.
[9] R. C. Arkin, Behavior-Based Robotics, Intelligent Robots and Autonomous Agents, MIT Press, 1998.
[10] R. A. Brooks, "A robust layered control system for a mobile robot", IEEE Journal of Robotics and Automation, vol. 2, no. 1, pp. 14-23, March 1986.
[11] FreeRTOS, http://www.freertos.org/
[12] Embedded Linux, http://www.embedded-linux.org
[13] Buildroot, http://buildroot.uclibc.org/
[14] L. A. R. W. Edwards, Open Source Robots and Process Control Cookbook – Designing and Building Robust, Dependable Real-Time Systems, Elsevier, 2005, ISBN 0-7506-7778-3.
[15] S. Hutchinson, G. Hager, P. Corke, "A tutorial on visual servo control", IEEE Transactions on Robotics and Automation, vol. 12, no. 5, pp. 651-670, 1996.
[16] N. Teslić, V. Radenković, D. Kukolj, M. Popović, "Camera real-time human tracking", Proceedings of the XXVII Int. Convention MIPRO 2004, Opatija, May 24-28, 2004, pp. CIS.130-134.
[17] D. Kukolj, M. Janev, I. Papp, N. Teslić, S. Vukobrat, "Speaker localization under echoic conditions applied to service robot", IEEE Int. Conf. on Computer as a Tool – EUROCON 2005, Belgrade, November 22-24, 2005.
[18] B. Markoski, S. Vukosavljev, D. Kukolj, S. Pletl, "Mobile robot control using self-learning neural network", 7th International Symposium on Intelligent Systems and Informatics, SISY 2009, Subotica, Serbia, September 25-26, 2009.
[19] W. Li, "Fuzzy-logic-based reactive behavior of an autonomous mobile system in unknown environments", Engineering Applications of Artificial Intelligence, 7(5), pp. 521-531, 1994.
[20] IEEE 802.11b, Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications: Higher-Speed Physical Layer Extension in the 2.4 GHz Band.
[21] ITU-T Recommendation H.263 – Video coding for low bit rate communication, 1998.
[22] ITU-T Recommendation G.723.1 – Dual rate speech coder for multimedia communications transmitting at 5.3 and 6.3 kbit/s, 1996.
[23] C. Perkins, RTP: Audio and Video for the Internet, Addison-Wesley, 2003.
[24] OpenH323, www.openh323.org