CRIEP: A Platform for Distributed Robotics Research
Gabriel Loewen, James Weston, Jackie O'Quinn, Ashraf Saad∗, Bradley Sturz†

Armstrong Atlantic State University
11935 Abercorn Street
Savannah, GA

∗Associate Professor of Computer Science
†Assistant Professor of Psychology

ABSTRACT
This paper presents a testbed which is the result of a collaborative project between computer science and psychology. The testbed is composed of hardware and associated software that allows an individual or a group of individuals to conduct research in the fields of distributed robotics, searching algorithms, behavior modelling, and other areas of cognitive science. The robots operate on a flat grid environment, which is ideal for many types of algorithms. Algorithms are implemented as Java classes that are dynamically loaded at runtime and presented to the user. The overall structure of the testbed allows developers to create custom behavior classes which may be accessed and utilized by users. The methodology behind the robots' movement relies on path analysis and is handled by a computer that maintains a wireless Bluetooth connection to each robot.

Keywords
Robot, Behavior, Swarm, Java

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Copyright 2011 ACM 1-XXXXX-XX-X/XX/XX $5.00.

1. INTRODUCTION

The Collaborative Robotics Intelligence and Education Platform, or CRIEP, is a project that combines computer science concepts with theories in cognitive science, making it easy for researchers in both fields to use robots as a tool for conducting research as well as an educational tool for undergraduate students [7]. Distributed robotics plays a role in the fields of computer science and cognitive science. A distributed robotics system is defined as a collection of autonomous entities that communicate either directly or indirectly over a shared network. In particular, we are interested in swarm robotics, dead reckoning, and landmark-based navigation [2]. When we began working on this project, our goal was to create a multi-robot platform [5] that could communicate information among the entire set of robots. In addition to the distributed nature of the system, we also wanted a robust and structured method for developers to create behaviors [6] by which the robots would operate. The combination of the system's distributed nature and the ease with which behaviors can be developed makes the platform capable and robust.

2. SYSTEM REQUIREMENTS AND SPECIFICATIONS

When we began working on the platform we decided on a specific set of requirements for smooth operation. Based on the initial goals of the project, we decided upon the following system requirements:

• The system must support multiple robots.
• Robots must be able to communicate with each other.
• The environment that the robots operate in must be structured in such a way that behavior may be developed based on a simple coordinate system.
• Robots must be under video surveillance at all times in order for users of the system to view their movement.
• Behaviors must be abstract and structured such that they are accessed in the same way while not necessarily performing the same task.

3. SYSTEM DESIGN AND IMPLEMENTATION

In order to satisfy the requirements and specifications that we agreed upon, we researched hardware and software components appropriate for our goals. During this research phase we decided to use the

Figure 1: Components of the Robot. (a) IntelliBrain-2 Robotics Controller. (b) IntelliBrain-2 with robot chassis. (c) AIRcable USB dongle and serial-to-Bluetooth adapter.

RidgeSoft IntelliBrain-2 robotics controller to power each individual robot. The IntelliBrain-2 uses a custom Java API as its programming interface and supports hundreds of sensors and effectors; however, for our purposes only a few of those sensors were necessary for proper operation. The IntelliBrain-2 robotics controller supports communication over a standard serial port. For simplicity and ease of use, we decided to use AIRcable serial-to-Bluetooth devices with the IntelliBrain-2 as a means for communicating data between each robot. Hardware limitations with the serial communication made it necessary to use an intermediary computer to relay messages between robots. In addition, the intermediary computer hosts the software and hardware necessary to stream live video of the robots as well as to collect data from the robots and the users. The testbed is a combination of the robots, the computer system used for communication, the grid lines used for robot navigation, and an overhead camcorder. The camcorder is fixed to the ceiling of the room in which the robots are maintained. In addition to the hardware requirements of the system, there were certain conceptual requirements with regard to behavior. The concept of a behavior is central to the successful operation of the platform. We define a behavior to be a robot's ability to use collected data regarding its operating environment to dynamically generate a set of actions that accomplish a user-defined goal.

3.1 IntelliBrain-2 Robotics Controller

Each IntelliBrain-2 controller is equipped with one Sharp GP2D12 infrared range sensor (IRS) and four Fairchild QRB1134 reflective object sensors (ROS) [3]. The purpose of the IRS is to determine the proximity of an object in the robot's path. The IRS determines proximity by radiating an infrared beam outward from the emitter; the beam bounces off a nearby object and is caught by the receiver. When the beam is received, the IRS calculates the angle at which the beam entered the receiver. Because the distance between the emitter and the receiver is known, the distance of the object from the sensor can be triangulated. A ROS functions like an IRS, but instead of computing distance it determines the reflectivity of an object. Rather than using time as a measure, the amount of reflected light captured by the receiver determines the sensor output value, which represents the degree of reflectivity. In addition to its sensors, each robot contains two Futaba S148 continuous rotation servos and wheels that provide a variable-speed driving mechanism for moving the robot in any direction. An LCD display is attached to the IntelliBrain-2 robotics controller and is used to display runtime information.
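As an illustration of the two sensing principles described above, the following Java sketch computes a triangulated distance from a known emitter-receiver baseline and classifies a reflective-object-sensor sample against a threshold. The class name, the constants, and the low-sample-means-dark polarity are assumptions for illustration only and are not taken from the RidgeSoft API or the CRIEP code.

// Illustrative only: constants, names, and sample polarity are assumptions,
// not taken from the RidgeSoft API or the CRIEP implementation.
public final class SensorMath {

    /**
     * Distance for a triangulating IR ranger: with a known emitter-receiver
     * baseline b and a measured incidence angle theta at the receiver,
     * simple trigonometry gives d = b / tan(theta).
     */
    public static double distanceFromAngle(double baselineMeters, double thetaRadians) {
        return baselineMeters / Math.tan(thetaRadians);
    }

    /** Hypothetical calibration threshold separating the dark grid line from the floor. */
    private static final int LINE_THRESHOLD = 300;

    /**
     * Classify a reflective object sensor (ROS) sample. Dark surfaces reflect
     * less light; here we assume a low sample indicates the line (the actual
     * polarity depends on how the sensor is wired).
     */
    public static boolean overDarkLine(int rosSample) {
        return rosSample < LINE_THRESHOLD;
    }
}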

3.2 Communication Scheme

In order to satisfy the communication requirements of the system design, the following steps are performed by the system and are carried out concurrently by the host machine and the client. Communication between all components of the system is crucial to maintaining the distributed nature of the robots. Data passes through various mediums before it is presented to the user; it is therefore important to maintain the integrity of communication. The process of sending data from the robots to the user and retrieving a response requires six distinct data transfers. Most of the communication between robots and the host machine happens at grid intersection points. When a robot has found a grid intersection point, it transmits its current location and heading to the host computer system, along with a set of available paths determined by sampling the robot's array of line sensors. After the robots have submitted their data to the computer system, the data is processed by the behavior the user has chosen, and a direction is transmitted back to each robot according to the behavior output. This process continues until each robot has satisfied the requirements of the behavior it has been assigned, at which point the robot stops and waits for new input from the computer system. The corresponding data transfers are:

• Robot submits data to host machine over Bluetooth.
• Host machine establishes remote connection to database and submits data to a defined relation.
• Client application retrieves data from the remote database.
• Client application submits new data to the database with the aid of a user-selected behavior.
• Host computer retrieves new data from the remote database.

1. Host machine determines how many robots are available to the system and waits for wireless connections to be established.
2. Each robot sends its current location in the grid to the host machine along with a set of all possible adjacent locations which are available to the robot.
3. The host machine stores the robot's position and adjacent locations in a remote MySQL database.
4. The user application reads the database and updates information accordingly. The system waits for a response from the user application.
5. The user application responds to the data by querying the set of rules defined by the behavior which has been selected by the user. The behavior output is stored in the remote MySQL database.
6. The host machine reads the database entries and relays the data to the robot.
7. These steps are repeated from step 2 until the selected behavior has completed all requirements, at which point the robot stops and waits for new user input.

Figure 2: Algorithm for Data Communication

• Host computer transfers data to robot over Bluetooth.

It is apparent that at some point during the communication process data could be corrupted, resulting in a lapse in data integrity and in the operation of the system. Figure 2 shows the required system-to-system data communication process, and Figure 6 further shows how the communication is handled concurrently.
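To make the host machine's role in Figure 2 concrete, the following Java sketch relays one robot report into the remote MySQL database over JDBC and polls for the behavior's response. The table and column names, the report format, and the RobotLink abstraction are hypothetical; only the JDBC calls reflect a real API, and a Connection would be obtained separately, for example with DriverManager.getConnection.

// Sketch of the host-side relay in Figure 2. Table and column names, the
// RobotLink abstraction, and the report format are assumptions; only the
// JDBC calls reflect a real API.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class HostRelay {

    /** Minimal stand-in for the Bluetooth serial link to one robot. */
    interface RobotLink {
        int id();
        String readReport();            // assumed format: "x,y,heading,openPaths"
        void sendDirection(String dir); // e.g. "N"
    }

    /** One pass of the loop in Figure 2 for a single robot. */
    public void relayOnce(RobotLink robot, Connection db) throws Exception {
        // Steps 2-3: receive the robot's report and store it for the client.
        String[] report = robot.readReport().split(",");
        try (PreparedStatement insert = db.prepareStatement(
                "INSERT INTO robot_state (robot_id, x, y, heading, paths) VALUES (?,?,?,?,?)")) {
            insert.setInt(1, robot.id());
            insert.setInt(2, Integer.parseInt(report[0]));
            insert.setInt(3, Integer.parseInt(report[1]));
            insert.setString(4, report[2]);
            insert.setString(5, report[3]);
            insert.executeUpdate();
        }

        // Steps 4-6: poll until the client-side behavior has written a command,
        // then relay it back to the robot over Bluetooth.
        String direction = null;
        while (direction == null) {
            try (PreparedStatement query = db.prepareStatement(
                    "SELECT direction FROM robot_command WHERE robot_id = ?")) {
                query.setInt(1, robot.id());
                try (ResultSet rs = query.executeQuery()) {
                    if (rs.next()) {
                        direction = rs.getString("direction");
                    }
                }
            }
            if (direction == null) {
                Thread.sleep(250); // back off briefly before polling again
            }
        }
        robot.sendDirection(direction);
    }
}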

3.3 Video Stream

Live video streaming is activated when a user connects to the remote database using the client application. The host computer maintains a FireWire connection to the camcorder and operates by taking continuous snapshots of the grid and saving them to the computer. Software on the host computer uploads the current video frame image to the remote database every two seconds, and the client application retrieves the current frame from the database and displays it to the user. The database is used as the medium for storing video frames because of restrictions of the university network, which prevent the use of sockets. Figure 5 shows a depiction of one frame of video taken by the overhead camera.
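A minimal sketch of the two-second upload loop might look as follows, assuming the capture software keeps overwriting a snapshot file on disk and that the most recent frame is stored as a single BLOB row in the remote database. The table layout, file path, and scheduling details are assumptions, not the CRIEP implementation.

// Assumed frame-upload loop; table layout and snapshot path are placeholders.
import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.PreparedStatement;

public class FrameUploader implements Runnable {
    private final Connection db;
    private final Path framePath; // file the capture software keeps overwriting

    public FrameUploader(Connection db, Path framePath) {
        this.db = db;
        this.framePath = framePath;
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                byte[] jpeg = Files.readAllBytes(framePath);
                // Keep a single row that always holds the most recent frame (MySQL REPLACE).
                try (PreparedStatement ps = db.prepareStatement(
                        "REPLACE INTO video_frame (id, captured_at, frame) VALUES (1, NOW(), ?)")) {
                    ps.setBytes(1, jpeg);
                    ps.executeUpdate();
                }
                Thread.sleep(2000); // matches the two-second upload interval
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}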

Figure 3: Bottom view of the Reflective Object Sensors. Sensors outlined in red denote the outer reflective object sensors. Sensors outlined in blue denote the inner reflective object sensors.

Figure 4: Front view of the Reflective Object Sensors. Sensors outlined in red denote the outer reflective object sensors. Sensors outlined in blue denote the inner reflective object sensors.

Figure 5: Video Frame Depiction - This is a depiction of a typical video frame image presented to the users. In this frame there are two robots active. One robot is positioned at [0,0] with a heading of North and the other robot is positioned at [2,3] with a heading of East.

Figure 6: Block Diagram of the Data Communication Process. (a) Host application. (b) Client application.

State Name  | State Description                          | Action
Lost        | Neither sensor over line, position unknown | Stop
Both Left   | Both sensors left of line                  | Steer hard right
One Left    | One sensor left of line                    | Steer slightly right
Centered    | Both sensors over line                     | Steer straight ahead
One Right   | One sensor right of line                   | Steer slightly left
Both Right  | Both sensors right of line                 | Steer hard left

Figure 7: Inner Line Sensor State Machine Diagram and Action Table [4]


3.4 Robot Navigation

Each robot has two pairs of reflective object sensors attached to the bottom of the chassis (see Figures 3 and 4). These sensors are positioned such that the reflectivity of the surface is constantly sampled, where reflectivity is measured as being either low or high. The inner two reflective object sensors are designed to straddle the interior of the black lines that are printed on the surface. If the robot veers to the left or to the right, one or both of the inner reflective object sensors will detect a shift in reflectivity, and the robot will adjust its motor powers accordingly based on the output of a state machine (see Figure 7). The outer two reflective object sensors are designed to detect an intersection point. If either of the outer sensors comes into contact with a grid intersection, a shift in reflectivity is detected and the robot halts movement and begins the communication process.
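The action table in Figure 7 translates directly into Java. In the sketch below, only the state-to-action mapping comes from the table; the motor-power values and the Drive helper are illustrative placeholders rather than the values used on the robots.

// Rendering of the Figure 7 action table; power values are assumed placeholders.
public class LineFollower {

    enum State { LOST, BOTH_LEFT, ONE_LEFT, CENTERED, ONE_RIGHT, BOTH_RIGHT }

    /** Left/right servo power settings; the range is assumed, not specified by the paper. */
    static final class Drive {
        final int left, right;
        Drive(int left, int right) { this.left = left; this.right = right; }
    }

    /** Maps each state from Figure 7 to its steering action. */
    static Drive actionFor(State state) {
        switch (state) {
            case BOTH_LEFT:  return new Drive(12, 2);   // steer hard right
            case ONE_LEFT:   return new Drive(10, 6);   // steer slightly right
            case CENTERED:   return new Drive(10, 10);  // straight ahead
            case ONE_RIGHT:  return new Drive(6, 10);   // steer slightly left
            case BOTH_RIGHT: return new Drive(2, 12);   // steer hard left
            case LOST:
            default:         return new Drive(0, 0);    // stop, position unknown
        }
    }
}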

4. BEHAVIOR DEVELOPMENT

Robots within the grid environment are controlled using behaviors written in Java. These behaviors are subclasses of a behavior superclass. The system uses a plug-in architecture, and behaviors are loaded dynamically using the Java ClassLoader API. Behaviors may be written with many different objectives in mind, and the possibilities for developing unique behaviors are endless. The simplest paradigm is the single-robot, or one-to-one, paradigm, in which one robot is controlled by one behavior. There is also the swarm, or one-to-many, paradigm, in which many robots are controlled by one behavior. Furthermore, by grouping many single-robot behaviors there is the many-to-many paradigm, in which many behaviors control many robots. Depending on the requirements of the behavior, any one of these paradigms may be used to produce the desired result. In addition to the possible mappings between robots and behaviors, each behavior may use many different approaches; for instance, because of the nature of the grid environment it may be advantageous for behaviors to use established searching algorithms such as breadth-first search. For more information on behavior development see the CRIEP Developer Manual [8].
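A sketch of what such a plug-in behavior and its dynamic loading could look like is shown below. The Behavior superclass, its method names, and the plug-in directory layout are illustrative assumptions rather than the actual CRIEP API; only the ClassLoader calls are standard Java.

// Illustrative plug-in shape only: class and method names are assumptions.
import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;

public abstract class Behavior {

    /** Called whenever a robot reports a grid intersection; returns the next heading to take. */
    public abstract char nextDirection(int robotId, int x, int y, char heading, String openPaths);

    /** True once the behavior's goal has been met and the robot should stop. */
    public abstract boolean isFinished();

    /** Loads a behavior class from a plug-in directory at runtime using the ClassLoader API. */
    public static Behavior load(File pluginDir, String className) throws Exception {
        URLClassLoader loader = URLClassLoader.newInstance(
                new URL[] { pluginDir.toURI().toURL() });
        return Class.forName(className, true, loader)
                    .asSubclass(Behavior.class)
                    .getDeclaredConstructor()
                    .newInstance();
    }
}

Under this shape, a one-to-many (swarm) behavior would simply be queried with every robot's report, while one-to-one behaviors would be instantiated once per robot.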

5. POTENTIAL APPLICATIONS

The use of distributed robots to accomplish a defined task is appealing to computer scientists as well as cognitive scientists. By developing this platform with both of these fields in mind, there is the possibility to develop behaviors modelled on theories such as dead reckoning and landmark-based navigation [2]. Additionally, the integration of computational models, such as artificial neural networks, with the behavior architecture could prove useful in applications of learning and recognition. Further development of the CRIEP platform could yield all-terrain robots capable of operating in interesting environments [9]. Given a large environment for the robots to navigate, one interesting application in the field of swarm robotics is soil optimization. Robots moving along a bed of soil may use nitrogen-detecting sensors to manage the nitrogen input of a crop. In this case the swarm could operate using a behavior centered on particle swarm optimization [10], where the nitrogen level of the soil determines the optimality of a sample taken from a specific area of the crop. Several robots taking soil samples would map these fitness values to a one-dimensional fitness space, which could be used to help a farmer optimize the distribution of fertilizer required for the crop [1].
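For readers unfamiliar with particle swarm optimization, the following one-dimensional Java sketch shows the core update loop that such a behavior could build on, with a nitrogen reading standing in as the fitness function. The coefficients, bounds, and fitness model are placeholders and are not tied to any CRIEP behavior.

// Minimal 1-D PSO sketch; parameters and fitness model are placeholders.
import java.util.Random;

public class SoilPso {

    /** Fitness of a candidate position, e.g. the nitrogen level sampled there. */
    interface Fitness { double of(double position); }

    /** Returns the position with the best fitness found by a basic PSO run. */
    public static double optimize(Fitness f, int particles, int iterations, double lo, double hi) {
        Random rnd = new Random();
        double[] x = new double[particles];        // current positions
        double[] v = new double[particles];        // velocities (start at zero)
        double[] pBest = new double[particles];    // personal best positions
        double[] pBestVal = new double[particles];
        double gBest = lo, gBestVal = Double.NEGATIVE_INFINITY;

        for (int i = 0; i < particles; i++) {
            x[i] = lo + rnd.nextDouble() * (hi - lo);
            pBest[i] = x[i];
            pBestVal[i] = f.of(x[i]);
            if (pBestVal[i] > gBestVal) { gBestVal = pBestVal[i]; gBest = x[i]; }
        }

        final double w = 0.7, c1 = 1.5, c2 = 1.5; // inertia and attraction coefficients (assumed)
        for (int it = 0; it < iterations; it++) {
            for (int i = 0; i < particles; i++) {
                v[i] = w * v[i]
                     + c1 * rnd.nextDouble() * (pBest[i] - x[i])
                     + c2 * rnd.nextDouble() * (gBest - x[i]);
                x[i] = Math.max(lo, Math.min(hi, x[i] + v[i]));
                double val = f.of(x[i]);
                if (val > pBestVal[i]) { pBestVal[i] = val; pBest[i] = x[i]; }
                if (val > gBestVal)    { gBestVal = val;    gBest = x[i]; }
            }
        }
        return gBest; // position with the highest sampled fitness
    }
}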

6. CONCLUSIONS

Form factor and usability make robotics a challenging field: many systems are designed to perform only one specific set of tasks and are often large and cumbersome. These challenges become strengths of the CRIEP platform, however, because its usability is defined by the limitless number of behaviors that may be developed and the robots have a very small form factor. Here we have discussed only some of the research being considered for distributed and swarm robotics. Work is in progress to investigate further applications using this platform.

7. ACKNOWLEDGEMENTS

This work was partially supported by NSF CPATH EAE grant award 0722238 to Dr. Ashraf Saad. The results, conclusions, and opinions expressed herein are those of the authors.

8. REFERENCES

[1] James Blondin. Particle swarm optimization: A tutorial. http://cs.armstrong.edu/saad/csci8100/pso_tutorial.pdf, September 2009.
[2] S. Healy. Spatial Representation in Animals. Oxford University Press, 1998.
[3] RidgeSoft Inc. IntelliBrain 2 robotics controller. http://www.ridgesoft.com/intellibrain2/IntelliBrain2Datasheet.pdf, 2006.
[4] RidgeSoft Inc. Exploring robotics with the IntelliBrain-Bot. http://www.ridgesoft.com/articles/education/ExploringRoboticsEdition2.pdf, 2009.
[5] Kazuo Ishii and Tsutomu Miki. Mobile robot platforms for artificial and swarm intelligence researches. International Congress Series 1301, 2007. www.ics-elsevier.com.
[6] Gal A. Kaminka and Inna Frenkel. Flexible teamwork in behavior-based robots. In Proceedings of the Twentieth National Conference on Artificial Intelligence (AAAI-05), 2005.
[7] Christopher Kitts and Neil Quinn. An interdisciplinary field robotics program for undergraduate computer science and engineering education. Journal on Educational Resources in Computing (JERIC), Special Issue on Robotics in Undergraduate Education, Part 1, Vol. 4, June 2004.
[8] Gabriel Loewen, James Weston, and Jackie O'Quinn. CRIEP Developer Manual. http://armstrongrobotics.org/files/developermanual.pdf, 2010.
[9] Giovanni C. Pettinaro. Teamwork by swarms of all-terrain mobile robots. Industrial Robot: An International Journal, 2004. www.emeraldinsight.com/0143-991X.htm.
[10] Jim Pugh and Alcherio Martinoli. Multi-robot learning with particle swarm optimization. In Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2006.
