
Web-Controlled Devices and Manipulation: Case Studies in Remote Learning for Automation and Robotics

T. Sobh, R. Mihali, G. Gosine, P. Batra, A. Singh, S. Pathak, T. Vitulskis, A. Rosca, S. Grodzinsky, L. Hmurcik
School of Engineering, 221 University Avenue, Bridgeport, CT 06601, U.S.A.
http://www.bridgeport.edu/~risc

Abstract

The concept of distance learning has been articulated more and more over the past few years and is expected to shortly turn into a practical education system within current institutions of higher learning. The chances are that distance learning will transparently extend colleges and institutes of education, and it could plausibly become a preferred choice of higher education, especially for adult and working students. Distance learning is envisioned as an education mechanism that is more accessible, comfortable and closer to the learning individual. The distance being introduced is between the individual and the physical institution itself, but the exact requirements of a degree are maintained. This concept would be unachievable without current technology, in particular the impressive worldwide accessibility of the Internet. The main idea in e-learning is to build adequate solutions that can assure educational training over the Internet without requiring a personal presence at the degree-offering institution: for example, being able to obtain a Bachelor's degree in Computer Engineering from an accredited institution while residing thousands of miles away from it and actually never seeing it, except perhaps for the graduation ceremony.
The advantages are immediate and of unique importance; to enumerate a few:
• Scholarship/education costs can be reduced dramatically, both from the student's perspective and the institution's (no need for room and board, for example);
• The usually tedious immigration and naturalization issues that are common with international students are eliminated;
• The limited campus facilities, faculty members and course schedules an institution can offer are no longer a boundary;
• Working adults can consider upgrading their skills without changing their lifestyles.
The paper offers several instances of projects that can be utilized for distance learning in lab-based courses in the Robotics and Automation area.

Introduction

Distance learning possibilities have been considered for quite some time; however, only the recent growth of the Internet has actually made them a reality. The idea is to use the large information bandwidth that the Internet can offer and expose a variety of educational materials online. A course could, for example, be offered online through a comprehensive list of multimedia solutions ranging from an online course book and online

homework submissions to live video conferencing, auditing and examinations. There are many methods of creating an online course, and one has to be chosen appropriately, considering issues such as the security/reliability of training and the educational expectations: work speed improvement (e.g. math and programming courses), physical dexterity (e.g. mechanical, medical courses), etc. Some courses might require specialized hardware input and output solutions, such as haptic devices combined with virtual reality gear. It is certainly easy to set up some types of courses online, such as computer programming, mathematics and literature, while it is difficult to set up engineering or laboratory-based courses due to the custom and specific hardware required. For example, setting up chemistry or physics laboratories could present safety and cost issues, medical laboratories bring many additional issues, and ultimately there may be issues that current technology cannot solve. We concentrate on some aspects of the mentioned problems and present a few implementations that could help in their solution. In a supportive effort towards faster distance learning implementation and consideration, we present in this paper a sequence of projects that have been developed and can serve in the process of distance learning education, ranging from simple "hobby" style training to professional guidance material. The projects have an engineering/laboratory flavor, are part of the ongoing work of the faculty and students of the Computer Science and Engineering Department of the University, and are presented in an arbitrary order. The topics range from vision and sensing to engineering design, remote control and operation.
The inclination is towards being able to set up physical labs online, allowing students to use equipment/machines (microscopes, manipulators for handling substances or various parts of experiments, pulleys, etc.), hence the goal of creating tools that allow engineering lab-based courses to be offered in a distance learning manner. Before proceeding to the detailed approach of each project, a synopsis of each is given:

Mobile Robot Controlled by a Phone
In an attempt to add to the many possible ways of automating and implementing remote engineering, this project presents a complete, in-depth, cost-effective solution for controlling a robot through phone calls. Various extension possibilities are discussed as well.

Internet Based Software Library for the SIR-1 Serial Port Controlled Robot
Through this project, an efficient software library has been implemented that allows full control of an SIR-1 manipulator over the Internet. An HTTP-based interface exemplifies how a 5-degree-of-freedom robot can be set to work over the web. The results show the possibility of extending such APIs to various robots with dedicated applications.

Web Based Computer Vision Framework for Security and Tracking Applications
A visual processing software interface is constructed. Various aspects and issues of image processing and object recognition are demonstrated, supported with examples and simple expandability options.

Internet Controlled Satellite Transponder (using a remotely controlled robotic manipulator)
The idea is to have a robot perform the control of a satellite transponder from a remote location without having to make modifications to the expensive and dedicated satellite control equipment. Such approaches (devices controlling devices) can be used in controlling various

types of situations, such as hazardous conditions that would not allow human presence, war conditions, constricted spaces, etc.

1. Mobile Robot Controlled by a Phone

Figure 1: Mobile Robot View

Mobile robots (figure 1) have numerous applications: unmanned exploration, land mine removal, energy plants and manufacturing factories, to mention a few. We introduce a cost-effective robot. The idea is simple and has been the focus of many researchers, namely using a remote controller to request a robot to perform a specific job. Our focus is on reducing the costs and making the robot controllable by a device that most individuals own: a phone. With the introduction of video-enabled cell phones, it will be possible for the user to see the robotic movement in real time and possibly perform educational exercises at a distance using a simple interface. Examples include "calling" a robot on the way home from work and having it do various jobs, like vacuuming the home and sprinkling the garden. This could also be done by logging on to a web server via an Internet-connected device (wired or wireless) and sending signals to the robot, or by directly sending specific commands to the robot through cellular phones to a wireless (or cellular) server at home (with or without a web connection).

1.1 Introduction and Background Search

The objective of this project is to introduce the idea of controlling a personal robot via a phone line. The project is divided into two sections: a) conducting research on various applications of mobile robotic systems and their feasibility; b) focusing on the design and actual implementation of a control mechanism for the robot over existing phone lines, in particular analog copper wires. In the last 30 years we have seen technology and its applications grow dramatically. With the introduction of personal computers in the 1970s and mass wireless communications around the same time, technology has advanced from the industrial age to the information age [1].
Dramatic improvements in optimization, profound changes in costs and vast increases in applications have made it possible for us to consider the design of a personal robot. In fact, some personal robots are already out in the market for limited applications.

Sony Corporation has introduced Aibo [2], the intelligent pet. On May 12, 1999, "designed for home or office, a new smart and affordable personal robot was launched" by Probotics, Inc., called Cye (figures 2 and 3): "The compact personal robot can do a wide variety of tasks such as carry dishes, deliver mail, lead guests to a conference room and vacuum the carpet." Using wireless communications technology, Cye is controlled by Map-N-Zap, a highly intuitive graphical user interface (GUI) loaded on any PC with a 133 MHz processor or higher.

Figure 2: “Cye in Living Room”   Figure 3: "Orange Cye-sr with Cordless Vac" (photographs by Neoforma Design)

A prototype robot named R100 (figure 4), developed at NEC's Central Research Laboratories, combines visual recognition, voice recognition, mechatronics and Internet communication technologies: "the robot can recognize individual faces, understand verbal commands, and move smoothly around the home, avoiding such obstacles as tables and chairs" (NEC).

Figure 4: R100 personal robot designed by NEC

This robot communicates by talking, listening, responding and acting, which is different from the ones that communicate via keyboards, dials and buttons. Computers may be indispensable today, but they are not that friendly for everyone. NEC is conducting research on a personal robot to break through the barrier between people and computers. Their aim is to develop a "robot companion who will become like a family member: dependable, kind, considerate, but at times maybe a little cranky and just human like the rest of us" (NEC).

From our background search we conclude that controlling a robot over existing telephone lines has not yet been implemented. Implementing a mechanism for controlling any robot over a telephone line appears challenging. Having this mechanism would open a clear window of opportunities in fields such as distance learning and remote/wireless automation and manipulation.

1.2 Design Ideas

Based on our research, we developed three different methods to implement our idea of controlling a robot over the phone:
a. Building a robot with a phone chip.
b. Using an Internet telephony server.
c. Implementing a combination of the phone and Internet for robot control.

a) Building a robot with a phone chip
One way to communicate with the robot could be to build a robot with a phone chip (figure 5), similar to Lucent Technologies' phone on a chip [3]. Various types of phones exist in the market. Initially, we thought of using Personal Communications Services (PCS) [4]. The radio spectrum in the 1.8-2 GHz range is typically used for this mode of digital cellular transmission, which competes with analog and digital services in the 800 MHz and 900 MHz bands. There are several advantages to using digital transmission. A digital signal can pass through an arbitrary number of regenerators with no loss in signal and thus travel long distances with no information loss. Voice, data, music and images can be interspersed to make efficient use of circuits and equipment. Much higher data rates are possible using existing lines. Digital transmission is also much cheaper than analog, as the only need is to distinguish between 0 and 1.

Figure 5: An application of a robot with a phone chip

b) Using the Internet telephony server
Another approach to the same design problem is to use the Internet to communicate with the robot using the same PCS wireless technique. An Internet telephony server links phone lines to the Internet (figure 6). With the Internet, we can connect to the web

server using ASP or CGI programs. The web server then generates easily decodable touch-tones, and the same techniques as described earlier are used on the receiving end of the robot. One of the added features of using the Internet and the telephony server is the ability to display a simulation of the robotic movements at the remote location; alternatively, a real-time video feedback system can be implemented.
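The touch-tone generation step can be sketched in software: each DTMF tone is simply the sum of a low-group and a high-group sinusoid. This is a minimal illustration of the principle; the sample rate, duration and amplitudes here are arbitrary choices, not part of the original system.

```python
import math

# Standard DTMF (row, column) frequency pairs in Hz for each keypad symbol.
DTMF = {
    "1": (697, 1209), "2": (697, 1336), "3": (697, 1477), "A": (697, 1633),
    "4": (770, 1209), "5": (770, 1336), "6": (770, 1477), "B": (770, 1633),
    "7": (852, 1209), "8": (852, 1336), "9": (852, 1477), "C": (852, 1633),
    "*": (941, 1209), "0": (941, 1336), "#": (941, 1477), "D": (941, 1633),
}

def dtmf_samples(symbol, duration=0.1, rate=8000):
    """Synthesize one touch-tone as the sum of its two sinusoids."""
    low, high = DTMF[symbol]
    n = int(duration * rate)
    return [0.5 * math.sin(2 * math.pi * low * i / rate)
            + 0.5 * math.sin(2 * math.pi * high * i / rate)
            for i in range(n)]

samples = dtmf_samples("7")   # 100 ms of the tone for digit 7
```

A server-side script would stream such samples (or play a pre-recorded tone) toward the phone line through the telephony server.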

Figure 6: Example of using an Internet telephony server

c) A combination of the phone and Internet for robot control
A third possibility for personal robot control is to merge both of the above-mentioned approaches and use both a direct dialing system and access through the Internet. Figure 7 shows the flexibility of using an alternate system when the other is unavailable or more expensive. For example, if we are in a country in Asia and need to access the robot in an apartment in California, then using the Internet will typically be more economical than using the phone system. Similarly, when we are driving on a highway and need to access the robot to clean up our home before we get back, then using a phone will definitely be a good option (although a wireless palm connection with a video camera would be possible too).

[Figure 7 blocks: a telephone unit placing a call directly to the robot, or an Internet-controlled telephony server communicating with the call receiving unit; the call receiving unit feeds an FPGA chip with decoder and kinematics logic, which drives the robot motors/sensors.]
Figure 7: Flowchart showing the combination of the two design methods

Another application personal robots may be used for is grocery shopping (figure 8). Existing online grocery shopping sites allow home delivery at certain times. Optimistically, we can do our grocery shopping online and at the same time request our Phonebot to open the door, carry the delivery and put it in the refrigerator. Experiments with robots in the field of surgery are already being done, where a doctor can remotely monitor a patient and operate on him/her (for very limited surgery types). This idea could be extended by designing robots for ambulances, helping doctors provide immediate care to critical patients.

Figure 8: Combining the two design methods

1.3 Methodology and Justification

Considering the fast pace of modern technology, the new generation of engineers needs to look a step ahead. From the beginning of this project, we have focused primarily on communication with a robot. Our main objective was, and is, to make robots more user-friendly and to be able to communicate with them as we communicate with other people around the globe. Thus we decided to design a simple prototype of a robot that can be controlled via a phone: Phonebot. To keep it simple, we started with a design similar to the one described under design idea (a).

1.4 Hardware/Software and Test Equipment Required

Hardware requirements: Flex 10K70 chipset; Talrik II robot (including the servo motors and sensors) [5]; telephone set; Teltone M-8870-01 chip; SPST 90-2323, 5V relay switch from RadioShack; 3.58 MHz crystal oscillator (Part # ); variable power supply; resistors, capacitors and diodes.
Software requirements: Altera's MaxPlus II, OrCAD, HTML, CGI/ASP, Java, Matlab.
Test equipment: oscilloscope, logic analyzer, multimeters, Nerd-Kit.

1.5 Implementation

As mentioned above, our primary objective is to communicate with and control the Phonebot over an analog phone system. First, a call is placed to the Phonebot using either the Plain Old Telephone Service (POTS) [6] or a Personal Communication System (PCS), or by using the telephony server via the Internet. Once the Phonebot receives the ring, a ring detector circuit detects it and the call is completed by establishing a connection between the phone chip and the FPGA chip (figure 9). The project was implemented using a top-down process. We started by defining the various modules. Initially, we broke the project down into two parts – sending a signal through the phone line and performing robot control with the FPGA device. However, during the implementation we found that receiving and decoding the signal coming through the phone line required significant work.
This part was further broken down into two modules, and we divided the work as shown in figure 9 (the description of each module follows the basic block diagram of the Phonebot, and the VHDL programming is described separately under FPGA control):
a) Ring detect and phone line connection
b) DTMF decoder
The robot control part also required a new module for controlling the motors. Hence this part was renamed FPGA control and divided into sub-parts as follows:
c) FPGA control
- Clock Division (VHDL program)
- Ring Detect (VHDL program)
- Motor Control (VHDL program)
- Robot Control (VHDL program)

d) Assembling the various parts of the Phonebot and combining the different VHDL modules.

[Figure 9 blocks: the PHONE connects over the phone line to the PHONEBOT, which comprises a ring detect and line connect stage, the DTMF decoder, and robot control plus motor/sensor control logic on the FPGA (FLEX 10K20/10K70), driving the ROBOT.]

Figure 9: PHONEBOT – Basic Block Diagram

a) Ring Detect and Phone Line Connection
When the phone rings, the telephone company is sending a ringing signal, which is an AC waveform [7,8,9]. The common frequency used in the United States is 20 Hz; in Europe it is typically 25 Hz, but it can be any frequency between 15 and 68 Hz, and most of the world uses frequencies between 20 and 40 Hz. The voltage at the subscriber's end depends upon the loop length and the number of ringers attached to the line; it can be between 40 and 150 volts. There are various types of ring detectors available in the market. We found the following ring detector circuit (figure 10) very easy and inexpensive to build [10].

Figure 10: Phone detect circuit

Ring detect circuit obtained from the website, http://www.ee.washington.edu/circuit_archive/circuits/F_ASCII_Schem_Tel.html


Specifications:
C1 = 1 µF; C2 = 10 µF
CR1, CR2, CR3 = 1N914
R1, R3 = 100 kΩ; R2 = 10 kΩ

The telephone line mostly carries only DC (-48 V) and/or small-signal AC (audio). In the circuit shown in figure 10, capacitor C1 blocks the DC, and the voltage divider formed by resistors R3 and R2 prevents the low-level AC from having any effect on the circuit. When the telephone rings, the line carries about 90 V RMS of AC at 20 Hz, and capacitor C2 is charged. Diodes CR1 and CR2 guarantee that the output (ring detect logic) does not exceed the power supply levels and prevent any damage to other circuits driven from the output. Since C2 and R1 have a time constant of 1 s, the output stays low for 1 second after the ring stops. This pulse is detected by the FPGA chip and used to connect the telephone circuit.

Relay switch for establishing the phone line connection
After detecting the ring, the next step is to establish the phone line connection. First we tried to use a BJT switch, but this complicated the problem even more, as the signal we needed to pass was -48 V DC. Using an electromagnetic switch was an easier approach: a relay uses simple electromagnetism, with an electromagnet attracting an armature against a spring to close a set of electrical contacts [11]. The circuit is depicted in figure 11.
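The 1 s behavior quoted for the ring-detect output follows directly from the component values above: the C2-R1 time constant is simply R times C. A one-line check:

```python
# Component values from the ring-detect circuit specification.
R1 = 100e3   # ohms
C2 = 10e-6   # farads

tau = R1 * C2   # RC time constant in seconds, ~1.0 s:
                # the output stays low for about this long after the ring stops
```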

Figure 11: Schematic capture of the relay circuit used for the connection

We used an SPST 90-2323, 5V relay switch. In figure 11, when a current flows through the coil, a magnetic field is created around the electromagnet, resulting in different polarities

at the ends of the electromagnet. Thus, the electromagnet attracts the top metal leaf to close the upper circuit. When the electromagnet is not activated, the spring pulls the switch open. In our project, once the ring signal is detected, we activate the switch by sending a 0 V signal from the Altera board to close the telephone path, which completes the telephone circuit. This is equivalent to picking up the phone to establish a connection. We were originally thinking of sending 5 V from the Altera board and grounding the other end of the coil. However, this did not work, and we assume that not enough current was flowing out of the Altera's output pins [12]. Thus we switched them around, tying the coil to VCC and driving the output pin to 0 V. There was then sufficient current flowing through the coil to energize the electromagnet in the relay switch.

b) DTMF Decoder
Dual Tone Multi Frequency (DTMF) signals are used for speed dialing and replace the old conventional rotary dialing system. These signals correspond to the digits on the dial pad of any modern touch-tone phone. Each touch-tone is constructed from a combination of two different frequencies. This information can be used to find out which button was pressed on the keypad and can serve various applications. The construction of the signals from the different frequencies is shown in table 1:

Table 1: Signals related to different frequencies

                        High frequency (Hz)
Low frequency (Hz)   1209   1336   1477   1633
       697             1      2      3      A
       770             4      5      6      B
       852             7      8      9      C
       941             *      0      #      D

These frequencies can be decoded using precise filters, and from the decoded frequencies we can determine which digit was pressed. We used the M-8870-01 DTMF decoder chip to decode this information. The chip uses a series of low-pass and high-pass filters to decode the frequencies [13]. It then uses a digital detection algorithm and a code converter to provide its output in the form of four binary output lines.
This data is supplied to the FPGA chip for further use. The internal circuit used by the chip to decode the DTMF is shown in figure 12.

12

Figure 12: Internal circuit of the DTMF decoder (from Teltone's data sheet)

There are various problems in decoding DTMF signals, such as "talk-off", "talk-down" and "substandard DTMF", which can produce false DTMF detections at times. "Talk-off" errors occur when a false signal is detected due to voice or other noise. "Talk-down" errors occur when detected signals are missed due to line noise or outgoing/incoming voice. "Substandard DTMF" errors are errors in the generated DTMF signals due to substandard components in the generating device, such as the telephone set or the keypad [14,15]. Most of these errors can be eliminated by using more precise components (resistors and capacitors). While testing the DTMF decoder circuit, we observed that a slight change in the voltage, current, resistors or capacitors could trigger significant errors. We also realized while testing the circuit that Teltone's decoder works only within very specific ranges of voltage and current.

c) FPGA Control
For the controller we decided to use Altera's Flex 10K70 chip. FPGAs contain arrays of logic cells and enable the design of real systems that operate at increasingly higher frequencies. They make it possible to increase integration, placing more and more electronics in a chip and using all available gates within the FPGA, thereby providing cost-effective solutions. The most important factor in selecting this chip is that we can program and reprogram the device while it is in the system. This provides great flexibility in using the Phonebot: it can be reprogrammed within a few minutes to do many tasks. Once programmed, the robot has the "intelligence" to complete the requested task. The FPGA chip can also be programmed to store the complex kinematics of the robot and send feedback, using a DTMF generator, over the same phone line [16,17,18]. In our project, the Flex 10K70 controls the Phonebot based on the signals decoded by the DTMF decoder.
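In software, the decoder chip's filter-bank-and-detector approach can be approximated with the Goertzel algorithm, which measures signal power at each candidate row and column frequency. The sketch below illustrates the principle only; it is not the M-8870's actual silicon implementation.

```python
import math

ROW_FREQS = (697, 770, 852, 941)
COL_FREQS = (1209, 1336, 1477, 1633)
KEYPAD = [["1", "2", "3", "A"],
          ["4", "5", "6", "B"],
          ["7", "8", "9", "C"],
          ["*", "0", "#", "D"]]

def goertzel_power(samples, freq, rate):
    """Signal power at one frequency via the Goertzel recurrence."""
    coeff = 2 * math.cos(2 * math.pi * freq / rate)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def detect_digit(samples, rate=8000):
    """Pick the strongest row and column tone and map them to a key."""
    row = max(range(4), key=lambda i: goertzel_power(samples, ROW_FREQS[i], rate))
    col = max(range(4), key=lambda i: goertzel_power(samples, COL_FREQS[i], rate))
    return KEYPAD[row][col]

# Synthesize a '5' (770 Hz + 1336 Hz per table 1) and decode it back.
rate = 8000
tone = [math.sin(2 * math.pi * 770 * i / rate)
        + math.sin(2 * math.pi * 1336 * i / rate)
        for i in range(800)]
```

Running `detect_digit(tone)` recovers the pressed key from the two dominant frequencies, which is exactly the decision the hardware decoder makes before emitting its 4-bit code.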

The FPGA chip decodes the 4-bit binary output (from the DTMF decoder) into robot commands. The device is programmed to decode the DTMF signals, issue commands to the robot, and send pulses to the servo motors based on the logic programmed into it. The controller consists of four modules:

Clock Division Module
This module divides the 25 MHz clock of Altera's University Board into slower clocks so that we can provide the precise timing needed by the servo motors, which work with pulses on the order of milliseconds. As shown in figure 13, the 25 MHz clock is divided into 1 MHz, 100 kHz, 10 kHz, 10 Hz and 1 Hz. Figure 14 shows the flow chart of the clock division module, while figures 15, 16 and 17 show the clock division in detail. The Robot Control module requires the 10 Hz clock, whereas the Motor Control module requires the 10 kHz clock. The VHDL source code for this module is provided in Appendix A.

[Figure 13 blocks: Clock_25MHz enters the Clock Division Module, which outputs Clock_1MHz, Clock_100KHz, Clock_10KHz, Clock_10Hz and Clock_1Hz.]

Figure 13: Block diagram of clock division module

Figure 14: Flow chart of Clock Division module
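The counter-based division the module performs can be modeled in a few lines. This sketch assumes a simple toggle-every-half-divisor counter, matching the 5-cycle toggling shown in figure 16 for the 100 kHz to 10 kHz stage; the real logic is the VHDL in Appendix A.

```python
def divide_clock(n_cycles, divisor):
    """Model a counter-based clock divider: the output level toggles
    every divisor/2 input cycles, yielding a square wave at f_in/divisor."""
    out, level, count = [], 0, 0
    half = divisor // 2
    for _ in range(n_cycles):
        count += 1
        if count >= half:
            level ^= 1      # toggle the output
            count = 0
        out.append(level)
    return out

# 100 kHz input divided by 10 -> 10 kHz: the level toggles every 5 input
# cycles, as in figure 16.
wave = divide_clock(20, 10)
```

Odd divisors (such as 25, for the 25 MHz to 1 MHz stage) need the counter to alternate between 12 and 13 cycles per half-period, which is why figure 15 shows a toggle at 13 cycles.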

Clock division details. Clock change within 20 µs:

Figure 15: Clock changes at 13 cycles of the 25 MHz clock (for the 1 MHz clock)

Clock change within the 1 ms range:

Figure 16: Clock changes at 5 cycles of the 100 kHz clock (for the 10 kHz clock)

Clock change within the 20 ms range:

Figure 17: Clock changes at 5 cycles of the 10 kHz clock (for the 1 kHz clock)

Ring Detect Module
This module takes its input from the ring detector circuit and the DTMF decoder and provides output to the relay for closing or opening the telephone circuit (figure 18 shows the diagram of the module; figures 19 and 20 show the timing diagram and the flow chart respectively). As soon as

a ring is detected, this module provides a 0 V signal to the relay and thus closes the circuit. When it senses the DTMF code for digit 0, it opens the switch again. However, we encountered a small problem while disconnecting the phone: the FPGA device kept the stored code for digit 0 even after the call was disconnected, so the next time we dialed up it would connect and disconnect on its own. This problem was resolved by resetting the signals in the FPGA device on the falling edge of the SB signal (SB goes high when a valid code is detected by the DTMF decoder) [19].

[Figure 18 blocks: SB, Clear, Ring_Detect and D (touch tones) enter the Ring Detect Module, which outputs Ring_Connect.]

Figure 18: Block diagram of Ring Detect Module
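The connect/disconnect behavior just described, including the falling-edge-of-SB reset that fixed the redial bug, can be captured in a small behavioral model. This is a sketch with abstract digit values, not the VHDL of the actual module.

```python
class RingDetectModule:
    """Behavioral model: close the line relay on a ring, reopen it when
    digit 0 arrives, and clear the latched code on the falling edge of
    SB (the decoder's valid-code strobe)."""

    def __init__(self):
        self.connected = False
        self.latched = None
        self.prev_sb = 0

    def step(self, ring, sb, digit):
        if self.prev_sb and not sb:
            self.latched = None        # reset stored code on falling SB
        if sb:
            self.latched = digit       # latch the valid decoded digit
        if ring and not self.connected:
            self.connected = True      # answer: 0 V to the relay
        if self.connected and self.latched == 0:
            self.connected = False     # hang up on digit 0
        self.prev_sb = sb
        return self.connected

m = RingDetectModule()
```

Without the falling-edge reset, the latched 0 would survive the hang-up and the next incoming call would disconnect itself immediately, which is exactly the bug observed in hardware.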

Figure 19: Ring Detect Timing Diagram


Figure 20: Ring Detect Module Flow Chart

Robot Control Module
This module programs the movement of the robot according to the detected DTMF signals. It determines the path of the robot and provides signals to the servo motor control module, controlling the direction and duration of running the motors (figure 21 shows the diagram of the module, figure 22 the flow chart, and figures 23-27 the timing diagrams). For example, if we press 7, the robot moves in the forward direction for 4 seconds; if we press 8, the Phonebot moves in the reverse direction for 4 seconds; and if we press #, the Phonebot comes to a complete stop. We also added a few bump switches to the Phonebot. These bump switches send a high signal to the FPGA when they are bumped. Depending upon which bumper detects the switch closure, the FPGA device determines where the collision occurred and issues a signal to the Phonebot to change the direction of its path.
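The digit-to-motion mapping described above amounts to a small dispatch table. Only the 7, 8 and # entries come from the text; the fallback behavior for other digits is an assumption for illustration.

```python
# Digit-to-command table from the Robot Control description. Only the
# mappings for 7, 8 and '#' are stated in the text; anything else is
# treated here as a hypothetical "ignore".
COMMANDS = {
    "7": ("forward", 4.0),   # forward for 4 seconds
    "8": ("reverse", 4.0),   # reverse for 4 seconds
    "#": ("stop", 0.0),      # complete stop
}

def robot_command(digit):
    """Translate a decoded DTMF digit into (direction, duration_s)."""
    return COMMANDS.get(digit, ("ignore", 0.0))
```

In the FPGA this is a case statement on the decoder's 4-bit output rather than a dictionary lookup, but the decision structure is the same.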

[Figure 21 blocks: Clk_10Hz, SB, D and the sensors (S1, S2, S3) enter the Robot Control Module, which outputs lmotor_speed, rmotor_speed, lmotor_dir and rmotor_dir.]
Figure 21: Robot Control Module Block Diagram

Figure 22: Robot Control Module Flow Chart

Figure 23: Robot Control Timing Diagram (a)


Figure 24: Robot Control Timing Diagram (b)

Figure 25: Robot Control Timing Diagram (c)

Figure 26: Robot Control Timing Diagram (d)


Figure 27: Robot Control Timing Diagram (e)

Motor Control Module
The Phonebot has two servo motors to drive its movement. First we had to hack these motors to turn them into DC gearhead motors; the information on hacking the servos was obtained from the Mekatronix manual for the Talrik II robot [5]. The hacked servos work on Pulse Width Modulation (PWM) [20]. This module provides the robot with pulses at specific times. If we provide a pulse of 1 to 1.5 ms, the motor spins in one direction, and if we provide a pulse of 1.5 to 2 ms, the motor spins in the other direction. The pulses have to be issued at intervals of about 20 ms. This module implements the generation of this timing (figure 28 shows the block diagram of the module; figures 29 and 30 show the flow chart and the timing diagrams respectively).
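The pulse-width rules above can be summarized in a short model. The direction labels ("cw"/"ccw") are placeholders, since the text only says the two ranges spin the motor in opposite directions; 1.5 ms is taken as the neutral point of a hacked (continuous-rotation) servo.

```python
FRAME_MS = 20.0   # pulses repeat roughly every 20 ms

def servo_direction(pulse_ms):
    """Classify a hacked-servo drive pulse: ~1-1.5 ms spins one way,
    ~1.5-2 ms the other; exactly 1.5 ms is assumed to be neutral."""
    if not 1.0 <= pulse_ms <= 2.0:
        raise ValueError("pulse outside the servo's 1-2 ms range")
    if pulse_ms < 1.5:
        return "ccw"    # direction label is an assumption
    if pulse_ms > 1.5:
        return "cw"     # direction label is an assumption
    return "stop"

def duty_cycle(pulse_ms):
    """Fraction of the 20 ms frame occupied by the pulse."""
    return pulse_ms / FRAME_MS
```

The FPGA's 10 kHz clock (0.1 ms resolution) is what allows these millisecond-scale pulse widths to be generated accurately.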

[Figure 28 blocks: Clk_10KHz, Lmotor_speed, Rmotor_speed, Lmotor_dir and Rmotor_dir enter the Motor Control Module, which outputs the lmotor and rmotor drive signals.]

Figure 28: Motor Control Block Diagram


Figure 29: Motor Control Flow Chart

Figure 30: Motor Control Timing Diagrams

d) Assembling the various parts of the Phonebot and combining the different VHDL modules
The final task in the project involved assembling the different modules together. We had to establish a common ground for the FPGA device [21,22], the circuit we designed for ring detection, and the DTMF decoder; the Altera board was not detecting the decoded signals produced by the DTMF decoder until a common ground was implemented. The main controller program was implemented in structural format, combining the various components. The VHDL source code for combining all the different modules is provided in Appendix E.

1.6 Economic Analysis

The overall cost of the Phonebot was very low. The following table (table 2) shows the hardware expenses in U.S. dollars.

Part                                Qty     Per unit   Total
*Servo motors and wheels             -         -       10.00
*FPGA chip (Flex 10K70)              -         -       N/A
DTMF decoder (Teltone M-8870-01)     1        1.50      1.50
Relay switch (SPST 90-2323)          1        2.40      2.40
Oscillator                           1        3.00      3.00
Capacitors                           5        0.60      3.00
Resistors                            8       ~0.50      4.00
Diodes                               3        0.50      1.50
Wires/circuit board                  1 pk     5.00      5.00
Sensors                              4        2.00      8.00
*Wood                                -         -       10.00
9V adapter / Ni-MH battery           1       15.00     15.00
TOTAL COST (estimate)                                 $63.40 + FPGA chip

Table 2: Hardware expenses
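As a sanity check on table 2, the line items (with the starred items entered at their estimated totals) do sum to the quoted figure:

```python
# Line items from table 2, in USD: qty x per-unit where given, with the
# starred items entered as estimated totals. The FPGA chip is excluded.
items = {
    "Servo motors and wheels (est.)": 10.00,
    "DTMF decoder":        1 * 1.50,
    "Relay switch":        1 * 2.40,
    "Oscillator":          1 * 3.00,
    "Capacitors":          5 * 0.60,
    "Resistors":           8 * 0.50,
    "Diodes":              3 * 0.50,
    "Wires/circuit board": 1 * 5.00,
    "Sensors":             4 * 2.00,
    "Wood (est.)":         10.00,
    "9V adapter/battery":  1 * 15.00,
}
total = sum(items.values())   # 63.40, excluding the FPGA chip
```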

Prices were quoted from RadioShack's website (http://www.radioshack.com). * Estimated price, for comparison purposes only. Assuming the FPGA chip costs around $100, we can build a Phonebot for about $163.40. Compared to Cye, which costs $800.00, or Aibo, which costs $2500.00, the Phonebot is very cheap. If this robot were to be mass-produced, these figures would drop significantly.

1.7 Conclusions and Future Directions

The idea of the Phonebot evolved from our original idea of designing a wirelessly controlled robot. It was during our research phase that we started considering the idea of controlling a robot via a telephone line. We encountered various problems, as described under the various modules. There are several features that could be added to the Phonebot. Instead of the DTMF decoder, a voice encoder/decoder could be used [23]. Similar to a DTMF decoder, the voice decoder would recognize certain frequencies, and depending upon those frequencies the Phonebot could be programmed to do certain tasks. This feature could also be used for security purposes. Using more sensors and visual feedback would make the Phonebot more animated and able to carry out more tasks. Integration with the Internet is another possibility that would provide additional access to the Phonebot. A movie showing the Phonebot can be seen at

http://www.bridgeport.edu/cse/projects/phonebot/index.html, while appendices A to E contain the controlling software of the robot.

2. Internet Based Software Library for the SIR-1 Serial Port Controlled Robot

The idea of web-based control has been envisioned since the first days of networked computing. Being able to execute operations from remote locations is an active and desired capability in many fields, such as robotic manipulation. This project presents a complete web-based control solution for a manipulator, exemplifying one of the many possibilities that remote automation encompasses. By facilitating web-based control solutions, manipulators take an immense step towards the world of distance learning and remote manipulation. The library created and presented in this section can easily be mirrored on a web server, allowing control of the SIR-1 manipulator from any point on the globe. Position and image feedback for the robot are possible too. A comprehensive API for the control of the SIR-1 robot (figures 31A and 31B) was developed. Available functions include direct/inverse kinematics computations, serial port communication interfacing, and link speed control. One of the main challenges is that the SIR-1 controller does not report any robot status messages; several techniques are applied to overcome this drawback. The API can support an indefinite number of port connections and can thus control a theoretically indefinite number of SIR-1 robots. For the purposes of this project, two robots with their respective controllers were used. Due to the high availability of serial ports on standard PCs, this API can be deployed virtually anywhere and in any environment, including the Internet, for almost any application [24]. To simplify deployment, a Visual Basic port was developed; it allows for easy use under Windows NT and in any web-based application.
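The serial layer of such an API reduces to framing commands as CRLF-terminated ASCII strings and writing them to the port. A minimal sketch, in which the command mnemonic "GO" and the port settings are purely hypothetical (the real opcode syntax is defined in the SIR-1 manual):

```python
def format_sir1_command(cmd):
    """Frame a controller command as the CRLF-terminated ASCII string
    the SIR-1 serial protocol expects. The mnemonic itself is a
    placeholder; real opcodes come from the SIR-1 operating manual."""
    return (cmd + "\r\n").encode("ascii")

# With a serial library (e.g. pyserial, assumed available) the frame
# would then be written to the controller's port; the port name and
# baud rate below are assumptions, not values from the text:
#
#   import serial
#   with serial.Serial("COM1", 9600, timeout=1) as port:
#       port.write(format_sir1_command("GO"))

frame = format_sir1_command("GO")
```

Because the controller sends no status back (see the problems listed below in this section), the library cannot confirm delivery by reading a reply; it must track robot state on the PC side.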

Figure 31A: The SIR-1 Robot

Figure 31B: The SIR-1 Robot

2.1 The SIR-1 Robot

Generalities

The SIR-1 is a mechanically simple instructional robotic arm with 5 degrees of freedom, all rotational links. It features 6 electric motors (1 for each link and 1 for the end-effector, by default a two-finger gripper). The work envelope is rather limited. Each link's movement is bounded by one or two limit sensors. With the exception of the end-effector roll link, which has an optical limit sensor implemented with a photodiode, the sensors are simple mechanical switches. Activation of any such switch causes the controller to abort any current commands and stop the robot. The SIR-1 is controlled by a controller box via a 50-wire cable. The wires power the motors and connect the limit switches to the electronics within the controller box.

Physical Characteristics

Below (figure 32) is a schematic representation of the SIR-1 robot according to the operating manual. Figure 33 depicts the workspace of the robot. This data has been used for all mathematical computations and for conversions within the function library.

Figure 32: SIR-1 Schematic Representation

Figure 33: SIR-1 Workspace

Controller Unit

The SIR-1 features a controller unit that coordinates the duration and sequence of motor powering to generate movement. It also monitors the limit sensors and interrupts any movement once one has been activated. If any switch remains closed even after the controller unit attempts to back off the link that reached its limit, the entire robot is suspended and an error code is generated. The unit and robot must be reset in this situation. Attached to the controller unit is a programming pad used for moving the links and programming sequences of movements. The controller unit and the programming pad are pictured in figures 34 and 35, respectively.

Figure 34: SIR-1 Controller

Figure 35: SIR-1 Programming Pad

The controller unit features an RS-232 serial interface that can be used to connect it to any PC equipped with a serial port. Via this interface, commands can be sent to the robot in a manner similar to using the programming pad. For the purpose of this project, a 1-to-1 serial cable has been used to connect the controller unit to a PC running Windows 98. Commands are sent via the serial link as CRLF-terminated ASCII strings. The structure of these strings is explained below in the Programming section.

Software Interface Problems

Several important issues arose when attempting to create the function library. They are summarized below.
• No information about link position or status is received from the controller. Although the controller itself monitors the limit switches, it does not convey this information to the serial interface. The PC and the software running on it thus have no way of "knowing" whether the command that was issued was carried out successfully, or whether any link has reached its limit [25].
• The home configuration can be changed via the hardware, so the software must be updated with the current home position. Otherwise, the software cannot be aware of the real status of






the links, and may issue commands that would move a link outside the work envelope, causing the controller unit to abort and halt the robot. The robot must thus be homed before receiving any move commands from the software. This can be done by the program itself, as the controller unit is able to receive homing commands through the serial interface.
• Movement commands do not specify magnitude in degrees, but in "steps". However, for direct or inverse kinematics functionality, we are interested in specifying movement magnitude in degrees. Moreover, the number of steps each link can move is unlimited: the controller unit will simply stop the movement if the number of steps specified in the move command exceeds the physical limits of the robot. To overcome this drawback, a simple proportional conversion from degrees to steps is performed by the software library. The limits specified in the robot's manual are used, and the number of steps needed to reach those limits from the home position has been determined experimentally.
• Last but not least, there is the inverse kinematics problem. The SIR-1 has a non-standard movement pattern that is described in the Mathematics section. We were able to overcome this drawback and, as it turns out, the IK problem is actually simplified in the case of the SIR-1, as described below.

2.2 Mathematical Model

Differences Between the Classic Model and the SIR-1

The classic Denavit-Hartenberg model for direct kinematics derivation assumes that as one link moves, modifying its angle with respect to the robot's system of coordinates and to the previous link, all other links maintain a constant angle with respect to each other. Thus, all links following the one that is moving change their angle with respect to the horizontal and to the general system of coordinates, as exemplified in figure 36 below [26,27].

Figure 36: Modifying one link's coordinate, the others maintain constant angles relative to each other In the case of the SIR-1 however, due to its construction principles, as one link moves all other links maintain a constant angle with respect to the general system of coordinates. Figure 37 should clarify this concept.


Figure 37: SIR-1-specific angle changes with the movement of one link

2.2.1 Deriving the Direct Kinematics Equations

At first glance this non-standard model makes things more complicated. In practice, however, it simplifies the process, as the coordinates of each link no longer depend on multiple angles, but only on one. Below (figure 38) is a geometrical model of the SIR-1 robot. A set of direct kinematics equations can be derived simply by inspection.


Figure 38: Geometrical Model of SIR-1

The set of equations resulting from this model is:

x = (lS cos δS + lE cos δE + lP cos δP + lG sin δP sin δR) cos δB
y = (lS cos δS + lE cos δE + lP cos δP + lG sin δP sin δR) sin δB
z = lB + lS sin δS + lE sin δE + lP sin δP − lG cos δP cos δR

2.2.2 Deriving the Inverse Kinematics Equations

Quite obviously, this is a problem with multiple solutions, as we are confronted with 3 equations in 5 unknowns (corresponding to the 5 angles of the links). However, in most cases we will be interested in reaching a certain point with a specified pitch and roll. For instance, in order to pick

up an object placed in the horizontal plane, we would want the gripper aligned vertically in most instances (roll = 0°, pitch = −90°). This causes two of the variables to become constants, and the inverse kinematics problem reduces to 3 equations in 3 unknowns, which has a unique solution (as we will see, we actually arrive at a double solution, but due to the simplicity of the SIR-1, each point in the work envelope can only be reached in one configuration, and one of the solutions can be discarded). We begin by collecting all constant terms (those involving only the link lengths and the fixed roll and pitch angles) into constants:

(1) x = (lS cos dS + lE cos dE + lP cos dP + lG sin dP sin dR) cos dB = (lS cos dS + lE cos dE + K1) cos dB
(2) y = (lS cos dS + lE cos dE + lP cos dP + lG sin dP sin dR) sin dB = (lS cos dS + lE cos dE + K1) sin dB
(3) z = lB + lS sin dS + lE sin dE + lP sin dP − lG cos dP cos dR = lS sin dS + lE sin dE + K2

where K1 = lP cos dP + lG sin dP sin dR and K2 = lB + lP sin dP − lG cos dP cos dR.

Next, we divide equations (2) and (1); this eliminates the entire parenthesis (which is identical in both equations):

y / x = sin dB / cos dB

and from here

dB = tan⁻¹(y / x)

Since angle dB is now known, we can replace all terms in the equations that contain it by constants:

lS cos dS + lE cos dE + K1 = x / cos dB = y / sin dB = T1

As can easily be noticed, one of the two fractions could be undefined at dB = 0° or 90°, but this is not a concern, as we can use either one of them to define T1. Therefore,

cos dS = (T1 − K1 − lE cos dE) / lS

and by replacing T1 − K1 by R1,

cos dS = (R1 − lE cos dE) / lS

Similarly, from

lS sin dS + lE sin dE + K2 = z

by replacing z − K2 with R2,

sin dS = (R2 − lE sin dE) / lS

We can now square and add the two equations, thus eliminating the sine and cosine of dS, whose squares add up to 1:

(R1 − lE cos dE)² + (R2 − lE sin dE)² = lS²

By expanding and rearranging the equation, we isolate all constant terms on the right side, obtaining

R1 cos dE + R2 sin dE = (R1² + R2² + lE² − lS²) / (2 lE)

Finally, replacing the constant right side by R,

R1 cos dE + R2 sin dE = R

We are now left with a trigonometric equation in one unknown. In order to solve it, we use the trigonometric identities

cos dE = (1 − t²) / (1 + t²)
sin dE = (2t) / (1 + t²)

where t = tan(dE / 2). By expanding and rearranging this new equation, we obtain

R1 − R1t² + 2tR2 = R + Rt²

This is a quadratic equation with 2 solutions, but we discard one as being outside the work envelope of the robot. The solution we keep yields

dE = 2 tan⁻¹ t

which we can further use in any of the equations above to obtain a value for dS:

dS = sin⁻¹[(z − K2 − lE sin dE) / lS]

Thus, we are able to derive solutions for all three angles that define the position of the end effector.

Programming

Generalities

The SIR-1 controller accepts commands from the serial interface as CRLF-terminated ASCII strings. Prior to issuing any command, a handshaking sequence has to be performed: once the character ESC has been sent, the controller responds with an 8-bit integer whose bits indicate the status of each link (1 for active, 0 for inactive). This is the only status update the controller sends throughout the process.


Commands take the form of a command letter followed by a comma-separated list of parameters. For example, in order to move the first link 100 steps, the following string would be sent to the serial port:

M 100, 0, 0, 0, 0, 0

In this case, the trailing zeros can be left out. Programming the controller via the serial port is thus reduced to assembling a command string in a buffer and sending it to the port character by character.

Available Commands

The SIR-1 has the following set of commands available for programming via the serial interface:

M -- moves links a certain number of steps
S -- sets the speed of links
P -- pauses the robot
H -- sends robot to home position
T -- switches to "teach mode"

This is a relatively scarce set, which does not allow for complex functionality such as direct and inverse kinematics. In order to implement these higher-level capabilities, several enhancements have been added in software by introducing additional overhead and conversions on the PC.

2.3 Enhancements to the Available Commands

Steps-degrees conversion

In order to control link movement by degrees rather than steps, a simple proportional correspondence between the number of steps each link can move and the angle it covers has been implemented. The maximum angle has been obtained from the work envelope specified in the robot documentation, and the maximum number of steps has been determined experimentally.

Current and limit positions

Since the controller does not convey any information regarding current link positions, a mechanism is necessary to keep track of the current status of each link. A one-dimensional array is updated with the number of steps each link has moved after every move command [28]. Since the maximum number of steps each link can move is known, the same array can be used to check whether any link would exceed its limit as a result of a movement command.

Inverse Kinematics

Inverse kinematics functionality has been implemented using the model described in the Mathematics section above.
Link movement can thus be controlled not only by angle, but also by absolute rectangular coordinates.

Programming Platform

ANSI C has been chosen as the programming platform. This allows for deployment on older Windows 3.x or DOS systems. The library has been tested under Windows 9x. Due to its use of direct port output instructions, designed to work under DOS, the library will not function without

modification on Windows NT, which has additional protection mechanisms for accessing hardware resources; the required changes are minor. A Visual Basic port of the library has been implemented. This allows for web deployment and robot control over the Internet.

Function Library Abstract

Below is a summary of all functions present in the library.

int SIR1_Handshake();
Initiates communication with the robot. This function must be called before any other command can be issued. Returns 0 if communication is established, 1 if any of the links are inactive.

int SIR1_MoveLink(link L, int degrees);
Moves one link a specified number of degrees. Returns 0 if successful, 1 if link L is not active, or 2 if the number of degrees exceeds the link limits. If L = G (the gripper is moved), the integer degrees specifies millimeters of gripper opening.

int SIR1_MoveRobot(int AlphaBase, int AlphaShoulder, int AlphaElbow, int AlphaPitch, int AlphaRoll, int PercentGripper);
Moves all links a specified number of degrees. Returns 0 if successful, 1 if any of the links is inactive or any of the numbers exceeds the link limits.

void SIR1_HomeRobot();
Sends the robot to the HOME position. No return value.

int SIR1_GotoXYZ(int X, int Y, int Z, int PITCH, int ROLL);
Moves the center of the gripper fingers to coordinates X, Y, Z, with a specified roll and pitch of the gripper segment. Returns 0 if successful, 1 if the inverse kinematics equations have no solution (the point is outside the work envelope).

int SIR1_SetSpeedLink(link L, SPEED S);
Sets the speed of a link to a specified value (valid values are 0 to 7, of type SPEED). Returns 0 if successful, 1 if link L is inactive.

int SIR1_SetSpeedLinks(SPEED B, SPEED S, SPEED E, SPEED P, SPEED R, SPEED G);
Sets the speed of each link to a specified value (valid values are 0 to 7, of type SPEED defined at the top of sir.h). Returns 0 if successful, 1 if any link is inactive.
int SIR1_SetSpeedRobot(SPEED S);
Sets the speeds of all links to the specified value. Returns 0 if successful, 1 if any of the links is inactive.

void SIR1_Pause(int TIME);
Pauses the robot for TIME/100 seconds. No return value.

int SIR1_LinkPosition(link L);

Returns the position of link L (in degrees) relative to the absolute system of coordinates defined for the robot (see the diagrams in the manual for this function library). For the gripper, the percentage of opening is returned.

void SIR1_SetPort(int port);
Sets the address of the port being used, e.g., COM1, COM2, 0x3f8, 0x2f8, etc. No return value.

int SIR1_IsActive(link L);
Checks whether link L is active. Returns 1 for active, 0 for inactive.

2.4 Conclusions

Although the SIR-1 is a relatively simple robot, it can accomplish complex tasks, thanks to its high repeatability (0.6 mm according to the specifications). During tests, the robot was able to pick up and deposit a 9V battery, back and forth, ten times in a row. Given this kind of performance, the SIR-1 can be used to operate, for example, switches and buttons. Under such circumstances, a software library like the one presented as part of this project can greatly increase the flexibility and capability of the robot. The project clearly demonstrates the wide range of possibilities for using a robot such as the SIR-1 for web/Internet-based remote automation through the implemented API [29,30]. The applications range from pressing buttons, flipping switches or remotely controlling other similar interfaces, to distance learning applications.

3. Internet Based Computer Vision Framework for Security, Surveillance and Tracking Applications

Advances in CCD devices and fast general-purpose microprocessors have made it possible to build computer vision systems without expensive special equipment. Reliable computer vision solutions are no longer specific to an industry or automation application, but are of interest to the everyday consumer. Web-cams are more and more capable of functions that resemble computer vision, whether through enclosed software packages or hardware extensions.
For example, Lego (www.lego.com) offers robotics kits, built from classic parts, that include surprisingly efficient vision detection packages. This project presents a possible framework for quickly designing and implementing vision systems that perform useful real-time tasks using only off-the-shelf hardware.

3.1 The Problem Domain

The most general goal of any computer vision system is the detection and identification of an object model or models in an input image or stream of images [31]. Such general tasks are typically difficult to achieve with satisfactory speed and accuracy. But in different applications certain assumptions can be made about the input images. For example, these assumptions could include:
- Lighting condition in input images is known and/or constant.
- Object orientation in the input image is known.
- It is known that objects are not rotated out of the vision plane.
- It is known that objects are not rotated in the vision plane.

- The object is rigid.
- The object has constant shape.
- The object scale is known.
- The background of the image scene is known.
- A sequence of images is available.
(and/or possibly several others). Such assumptions make it possible to build computer vision systems that operate in real time and give acceptably accurate results [32,33].

3.2 Architecture

Each algorithm dealing with a well-defined and solvable problem is implemented as a building block. A system can be built out of blocks connected in a simple pipe architecture, where the output of a block is fed to the input of other blocks. Each system would have image acquisition and output analysis, with vision processing implemented in between. This allows for quick system implementation without modifying the underlying components. Components can be enhanced or replaced without affecting the other parts of the system [33,34]. Generally, each system consists of three broad components:
- Early processing
- Feature extraction
- Feature matching

Early processing is the stage in which images are enhanced without trying to interpret them. Components that can be used in this stage include:
- Gaussian filters
- Histogram normalization
- Color filtering

Feature extraction selects certain feature points in the image that are relevant. Sample components within this module include:
- Edge detection
- Line or ellipse detection
- Region growing
- Region splitting
- Minmax point extraction

Feature matching is the task of determining whether model features are present in the feature set extracted from the image, and how well they match.

3.3 Sample System

Requirement

The system has to detect a face in the scene and follow it. Scaling and rotation in the vision plane are allowed. Small to medium variation in lighting is tolerated. The background is arbitrary. Rotation out of the vision plane and significant lighting variations are not allowed.

3.4 System Specifications

The processing chain, shown in figure 39, is: acquisition → color filtering → conversion to monochrome → Gaussian blur → thresholding → MinMax feature extraction → heuristic significant-feature detection → feature matching → match result.

Figure 39: System Diagram

The steps from color filtering to thresholding (figure 39) constitute early vision processing. They only enhance the image for later processing, without extracting any higher-level feature data. The color filter removes regions whose colors are considered impossible in the model; for example, the sky and forest background are removed in this step. Conversion to monochrome makes the image suitable for processing with the other algorithms. Gaussian blur removes noise and many small, insignificant image features; it significantly lessens the number of features returned by the feature extractor. Thresholding removes parts of the image that are too dark and therefore cannot contain objects for detection. Values for the color filter, threshold and Gaussian blur can be specified at design time or extracted from the model [35,36]. The last four steps constitute high-level vision processing. The feature extractor scans the image and finds the parts that are most significant for detection. In this sample system, MinMax search was chosen for its speed and simplicity: local minimum and maximum points of the image are selected as the most important points and used in the feature-matching algorithm. Features that are not relevant for detection can be removed by applying a heuristic filter. The relevance of a feature in this filter is defined as the energy carried by the min or max point, where energy is the summation of the pixel values affected by that minmax point. The feature matcher performs an optimized exhaustive search between features detected in the image and in the model and assigns each pair combination a match value. Matching is based on evaluating angles between features, proportions of distances and proportions of amplitudes. In this way, the matching function remains effective when rotations in the vision plane, scaling and uniform lighting variations are present.
The combination with the highest match value is chosen as a possible detection candidate. If the match value is higher than the threshold value, detection is successful; otherwise, the model is assumed not to be present in the image.

3.5 System Test Results

The original image (figure 40) has been passed through color filtering (figure 41), blurring (figure 42), thresholding (figure 43), MinMax feature extraction (figure 44) and heuristics to remove less significant features (figure 45).


Figure 40: Original Image

Figure 41: Color filtered image. Forest and sky background get filtered out because they are composed of colors not present in the human face.

Figure 42: Blur. Small features and noise get filtered out.


Figure 43: Threshold. Dark areas are removed.

Figure 44: MinMax feature extraction. Relevant points get marked, but so do many other, unimportant points.

Figure 45: Heuristics to remove less significant features. The importance of a feature is measured in terms of the energy it carries. After filtering, the remaining features are significant, and the combinations of all of them can be examined quickly. Detection takes less than one second on a Pentium PC with VGA-resolution color images when detecting one model. Lower-resolution images are sufficient for many applications, so the system could be used in real time for the detection of more than one model [37].

3.6 Applications and Modifications

The modular design of the framework allows easy modifications and extensions to the system, such as using it as a web-based robotic eye that can follow faces. Sample modifications to improve the quality of detection include Fourier filtering and edge enhancement in early processing. In late processing, different feature extractors can be used, such as Hough-transform-based methods, line following, snakes, region splitting and region growing. The exhaustive-search matcher can be efficiently replaced with neural networks or a heuristic search. Background subtraction and light elimination could also improve the detection rates [38]. To further enhance functionality, the system could be extended with multi-model matching, which would make possible the detection of objects rotated out of the vision plane.

4. Internet Controlled Satellite Transponder (using a remotely controlled robotic manipulator)

4.1 Objective

The objective is to control a satellite transponder from a remote location by using a robotic manipulator to mechanically change controls on the receiver or on a UHF/infrared remote control of the receiver. The reasons a robot was used to mechanically operate the controller unit, instead of hard-wiring the controller unit (or the UHF remote control) to the server, were: (1) the process of hard-wiring can permanently damage the expensive controller unit and subsequently render the transponder useless [39,40]; (2) the robot can easily be reprogrammed to adapt to different controller equipment, such as working with a remote controller. To establish communication between the remote client and the robot via the server, and to enable the robot to carry out the desired commands without exceeding its limited workspace or running into obstacles, the following tasks need to be implemented:
1 - Interfacing between robot and server (RS-232)
2 - Interfacing between receiver and server
3 - Interfacing between the server and the Internet
4 - Teaching the robot (trajectory planning and generation)
A flowchart of the proposed system is shown in figure 46.

(Figure 46 components: satellite transponder, receiver, remote PC, the Internet, remote control / video interface / RS-232 links, and the robotic manipulator.)

Figure 46: System Schematics

Interfacing between the robot and the server is carried out via the RS-232 serial port, a standard feature on the main board of any IBM-compatible PC. The robot's controller box simply receives commands as ASCII strings through the RS-232 connection. An important issue that arises is that communication is one-way only: once the computer issues commands to the robot, the controller box does not send back a signal to indicate successful or unsuccessful command interpretation and execution [41]. To overcome this, it is important that a flawless trajectory be planned during initialization of the robot and implemented in a manner such that obstacles do not pose a problem. Sample commands (among others) that can be issued to the robot include:
GC: Closes the gripper
GO: Opens the gripper
NT: Nests the robot
Interfacing between the receiver and the server is implemented through a special interface card, which takes the analog feed from the transponder's receiver and converts it into a digital format ready to be broadcast over the Internet. The PC card receives the feed through a standard coaxial cable, such as that used for cable television. The transponder can point at different 'look-angles', and at each look-angle it can receive feeds at numerous frequencies. Interfacing between the server and the Internet is relatively simple, since it involves setting up and configuring a web server to serve HTTP requests from clients. Security is an important issue here, and user authentication is vital, since it is undesirable to have unauthorized individuals controlling the robot remotely and causing damage to the equipment. It is also important to ensure that only one individual is controlling the robot at a time. Finally, the robot must be initialized so that it starts at a position known to the software.
Initialization also involves defining key positions in the robot’s workspace and defining trajectories that the robot should travel along. Lastly, the robot must be ‘taught’ about its surroundings and the environment that it will be operating in. This can be done by fixing the position of the remote control unit and defining its position relative to that of the robot. This allows the robot to ‘know’ where the controller unit is

and where particular controls on the unit are. The robot must be nested prior to operation to ensure that it always starts from the same point, thus allowing error-free software control.

4.2 Constraints and Limitations

- Limited set of commands that the robot's controller box can understand
- Only one-way communication between the server and the robot's controller box
- The web server's clients have no access to the serial port, which is the only means of talking to the robot
- Time constraints
- Compatibility issues forcing the use of a certain operating system

How were the above limitations overcome to achieve the objective? The following tasks had to be carried out:
1 - Configure the server to serve HTTP requests.
2 - Write software allowing HTTP clients to issue commands to the robot via the RS-232 port.
3 - Write software to initialize the robot and define the work area.

Server configuration was the least complicated task: setting up an NT server with the IIS web server to process requests, with security authentication and access control enabled, went smoothly. An ActiveX control had to be written in Visual C++; this component allows Active Server Pages to run executables on the server itself. An executable called 'writetoport.exe' performed the simple task of taking command-line arguments and sending them to the robot as commands via the serial port. This application was first implemented in C++ under a DOS environment, but when it was ported to Windows NT, the operating system's security management features did not allow direct access to the computer's serial ports [42]. The application had to be rewritten in a Windows environment using Visual Basic.

4.3 Robot Initialization

As mentioned previously, robot initialization is vital to ensure error-free operation, especially since the robot's controller unit sends no signals back to the server. The position of the remote control needs to be fixed, and positions within the robot's work area need to be defined.
There are two basic types of movements:
• Point-to-Point (PTP) movement
• XYZ movement

PTP movement involves specifying the joint variables (the joint angles in this case) and changing them according to the desired position of the robot. XYZ movement involves moving the end-effector in an x, y, or z plane; the joint variables are calculated based upon position and desired motion, and accordingly changed and recalculated at every point within the trajectory [43]. XYZ movement is much slower and less accurate than PTP movement, since the robot needs to perform repeated calculations to determine the desired position. Hence, PTP movement was used to minimize the error between the desired trajectory and the actual trajectory. In order to use PTP

movement, the trajectories need to be well defined; each of the joint angles needs to be specified and effected simultaneously to allow smooth trajectories.

4.4 List of Resources (not exhaustive)

- Mitsubishi MoveMaster EX robot
- Satellite transponder and receiver
- Interface card for receiver
- IBM-PC or compatible
- Windows NT 4.0 (server)
- Internet Information Server 4.0
- Languages: C++, Visual C++, Visual Basic, VBScript (for ASP technology)

4.5 Advantages and Potential Uses

- Remote satellite feed for Internet users
- Remote communication
- Distance learning

The project demonstrates the synergy created by combining robotic and computing power. On a larger scale, this concept can be ported to pragmatic and useful web applications using robots.

Bibliography

[1] Tanenbaum, Andrew S. Computer Networks, New Jersey, Prentice Hall.
[2] Internet: http://www.wired.com/news/gizmos/0,1452,32930-3,00.html
[3] Skahill, Kevin. VHDL for Programmable Logic, Addison-Wesley, 1996.
[4] Yalamanchili, Sudhakar. VHDL Starter's Guide, Prentice Hall, 1998.
[5] Doty, Keith L. TALRIKII Assembly Manual, Mekatronix, 1999.
[6] Meyers, Robert A. Encyclopedia of Telecommunications, Academic Press Inc.
[7] Internet: http://www.attitude-long-distance.com/how_phones_work.htm
[8] Internet: http://www.ee.washington.edu/circuit_archive/circuits/F_ASCII_Schem_Tel.html
[9] Internet: http://www.howstuffworks.com/relay.htm
[10] Internet: http://www.d2tech.com/dtmfbenefits.htm
[11] H. Latchman, Sanjeev Tothapilly, C. Salzmann and D. Gillet, Hybrid Asynchronous and Synchronous Learning Networks in Distance and Local Education, Proceedings of the 1998 International Conference on Engineering Education, Rio de Janeiro, August 1998.
[12] D. Gillet, G. F. Franklin, R. Longchamp, and D. Bonvin, Introduction to Automatic Control via an Integrated Instruction Approach, The 3rd IFAC Symposium on Advances in Control Education, Tokyo, Japan, August 1994.
[13] F. Michau and D. Munteanu, Web Server of Pedagogical Documents in Control Engineering, 7th Annual Conference of the European Association for Education in Electrical and Information Engineering (EAEEIE) on Telematics for Future Education and Training, Oulu, Finland, 1996.
[14] Hayati S., Venkataraman S. T.; Design and Implementation of a Robot Control System with Traded and Shared Control Capability; Proceedings IEEE International Conference on Robotics and Automation; 1310-1315; 1989.

40 [15] Hirzinger G., Heindl J., Landzettel K.; Predictive and Knowledge-Based Telerobotic Control Concepts; IEEE International Conference on Robotics and Automation, May 14-19, Scottsdale AZ; 1768-1777; 1989 [16] C. Salzmann, H.A. Latchman, D. Gillet and O. Crisalle, Requirements for Real-time Experimentation Over the Internet, Proceedings of the 1998 International Conference on Engineering Education, Rio de Janiero, August, 1998. [17] Feiner S., Macintyre B., Seligmann D.; Knowledge-based augmented reality; Communications of the ACM, Volume 36, Number 7; 53-62; 1993 [18] Drascic D.; Stereoscopic Video and Augmented Reality; Scientific Computing and Automation, Vol. 9, Number 7, June; 31-34;1993. [19] Ellis S. R., Bucher U. J.; Distance Perception of Stereoscopically Presented Virtual Objects Optically Superimposed on Physical Objects by a Head-Mounted See-Through Display; Proceedings of Human Factors and Ergonomics Society 38th Annual Meeting; 1300-1304; 1994 [20] Dunbar R. M., Klevelant H., Lexander J., Svensson S., Tennant A. W.; Autonomous underwater vehicle communications; ROV `90, The Marine technology Society, 270-278, 1990 [21] Glumm M. M., Peterson L. A.; Remote Operations Using a Low Data Rate Communications Link; DRG Seminar: Robotics in the Battlefield, 6-8 March; 37-55;1991 [22] Grimson W., Lozano-Perez T., Wells W., Ettinger G., White S., Kikinis R.; An automatic registration method for frameless stereotaxy, image guided surgery and enhanced reality; IEEE Conference on Computer Vision and Pattern Recognition Proceedings, Los Alamitos, CA; 430436; 1994 [23] Faugeras O. D., Toscani G.; The Calibration Problem for Stereo; Proceedings of Conference on Computer Vision and Pattern Recognition, Miami Beach, FL, June; 15-20;1986 [24] Ferrell W. R., Sheridan T. B.; Supervisory control of remote manipulation; IEEE Spectrum, October; 81-88; 1967 [25] Rastogi A., Milgram P., Drascic D., and Grodski J. J. 
[1993] Virtual Telerobotic Control Proceedings of the Knowledge-Based Systems & Robotics Workshop, Nov. 14-17, 1993; 261269 [26] Rastogi A. [1995] Design of an interface for telerobotic in unstructured environments using augmented reality, M.A.Sc. Thesis, Department of Industrial Engineering, University of Toronto. [27] D. Garton, R. Rehm, B. Schäfer. A Man-Machine Interface for the Real-Time Control of Robots and the Simulation of Satellite Teleoperations. 14th European Annual Conference on Human Decision Making and Manual Control, June 14-16, 1995, Delft, Netherlands. [28] M. Schlenger, B. Schäfer, W. Rulka, R. Fonseca. Realtime-Simulation of the Dynamics of Elastic Space-Manipulators by MBS-Software. 3rd European In-Orbit Operation Technology Symposium, June 22-24, 1993, ESTEC, Noordwijk, Netherlands. [29] J. Gomez-Elvira, J. Santo-Tomas, T. Munoz, E. de la Fuente, B. Schäfer. Platform for Robotic Simulation for LEO Satellites. 3rd Workshop on Simulators for European Space Programmes, 15-17 Nov 1994, ESTEC, Noordwijk, NL. [30] Freund, E.; Rossmann, J.; Uthoff, J.; van der Valk, U.: Towards realistic Simulation of Industrial Robots, Proceedings of the IEEE/RSJ/GI Intelligent Robots and Systems, IROS, Muenchen, 1994 [31] A. Kosaka, M. Meng, and A. C. Kak, Vision-guided mobile robot navigation using retroactive updating of position uncertainty,' Proceedings of 1993 IEEE International Conference on Robotics and Automation, Vol.2, pp.1-7, 1993.

41 [32] A. Kosaka and A. C. Kak, Fast vision-guided mobile robot navigation using model-based reasoning and prediction of uncertainties,' CVGIP--Image Understanding, Vol. 56, No. 3, November, pp.271-329, 1992. [33] A. C. Kak, K. M. Andress, C. Lopez-Abadia, M.S. Carroll, and J. L. Lewis, Hierarchical evidence accumulation in the PSEIKI system and experiments in model-driven mobile robot navigation, in Uncertainty in Artificial Intelligence (M. Henrion, R. Shachter, L. N. Kanal, and J. Lemmer, Eds), pp.353-369, Elsevier, North-Holland, 1990. [34] K. Gupta, A. P. del Pobil, Practical Motion Planning in Robotics: Current Approaches and Future Directions, John Wiley & Sons, West Sussex, 1998. [35] Y. K. Hwang, N. Ahuja, Gross Motion Planning - A Survey, ACM Computing Surveys 24, no. 3, Sep. 1992, 219-291. [36] P. Isto, Path Planning by Multiheuristic Search via Subgoals, Proceedings of the 27th International Symposium on Industrial Robots, CEU, Milan, 1996, 712-726. [37] P. Isto, A Two-level Search Algorithm for Motion Planning, Proceedings of the 1997 IEEE International Conference on Robotics and Automation, IEEE Press, 2025-2031. [38] Y. Koga, K. Kondo, J. Kuffner, J.L. Latombe, Planning Motions with Intentions, Proceedings of SIGGRAPH 94, ACM SIGGRAPH, 1994, 395-408. [39] A. Li and G. Crebbin. Octree encoding of objects from range images. Pattern Recognition, 27(5):727-739, May 1994. [40] W.E. Lorensen and H. E. Cline. Marching cubes: A high resolution 3D surface construction algorithm. In Computer Graphics (SIGGRAPH '87 Proceedings), volume 21, pages 163-169, July 1987. [41] S-F Chang, J.R Smith, H.J. Meng, H. Wang, and D. Zhong. Finding images/video in large archives. D-Lib Magazine, February 1997. [42] Stavros Christodoulakis and Peter Triantafillou. Research and development issues for largescale multimedia information systems. ACM Computing Surveys, Dec 1995. [43] M.G. Christel, D.B. Winkler, and C.R. Taylor. Multimedia abstraction for a digital video library. 
In ACM Digital Libraries '97, pages 21-29, Philadelphia, PA, 1997.

Appendices

Appendix A

--*********************************************************
--THIS PROGRAM DIVIDES THE 25MHz CLOCK OF THE ALTERA'S   **
--UNIVERSITY BOARD INTO DIFFERENT SLOWER CLOCKS TO RUN   **
--OTHER MODULES WHICH REQUIRE SLOWER CLOCKS              **
--*********************************************************

library IEEE;
use IEEE.STD_LOGIC_1164.all;
use IEEE.STD_LOGIC_ARITH.all;
use IEEE.STD_LOGIC_UNSIGNED.all;

ENTITY clk_div IS
  PORT (
    clock_25Mhz  : IN  STD_LOGIC;
    clock_1MHz   : OUT STD_LOGIC;
    clock_100KHz : OUT STD_LOGIC;
    clock_10KHz  : OUT STD_LOGIC;
    clock_1KHz   : OUT STD_LOGIC;
    clock_100Hz  : OUT STD_LOGIC;
    clock_10Hz   : OUT STD_LOGIC;
    clock_1Hz    : OUT STD_LOGIC);
END clk_div;

ARCHITECTURE a OF clk_div IS
  SIGNAL count_1Mhz : STD_LOGIC_VECTOR(4 DOWNTO 0);
  SIGNAL count_100Khz, count_10Khz, count_1Khz : STD_LOGIC_VECTOR(2 DOWNTO 0);
  SIGNAL count_100hz, count_10hz, count_1hz : STD_LOGIC_VECTOR(2 DOWNTO 0);
  SIGNAL clock_1Mhz_int, clock_100Khz_int, clock_10Khz_int, clock_1Khz_int : STD_LOGIC;
  SIGNAL clock_100hz_int, clock_10Hz_int, clock_1Hz_int : STD_LOGIC;
BEGIN
  PROCESS
  BEGIN
    -- This portion generates a 1MHz clock
    WAIT UNTIL clock_25Mhz'EVENT and clock_25Mhz = '1';
    IF count_1Mhz < 24 THEN count_1Mhz
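The counter-based clock division used in Appendix A can be modeled in a few lines: on each input clock edge a counter advances modulo the divisor (here 25, turning 25 MHz into 1 MHz), and the derived clock is low for the first half of the count and high for the second. This is a behavioral sketch only; the counter-reset and output-toggle details are assumed, since the VHDL listing is truncated at this point.

```python
def clock_divider(n_cycles, divisor=25):
    """Behavioral model of a counter-based clock divider.

    For each of `n_cycles` input clock edges, advance a counter
    modulo `divisor` and emit the derived clock level: low while
    the counter is below divisor//2, high otherwise. One derived
    clock period therefore spans `divisor` input cycles.
    """
    out = []
    count = 0
    for _ in range(n_cycles):
        out.append(1 if count >= divisor // 2 else 0)
        count = 0 if count == divisor - 1 else count + 1
    return out

wave = clock_divider(100)  # 100 edges of the 25 MHz input
# Rising edges of the derived clock: one per 25 input cycles
rising = sum(1 for a, b in zip(wave, wave[1:]) if a == 0 and b == 1)
```

Cascading such dividers (each fed by the previous stage's output) yields the chain of 1 MHz down to 1 Hz clocks that the entity's ports expose.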
