Session T4D

USING ROBOTS IN AN UNDERGRADUATE ARTIFICIAL INTELLIGENCE COURSE: AN EXPERIENCE REPORT

Amruth N. Kumar, Ramapo College of New Jersey, 505 Ramapo Valley Rd, Mahwah, NJ 07430, [email protected]

Abstract - In this paper, we report our experience using robots in the Artificial Intelligence course we taught in Fall 2000. Our objective was to use robots to reinforce the traditional concepts of search and expert systems. We wanted the robots to be simple to build, yet powerful enough to illustrate AI concepts. We discuss our choice of robot, describe the projects we assigned, and list the problems our students encountered carrying out those projects. At the end of the semester, we surveyed our class regarding the use of robots in the course. We discuss the results of this survey, which, we believe, make a strong case for using robots in the AI course.

Index Terms - Artificial Intelligence, Robots, Assessment.

THE COURSE

In Fall 2000, we used robots in our Artificial Intelligence course. This is a junior/senior level course, taken by Computer Science majors in a liberal arts undergraduate school. The course is very traditional in its content: it covers representation and reasoning, with emphasis on search, logic and expert systems. Our objective in using robots was to reinforce the learning of these basic AI tools using an embodied agent. In contrast, robots have been used by other educators as an organizing theme for the various AI concepts [1], as an empirical testbed for philosophical issues in a graduate course [2], and as a bridge between abstract AI theory and implementation [3].

THE ROBOT

We chose the LEGO Mindstorms robot for the course because it is truly "plug-and-play". Students do not have to design circuits, or even solder components, to build robots. They can just snap the controller, motors, sensors and wheels together. We had gathered from students who had used some of the other kits (in other schools) that they had spent most of the semester just constructing their robot. Since our students are liberal arts (and not electrical engineering) majors, we preferred not to put them through the hardship of such construction. Another reason we chose the LEGO Mindstorms robot was that it is inexpensive: only $200 a kit. Therefore, we could ask students to buy their own kit, either individually or in groups of 2 or 3 students. The robot kits are readily available in many toy stores as well as online [4].

The brain of the Mindstorms robot is the RCX brick, which is the controller. The brick has three sensor inputs and three motor outputs. The robot kit contains one light sensor, two touch sensors, two motors, several gears (including a differential), wheels, beams, bricks and assorted pieces, more than 700 components in all. Additional sensors and motors are available for purchase online and are relatively inexpensive.

However, the LEGO Mindstorms robot has its drawbacks too. The RCX brick has very limited memory: 32K, and the user's program cannot be larger than 6K. The user's program is restricted to no more than 32 variables. The robot's original operating system does not support recursion or arrays. We had to design our AI projects to accommodate these rather stringent constraints. But we believe the ease of use and the inexpensive nature of the robot outweigh these disadvantages.

The robot comes with a visual programming language, which does not permit nested control constructs. However, alternative languages and cross-compilers are available for the robot, including NQC (Not Quite C) [5], which is a subset of C. We chose to use NQC with the robot. Other options available for use with the robot include C [6], Ada [7], SmallTalk [8], pbForth [9], Java [10] and Scheme [11].

An excellent resource for LEGO Mindstorms robots is "Dave Baum's Definitive Guide to Lego Mindstorms" [5]. This book describes several robots, including one that follows a line and another that senses and avoids obstacles. The book includes the mechanical construction and programming needed for these robots. Students can adapt these robots for their projects, and focus on building intelligent behaviors into them using AI algorithms.
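To give a flavor of NQC, the following minimal program drives the robot forward until its bumper (a touch sensor) is pressed, then stops and beeps. This is only an illustrative sketch; the port assignments (touch sensor on input 1, drive motors on outputs A and C) are our assumptions, not a required configuration.

    // Minimal NQC sketch: drive until the bumper is pressed, then stop and beep.
    // Assumed wiring: touch sensor on input 1, drive motors on outputs A and C.
    task main()
    {
        SetSensor(SENSOR_1, SENSOR_TOUCH);   // configure input 1 as a touch sensor
        OnFwd(OUT_A + OUT_C);                // start both drive motors forward
        until (SENSOR_1 == 1);               // wait until the bumper makes contact
        Off(OUT_A + OUT_C);                  // stop the motors
        PlaySound(SOUND_CLICK);              // audible acknowledgement
    }

Programs in this style are compiled on a desktop machine and downloaded to the RCX; the projects described below were all programmed this way.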

THE PROJECTS

In the past, we assigned AI projects in LISP. LISP is not necessarily a popular language with students who have been trained in imperative languages, so many students welcomed the opportunity to use a C-like language instead. They were also excited at the prospect of using robots in the course. Using robots introduced "play" and fun into an otherwise staid course, and energized the students. Recall that we wanted to assign projects that reinforced the traditional concepts of search, logic and expert systems. We chose to assign closely tailored (as opposed to open-ended) projects in our course, in order to give students a clear idea of what was expected of them, and to help us formulate a clear grading policy.



We will now describe the projects we assigned, the props we had to build for them, and the problems students encountered when attempting them.

Project 1

Problem Statement: We assigned the first project on blind searches - depth-first and breadth-first search of a tree. Students could begin by assuming that the tree was a binary tree with three levels. They later had to generalize their implementation to deeper trees with an arbitrary branching factor.

The Prop: We built the tree using colored adhesive strips on black foam board. Different colors were used for the nodes of the tree (yellow) and the arcs of the tree (green). Please see Figure 1 for a picture of the binary tree.
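The firmware constraints make even this first project interesting: with no recursion and no arrays, the usual recursive depth-first traversal has to be rewritten iteratively using a handful of scalar variables. The following NQC sketch shows one way to structure such a traversal for the three-level binary tree. It is our illustration rather than the students' code, and the movement subroutines (followArcDown, followArcUp, turnLeft, turnRight) are hypothetical placeholders that a real robot would implement with line-following behavior.

    // Iterative depth-first traversal of a 3-level binary tree on the RCX:
    // no recursion, no arrays, one branch variable per level.
    #define LEVELS 2                // two branching decisions below the root

    int depth;                      // current depth: 0 = root, LEVELS = leaf
    int b0;                         // branch taken at depth 0 (0 = left, 1 = right)
    int b1;                         // branch taken at depth 1
    int cameFrom;                   // branch by which we returned to the current node
    int done;                       // set to 1 once the whole tree has been visited

    sub followArcDown() { Wait(100); }  // placeholder: follow the arc to the chosen child
    sub followArcUp()   { Wait(100); }  // placeholder: turn around, return to the parent
    sub turnLeft()      { Wait(50); }   // placeholder: line up with the left arc
    sub turnRight()     { Wait(50); }   // placeholder: line up with the right arc

    task main()
    {
        depth = 0;
        done = 0;
        while (done == 0)
        {
            // descend along leftmost unexplored arcs until a leaf is reached
            while (depth < LEVELS)
            {
                turnLeft();
                followArcDown();
                if (depth == 0) b0 = 0; else b1 = 0;
                depth++;
            }
            PlaySound(SOUND_CLICK);          // signal that a leaf has been visited

            // backtrack until reaching an ancestor whose right arc is unexplored
            cameFrom = 1;
            while (cameFrom == 1 && done == 0)
            {
                if (depth == 0)
                {
                    done = 1;                // back at the root with both arcs done
                }
                else
                {
                    followArcUp();
                    depth--;
                    if (depth == 0) cameFrom = b0; else cameFrom = b1;
                }
            }

            if (done == 0)
            {
                // explore the right arc of the current node next
                turnRight();
                followArcDown();
                if (depth == 0) b0 = 1; else b1 = 1;
                depth++;
            }
        }
        PlaySound(SOUND_DOUBLE_BEEP);        // traversal finished
    }

Generalizing this scheme to deeper trees or more children per node essentially means adding one branch variable per level, which is exactly where the 32-variable limit begins to bite.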

FIGURE 1. PROP FOR PROJECT 1 - THE BINARY TREE

Implementation: Students were encouraged to use the "linebot" described in Dave Baum's book as the basis for their robot. Their robot had to traverse the tree using the search algorithms and demonstrate backtracking.

Problems Encountered: Since LEGO robots support only a limited number of variables (32) and do not support recursion, we had to limit the number of levels and nodes in the tree. Since students were not using rotation sensors, their robots did not have a reliable way of calibrating their turns, so we could not generalize the trees to more than three children per node, for fear of crowding the arcs and confusing the robots. The light sensor did not work very reliably: when a robot was on the edge of an arc, the light sensor detected the combination of the background color (black) and the original arc color (yellow) as being equivalent to the original node color (green). Therefore, halfway through the project, we had to rebuild the tree with the colors swapped (yellow for nodes and green for arcs), and we had to considerably reduce the size of the nodes. The hardest part of the implementation for students was getting the robot to turn reliably when it had to backtrack. The angle of a turn depended on the coarseness of the foam board surface, the amount of charge left in the batteries, and the type of wheels the robot used (treads were the worst, skinny tires were the best). In the future, we plan to ask students to purchase rotation sensors to overcome this shortcoming. We also plan to suggest that students build their robots with four slim wheels, rather than treads or three wheels, in order to improve the reliability of the robots' behavior.

Project 2

Problem Statement: We assigned the second project on heuristic searches - hill-climbing (with a possibility of local minima) and best-first search. The robots had to use these searches to find their way out of a maze. The robots were expected to build a search tree of the maze as they traversed it.

The Prop: The maze was built using LEGO baseplates and bricks. 48 x 48 stud baseplates were connected together by walls built 8 bricks high. Each room in the maze was at least 32 x 32 studs in size. Rooms in the maze were demarcated by colored adhesive strips laid down on the baseplates. The interior walls of the maze were easily relocatable, so the maze could be reconfigured on the fly to test the generality of the students' implementations of the search algorithms. Please see Figure 2 for a picture of the maze.

FIGURE 2. PROP FOR PROJECT 2 - THE MAZE

Implementation: The robot was expected to sense the walls of the maze using touch sensors. It was expected to keep track of its current location (coordinates) in the maze by sensing the adhesive strips using a light sensor. The robot had to announce with beeps when it thought it had found its way out of the maze. Russell and Norvig [12] classify agents into four types: simple reflex agents, agents that keep track of the world, goal-based agents and utility-based agents. We wanted to steer clear of reflex agents and encourage our students to build higher-order agents; hence the rooms in our maze, and the adhesive strips to demarcate them.
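The sketch below illustrates the kind of "keeps track of the world" agent we had in mind: the robot maintains its room coordinates and heading, updates them whenever the light sensor crosses a boundary strip, and uses a hill-climbing rule (move in whichever direction reduces the Manhattan distance to the exit) to choose its next step. Everything specific in it is assumed for illustration: the port assignments, the light threshold, the start and goal coordinates, and the placeholder movement subroutines. It also assumes the robot holds its heading between rooms, which, as described below, turned out to be optimistic.

    // Sketch of a maze agent that keeps track of the world (not the students' code).
    #define STRIP_THRESHOLD 40     // light reading below this = boundary strip (assumed)
    #define GOAL_ROW 0             // exit coordinates, assumed known to the agent
    #define GOAL_COL 3

    int row;                       // current room coordinates
    int col;
    int heading;                   // 0 = north, 1 = east, 2 = south, 3 = west

    sub driveForward()  { OnFwd(OUT_A + OUT_C); }
    sub stopDriving()   { Off(OUT_A + OUT_C); }
    sub turnToHeading() { Wait(50); }   // placeholder: rotate the robot to face 'heading'

    task main()
    {
        SetSensor(SENSOR_1, SENSOR_TOUCH);   // front bumper senses maze walls
        SetSensor(SENSOR_2, SENSOR_LIGHT);   // downward light sensor senses strips
        row = 3;                             // assumed start room
        col = 0;
        heading = 0;                         // north happens to be the greedy first move

        while (row != GOAL_ROW || col != GOAL_COL)
        {
            turnToHeading();
            driveForward();
            // drive until a wall is hit or a boundary strip is reached
            until (SENSOR_1 == 1 || SENSOR_2 < STRIP_THRESHOLD);
            if (SENSOR_1 == 1)
            {
                // wall in the way: back off and try the next direction clockwise;
                // a fuller agent would also record the wall in its map
                stopDriving();
                OnRev(OUT_A + OUT_C);
                Wait(50);
                Off(OUT_A + OUT_C);
                heading = heading + 1;
                if (heading > 3) heading = 0;
            }
            else
            {
                // entered the next room: update the coordinates from the heading
                if (heading == 0) row--;
                else if (heading == 1) col++;
                else if (heading == 2) row++;
                else col--;
                // roll clear of the strip (bail out early if a wall is hit)
                until (SENSOR_2 >= STRIP_THRESHOLD || SENSOR_1 == 1);
                stopDriving();
                // hill-climbing: head in whichever direction reduces the
                // Manhattan distance to the assumed exit
                if (row > GOAL_ROW) heading = 0;
                else if (col < GOAL_COL) heading = 1;
                else if (row < GOAL_ROW) heading = 2;
                else heading = 3;
            }
        }
        PlaySound(SOUND_UP);                 // announce escape from the maze
    }

Best-first search layers a frontier of unexplored rooms on top of this bookkeeping, which is precisely what strained the RCX's memory in practice.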


Students had to demonstrate their robot on two different configurations of the maze: one chosen by them, and one chosen by the instructor. They could therefore qualify for partial credit by demonstrating that their robot worked for at least one configuration of the maze. However, in order to qualify for full credit, they had to demonstrate that their implementation was general enough to handle any configuration of the maze.

Problems Encountered: The 32 x 32 stud size of the rooms meant that the robots had to have a short wheel base and a small turning radius. The easiest design was to mount the RCX brick on top of the motors. However, since the walls were only 8 bricks high, the touch sensors had to be mounted low enough that they would actually contact the walls rather than pass over them. Some robots had a hard time holding a straight line: they were likely to think they had moved to the next room to the north when they had actually veered off into the next room to the northwest. Therefore, we used different colored adhesive strips to demarcate the rows and columns of rooms in the maze: blue meant changing the row, and black meant changing the column. Robots with treads (instead of tires) had a tendency to climb over the maze walls rather than go around them! Since best-first search required the robot to build a tree with up to nine nodes, most robots that attempted it ran out of memory. This was one of the least successful of all the project components assigned in the course.

Project 3

Problem Statement: We assigned the third project on forward and backward chaining in a rule-based expert system. The robot had to traverse a grid of pixels and use a rule-based expert system to determine the character printed in the grid. It had to be able to do this using both forward (data-driven) and backward (goal-driven) chaining. We limited the possible characters in our pixel grid to the numeric characters (0-9); this was necessitated by the limited number of variables and the limited amount of memory available for writing programs on the robot. Once a robot identified a character, it had to beep as many times as the value of the digit it had identified.

The Prop: The pixel grid was drawn using black adhesive strips on white foam board. Silver origami strips were used to indicate lit pixels in the grid. The origami strips covered most, but not all, of the area of a pixel. We found that the silver foil had good reflectivity and stood out from the white of the foam board as well as the black of the adhesive strips, making the operation of the light sensors more predictable. In general, reflective strips on a matte background appear to be the best combination for building props. Please see Figure 3 for a picture of the pixel grid.

Implementation: Students had to demonstrate that their robot could identify the characters on two different pixel grids: one that was made available to them immediately after the project was assigned, and a second, surprise pixel grid that was made available only when the projects were demonstrated. Students could qualify for partial credit by demonstrating that their robot could correctly identify at least the first pixel grid, but had to demonstrate that it could also identify the second pixel grid in order to qualify for full credit.

Problems Encountered: Students had a hard time getting their robots to hold a straight course on the pixel grid because of mismatches between the left and right motors. Therefore, they were allowed to "assist" their robots whenever the robots veered off to a side. Students addressed the limited number of variables and the limited memory available for their programs in inventive ways: instead of encoding all the pixels contributing to a character, they encoded only the pixels that were unique to each character. Instead of downloading rules for all the characters, they downloaded only some of the rules, including those for the characters on which they were being tested. These accommodations compromised the generality of the expert system, but enabled students to demonstrate their robot, which would not have been possible otherwise. Finally, they used a procedural rather than a declarative implementation of the rule base.
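In the spirit of what the students actually built, the sketch below shows a procedural, data-driven recognizer: the pixel readings are packed into a single 16-bit integer as they are scanned, and each rule then tests only the pixels assumed to be unique to its digit. The threshold, the two masks, the grid size and the movement subroutine are all hypothetical; a backward-chaining variant would instead start from a hypothesized digit and check only that digit's required pixels.

    // Data-driven (forward chaining style) digit recognition, written
    // procedurally as the students did. All constants are assumptions.
    #define LIT_THRESHOLD 55     // light reading above this = silver (lit) pixel
    #define NUM_PIXELS 15        // small enough to fit in one 16-bit RCX integer

    // hypothetical "unique pixel" masks for two of the digits
    #define MASK_1 0x0021        // pixels that only the digit 1 lights
    #define MASK_4 0x0108        // pixels that only the digit 4 lights

    int pixels;                  // working memory: bit i set = pixel i is lit
    int bitval;                  // bit value of the pixel currently being scanned
    int i;
    int digit;                   // conclusion drawn by the rules

    sub advanceToNextPixel() { Wait(100); }   // placeholder: drive to the next pixel

    task main()
    {
        SetSensor(SENSOR_2, SENSOR_LIGHT);    // downward light sensor over the grid
        pixels = 0;
        bitval = 1;

        // data gathering: traverse the grid and assert "pixel i is lit" facts
        i = 0;
        while (i < NUM_PIXELS)
        {
            if (SENSOR_2 > LIT_THRESHOLD) pixels = pixels | bitval;
            bitval = bitval * 2;              // move to the next bit without shifts
            advanceToNextPixel();
            i++;
        }

        // rule base: a rule fires when all pixels unique to its digit are present
        digit = 0;
        if ((pixels & MASK_1) == MASK_1) digit = 1;
        else if ((pixels & MASK_4) == MASK_4) digit = 4;
        // ... one rule per digit the robot is expected to recognize

        // announce the conclusion: beep 'digit' times
        if (digit > 0)
        {
            repeat (digit)
            {
                PlayTone(880, 20);
                Wait(40);
            }
        }
        else
        {
            PlaySound(SOUND_LOW_BEEP);        // no rule fired
        }
    }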

FIGURE 3. PROP FOR PROJECT 3 - THE PIXEL GRID

Students were asked to submit the following deliverables for each project:
• A Construction/Hardware Manual, which described the use of motors and sensors, the structure of the robot, and the problems encountered during its construction.
• A Technical/Software Manual, which described the tasks, functions and variables used in the program, the AI algorithms used, and all known bugs in the program.
• An Observation Journal, written after demonstrating the robot, which described the behavior of the robot during the demonstration and included explanations for all the instances in which the robot either failed to meet or overshot the objectives of the test setup.


• A Team Work Report for group projects, which listed the contributions of each team member to the various aspects of the project, including hardware construction, software development, testing and documentation. Team members were also asked to comment on the dynamics of their group.

Note that the projects described above are not revolutionary or "cutting edge". But they are realistic, considering the liberal arts nature of our Computer Science program, the limitations of the LEGO Mindstorms robot kit, and the goals we had set forth for the use of robots in our course: that robots should be used for studying AI algorithms, and not robot architecture or mechanical construction principles.

Future Improvements

Some of the ways in which we plan to improve our projects in the future include: using tinyVM [10] and Java as the programming environment, introducing competition among robots (which robot negotiates the maze most quickly), using two RCX bricks in "Master/Slave" mode, and possibly letting the robots cooperatively solve problems. We plan to require that students purchase the optional rotation sensors to facilitate dead reckoning and reliable turning of the robot. Light sensors vary widely in how they respond to color; therefore, we plan to encourage students to purchase an additional light sensor as a backup. We plan to recommend that students use two differentials to account for potential imbalance in their motors. We may also require auto-calibration of the robots, i.e., that the robots set threshold values for sensor inputs at run-time, instead of the thresholds being hardcoded into the program before compilation (see the sketch at the end of this section).

Students often found, much to their dismay, that their robot would work correctly on trial runs, but would fail to work during demonstrations held in the presence of the instructor. Demonstrations often turned into marathon sessions for both the students and the instructor, since the students did not want to give up trying to get their robot to work correctly. To avoid such marathon sessions, we plan to encourage students to videotape their robot in action and submit the videotape in lieu of demonstrating in person.
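As a concrete illustration of the auto-calibration idea, the following NQC sketch samples the light sensor over a light and a dark reference surface at start-up (the user presses the touch sensor over each) and uses the midpoint as the run-time threshold. The port assignments and the midpoint rule are our assumptions.

    // Auto-calibration sketch: derive the light threshold at run-time
    // instead of hard-coding it. Assumed wiring: touch sensor on input 1,
    // light sensor on input 2.
    int lightReading;
    int darkReading;
    int threshold;

    task main()
    {
        SetSensor(SENSOR_1, SENSOR_TOUCH);
        SetSensor(SENSOR_2, SENSOR_LIGHT);

        // place the light sensor over the light surface and press the button
        until (SENSOR_1 == 1);
        lightReading = SENSOR_2;
        PlaySound(SOUND_CLICK);
        until (SENSOR_1 == 0);       // wait for the button to be released

        // place the light sensor over the dark surface and press the button again
        until (SENSOR_1 == 1);
        darkReading = SENSOR_2;
        PlaySound(SOUND_CLICK);

        threshold = (lightReading + darkReading) / 2;
        // ... the rest of the program compares SENSOR_2 against 'threshold'
        // rather than against a value fixed at compile time.
    }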

FEEDBACK

We conducted an anonymous survey of our students at the end of the semester. In the survey, we asked them to compare the robot projects with traditional LISP projects, or with projects in other courses they had taken with the same instructor (all but 3 of the respondents had taken other courses with the same instructor). In this section, we summarize the results of this survey, which was completed by 16 students.

Compared to projects in other courses, the students rated the robot projects in AI as hard, i.e., 3.85 on a Likert scale of 1 (very easy) to 5 (very hard). They rated the various components of the projects on a scale of 1 (easy) to 3 (hard) as follows:
• the problems assigned tended to be hard: 2.4
• putting together the robot hardware was ok: 1.71
• writing software for the robot was ok: 2.00
• getting the robot to work reliably was hard: 2.87
• using the props (boards, maze, etc.) tended to be hard: 2.29

Clearly, building the LEGO robot was the easiest part of the project, which validated our choice of LEGO robots for the course. Getting the robot to behave reliably was the hardest part of the project, which was to be expected considering that the robot relies on motors and sensors, all of them analog parts.

Compared to projects in other courses, the students rated the robot projects in AI as:
• taking a lot more time, i.e., 4.56 on a scale of 1 (lot less time) to 5 (lot more time)
• a lot more interesting, i.e., 4.18 on a scale of 1 (lot less interesting) to 5 (lot more interesting)

So, our initial hunch in using robots in the course was right: even though students spent a lot more time doing robot projects, they also enjoyed the projects a lot more.

Students agreed that the robot projects:
• helped them learn/understand AI concepts better (2.06 on a scale of 1 (Strongly Agree) to 5 (Strongly Disagree))
• gave them an opportunity to apply/implement AI concepts that they had learned (1.93 on the above scale)

When we first decided to use robots in the course, we had some reservations that the robot projects might end up being more "entertainment" than "education". The above responses put our misgivings to rest: the robots were clearly educational, even though they may have been entertaining at the same time.

The first project was segmented into four parts, attempted by 15, 13, 7 and 7 students respectively. The second project was segmented into three parts, attempted by 14, 7 and 7 students respectively. The third project consisted of two parts, attempted by 14 and 11 students respectively. So, the level of participation in the projects was sustained throughout the semester, even though students found the projects a lot more time-consuming than any previous projects they had attempted in other courses.

Students reported spending widely varying amounts of time on the projects. The average time they spent on the first project (which had four components, and was arguably the largest project) was 30.5 hours. They reported spending an average of 20.2 hours on the second project and 12.6 hours on the third and arguably easiest project.

Students were nearly neutral on whether the grade they received on the projects accurately reflected how much they had learned from doing them (2.61 on a Likert scale of 1 (Strongly Agree) to 5 (Strongly Disagree)).


But predictably, they disagreed that their grades were an accurate reflection of how much time they had spent doing the projects (3.30 on the same scale).

There was a trend from individual effort toward team effort as the semester progressed: whereas 50% of the students attempted the first project by themselves, 60% or more attempted the second and third projects in groups of 3 students! This may be seen as students adapting to a more realistic expectation of how much time the projects took to complete. They strongly recommended that group projects be allowed in future offerings of this course (1.42 on a scale of 1 (Strongly recommend) to 5 (Strongly do not recommend)).

Students were unanimous (100%) in recommending that we continue to assign robot projects in future offerings of the course. Over 90% said they would recommend such a course to friends.

From our experience so far, it appears that the two keys to making robot projects successful in the AI course are:
• to carefully choose the platform, so that robot behavior is as predictable as possible, and
• to allow group projects.

In conclusion, using robots in the AI course appears to be an idea well worth considering. Students enjoy learning with robots. Robots engage kinesthetic learners [13], promote active learning, and promote situated learning of AI in the context of autonomous agents. We hope that our experience report contains enough detail (hints, pitfalls, etc.) to help interested instructors use robots in their own courses. We also hope that the encouraging results of our survey will get other instructors interested in using robots in their courses.

REFERENCES

[1] Kumar, D. and Meeden, L., "A Robot Laboratory for Teaching Artificial Intelligence". In Proceedings of the Twenty-Ninth ACM SIGCSE Technical Symposium (SIGCSE '98), New York, NY: The Association for Computing Machinery, 1998, 341-344.

[2] Turner, C., Ford, K., Dobbs, S., Suri, N., and Hayes, P., "Robots in the Classroom". In Proceedings of the Ninth Florida Artificial Intelligence Research Symposium (FLAIRS '96), Florida AI Research Society, 1996, 497-500.

[3] Shamma, D.A. and Turner, C.W., "Teaching the Foundations in AI: Mobile Robots and Symbolic Victories". In Proceedings of the Eleventh International Florida Artificial Intelligence Research Symposium Conference (FLAIRS '98), Menlo Park, CA: AAAI Press, 1998, 29-33.

[4] LEGO Mindstorms, http://www.legomindstorms.com

[5] Baum, D., Dave Baum's Definitive Guide to Lego MindStorms. Apress Publishers, www.apress.com, 1999.

[6] gcc on legOS, http://www.noga.de/legOS

[7] Fagin, B., "Using Ada-based robotics to teach computer science". In Proceedings of the Fifth Annual Conference on Innovation and Technology in Computer Science Education (ITiCSE 2000), New York, NY: The Association for Computing Machinery, 2000, 148-151.

[8] BotKit, http://www.object-arts.com/Bower/Bot-Kit

[9] pbFORTH, http://www.hempeldesigngroup.com/lego/pbFORTH

[10] tinyVM, http://tinyvm.sourceforge.net

[11] Lego/Scheme, http://www.cs.indiana.edu/~mtwagner/legoscheme/

[12] Russell, S. and Norvig, P., Artificial Intelligence: A Modern Approach. Prentice Hall, Englewood Cliffs, NJ, 1995.

[13] The Active Learning Site, http://www.active-learningsite.com/vark.htm
