Data-focused Parallel Modular Software Design for a Communicating Autonomous Mobile Robot

Tetsushi Oka, Junya Tashiro, Kunikatsu Takase
Graduate School of Information Systems, University of Electro-Communications, Chofugaoka 1-5-1, Chofu-shi, Tokyo 182-8585, Japan

Abstract. This paper presents an approach to designing modular software for a communicating autonomous mobile robot integrating visual processing, motion generation and decision making for navigation and verbal communication. We design the software as a network of parallel computational modules with distinct roles which update a fixed data object set at regular intervals in real time. We have realized a communicating mobile robot based on our programming environment BNJ, in which one can describe parallel modular real-time systems in Java in a very simplified manner. The description of the system is not specific to any hardware system and helps the designer to understand how it works. Many classes and modules used in this system are reusable as resources. We illustrate our basic idea and the concrete system designed in BNJ.
1 Introduction
It is necessary for a robot that works in the real world to communicate spontaneously: to receive orders from its masters, to negotiate with other people, to avoid conflicts, to cooperate with someone, or to acquire useful information. The designers of such a communicating autonomous robot must specify software which can produce appropriate actions and utterances in real time without missing changes in the environment or significant messages from unspecified people. However, few methodologies for designing such software have been proposed and tested [2, 10], though some agents that can communicate while acting in closed worlds [9, 17] have been realised. Although there have been proposals of system architectures for autonomous robots and agents [3, 4, 6, 10, 16], it is still difficult for us to actualize an intelligent communicating autonomous robot. A good architecture will help the designer. However, our goal is not to find a good architecture itself but to find a good information system for an autonomous robot and to understand how it works. We believe that enormous knowledge, algorithms, heuristics and theories are required for designing such a system. Experience in developing robots, and the developed systems themselves, will be great sources of such knowledge. Therefore, we should explore further for a better communicating robot and increase our knowledge and software resources.

We must search for good software for an autonomous robot by repeatedly extending and testing designed software step by step, because we do not have a complete model of the real world. In such a bottom-up approach, it is desirable that the description of the software is readable, reusable, brief and modular, because it is used as a resource for development and analysis. It is also desirable that the description is not specific to any individual hardware and specifies the properties of the robot completely. Nevertheless, there have been few proposals for giving a complete, machine-independent description of a real-time system. In most existing programming methods, including concurrent programming [1, 8, 15] and the agent architectures above, a program does not specify the properties of an autonomous system completely independently of the computer hardware, because the time of computation affects the behaviour.

In this paper, we propose an approach to designing reactive modular software for a communicating robot by integrating techniques for vision, motion control, action selection and verbal communication, based on a programming method that gives a brief, hardware-independent description of a real-time system. We build the software out of parallel computational modules with distinct roles which update a fixed set of data objects at their own regular intervals, so that it is reactive, readable and reusable. The description of the system specifies the behaviour of the robot independently of the computer hardware. We present how to realise a robot which can reach its goal using information from both vision and communication.
Email: [email protected], [email protected], [email protected]
2 Data-focused Design and BeNet Model

2.1 Autonomous System and RTAA
An autonomous system which interacts with a dynamic environment can be formally defined by a tuple $(I, S, O, s_0, f_s, f_o, \Delta t)$. $I$ is the set of sensory inputs from the environment, $S$ the set of internal states of the system, and $O$ the set of motor outputs to the environment. $s_0 \in S$ is the initial state of the system. The system's properties are specified by the two mapping functions $f_s : I \times S \rightarrow S$ and $f_o : I \times S \rightarrow O$. The system changes its state $s(t) \in S$ and output $o(t) \in O$ while monitoring $i(t) \in I$ at regular intervals $\Delta t$:

$$s(t + \Delta t) = f_s(i(t), s(t)) \quad (1)$$
$$o(t + \Delta t) = f_o(i(t), s(t)) \quad (2)$$

The input, state and output may be described as tuples or vectors. We call this discrete-time computational model an RTAA (real-time augmented automaton). Designing software for an autonomous robot is formally equivalent to designing an RTAA. In other words, if we specify the software as an RTAA, it will be a hardware-independent, complete description of an autonomous system. Therefore, the software for an autonomous robot should specify an RTAA or an equivalent system.
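Viewed as code, an RTAA is nothing more than a pair of update functions applied at a fixed period. The following generic Java sketch (the interface name and type parameters are ours, not from the paper) makes this explicit:

    // A generic RTAA: f_s and f_o applied every Delta-t.
    // I, S, O mirror the input, state and output sets defined above.
    interface RTAA<I, S, O> {
        S nextState(I input, S state);   // f_s : I x S -> S
        O nextOutput(I input, S state);  // f_o : I x S -> O
    }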
2.2 Data-focused Design of RTAA
We can assume that the internal state of an RTAA is represented by a fixed set of data objects in the memory of a computer. Each object takes a value at a time, and it is updated every $\Delta t$. It carries some information necessary for the system to work well. For example, information on the state of the environment is usually necessary for a robot; the goals of the robot should be available from the internal memory too. In the data-focused approach, we design the software of an autonomous robot focusing on a finite set of data objects of abstract types. We describe classes for these data objects in an object-oriented language, together with a program that updates the values of the objects based on the sensory input (see Fig. 1, centre).
[Fig. 1: Data-focused Design and BeNet — left: an RTAA as a system exchanging i(t), s(t), o(t) with the environment; centre: the internal state s(t) as a set of data objects; right: a BeNet of BU's with their own intervals dt1-dt4, local data objects and shared data objects.]

There are merits to describing the internal state of an RTAA as a fixed set of abstract data objects. Firstly, we can estimate from the description the actual size of the memory necessary for representing the state of the system; it is realistic to assume that the system is a finite state machine, so information on the necessary memory size is non-trivial to the designer. Secondly, by using abstract data types and object-oriented programming methods, we can design a complex system while encapsulating details of the computation. Thirdly, the data set clarifies what kind of information is necessary for the robot. Finally, we can extend the software flexibly by adding and modifying the objects. The description of an RTAA specifies the properties of an agent completely, while ordinary programs, especially ones with concurrent threads, do not, because their behaviour is influenced by the performance of the hardware.
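As a minimal illustration of such a data object (a sketch with hypothetical class and field names, not a BNJ class), the estimated position of a landmark could be held as follows:

    // Hypothetical data object: one element of the fixed state set.
    // Its value is overwritten every Delta-t; how the value is computed
    // stays encapsulated behind the abstract type.
    public class LandmarkEstimate {
        private double x, y;      // estimated landmark position [m]
        private boolean visible;  // whether the landmark is currently seen

        public synchronized void update(double x, double y, boolean visible) {
            this.x = x;
            this.y = y;
            this.visible = visible;
        }
        public synchronized double getX() { return x; }
        public synchronized double getY() { return y; }
        public synchronized boolean isVisible() { return visible; }
    }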
2.3 BeNet Model and BNJ, the Programming Environment
In our approach, we describe an autonomous system as a program with multiple concurrent control threads in order to decompose it into modules with distinct roles and functions. The modularity of the system helps the designer integrate techniques and extend existing systems. The computational model, BeNet, is a network of asynchronous parallel RTAA which have their own real-time intervals (see Fig. 1, right). Some data objects are used locally within an RTAA module, which is called a BU (Behaviour Unit). The other objects are shared by BU's: updated by one BU and read by another. When a real-time system is described as a BeNet, it is easier to understand how the whole system works, to extend the system by adding modules and data objects, and to reuse the description of an existing system. As all the modules in a BeNet are RTAA, we can usually predict how it behaves without taking account of the hardware. BeNet has been applied to building software for autonomous robots integrating visual processing, motion control and action selection, using programming environments on various computers including multiprocessors [11, 13]. In those applications, BeNets were written in C, C++ or Lisp; in those environments, however, we could not use abstract data objects as messages between BU's. Recently, a new environment, BNJ, has been developed, in which one can describe a BeNet in Java in a very simplified manner with various class libraries for image processing, motion generation, text/language processing and user interfaces for verbal communication [14, 18]. This environment enables us to design BeNets in the data-focused approach using abstract data types. Thanks to BNJ, we could start to tackle the field of communicating robots.
2.4 Specifying BeNets in BNJ
In BNJ, shared data objects are updated by a method called setData. For instance:

    data1.setData(data2, PRIO);
This replaces the value of the shared object data1 with the value of the local object data2, and the priority value of data1 with PRIO, provided the current priority value is not larger than PRIO. The second argument is useful for message arbitration among BU's. The priority value of a shared object can be reset by calling another method:

    data1.unsetData(data2, PRIO);

Only if data1's priority value is equal to PRIO does the priority become a negative number; after this operation, the value of data1 can be changed with any non-negative priority value.
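To illustrate the arbitration these two methods provide (a sketch following the semantics just described; data1, lowValue and highValue are placeholder names), consider two BU's writing to the same shared object with priorities 1 and 2:

    data1.setData(lowValue, 1);    // accepted: no higher priority holds the object
    data1.setData(highValue, 2);   // accepted: 2 >= 1, value and priority replaced
    data1.setData(lowValue, 1);    // ignored: current priority 2 is larger than 1
    data1.unsetData(highValue, 2); // priorities match, so the priority becomes negative
    data1.setData(lowValue, 1);    // accepted again with any non-negative priority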
A BU is defined as an extension of an abstract class BehaviourUnit, which is itself an extension of java.lang.Thread (see Fig. 2). The designer has to define at least four methods and one constructor. A BU can be defined simply as follows:

    class MyBU extends BehaviourUnit {
        BeNetMessage mess;   // Shared Data Set
        MyInput in;          // Local Object for storing Input Message
        MyOutput out;        // Local Object for storing Output Message
        MyState st;          // Local Object for s(t)
        final int PRIO = 3;  // this BU's priority

        public MyBU(BeNetMessage m, int interval) { // Constructor of this Class
            super(interval);
            mess = m;
            st = new MyState();
        }
        public synchronized void readInput() {      // For reading messages i(t)
            in = mess.dat1.getData();
        }
        public synchronized void writeOutput() {    // For writing messages o(t)
            mess.dat2.setData(out, PRIO);
        }
        public void initialize() {                  // For defining s(0), the initial state
            st.init();
        }
        public void rule() {                        // For updating s(t) and o(t)
            out = st.changeStateAndOutput(in);
        }
    }
The method rule calculates $s(t + \Delta t)$ and $o(t + \Delta t)$ at each step. A BU computation can be started simply as follows:
    BeNetMessage message = new BeNetMessage();
    MyBU mb = new MyBU(message, 30); // interval = 30 [ms]
    mb.start();
Thus, in BNJ, it is easy to define a BeNet as a program which updates data objects of abstract types in real time and in parallel.

[Fig. 2: Defining classes for BU's — class hierarchy: java.lang.Thread, extended by BehaviourUnit, extended in turn by MyBU, MyBU2, OtherBU.]
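The paper does not show the internals of BehaviourUnit, but the RTAA semantics suggest a driver loop of roughly the following shape; this is our sketch, and the timing logic in particular is an assumption:

    // Hypothetical sketch of the BehaviourUnit driver loop (not the BNJ source).
    public abstract class BehaviourUnit extends Thread {
        private final long interval; // real-time interval Delta-t [ms]

        protected BehaviourUnit(long interval) {
            this.interval = interval;
        }

        public abstract void readInput();   // read i(t) from shared objects
        public abstract void rule();        // compute s(t+dt) and o(t+dt)
        public abstract void writeOutput(); // write o(t) to shared objects
        public abstract void initialize();  // set the initial state s(0)

        @Override
        public void run() {
            initialize();
            while (!isInterrupted()) {
                long start = System.currentTimeMillis();
                readInput();
                rule();
                writeOutput();
                // sleep away the rest of the interval to keep a regular period
                long rest = interval - (System.currentTimeMillis() - start);
                if (rest > 0) {
                    try { sleep(rest); } catch (InterruptedException e) { break; }
                }
            }
        }
    }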
3 Data-focused Design of the Communicating Robot
We realised a robot which obtains its goal through communication and reaches the goal based on visual information. When it has wrong information about the environment, it asks questions to obtain correct information. The software for our robot was designed as a BeNet in BNJ. The BeNet consists of a vision system, a motion system and a decision system (see Fig. 3). These subsystems are themselves BeNets, and they repeatedly update data objects concurrently in real time.
[Fig. 3: BeNet configuration for the communicating robot — the decision system (UI, NaviControl, Communication) exchanges utterances ut(t) and messages me(t) with the human and sends action commands ac(t) and attention commands at(t) to the other subsystems; the vision system (Camera, ColorPeriph, ColorTrack, MarkDetect) produces the perception pe(t); the motion system has an action level (Search, Face, Follow, TurnAngle, Forward, Stop) and a pattern level (Walk, Turn, LookAround, Gaze, Reset) driving the actuators; all subsystems communicate through the shared data object set.]
3.1 Vision System
There are three BU's in the vision system: it is a network of BU's for peripheral vision, foveal vision and attention control. The configuration of the vision system is shown in Fig. 4. It can find and track colored landmarks on the floor automatically, based on the attention command at(t) from the decision system. In BNJ, we have classes for describing image processing briefly; one of the most important is the class Image2D, which holds a 2D color image and provides methods for basic image processing. The classes used in the vision system are shown in Table 1. The peripheral vision unit, colorPeriph, extracts an area which seems to include a landmark from the whole scene. The foveal vision unit, markDetect, receives the area of attention from the other BU's and examines whether it really includes one. Both of these BU's make use of a shared object which contains at(t), i.e. what the robot wants to look at. The attention control unit, colorTrack, tracks an object while it is tracking a correct target; if it fails, it tries to track another target found by colorPeriph. Visual processing is accelerated by color image processing hardware (IP2000) on a PC with a 200 MHz Pentium Pro. Each BU updates its local objects and some of the shared objects at intervals of 150 ms.
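The attention-control logic just described might be sketched as follows; the field names come from Fig. 4, while the helper predicate and its body are illustrative assumptions:

    // Hypothetical sketch of colorTrack's rule (field names from Fig. 4;
    // matchesTarget is an assumed helper, not a BNJ method).
    class ColorTrackSketch {
        ImageWindow attWin;   // attention window reported as pe(t)
        ImageWindow candid;   // candidate area written by colorPeriph
        SearchingObject so;   // the current attention command at(t)
        boolean trackMode;    // true while the correct target is tracked

        void rule() {
            if (trackMode && matchesTarget(attWin, so)) {
                // keep tracking the current target
            } else {
                attWin = candid;  // fall back to colorPeriph's candidate
                trackMode = matchesTarget(candid, so);
            }
        }

        boolean matchesTarget(ImageWindow w, SearchingObject s) {
            // placeholder: compare the color features of w against s
            return w != null && s != null;
        }
    }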
[Fig. 4: The Vision System — the BU's ColorTrack (priority 0), ColorPeriph (priority 1) and PoleDetect (priority 2) exchange data objects: Image2D cam (camera input I(t)), Image2D disp (monitor output), ImageWindow attWin and boolean trackMode (attention state), ImageWindow candid (candidate area), SearchingObject so (attention command At(t)), and boolean poleFound, poleSearch (perception Pe(t)).]

3.2 Motion System
The BU's in the motion system are also described briefly in Java. The internal state is represented by data objects which contain commands to other BU's, the legs and the neck. The motion system consists of two types of BU's [12]. One type generates primitive motions such as walking and turning; these BU's are called pattern generators. The other type generates actions such as following and searching, based on the physical action command ac(t) from the decision system. The latter BU's keep sending commands and parameters to the former. For example, the action generator follow in Fig. 5 reads the action command and decides whether to write messages to the pattern generators. While the action follow is selected, the BU follow computes commands to walk and gaze and updates the shared data objects wp and gp at intervals of 100 ms.
[Fig. 5: The Motion System — action generators (Search, Follow, Face, TurnAngle, Forward, Stop, with priorities 0-5) read the shared Action ac and write parameter objects (WalkParam wp, TurnParam tp, ResetParam rp, GazeParam gp, LAParam lap); pattern generators (Walk, Turn, Reset, Gaze, LookAround, with their own priorities) read these parameters and drive the Leg objects lf, rf, lb, rb and the Neck object nk.]
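For concreteness, a follow action generator along the lines described above might look like this; the data-object names follow Fig. 5, while the message fields and the helper methods on Action, ImageWindow, WalkParam and GazeParam are illustrative assumptions:

    // Hypothetical sketch of the follow action generator (priority 1 in Fig. 5).
    class FollowSketch extends BehaviourUnit {
        BeNetMessage mess;
        Action ac;           // local copy of the action command ac(t)
        ImageWindow target;  // local copy of the tracked landmark area
        WalkParam wp;        // output to the Walk pattern generator
        GazeParam gp;        // output to the Gaze pattern generator
        final int PRIO = 1;

        public FollowSketch(BeNetMessage m) {
            super(100);      // update interval: 100 ms
            mess = m;
        }
        public void initialize() { wp = new WalkParam(); gp = new GazeParam(); }
        public synchronized void readInput() {
            ac = mess.action.getData();     // assumed field names on BeNetMessage
            target = mess.attWin.getData();
        }
        public void rule() {
            if (ac != null && ac.isFollow() && target != null) {
                wp.setDirection(target.horizontalOffset()); // steer towards the mark
                gp.setAngles(target.pan(), target.tilt());  // keep the camera on it
            }
        }
        public synchronized void writeOutput() {
            if (ac != null && ac.isFollow()) {
                mess.wp.setData(wp, PRIO);  // higher-priority generators can override
                mess.gp.setData(gp, PRIO);
            }
        }
    }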
3.3 Dynamic Scenarios for Decision Making
A communicating autonomous robot takes actions, pays attention to things and sometimes makes utterances to communicate with someone. When designing such a robot, it seems effective to make stereotypical scenarios for it to act on, and to merge them. In the real world, we cannot assume that everything goes as the designer predicts; therefore the scenario must be dynamic rather than static. A dynamic scenario is described as a directed graph or an automaton that changes its state based on events from the outside (see Fig. 6). In a dynamic scenario there are many possible paths, which can be created dynamically as the robot interacts with the outer world. For example, in navigation, the optimal path to the goal, or the goal itself, may change as the environment is changed by someone. In communications, the designer cannot predict the opponent's messages completely; the goal of a communication should also be achieved in a dynamic interaction with the opponent.

Table 1: BNJ classes and user-defined classes for the communicating robot system

    Vision:   Image2D, ImageSys, BinTracker, YUVArea, ImageWindow, Finder, BinaryImageFeatures, SearchingObject
    Motion:   Action, Leg, Neck, Actuator, ImageWindow, WalkParam, TurnParam, ..., ResetParam
    Decision: HashTable, CommunicationState, Meaning, NavigationState, Map, Action, SearchingObject, Stack, Goal
[Fig. 6: Concept of a dynamic scenario — the same scenario viewed as an automaton over contexts c0-c5, as a directed graph with transitions labelled ev/a, and as the mappings $f_c : Ev \times C \rightarrow C$ and $f_a : Ev \times C \rightarrow A$.]

A dynamic scenario can be described as an RTAA which changes its state in real time with a single interval. Let $Ev$ denote the set of events, or information from the outer world, $C$ the set of context values of the scenario, and $A$ the set of actions on the world. $cx_0 \in C$ is the initial value of the context and $g \in C$ is the goal state of the scenario. The event from the world $ev(t) \in Ev$ and the context $cx(t) \in C$ at time $t$ are used to decide the action $a(t + \Delta t) \in A$ and the context $cx(t + \Delta t) \in C$ at the next time step:

$$cx(t + \Delta t) = f_c(cx(t), ev(t)) \quad (3)$$
$$a(t + \Delta t) = f_a(cx(t), ev(t)) \quad (4)$$

For a communicating robot, $a(t)$ must contain information about the physical action ac(t), the attention at(t) and the utterance ut(t), and $ev(t)$ should represent the message from the opponent me(t) and the perception pe(t).
3.4 Data-focused Design of Dynamic Scenarios
We designed a dynamic scenario for our communicating robot in BNJ and integrated it with the BeNets for vision and motion control. They interact with each other through fixed sets of data objects, as shown in Fig. 3. Events that change the context cx(t) are detected by analysing a message from the human, me(t), by keyword matching using a hash function, and by checking the perception pe(t) from the vision system. The action ac(t), attention at(t) and utterance ut(t) are determined every $\Delta t$ (300 ms in the current system) based on the context cx(t).
3.5 Scenario for Action Selection
The BU NaviControl decides which action generator in the motion system to activate and which attention commands to send to the vision system every 200 ms. The information used for selecting one of the six actions consists of the action at the previous time step, the current position, the current goal, the environmental map, advice from the human, the perception pe(t) from the vision system, and some counters and flags. For example, if the robot has been searching for a landmark and a landmark has been seen for a while, the robot starts following it; if the landmark is near enough, the robot turns its body to face the mark and steps forward. In this scenario, the action ac(t) is also used as context. A graph of the action scenario in terms of the action value is shown in Fig. 7.

[Fig. 7: Scenario for action selection — a directed graph over the six actions Search, Face, Follow, TurnAngle, Forward and Stop.]
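To make the transitions concrete, the fragment below gives one plausible reading of the selection logic described above; the enum, the threshold and the flags are illustrative assumptions, not the actual NaviControl code (the enum is named RobotAction to avoid clashing with the paper's Action class):

    // Illustrative action-selection step for the scenario of Fig. 7.
    enum RobotAction { SEARCH, FOLLOW, FACE, TURN_ANGLE, FORWARD, STOP }

    class ActionSelectionSketch {
        static RobotAction next(RobotAction prev, boolean markSeen,
                                int framesSeen, boolean markNear, boolean atGoal) {
            if (atGoal) return RobotAction.STOP;
            switch (prev) {
                case SEARCH:   // keep searching until a landmark is seen for a while
                    return (markSeen && framesSeen > 5) ? RobotAction.FOLLOW
                                                        : RobotAction.SEARCH;
                case FOLLOW:   // follow the landmark while it stays visible
                    if (!markSeen) return RobotAction.SEARCH;
                    return markNear ? RobotAction.FACE : RobotAction.FOLLOW;
                case FACE:     // body aligned with the mark: step forward
                    return RobotAction.FORWARD;
                case FORWARD:
                    return markSeen ? RobotAction.FORWARD : RobotAction.SEARCH;
                default:
                    return RobotAction.SEARCH;
            }
        }
    }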
3.6 Scenarios for Communication
If the robot is given no goal, it has to communicate with someone to obtain one. As the objective of the communication is distinct for this robot, we could list all the items to acquire from the human in each case. We designed communication scenarios for obtaining goals and for getting advice from a person. At each time step, the BU checks whether the shared object for messages from the human contains a new text string, checks whether it includes relevant keywords or key phrases, and classifies the message. The words and phrases to match, and the order in which to match them, are determined by the context. For example, if the robot has no current goal, the BU searches with higher priority for words likely to indicate the human's order. The result of the message analysis is stored in an object of class Meaning and used for changing the object of class CommunicationState. The utterance ut(t) is decided based on the location in the scenario for communication.

    String hm;              // message from the human
    CommunicationState cs;
    NavigationState ns;
    String utterance;

    public void rule() {
        Meaning me;
        me = analyseMessage(hm);    // analyse human message if any
        cs.changeState(ns, me);     // change the state based on scenario
        if (cs.isActive())          // if an utterance is necessary,
            utterance = generateUtterance(); // create one.
    }
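The keyword matching itself is only outlined in the paper; a simplified sketch of a context-dependent analyseMessage, with hypothetical keyword tables and string tags standing in for the Meaning class, could be:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Hypothetical sketch of context-dependent keyword matching.
    class MessageAnalyserSketch {
        // keywords checked first when the robot has no current goal
        static final Map<String, String> NO_GOAL_KEYWORDS = new LinkedHashMap<>();
        static {
            NO_GOAL_KEYWORDS.put("go to", "ORDER_GOTO");  // likely an order
            NO_GOAL_KEYWORDS.put("pole", "PLACE_POLE");
            NO_GOAL_KEYWORDS.put("hello", "GREETING");
        }

        static String analyse(String message) {
            if (message == null) return "NONE";
            String lower = message.toLowerCase();
            // the iteration order of the LinkedHashMap encodes the match priority
            for (Map.Entry<String, String> e : NO_GOAL_KEYWORDS.entrySet()) {
                if (lower.contains(e.getKey())) return e.getValue();
            }
            return "UNKNOWN";
        }
    }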
3.7 Integrating Scenarios
We realize the whole scenario by merging smaller scenarios for communication and navigation in specific situations. The integrated scenario has more context variables than the individual scenarios; however, those scenarios can share variables, so the total number of variables is reduced. In our current BeNet, the dynamic scenario for communication and the dynamic scenario for action selection reside in two different BU's. The data objects for CommunicationState and NavigationState are copied to the shared memory.
4 Experimental Results
We conducted experiments in our laboratory to verify our BeNet (see Fig. 8). We use a wireless system to control a 14-DOF four-legged mobile robot which has only an 8-bit microprocessor, a wireless data receiver and an image transmitter. Pictures from the camera are received by the IP2000 via a video tuner. The robot obtained its goal by spontaneously speaking to a human, and it could reach the goal based on visual input when it was given a correct map of the environment. This occurred at the "base" in Fig. 8.

    Human: Hi!
    Robot: Hello.
    Robot: Do you have a command?
    Human: Yes, I do.
    Robot: What shall I do?
    Human: Please go to the pole 1.
    Robot: Do you want me to go to the pole 1?
    Human: That's right.
    Robot: OK!
When it was given a wrong map, it could detect this and communicated with a human to obtain information about the environment. In Fig. 8, at B the robot noticed that the map was wrong and talked to a human.

    Robot: I'm sorry. I lost my way.
    Robot: How can I get to the pole 1?
    Human: Well, where are you now?
    Robot: I am on the 2nd mark from the base.
    Human: Then, you should turn to the right.
    Robot: Thank you for your advice.
The robot corrected the internal map data and restarted vision-based navigation. When the robot reached the goal, it reported this and asked for another goal at "pole 1".

[Fig. 8: Environment for the experiment — the robot starts at the base, where it obtains a goal through communication, and passes the marks A-E; at B it communicates to correct its map, and at pole 1 (Goal 1) it reports its arrival; pole 2 is Goal 2.]

Fig. 9 shows the result on the partial graph of the integrated communication scenario. Bold arrows show the actual transitions of the context; these transitions occurred at intervals of 300 ms. We can trace the scenario for action selection in the same way. The robot currently cannot move very quickly, because we could not make the intervals of the vision and motion BU's shorter on our PC and IP2000; in our experiments, it took three or four minutes for the robot to reach the goal. However, it is not difficult to increase the performance on better hardware, since the description does not have to change to take account of the change of hardware.
[Fig. 9: Trace on the partial graph of the integrated communication scenario — context states include Wait, SayHello, AnswerFine, NoGoal, AnyCommand, WaitAny, AskCommandAgain, AskCommand, WaitCommand, HaveNewGoal, Impossible, ConfirmGoal, WaitConfirm, ReportArrival, SayOK, AnswerPlace, Thank, NoCommunication, AskAdvice, ReportLost and WaitAdvice.]
5 Discussion
As the software is designed as a BeNet based on the data-focused approach, and as we use a wireless robot, it is easy to analyse the system's behaviour online. We can trace the contents of the data objects and find problems in the scenario, vision and motion systems. In the decision system, information on the environment and the context for communication are used for both communication and action selection; they are updated based on the messages from the human and the visual information. In the example shown, we could make clear what information is necessary inside the robot's brain for communication and navigation. The modular description of the BeNet helps us to understand how it works and to merge dynamic scenarios for different situations.

BNJ helps the designer give a readable and brief description of a complex system for a communicating robot. Generally speaking, a program developed in BNJ is simpler than one developed in other existing programming environments, and it is easier for the designer to imagine what is happening inside the system while it interacts with the environment, since every data object is updated by a few control threads at regular intervals and can be described as an instance of an abstract class. In other parallel architectures [3, 4, 10, 16] this is not so simple, and it is crucial for extending an existing system by integrating diverse techniques. On the other hand, many kinds of architectures can be tested in BNJ by developing class libraries and unit libraries: if there are classes for representing objects useful for a specific architecture, it is easier to specify a concrete system based on that architecture.

The description of a system in BNJ is not specific to any hardware, and the behaviour is not affected by the properties of the hardware on which the BeNet is implemented, as long as the computation is in time. This is desirable when the designer wants to improve the competence of a robot by extending the software on more powerful hardware. Since the description of the system is hardware-independent and modular, many parts of it, classes and concurrent units, can be reused for robots which do other tasks. We cannot test the behaviour of a BeNet if we do not have hardware which can simulate it in real time; however, designing as a BeNet, we can discuss how the system interacts with its environment without considering the hardware. In the current implementation, we do not take account of optimization on a specific multiprocessor system: we cannot give information about the hardware to the compiler or the virtual machine. Developing good compilers, schedulers or preprocessors for accelerating BeNet simulation or emulation will be one of the problems to solve.

It will be more difficult to design and integrate scenarios when the robot is expected to acquire many more kinds of information from people, answer questions and do more complex tasks in the real world. We believe that our data-focused approach on the BeNet model is indispensable for designing such a robot. Unfortunately, we do not yet have knowledge sufficient for preparing every class for communicating robots, or for proposing a generic architecture for them. Our most important task is to find useful data representations and object classes by trying to realise more sophisticated communicating robots, by analysing how humans communicate and by implementing dynamic scenarios.
6 Concluding Remarks
In this paper, we presented an approach to realising a communicating autonomous robot. Data-focused design of BeNets is suitable for complex real-time systems, especially for the software of autonomous robots with a large repertoire of behaviours and communications. A BeNet described in Java determines the behaviour of the robot independently of the computer hardware, and it is readable, modular and reusable. Although realising a communicating robot is not straightforward, we could develop a communicating autonomous robot in a rather short time by integrating modules for motion generation, vision, action selection and communication into a real-time system. This was due to our programming environment BNJ and our class library.

References
1. S. Ahuja, N. Carriero and D. Gelernter, Linda and Friends, IEEE Computer, August 1986, pp. 26-34.
2. H. Asoh, S. Hayamizu, I. Hara, Y. Motomura, S. Akaho and T. Matsui, Socially Embedded Learning of the Office-Conversant Mobile Robot Jijo-2, Proc. of the 15th International Joint Conf. on Artificial Intelligence (IJCAI'97), pp. 880-885, 1997.
3. R. P. Bonasso and D. Kortenkamp, Characterizing an Architecture for Intelligent, Reactive Agents, AAAI Spring Symposium on Lessons Learned from Implemented Software Architectures for Physical Agents, 1995.
4. R. A. Brooks, A Robust Layered Control System for a Mobile Robot, IEEE Journal of Robotics and Automation, pp. 14-23, 1986.
5. R. A. Brooks and L. A. Stein, Building Brains for Bodies, Autonomous Robots, Vol. 1, pp. 7-25, 1994.
6. E. Gat, Integrating Planning and Reacting in a Heterogeneous Asynchronous Architecture for Controlling Real-world Mobile Robots, Proc. of the National Conf. on Artificial Intelligence, 1992.
7. B. Hayes-Roth, An Architecture for Adaptive Intelligent Systems, Artificial Intelligence, Vol. 72, pp. 329-365, 1995.
8. C. A. R. Hoare, Communicating Sequential Processes, Prentice Hall International, 1985.
9. A. B. Loyall and J. Bates, Real-time Control of Animated Broad Agents, Proc. of the Fifteenth Annual Conf. of the Cognitive Science Society, 1993.
10. T. Matsui, H. Asoh, I. Hara and N. Otsu, An Event-Driven Architecture for Controlling Behaviours of the Office Conversant Mobile Robot Jijo-2, Proc. of the IEEE International Conf. on Robotics and Automation (ICRA'97), April 1997.
11. T. Oka, K. Takeda, M. Inaba and H. Inoue, Designing Asynchronous Parallel Process Networks for Desirable Autonomous Robot Behaviours, Proc. of IROS'96, pp. 178-185, 1996.
12. T. Oka, M. Inaba and H. Inoue, Describing a Modular Motion System based on a Real-time Process Network Model, Proc. of IROS'97, pp. 821-827, 1997.
13. T. Oka, M. Inaba and H. Inoue, Programming Environments for Developing Real-time Autonomous Agents based on a Functional Module Network Model, Proc. of the Third ECPD Conf. on Advanced Robotics and Intelligent Automation, pp. 326-332, 1997.
14. T. Oka, J. Tashiro and K. Takase, Object Oriented BeNet Programming for Data-focused Bottom-up Design of Autonomous Agents, Proc. of Intelligent Autonomous Systems 5, pp. 399-406, June 1998.
15. Y. Shoham, Agent-oriented Programming, Artificial Intelligence, Vol. 60, pp. 51-92, 1993.
16. R. Simmons, A Layered Architecture for Office Delivery Robots, First International Conf. on Autonomous Agents, 1997.
17. T. Winograd, Understanding Natural Language, Academic Press, 1972.
18. BeNet Home Page, http://www.taka.is.uec.ac.jp/oka/BeNet.html