Abstract

Game orientated application programming interfaces (APIs) built using Java are intended to ease the development of Java games on multiple platforms. Existing Java game APIs tend to focus on providing graphical tools to the developer, to ease the graphical representation of games. However, one of the most difficult features to implement for any complex game is artificial intelligence (AI), for which there is a great lack of APIs. This document details the specification, design, and implementation of a Java API orientated towards providing AI, based on the minimax algorithm. Two APIs are produced, one for desktop platforms and the other for mobile platforms, both of which include a variety of optimisations. It is concluded that the APIs produced provide a very easy way to add AI based on the minimax algorithm to a game.
Acknowledgements

I would like to thank both Dr. Carsten Furhmann and Dr. Daniel Richardson for providing me with wonderful supervision throughout my project. I would also like to thank my mother and father for their love and support, without which I could not achieve such goals.
Contents

1 Introduction  7

2 Literature Review  9
  2.1 Application Programming Interface  9
    2.1.1 Java API Class and Object Design  11
    2.1.2 Encapsulation and Information Hiding  12
    2.1.3 Method Design  13
    2.1.4 Inheritance  15
    2.1.5 Interfaces  16
  2.2 Board Games  17
    2.2.1 Representation of Boards - Bitboards  19
  2.3 Minimax  21
    2.3.1 Alpha-Beta Pruning  24
    2.3.2 Bounded Lookahead  27
    2.3.3 Iterative Deepening Alpha-Beta  29
    2.3.4 Transposition Tables  30
    2.3.5 Killer Move Heuristic  31
    2.3.6 Extending Minimax  32
  2.4 Design Patterns  34
    2.4.1 Flyweight Design Pattern  34
    2.4.2 Strategy Design Pattern  35
  2.5 API Documentation  35
  2.6 Existing Products  36

3 Requirements Documentation  38
  3.1 Requirements Capture and Analysis Method  38
  3.2 Constraints  39
  3.3 Requirements Specification  40
    3.3.1 Board Games  40
    3.3.2 Artificial Intelligence  41
    3.3.3 Node  42
    3.3.4 Memory  42
    3.3.5 Portability Requirements  43
    3.3.6 Compatibility Requirements  43
    3.3.7 Documentation Requirements  44
  3.4 Conclusion  44

4 Design  45
  4.1 Introduction  45
  4.2 Architectural Design  45
  4.3 Minimax Package High-Level Decisions  47
  4.4 Node Hierarchy High-Level Decisions  49
  4.5 Memory Package  50
  4.6 Naming Conventions  51
  4.7 Accessors vs. Direct Access  51
  4.8 API Specification Documentation Design  52

5 Implementation  54
  5.1 Introduction  54
  5.2 Node Implementation  54
  5.3 Node Casting Problem  58
  5.4 State Representation  59
  5.5 HasState Interface  60
  5.6 Minimax  61
  5.7 Transposition Tables  63
  5.8 Minimax Package Architecture  64
  5.9 Persistent Storage  64
    5.9.1 GameME Persistent Storage  65
    5.9.2 GameSE Persistent Storage  66

6 Testing  67
  6.1 Test Plan  67
    6.1.1 First Stage  67
    6.1.2 Second Stage  68
    6.1.3 Third Stage  68
  6.2 Testing Outcomes  68
    6.2.1 J2ME hasState  69
    6.2.2 getNext/isNext Similarity  69
    6.2.3 Transposition Tables Effectiveness  70
  6.3 Conclusion  70

7 Conclusion  72
  7.1 Project Evaluation  72
  7.2 Possible Extensions  73
  7.3 Personal Evaluation  73

A  76

B  77
  B.1 Software Requirement Specification  77
    B.1.1 Introduction  77
    B.1.2 Overall Description  78
    B.1.3 Specific Requirements  80

C  81
  C.1 Requirement Elicitation Diagrams  81

D  85
  D.1 Test Plan  86
  D.2 Testing Stage Three Screenshots  87
  D.3 Optimisation Analysis  89
Chapter 1
Introduction

Application programming interfaces (APIs) are programming tools available to developers that are intended to ease the development of complex applications. One set of applications for which APIs exist is game applications. Even the simplest forms of games can result in complex applications. For example, an application for a relatively simple game such as Tic-tac-toe may include a graphical user interface, networking capabilities, the ability to save games, and artificial intelligence (AI) for the user to play against. All these features can become extremely complex, but the complexity of development can be reduced by the use of APIs.

There currently exist many APIs aimed at game applications, for a myriad of platforms, such as the Microsoft Game API (GAPI), Genuts, and Java's Game API provided within MIDP 2.0. However, nearly all existing game orientated APIs are primarily aimed at helping developers graphically enhance their games. APIs orientated towards AI have been neglected. At first glance this may seem justifiable, as there cannot possibly exist an AI algorithm that encompasses all games. The AI used within an action game would be very different to the AI used in a card game, for example. However, there is a subset of games which can successfully use the same AI algorithm, namely two-player board games such as Tic-tac-toe, checkers, and chess. Games such as these all use some form of the minimax algorithm, described initially by Shannon (1950), to implement AI. The IBM-engineered chess-playing computer Deep Blue, which so famously defeated Garry Kasparov in 1996, used a highly optimised form of the minimax algorithm to calculate its moves. Massive amounts of research and development have gone into producing ever stronger variations of the minimax algorithm. However, very little work has been done to develop an abstract implementation of the algorithm, which would allow any of these suitable games to use the AI. The work that has been done provides only minimal versions of the algorithm, lacking in optimisations, and hence in AI strength. The minimax algorithm is completely dependent on the computational power that drives it, hence the great importance of optimisations.

This project will attempt to develop an API which implements the minimax algorithm,
including optimisations. The API will be abstract enough to allow any suitable board game to implement the AI. The question then remains: for which platform would such an API be the most beneficial? At the time of writing, one of the most popular programming languages is Java, particularly for mobile platforms. At the current state of technology, mobile platforms are graphically limited. Board games are ideal for such devices, as the interface doesn't require massive amounts of computation.

Finally, a word about the development approach to be used by this project. As a result of the numerous optimisations that can be implemented in the minimax algorithm, it has been decided that the project will take the rapid application development approach. This model will allow many deliverables to be seen by the end-user quickly and, as long as time permits, constant optimisations to be included in the minimax algorithm.
Chapter 2
Literature Review

2.1 Application Programming Interface
An application programming interface (API) is a set of definitions and routines which help programmers develop software, and which provide a level of abstraction between the programmer's code and the API's implementation. [5] states, APIs (application programming interfaces) are a common way of hiding component specification and implementation details from users of those components. They are commonly used in the industry to divide software development work. This hiding of an API's implementation allows developers to utilise the API's functions without worrying about the underlying implementation. Additionally, the level of abstraction provided by an API reduces the dependency between the programmer's code and the API's implementation: a change in one will not affect the other. [6] reiterates this point with the thought experiment described below. If component A is written first, and a second component B, utilising component A, is written afterwards, the two components are working together. However, if component A is changed in some way, it is no longer guaranteed that the two components will continue to work together correctly, as is depicted by figure 2.1.
However, say component A has some specified API, and component A implements this API specification. Then if B works with component A's API specification, the two components A and B are independent of one another. Changes in A do not affect B, as long as the modified A' still honours the API specification, as depicted in figure 2.2.
[6] also raises the following point. If component B utilises component A's API, and then the API itself is modified, it is no longer guaranteed that components A and B will work together successfully.
The thought experiment raises the point that an API should be stable. That is to say, the implementation of the API can change, but the interface provided should not. [5] states, API consumers expect that the API will not change often, and if it does happen, they also expect that these changes will not severely affect them. In other words, changing an API can result in applications that utilised that API having to be heavily modified in order to function correctly once again. Therefore, the careful planning and development of an API is essential. As previously stated in the introduction, the aim of this project is to develop a Java API. Consequently, Java-specific API design should be primarily taken into consideration.
2.1.1 Java API Class and Object Design
The concept of objects in object-orientated programming is intended to assist programmers in developing software by helping them manage complex systems. Well designed objects should be understandable by a programmer; that is to say, the object should not be overly complex. [14] states that objects should, encapsulate an amount of complexity that can be readily grasped by human programmers. Furthermore, every class that instantiates objects should be named in such a way that the name conveys what the object's behaviour is, and what data the object stores. For instance, a class which instantiates objects that represent bank accounts may be named 'Account'. This name indicates to programmers that any object instantiated from this class will hold data related to bank accounts, and contain behaviours to modify such account data. If a complex system is broken down into well designed objects and classes, the overall system can be understood and managed. A well designed API will take object design into consideration, in an attempt to improve the user's ability to grasp the facilities provided as quickly and easily as possible.
2.1.2 Encapsulation and Information Hiding
Information hiding is the design principle of hiding design decisions in a program, such as the implementation of methods. Encapsulation is one form in which information hiding can be used within a program. [11] differentiates encapsulation and information hiding with the statement, Encapsulation is a language facility, whereas information hiding is a design principle. [11] also defines encapsulation as, the bundling of data, with the methods that operate on that data. The use of information hiding, through encapsulation, can greatly increase the robustness of an API. By forcing the user to use the public methods provided by an object to access its data, rather than accessing the data directly, the API gains control over how the data can be manipulated. The example below makes this point clearer. Assume a class Board is implemented as follows.

public class Board {
    public int widthOfBoard;
}

The user has direct access to the variable widthOfBoard. They will be able to assign any integer value to this variable, regardless of context. Hence, the value assigned could possibly be invalid, as is exemplified in the following code segment.

Board myBoard = new Board();
myBoard.widthOfBoard = -5;

In this case, even though -5 is an accepted integer value for the variable widthOfBoard, it is obvious that this could possibly lead to errors further into the application. If, however, access to these variables was only permitted through public methods, the object designer could implement restrictions to prevent these invalid inputs.

public class Board {
    private int widthOfBoard;

    public void setWidth(int userInput) {
        if (userInput > -1) {
            this.widthOfBoard = userInput;
        }
    }
}

The above prevents any value below 0 being assigned to widthOfBoard. An additional benefit of information hiding is that the implementation of an object's data can be altered without requiring the user's program, which uses the API, to be altered.

public class Board {
    private String widthOfBoard;

    public void setWidth(int userInput) {
        if (userInput > -1) {
            this.widthOfBoard = Integer.toString(userInput);
        }
    }
}

The variable widthOfBoard is now stored as a string, yet the user is not affected by this change whatsoever, and has no knowledge of the change in implementation.
2.1.3 Method Design
It is clear that an efficient and successful implementation of an API's facilities is important. What is also important is the API's interface provided to the user. The user can only work with the API through the set of interfaces provided, and should have no access to any implementation. It is imperative that an API's set of interfaces are designed in such a way that the user can easily understand what facilities are provided, and what is required to use these facilities. Joshua Bloch, who was once an architect in Sun's Core Java Platform Group, stated some guidelines for method design [3], which are summarised below.

1. Don't make the client do anything the API could do. For example, the API shouldn't force the client to import libraries that they won't directly use, but that the API requires. The API should attempt to encapsulate as much as possible, to avoid requiring the client program to perform unnecessary tasks.

2. Methods should not perform unexpected actions. The user should not be surprised by a method's behaviour.

3. The client should not have to invoke several methods to obtain data. A single method should be enough to acquire a specific piece of data.

4. Use of overloading should not be ambiguous, as this could cause complications which will be difficult to understand. Methods should require appropriate parameters, and return values of an appropriate type. They should aim to require the most specific input parameters possible. Conversely, methods should favour interface types over concrete types. The use of interfaces provides flexibility, as discussed later in 2.1.4.

5. Methods should use consistent parameter ordering in their signatures. The following code segment clarifies this point.

public void setValue(int userValue, String userString)
public void getValue(String userString, int userValue)

Both methods require the same input to perform different actions. However, the second method requests the input in a different ordering. This may cause frustration, as the user will have to remember specific orderings for both methods. It is preferable if both methods use the same ordering of parameters, as shown below.

public void setValue(int userValue, String userString)
public void getValue(int userValue, String userString)

6. Methods should avoid using long parameter lists, which can overly complicate the use of a method. Using helper classes, or breaking the method down into a series of methods, are possible ways of avoiding long parameter lists.

7. Methods should avoid returning types that demand exceptional processing. For example, a method which returns arrays should return an empty array, rather than null, for a failed query (a brief sketch follows this list).

8. Methods should 'fail fast'. That is to say, they should report errors as soon as possible after they occur, and ideally at compile time.

An API which follows these guidelines as closely as possible will greatly benefit the user's understanding of the API's features. An additional benefit is the user's reduced dependency on documentation to resolve confusion.
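As a brief illustration of guideline 7, the hypothetical class below returns an empty array rather than null when a query fails. The class and method names are invented for this sketch and are not part of any existing API.

import java.util.ArrayList;
import java.util.List;

public class Library {

    private final List titles = new ArrayList(); // titles held by this library

    /**
     * Returns every title containing the given keyword. An empty array,
     * never null, is returned when nothing matches, so callers can simply
     * iterate over the result without a null check (guideline 7).
     */
    public String[] findTitles(String keyword) {
        List matches = new ArrayList();
        for (int i = 0; i < titles.size(); i++) {
            String title = (String) titles.get(i);
            if (title.indexOf(keyword) != -1) {
                matches.add(title);
            }
        }
        return (String[]) matches.toArray(new String[matches.size()]);
    }
}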
2.1.4 Inheritance
An important concept in Java hierarchies is the idea of inheritance. A subclass is assumed to be a specialised version of its superclass, inheriting all non-private state and behaviour. The subclass may also provide additional state and behaviour beyond that of its superclass. Furthermore, a subclass may implement a number of inherited behaviours differently from its superclass. API design must take into consideration the consequences of allowing users to create subclasses of the API's classes. Firstly, subclasses should only be used in the appropriate context. Figure 2.4 provides an example of inheritance which doesn't seem appropriate, taken from the java.util package.
Figure 2.5, on the other hand, gives an example where inheritance is used in the right context.
Subclasses are dependent on the implementation details of the superclasses they extend. This must be taken into consideration when designing an API. [2] states two possible solutions for this problem, Design and document for inheritance, or else prohibit it. The use of inheritance can greatly increase the flexibility of an API, but it must be provided carefully, and only in the right context. If it is provided to the user, the documentation supplied with the API must be clear and precise on the usage of inheritance, and warn the user of any potential problems.
2.1.5 Interfaces
An interface is the name given to a particular type of Java construct which is of enormous use in the implementation of an API. Interfaces act like contracts between two pieces of code. They contain descriptions of methods, but do not actually implement these methods. Any class which 'implements' an interface is telling every other class in the program that it has implemented every single method described in that interface. The example below makes the importance of interfaces clearer. Assume a program contains a class Calculator, and an interface Summable. As is shown in figure 2.6, Calculator contains a method called addition. This method takes as input two objects, both of type Summable. The interface Summable contains a single, unimplemented method called getValue.
Any object which implements Summable can be used as input for the Calculator class' methods. Figure 2.7 shows two very different constructs, Integer and Node, both of which implement the interface Summable. Because of this, they are both forced to implement the method getValue. Hence, it is possible for the Calculator class to add, or multiply, the values of these two constructs together, simply because it knows that they both contain a getValue method. The Calculator class will not accept any type that does not implement Summable, regardless of any methods it implements.
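The arrangement in figures 2.6 and 2.7 can be sketched in code as below. Only the names Summable, Calculator, addition, and getValue come from the description above; the int return type and the method bodies are assumptions made for illustration.

// Summable.java - the contract: any implementor must supply getValue().
public interface Summable {
    int getValue();
}

// Calculator.java - works with anything that honours the Summable contract.
public class Calculator {
    public int addition(Summable first, Summable second) {
        return first.getValue() + second.getValue();
    }
}

// Node.java - an otherwise unrelated class becomes acceptable input to
// Calculator simply by implementing Summable.
public class Node implements Summable {
    private final int value;

    public Node(int value) {
        this.value = value;
    }

    public int getValue() {
        return value;
    }
}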
[9] states that the use of interfaces has several advantages, among them being an increase in the understandability of the code, and extended substitutability of classes in frameworks. Both these points are essential to good API design, increasing the ease of use of the API, and the flexibility the API provides to the user.
2.2 Board Games
[10] states, A game is a description of strategic interaction that includes the constraints on the actions the players can take and the players' interests, but does not specify the actions that the players do take. For example, the constraints on the actions players can take in a game of Tic-tac-toe are:

1. Each player can only place their symbol on the 3x3 Tic-tac-toe board when it is their move.

2. The players must move consecutively.

3. Each player may only make a single move at a time.

4. A game is won by a player if they manage to place three of their symbols horizontally, vertically, or diagonally next to each other on the Tic-tac-toe board.

Board games are a subset of games, where the game must consist of a pre-marked board, and a set of pieces that are used in conjunction with the board. It is unrealistic for this project to be aimed at all possible games that exist. Game theory divides the set of all games into specific subsets, based on certain conditions. Several of these subsets are described below, in order to pinpoint which subset of games this project will target.

Strategic Games and Extensive Games
A strategic game models a situation where each player chooses their strategy simultaneously, and only once. An extensive game models a situation where each player is able to make additional decisions at specified instances.

Perfect and Imperfect Information Games
A perfect information game models a situation where each player is fully informed of every other player's decisions. By contrast, an imperfect information game models a situation where each player is unaware of some information about the other players' decisions. An example of a perfect information game is chess. Each player can see every piece on the board, and no hidden information is present. Scrabble, on the other hand, is an example of an imperfect information game. Each player is unaware of the letters which every other player holds.

Zero-Sum and Non-Zero-Sum Games
A zero-sum game models a situation where one player can only be better off by making the other player worse off. In the context of two-player games, zero-sum games are those in which both players cannot win simultaneously, and cannot both lose, yet both players can draw. A non-zero-sum game models a situation where this is not the case. An example of a zero-sum game is Connect-4. If one player wins, the other player loses, yet both players can draw simultaneously. An example of a non-zero-sum game is the famous prisoner's dilemma, first proposed by Merrill Flood in 1951, and later formalised and defined by Albert W. Tucker. For completeness' sake, it is described in the hypothetical situation below.

Two suspects of a murder are arrested by the police. The police have insufficient evidence to convict them of murder, but have sufficient evidence to convict them of a lesser crime, such as carrying a concealed weapon. They separate the prisoners, so there is no means of communication between them. They then offer both prisoners the same deal. If one prisoner confesses, they will be granted immunity and can go free, whilst the other prisoner receives a ten-year sentence. If both prisoners confess, they will both serve two years. However, if neither prisoner confesses, they will both be convicted of the lesser crime, and both will serve six months. The table below summarises the game.
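The table itself is not reproduced in this copy, but it can be reconstructed from the description above as follows.

                           B confesses                      B stays silent
A confesses                Both serve two years             A goes free; B serves ten years
A stays silent             A serves ten years; B goes free  Both serve six months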
As stated previously in the Introduction, this project is aimed at developing an API, implementing the Minimax algorithm. As a result of this requirement, the API will primarily be aimed towards aiding the development of zero-sum, perfect-information, extensive games. A few examples of such games are Chess, Checkers, Tic-tac-toe, and Othello.
2.2.1 Representation of Boards - Bitboards
Every board game application will require the ability to represent the game board at any particular time, using programming constructs. One such construct is the bitboard. Bitboards are commonly used for representing the 8 by 8 chess board using a 64-bit number. Each bit of the 64-bit number represents a particular square on the chess board. When a bit is set to 0, it indicates that no piece is present in the square the bit represents. Accordingly, when a bit is set to 1, it indicates that a piece is located on that square. Figure 2.8 depicts an example of a bitboard that represents the initial positions of the white pawns in a game of chess.
Obviously, a single bitboard cannot represent the entire state of a chess board fully. It would not be possible to distinguish between pieces. Therefore, multiple bitboards are used in conjunction to represent the chess board fully. This approach can be used to represent all manner of game boards, and not just chess boards. The advantage of using bitboards is the ability to use bitwise operators, which act on binary numbers. The following are four examples of common bitwise operators.

1. NOT 00000000 -> 11111111

2. AND 00111100 AND 01100110 -> 00100100

3. OR 00111100 OR 01100110 -> 01111110

4. XOR 00111100 XOR 01100110 -> 01011010

Bitwise operators such as these provide extremely efficient operations on binary numbers. Using these operators allows the manipulation of an entire board in a single operation. For example, figure 2.9 shows three bitboards that could exist to represent the game position on the far right. Assuming it is X's turn to move, and the player wishes to move into the shaded square, then only the following two bitwise operations would be required to update the bitboards to fully represent the new game position.

1. New move binary number: 000001000

2. Empty position bitboard: 001001111 XOR 000001000 -> 001000111

3. Positions of Xs bitboard: 110000100 OR 000001000 -> 110001100

These operations provide extremely fast computation on the 64-bit numbers, as opposed to manipulating higher-level integers. One disadvantage of bitboards is that multiple binary numbers need to be used to represent a game board fully. This can become quite complicated when dealing with APIs, as the user may specify the creation of boards of any size. Consequently, implementing and debugging support for arbitrarily sized bitboards can become very complex. Another disadvantage is that most processors at the current time are 32-bit processors. These CPUs cannot efficiently manage 64-bit instructions; as [16] points out, they will have to split the variable into two 32-bit instructions before sending them to the processor, and to rearrange them later: that's a major drawback in performance. Even so, the benefits of using bitboards can easily outweigh other higher-level representations of game boards.
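The XOR/OR update shown above can be expressed directly in Java. The sketch below is illustrative only: it stores the 3x3 Tic-tac-toe board in the low nine bits of a long, and the variable names are assumptions rather than part of any existing API.

public class BitboardExample {

    public static void main(String[] args) {
        // Bit layout follows the binary strings used in the text above.
        long empty   = Long.parseLong("001001111", 2); // unoccupied squares
        long crosses = Long.parseLong("110000100", 2); // squares occupied by X

        long move = Long.parseLong("000001000", 2);    // the square X moves into

        empty   ^= move; // XOR removes the square from the empty-square bitboard
        crosses |= move; // OR adds the square to X's bitboard

        // Prints 1000111 and 110001100 (leading zeros are not shown).
        System.out.println(Long.toBinaryString(empty));
        System.out.println(Long.toBinaryString(crosses));
    }
}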
2.3 Minimax
Many zero-sum, perfect information, extensive games implemented on computers use some form of the minimax algorithm, which was originally described by Shannon (1950). The minimax algorithm is used to decide the next move for a certain player in a two-player game. Initially, a tree is created, with the nodes representing game states, and the arcs representing the possible moves from one state to another. The leaves of the complete tree represent the terminal states of the game, that is to say the layout of the game when it has finished. The root node represents the current game state. An example of a tree for a partially completed game of Tic-tac-toe is shown in figure 2.10.

From this complete game tree, each leaf node can be labelled with a value representing its worth to the current player. So, a leaf node in which the current player wins can be valued 1, a loss valued -1, and a draw valued 0. The tree is now partially labelled as shown in figure 2.11, with all leaf nodes valued.
The minimax algorithm uses these leaf node values, and propagates them upwards until the next move by the current player is decided. It does this by use of a recursive algorithm. First, the current player is labelled Max, and the opposing player Min. Each row of the tree is labelled according to which player's turn it is. If it is Max's turn, then the value of the node is the maximum of the values of the children of that node. If it is Min's turn, then the value of that node is the minimum of the values of the children of that node. The basic idea of the algorithm is that the player Max is attempting to maximise its chance of victory, and the player Min is trying to minimise Max's chance of victory. [Chan:1997wb] provides pseudo-code for this basic recursive algorithm.

Minimax(u) {
    // u is the node you want to score
    if u is a leaf
        return score of u;
    else if u is a Min node
        for all children of u: v1, ..., vn
            return Min{Minimax(v1), ..., Minimax(vn)}
    else
        for all children of u: v1, ..., vn
            return Max{Minimax(v1), ..., Minimax(vn)}
}

The algorithm begins with the root node, and halts when all nodes have been assigned a value. The move to be made by the current player (Max) is then decided by the algorithm. An example of this is shown in figure 2.12. In this case, the chosen move was to take the left branch, as the maximum of the root node's children was 0. Therefore, the chosen option was to attempt to end the game in a draw, rather than a loss.
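For a concrete illustration, the pseudo-code translates into Java as follows. The GameNode interface here is hypothetical, introduced only for this sketch; it is not the Node hierarchy developed later in this document.

// A minimal view of a game-tree node, assumed for this sketch only.
public interface GameNode {
    boolean isLeaf();      // true for terminal states
    int score();           // 1 win for Max, 0 draw, -1 loss (valid at leaves)
    GameNode[] children(); // states reachable in one move
}

public class Minimax {

    /** Returns the minimax value of node u, with maxToMove indicating
     *  whose turn it is at u. */
    public static int minimax(GameNode u, boolean maxToMove) {
        if (u.isLeaf()) {
            return u.score();
        }
        GameNode[] children = u.children();
        int best = maxToMove ? Integer.MIN_VALUE : Integer.MAX_VALUE;
        for (int i = 0; i < children.length; i++) {
            int value = minimax(children[i], !maxToMove);
            best = maxToMove ? Math.max(best, value) : Math.min(best, value);
        }
        return best;
    }
}

Choosing the actual move is then a matter of applying minimax to each child of the root and selecting the child with the highest value.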
2.3.1 Alpha-Beta Pruning
For a game such as Tic-tac-toe, the maximum number of nodes in the tree will be 9! = 362880. A tree of this size already takes a considerable amount of memory to store, and a more complex game will result in a huge minimax tree. Shannon claims that on average a game of chess has 30 legal moves at any point, and that in a single game of chess 40 moves are played on average. Consequently, for simple games such as Tic-tac-toe this algorithm is sufficient, but for more complex games the number of nodes to be analysed is far too great for it to be of any use. Optimisations to the basic minimax algorithm are needed. One such optimisation is called alpha-beta pruning. The idea behind alpha-beta pruning is to prevent the algorithm from analysing sections of the tree which we know we can safely ignore, as they will in no way affect the final result of the algorithm. To demonstrate this idea, take the minimax tree displayed below, which has its leaf nodes valued.
The alpha-beta algorithm keeps track of two numbers, alpha and beta, as it analyses the tree. The alpha value stores the value of the best move for Max that has been computed so far, and the beta value stores the value of the best move for Min that has been computed so far. If at any time during the analysis of a node the situation arises where alpha becomes greater than or equal to beta, then there is no need to evaluate the sub-tree of that node any further, and all the unanalysed children of that node are ignored. Figure 2.14 demonstrates how this idea works. For this demonstration, the initial values of alpha and beta are -infinity and +infinity respectively. Each node is labelled with a letter to allow ease of discussion.
After the analysis of B’s sub-tree, Min has selected the minumum value of E, F, and G, which in this case is -1. Max now at this point only has one choice for the node A, which is B. Therefore Max’s best value at this point is -1, and hence alpha = -1. Next the sub-tree C is analysed. Node H provides the value -2. The best value that Min can choose for node C at this point is -2. However, we know that alpha = -1, hence the 25
best possible move by Max for node A is -1. There is now no possibility that Max will select any value from node C. Max will select the maximum value of A’s children for the value of A, and Min will select the minumum value of C’s children for C. So, if nodes I, and J have a value higher or equal to -2, Min will not choose it for the value of C. Accordingly, Max will not choose any values lower then -1 from nodes B, C, and D, for the value of A, therefore there is no point in analysing sub-tree C anymore, as it will not be considered.
The same situation occurs for the sub-tree D. The beta value becomes -2 as K is analysed, and as alpha currently equals -1, nodes L and M are not analysed, as there is no possibility Max will select any value given to D. Finally, Max selects -1 for the value of A. In this case, the alpha-beta pruning method removed the need to analyse the four shaded nodes from figure 2.16, without any detrimental effect to the final result whatsoever.
[4] provides pseudo-code which describes this algorithm.

evaluateMin(u, B) { // u is a Min node; B is the best value already guaranteed to Max higher up
    Alpha = +infinity;
    if u is a leaf
        return the score of u;
    else
        for all children v of u {
            Val = evaluateMax(v, Alpha);
            Alpha = Min{Alpha, Val};
            if Alpha <= B then exit loop; // cutoff: Max will never allow this branch
        }
    return Alpha;
}
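Using the same hypothetical GameNode interface as in the earlier minimax sketch, an alpha-beta version might look as follows; again this is an illustrative sketch, not the implementation produced by this project.

public class AlphaBeta {

    /** Returns the minimax value of u, pruning branches that cannot
     *  influence the result. The initial call is
     *  search(root, true, Integer.MIN_VALUE, Integer.MAX_VALUE). */
    public static int search(GameNode u, boolean maxToMove, int alpha, int beta) {
        if (u.isLeaf()) {
            return u.score();
        }
        GameNode[] children = u.children();
        for (int i = 0; i < children.length; i++) {
            int value = search(children[i], !maxToMove, alpha, beta);
            if (maxToMove) {
                alpha = Math.max(alpha, value); // best guaranteed for Max so far
            } else {
                beta = Math.min(beta, value);   // best guaranteed for Min so far
            }
            if (alpha >= beta) {
                break; // remaining children cannot affect the final result
            }
        }
        return maxToMove ? alpha : beta;
    }
}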
2.3.2 Bounded Lookahead
As stated briefly before, the basic minimax algorithm is only suitable for very simple games, such as Tic-tac-toe. September states that the time complexity of the minimax algorithm is O(b^m), where b is the branching factor of the particular game being analysed, and m is the depth of the game. For a game such as chess, the branching factor is roughly 35. If we take the depth as 100, which is 50 moves per player, the time complexity is 35^100. It is completely unfeasible to analyse a game of this complexity. September states that with alpha-beta pruning, the time complexity becomes at best O(b^(m/2)). However, for a complex game such as chess, this is still too large. One solution to this problem is to use the concept of a bounded lookahead. This concept sets a fixed depth, after which no nodes are considered. The parents of the 'cut-off' nodes will then act as the leaf nodes, and will be given approximate values. These values can be calculated using a heuristic evaluation function, which analyses the state of the game for that particular node, and assigns a value according to the strength of the game position represented by that node. A very simple example of an evaluation function for the game Tic-tac-toe is shown in figure 2.17. Each section is assigned a value, with higher values being of more worth to a player. Then, using the following calculation, we can find a very primitive evaluation of a particular state.

Value of state = (sum of values of Max's positions) - (sum of values of Min's positions)
Figure 2.18 shows the value of a particular state that could arise in a game of Tic-tac-toe, using this evaluation function.
Now that the evaluation function is set, the bounded lookahead algorithm can set a depth limit, and use the evaluation function upon the new leaf nodes as shown by figure 2.19.
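A sketch of such an evaluation function for Tic-tac-toe is given below. The positional weights are an assumption made for illustration, since figure 2.17 is not reproduced here, and the board encoding (+1 for Max, -1 for Min, 0 for empty) is likewise chosen only for this example.

public class TicTacToeEvaluator {

    // Assumed weights for the nine squares: centre worth most, edges least.
    private static final int[] WEIGHTS = {
        3, 2, 3,
        2, 4, 2,
        3, 2, 3
    };

    /**
     * board[i] is +1 if Max occupies square i, -1 if Min occupies it,
     * and 0 if it is empty. The result is the sum of Max's position
     * values minus the sum of Min's position values.
     */
    public static int evaluate(int[] board) {
        int value = 0;
        for (int i = 0; i < WEIGHTS.length; i++) {
            value += board[i] * WEIGHTS[i];
        }
        return value;
    }
}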
There is a well known disadvantage of using the bounded lookahead method, known as the horizon effect. Simply put, the algorithm will not be able to detect threats beyond the bounded lookahead depth. There may be a case where the best move has been calculated using a bounded lookahead, but it becomes apparent later that choosing that particular move put the player at a huge disadvantage. This disadvantage couldn't be seen previously, as it was located beyond the bounded lookahead depth. One possible optimisation of the bounded lookahead search, in an attempt to alleviate the problems of the horizon effect, is the quiescence search. The main idea is that once the bounded lookahead depth has been reached by the algorithm, it continues to analyse deeper, but only analyses moves that may cause large changes to the overall evaluation. In the game of chess, for example, the quiescence search could be made to only evaluate moves which involve captured pieces, after the lookahead bound. Another possible solution, which was used by IBM's chess machine Deep Blue, is a technique called singular extension. The idea is that if a move is found to be vastly superior to any other possible move, using the bounded lookahead, then only that move is searched deeper. Both quiescence search and singular extension are examples of secondary searches.
2.3.3 Iterative Deepening Alpha-Beta
The time it takes to search a minimax tree to a specified depth D is not constant, but is dependent on a multitude of factors which may not all be obvious. It is most likely that searching a tree formed from a middle-game position will take a lot longer than searching a tree formed from an end-game position. In Tic-tac-toe, for example, an end-game tree will be significantly less complex than a middle-game tree, simply because there would be fewer legal moves available to each player. The idea behind Iterative Deepening Alpha-Beta (IDAB) [8] is that you specify a certain amount of time that the algorithm is allowed to search for. The algorithm will then initially search the tree to a depth of 1. If the time allowed is not yet used up, the algorithm will search again to a depth of 2, and so on until the allocated time runs out. A possible implementation is provided by the pseudo-code below, derived from [8].

while (time < maxTime) {
    value = AlphaBeta(depth, -INFINITY, INFINITY);
    depth++;
}

This seems like a very inefficient process, as the algorithm will search previously analysed nodes multiple times. Surprisingly, this is not the case. According to [8], for complex games IDAB can provide a great benefit. The strength of the AI is greatly increased the deeper it is able to search. For important end-game positions, the AI is allowed to search much deeper, as a result of the minimax tree being of a significantly reduced size for most games.
2.3.4 Transposition Tables
While the minimax tree for a game position is being analysed, it is highly likely that the same game state will be encountered multiple times at different points in the tree. Figure 2.20 demonstrates an example of such a case for the game of Tic-tac-toe.
The two states A and B, shown in red, are exactly the same, yet they have been derived from two different routes of play. Both states will have the exact same set of child nodes, and therefore form the exact same subtree. As a result, both subtrees will propagate the same value upwards. It would be far more efficient if only subtree A was analysed, and the resulting value was instantly used for subtree B, without actually analysing any children of B. Transposition tables allow us to do exactly this. Each subtree that has been previously analysed is stored in the transposition table, along with the value that it was assigned. When an identical subtree is met during further analysis of the minimax tree, the value is simply extracted from the table, and the subtree is deemed analysed. Using this method, the efficiency of analysing a minimax tree can be increased. However, it must be noted that the use of transposition tables carries an overhead. This overhead is the result of the continuous storing of nodes, and of searching the table for matching nodes. It is highly desirable that the transposition table is implemented using the most efficient storage and retrieval method possible, such as a hash table.
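A minimal transposition table can be built on java.util.Hashtable, which is available on both desktop Java and CLDC/MIDP devices. The idea of keying entries on a string describing the game state is an assumption for this sketch; a real implementation would also need a replacement policy once the table grows large.

import java.util.Hashtable;

public class TranspositionTable {

    private final Hashtable values = new Hashtable(); // state key -> Integer value

    /** Records the value computed for the subtree rooted at this state. */
    public void store(String stateKey, int value) {
        values.put(stateKey, new Integer(value));
    }

    /** Returns the stored value, or null if this state has not been analysed. */
    public Integer lookup(String stateKey) {
        return (Integer) values.get(stateKey);
    }
}

During the search, each node's state is serialised into a key before it is analysed; a successful lookup means its subtree can be skipped entirely.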
2.3.5 Killer Move Heuristic
During the analysis of the minimax tree, certain subtrees will cause alpha-beta cutoffs more regularly than others at each depth level. As a result of this, these subtrees will cause much better pruning of nodes for other subtrees of the same depth level. The idea behind the killer move heuristic is to store the best moves for each level. What is meant by the 'best move' is the move which produces more alpha-beta cutoffs than the other subtrees of that level. If these 'killer move' subtrees are analysed before other subtrees, alpha-beta pruning will become much more substantial for all other subtrees at the same depth as the killer move. Unfortunately, the killer move heuristic is very game dependent, and could potentially cause a loss in efficiency for some games. If implemented in an API, it would be desirable to allow the user of the API to decide whether they wish to use this facility.
2.3.6 Extending Minimax
[13] makes the very interesting point that, "even with complete information, minimaxing is not the best decision strategy for a number of reasons." One of these reasons is that the minimax algorithm only works optimally for complete game trees, and only if the opponent plays optimally. For human opponents, this is obviously not always the case. More often than not, human opponents do not choose the optimal move, and this can greatly affect the outcome of a game. [13] also makes the point that the minimax algorithm leaves a particular circumstance unresolved. That is, if the algorithm has to choose a 0 node, and there are multiple 0 nodes, how does the algorithm go about deciding which node to choose? Similarly, if the algorithm must choose a -1 node, which one does it choose if a multitude of -1 nodes are available? The chance that the algorithm must make these decisions is frequent in most games. The only way a Min node ever gets labelled with 1 is if all the children of that node are labelled with 1. Consequently, the likelihood that 0s will propagate throughout the tree is high, and hence the algorithm will face many situations where it must choose which 0 node to follow from multiple 0 nodes. [13]'s suggestion for this problem is that we need another measure of a node's value other than 1, 0, and -1 for win, draw, and loss respectively. The suggestion is, for a complete game tree, to use a secondary label on each of the Min nodes labelled with a 0 or a -1. There are a number of strategies for calculating this secondary label presented by [13], all of which are briefly described here.

PLY2 Count the number of nodes labelled 1 one level below Min's 0 or -1 node currently being analysed, and use this as the secondary label for that 0 or -1 node. The strategy works by realising that Min must select one of these child nodes, and the more 1s available, the greater the possibility that one of them will be selected.
PLY3 Count the number of nodes labelled 1 two levels below Min's 0 or -1 node currently being analysed. This strategy works by counting the number of 1s on the minimising level, and attempting to choose a path where Max can select a 1 on its next move.

SUM Count the number of nodes labelled 1 below Min's 0 or -1 node currently being analysed, up to the maximum depth. There is no depth limit in this case. This strategy attempts to maximise the chance of Max being able to select a 1 node further into the game.

LEAF Count the number of leaf nodes labelled 1 that stem from Min's 0 or -1 node currently being analysed. The strategy here is to try to maximise the number of winning combinations for Max.

Extensive testing has been done [13] pitting the basic minimax algorithm, the PLY2 variant, and the PLY3 variant against one another and against several other strategies, which are listed below.

RAND Choose the next node randomly.

BLOC Attempt to initially select a node labelled 1. If that isn't possible, choose the first node that would block the opponent from an immediate win. Failing that, choose a random node on the next ply.

PLY2B Same as PLY2, but chooses random nodes 10% of the time.

PLY3B Same as PLY3, but chooses random nodes 10% of the time.

Altogether there were seven different strategies. Three of them (Minimax, PLY2, PLY3) were classified as perfect strategies, simply because there was no element of randomness in their algorithms. The remaining four strategies (RAND, BLOC, PLY2B, PLY3B) were classified as imperfect strategies, due to the element of randomness they included within their algorithms. The seven different strategies were matched against each other over five different games. The games used were Tic-tac-toe, and four variants of Connect-Three (varying by the size of the board). The interesting conclusion derived from these experiments was that the minimax algorithm underperforms when matched against slightly imperfect strategies. As stated previously, the minimax algorithm assumes its opponent is also playing optimally, and hence when an opponent does not, the minimax algorithm is led into positions it did not analyse fully. Minimax tended to perform worse than the other two perfect strategies, PLY2 and PLY3, especially as the game became more complex. [13] states "The technique of using secondary labels seemed to get more powerful as the size of the game increased and for games with fewer draws." However, all these tests were carried out on complete game trees. For very complex games, such as chess, heuristics would have to be used to evaluate leaf nodes. This is one area on which the project will attempt to expand, offering developers a choice of variants of the minimax algorithm. An implementation of randomness could also be added, such as in the PLY2B and PLY3B algorithms. This could be used to vary the level of difficulty within the game itself, at any point, by adjusting how many of the moves chosen by the algorithm are chosen randomly. Obviously, for more complex games, an evaluation function will need to evaluate leaf nodes as a bounded lookahead is used, and the developer will need to make sure that the evaluation function is implemented effectively if they wish to work with the extended minimax algorithm.
2.4 Design Patterns
Design patterns are standard solutions to common object-orientated design problems. Instead of focusing on individual entities, design patterns focus on collections of objects, and detail the interconnections between these collections. Using existing design patterns can help to decrease the development time of a project, by providing tried and tested paradigms. Two particular patterns, aimed at dealing with systems containing multiple algorithms and many similar objects, are described below.
2.4.1 Flyweight Design Pattern
The flyweight design pattern, is intended to help reduce the resources required by a program. It is used in the situation when there exists a very large number of objects that all share some invariant information, which is constant amongst them all. This
information can be removed from each object, and referenced in a separate flyweight object, as depicted in the diagram below.
The flyweight pattern helps to reduce redundancy, and more importantly, reduces the memory required by the system, as the shared data is only stored once. The flyweight object can also hold extrinsic information, which is determined by context, and can be calculated when required. However, this may increase the computation time required by the program, as the extrinsic value must be calculated every time an object requests it.
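As an illustrative sketch of the pattern (the class names below are invented, not taken from any existing API): many Piece objects can share a single immutable PieceType flyweight that holds the data which never varies between them, while the varying, extrinsic data stays in each Piece.

import java.util.Hashtable;

// The flyweight: intrinsic state shared by every piece of the same kind.
class PieceType {

    private static final Hashtable CACHE = new Hashtable();

    private final String name; // e.g. "pawn"
    private final int value;   // e.g. its material worth

    private PieceType(String name, int value) {
        this.name = name;
        this.value = value;
    }

    /** Returns the single shared instance for this kind of piece. */
    static PieceType get(String name, int value) {
        PieceType type = (PieceType) CACHE.get(name);
        if (type == null) {
            type = new PieceType(name, value);
            CACHE.put(name, type);
        }
        return type;
    }

    String getName() { return name; }
    int getValue()   { return value; }
}

// Extrinsic state: the square occupied varies per piece, so it is kept
// outside the shared flyweight.
class Piece {
    private final PieceType type;
    private int square;

    Piece(PieceType type, int square) {
        this.type = type;
        this.square = square;
    }
}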
2.4.2 Strategy Design Pattern
The strategy design pattern decouples an algorithm from its host, and encapsulates the algorithm in a separate class. Simply put, an object and its behaviour are separated and put into two different classes. Using this design, the algorithm used by the object can be switched at any time. The strategy design pattern is useful when a program contains several objects that are all basically the same, and differ only in behaviour.
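In the context of this project the pattern suggests separating the search algorithm from the player object that uses it, so that minimax variants can be swapped at run time. The sketch below reuses the hypothetical GameNode interface and Minimax class from the earlier examples; all other names are invented for illustration.

// The strategy: any search algorithm that can pick a child of the root.
public interface SearchStrategy {
    int chooseMove(GameNode root); // index of the chosen child
}

// One concrete strategy; others (alpha-beta, iterative deepening, ...)
// would implement the same interface.
class PlainMinimaxStrategy implements SearchStrategy {
    public int chooseMove(GameNode root) {
        GameNode[] children = root.children();
        int bestIndex = 0;
        int bestValue = Integer.MIN_VALUE;
        for (int i = 0; i < children.length; i++) {
            int value = Minimax.minimax(children[i], false); // Min moves next
            if (value > bestValue) {
                bestValue = value;
                bestIndex = i;
            }
        }
        return bestIndex;
    }
}

// The host: its behaviour can be switched at any time.
class AiPlayer {
    private SearchStrategy strategy;

    AiPlayer(SearchStrategy strategy) {
        this.strategy = strategy;
    }

    void setStrategy(SearchStrategy strategy) {
        this.strategy = strategy;
    }

    int nextMove(GameNode root) {
        return strategy.chooseMove(root);
    }
}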
2.5 API Documentation
Any API requires accompanying documentation to be provided for its end-users. Sun Microsystems provides API developers with a tool called Javadoc, which semi-automatically generates API documentation. The documentation generated resembles the form of the API specification of the core Java APIs, developed by Sun Microsystems themselves. This resemblance helps to maintain consistency throughout the Java developer community. For example, if an experienced Java developer uses a third party Java API which provides an API specification generated using Javadoc, the developer will have absolutely no problem using the layout of the documentation. [Oak:vx] makes the point, Much of the credit for Java's growth can be attributed to Javadoc, because it takes a lot less time for developers to learn new APIs if they have good API specifications on hand. Javadoc generates the specification using special 'doc comments' provided in the source code of the API. These comments are initiated with /**, and terminated with */. Additionally, numerous tags can be used within the doc comments to further enhance the specification.
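For example, a doc comment on a hypothetical method might look as follows; @param and @return are two of the standard Javadoc tags.

/**
 * Sets the width of this board.
 *
 * @param width the new width of the board; must be zero or greater
 * @return true if the width was accepted, false if it was rejected
 */
public boolean setWidth(int width) {
    if (width < 0) {
        return false;
    }
    this.width = width;
    return true;
}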
2.6 Existing Products
A few existing technologies related to the project have been found, but most Java game APIs were found to be graphically orientated. Only one API focused on board games, and even that was aimed at providing graphical facilities to the end user. No Java APIs were found which provided any form of minimax to the end user. In order to learn about the implementation of the minimax algorithm within Java, concrete applications had to be evaluated rather than APIs; for example, applications that only supply one game, with an inflexible implementation of the minimax algorithm.

Java Board Game API - JABA
JABA does provide information on how boards can be represented, in this case using a 2D array. However, the project quickly took the approach of providing graphical representations and interactions to the end-user, and little else could be learnt from the API. Statements such as [15], Another element which is outside the scope of this design is computer-players. quickly established that JABA would not provide much insight for this project. [15] summarised JABA with the comment, What JABA is required to do then, is provide a generic but flexible set of classes that will allow the general, underlying management of the board game to be constructed quickly, allowing the developer to concentrate more on the specifics such as the rules.

Java Game API (MIDP 2.0)
MIDP 2.0 includes, in its updated collection of APIs, the javax.microedition.lcdui.game package. This package contains five classes [7], which make up the suite of tools known as the Java Game API. Note that this API is not available for desktop platforms, and is primarily aimed at mobile devices. As with most of the other game APIs available, this API provides graphical efficiencies to applications. It improves the performance of the application by minimising the work needing to be done in Java, as it passes most of this computation to standard low-level graphics classes from MIDP.

Genuts
Genuts is an API primarily aimed at aiding the development of online games. These games must be resource-restricted, to allow their use on as many different platforms as possible. As [1] states, Genuts is primarily intended for sprite-based games conception. It does not provide any game representation facilities, and hence is far different from what this project hopes to achieve.

Hard-coded minimax for specific games
Ironically, the most helpful information derived from existing software, for this project, came from concrete applications rather than APIs. A number of separate applications were evaluated. Most of the relevant information concerned the implementation of the minimax algorithm and its optimisations. Also relevant was how the input to the minimax algorithm was formed. Unfortunately, for concrete applications, the input is highly coupled with the representation of the game. For an API this is unsuitable, as the algorithm must be usable for unspecified games.
Chapter 3
Requirements Documentation

This chapter identifies the requirements elicitation process used during the project, and explains the reasoning behind the choice of process. This is followed by a discussion of the constraints imposed on the project, which ultimately affect the derived requirements. Following this is a description of the key high-level requirements that were derived, and a discussion of the decisions made during the elicitation of more detailed requirements. In addition to the discussion of derived requirements, there is a discussion of requirements that were omitted, and the reasoning behind these decisions. Furthermore, any conflicting, or difficult to derive, requirements are also discussed. The chapter concludes with a critical analysis of the requirements elicitation process used. The actual software requirements specification is located in appendix A.
3.1 Requirements Capture and Analysis Method
The Java API to be developed by this project is a relatively abstract final product. The end user is the set of all Java programmers who wish to develop board games for either desktop or mobile platforms. Hence, there is no definitive end user. As a result, deriving detailed requirements for the final product of this project was a difficult process. In order to do so, a handful of requirements elicitation methods had to be utilised. However, because of the time constraints imposed on the project as a whole, each elicitation method could not be used in great detail.

The initial requirements elicitation process used was a form of brainstorming with the end-user, to derive a more concrete set of high-level requirements from the problem description. Two possible strategies emerged for this process. The first was to brainstorm with a collection of existing Java developers, and the second was to brainstorm with only the initial supervisor of the project. If the first strategy was employed, a form of end-user selection would have to be applied, to ensure experienced Java developers, with the appropriate knowledge regarding the project, were chosen. This process would have been far too time consuming to be practical. Thus the second strategy was applied. The initial supervisor had developed the initial project description, and had all the required knowledge regarding the project. It was decided that from this point on, the initial supervisor would be regarded as the end-user of the project.

The next approach used was an event scenario based requirements elicitation process, as described in Sommerville [12]. This process is well suited to adding detail to high-level requirements, and was the natural choice at this stage. In order to increase the effectiveness of this technique, it was used in conjunction with the evaluation of existing APIs and minimax applications, some of which are evaluated in 2.6. The final process used was a variation of the scenarios technique, known as use-case diagrams. This technique is particularly aimed at object-orientated systems, and provided a tried and tested method of requirements elicitation for such systems. Use-case diagrams provide excellent representations of the system. They provide information such as how the system gets used, what the boundaries of the system are, and what lies outside of the system. However, this approach had to be used sparingly, as a major disadvantage is the time required to produce such diagrams.
3.2
Constraints
The set of constraints of any project will ultimately affect the requirements determined for that project. It was essential that all the constraints on this project were determined before any requirements were outlined. Only by taking these constraints into consideration could the requirements of the project be detailed. The constraints were determined from a variety of resources, including previous project knowledge, project planning, and brainstorming with the supervisor. Once the set of constraints had been agreed upon with the end-user, every requirement that was later detailed was measured against this set of constraints, to ensure its feasibility. As a result of the importance of the set of constraints, they have been outlined below.

1. Developers
The final product will be constructed by only a single developer, which will greatly increase the amount of time required to complete the project, relative to a group of developers.

2. Time
There exists a time restriction by which the project must be complete. As a result, the amount of functionality, and the quality of the final product, will be restricted.

3. Funding
No funding is provided for this project, and in turn any tools which may aid the development of the final product cannot be purchased.

4. Lack of testing platforms
The final product can only be tested on a small quantity of platforms.
5. Developer tools
As a result of financial constraints, only freely available tools can be used to develop the final product.

6. Widespread use of J2SE version 1.5
J2SE 1.5 is not compatible with all platforms at the current time of writing. To increase compatibility of the final product, the project will be restricted to development with J2SE 1.4.2. The additional programming features provided in J2SE 1.5 will not be available for use.

7. Widespread use of MIDP 2
MIDP 2 is becoming widespread among devices; however, at the current time of writing, many mobile devices still only provide MIDP 1.0, including the test devices available for this project. As a result of these factors, the final product will be restricted to be compatible with MIDP 1.0.

Equally as important is the scope of the project, which is available in the SRS located in appendix A. This section identifies what the software project will, and will not, do. After evaluating many existing game APIs, it was apparent that their primary focus was aiding the graphical implementation of games. The scope of this project identified that the project will not provide any tools to aid graphical development, as such APIs already exist. This identification of what the project will not do is very similar to a constraint imposed on the project, and was taken into consideration as significantly as the other constraints.
3.3
Requirements Specification
The remainder of this chapter identifies and discusses the key requirements, which were derived from the brainstorming process with the initial supervisor, and detailed using the scenario-based requirement elicitation methods described previously.
3.3.1
Board Games
Critical to the development of any board game application, not only board game APIs, is the ability to represent real-world boards using programming constructs. However, in this context, this is where the similarities between applications and APIs end. APIs are required to provide additional tools to the end-user, which ease the development of an application, and hide the implementation of any constructs the API creates. The initial board game requirement for the API is to provide the user with the ability to easily create board representations of any dimension. The user should be able to easily specify the dimensions of the board representation they require, and the API should successfully fulfil this request. However, the API is also required to implement the board representation in an efficient manner. It was initially decided at this point that the API would be required to provide multiple implementations to represent boards, and provide the user with a choice of implementation.
This requirement was derived from the evaluation of bitboards, described in 2.2.1. However, the requirement was later omitted, when it became apparent that the number of tasks required to be completed by the project was far too great, considering the time constraints. Implementing bitboards could potentially require a vast amount of time, as a result of resolving the problems of arbitrarily dimensioned bitboards, and debugging bitwise operations. The second set of requirements relevant to this area concerned the additional suite of tools the API would provide to the user, in order to manipulate the board representation. These requirements were primarily derived using the event based scenario technique, which evoked thought experiments conducted alongside the evaluation of existing products. By assessing existing applications, it could be seen which processes regarding the manipulation of board representations were used often, and why such processes were required. Examples of such frequently used processes are setting values on the board, obtaining values from the board, copying boards, and comparing boards. As stated previously, one of the requirements was for the API to provide a sufficient number of additional utilities, but not an overly abundant amount. Therefore, the required board manipulation tools provided by the API had to take this point into consideration. The required number of tools was limited to those that were deemed necessary, and tools that might not be used frequently for all board games were not required.
3.3.2
Artificial Intelligence
Initially, two high-level requirements were derived from the problem description. Firstly, the API must provide the user with the ability to implement AI in their application. Secondly, the AI must be based on the minimax algorithm. The requirements of the manner in which the minimax algorithm is provided to the user could not be specified, as it is wholly dependent on the design and implementation of the algorithm. As described in section 2.3, a multitude of optimisations have been developed for the basic minimax algorithm, such as alpha-beta optimisation, transposition tables, and killer move heuristics. As a result of the large number of possible optimisations, it was decided that only the implementation of a small sub-set of optimisations was mandatory. Any additional optimisations would only be included if time permitted. The required optimisations were chosen to be alpha-beta optimisation, described in 2.3.1, the bounded lookahead, described in 2.3.2, and the sum optimisation, described in 2.3.6. Alpha-beta optimisation is present in nearly every well developed minimax algorithm, and causes no detrimental effect to the final calculation, hence it was chosen to be a requirement. As stated in 2.3.2, a complex game will usually have a minimax tree that requires a huge amount of resources. In order to allow complex games to be implemented using the API, it was therefore decided to make the implementation of the bounded lookahead optimisation also a requirement. Lastly, it was chosen that the sum optimisation was to be implemented, to provide variation to the user. It was deemed that implementing such a feature would not require too much coding time, as it is a simple extension of the
minimax algorithm. It was also decided that the minimax algorithm was required to be de-coupled from board representations. This requirement would allow the minimax algorithm supplied by the API to be used not just for board games, but for a variety of other situations, such as being used as a mathematical tool. A requirement that was not chosen for this area was to provide a flexible minimax algorithm. That is to say, the minimax algorithm will be implemented under the assumption that it will not be adapted by the end-user. If the algorithm was implemented with such a goal in mind, the level of complexity of the implementation would be far too great for it to be completed in the time allocated for this project.
3.3.3
Node
Providing the minimax algorithms alone is not very useful for the end-user. All algorithms require some form of input in order to produce an output, and minimax is no exception. As described in 2.3, the minimax algorithm requires a tree, built up of interconnected nodes, where each node has the ability to store a value. The API could allow the user to create their own inputs from the tools already provided by Java; however, this would not be convenient. The user would have to learn how to create such inputs away from the API, and then use these inputs in conjunction with the API, which is not a desirable situation. Therefore, it was decided that the API would provide all the tools necessary to build an acceptable input for the minimax algorithms. The user would require the ability to create nodes which could store values. However, this is not yet sufficient. As a result of the defined minimax requirements, the nodes must also be capable of storing board representations and two integer values. The ability to store board representations is a necessity for board game minimax algorithms. Each node would represent a certain move of the game, and the only way this can be done is for each node to represent the board. As a consequence of the requirement to implement the sum optimisation, each node would also require the ability to store two values, as described in 2.3.6. Additional requirements were also extracted from the event based scenario technique. It became apparent that the user may require the ability to define their own node types that store additional values. For example, the user may develop a game which stores multiple boards at any one instance. The user should be able to adapt the tools provided by the API, in order to accommodate such a game. Furthermore, the new node type, if correctly constructed, should be able to be used for the minimax algorithm.
3.3.4
Memory
Another requirement derived from the problem description stated that the API must provide persistent storage facilities, such as saving and loading games to memory. These facilities should not simply be a replication of the persistent storage tools already provided by
Java, but rather be high-level utilities built using these tools. The memory facilities should be usable by end-users who have no knowledge of Java persistent storage management. Using the event based scenario technique, additional requirements that were considered useful to the end-user were introduced, such as the ability to save multiple games, and to delete saved games. Another set of requirements, regarding the undo/redo move facilities, was also derived at this point. Undo/redo tools should be handled far differently from the save/load game tools. They are required to be as efficient as possible, and hence should not be stored in persistent memory. The API is also required to allow the user to set a depth limit on how many previous moves are remembered. This requirement was derived as a result of the API being developed for multiple platforms. For example, a small, highly resource-limited phone would be unable to store as many past moves as a less resource-limited desktop platform.
3.3.5
Portability Requirements
One of the reasons Java was chosen as the platform on which to develop the API was the cross-platform facilities it provides. This requirement was made by the initial supervisor, during the detailing of the problem description. Even so, Java does supply a number of native tools, which are specific to individual platforms. These tools would provide a great increase in efficiency for any program which used them, but they greatly restrict the ability of the program to be used on another platform. It was decided that the APIs would be developed to offer as much portability as possible, and hence the use of native code would be restricted, most likely to none at all.
3.3.6
Compatibility Requirements
At the time of writing, J2SE version 1.5 has recently been made available to the desktop platform, and provides a large number of improvements to the Java programming language. However, any program using these new features, would not be compatible with any Java run-time environment of a lesser version. Therefore, it was decided that the API would be restricted to be built upon J2SE version 1.4.2, to provide the greatest degree of compatibility. The latest build of Java for the mobile platform is J2ME, which contains CLDC 1.1, and MIDP 2.0. This version of Java does not include any of the new features available in J2SE version 1.5. Furthermore, many mobile platforms still contain MIDP 1.0, and CLDC 1.0. In order to provide the API to the greatest number of mobile devices, the API will be restricted to be compatible with MIDP 1.0, and CLDC 1.0.
3.3.7
Documentation Requirements
From the analysis of existing APIs, and utilising the event based scenario technique, it was clear that the final product should be accompanied by supporting documentation. To maintain consistency with other Java APIs, this documentation is required to be constructed using the Javadoc tool. However, in order to limit the amount of work required for the final product, the supporting documentation should not be excessively detailed. This decision was made as a result of the time constraints imposed on the project, and because the quality of the API was considered to be of a higher priority than the documentation supporting it.
3.4
Conclusion
As stated previously, deriving requirements for an API aimed at such a large set of end-users is a difficult process under the time constraints imposed on the project. However, the method of using multiple requirements elicitation techniques turned out to be successful. The brainstorming technique was extremely quick at deriving high-level requirements, upon which the other techniques could expand. The use-case scenario technique was very effective at expanding the high-level requirements. However, it was not effective at deriving low-level requirements. This was unexpected, as the technique is primarily aimed at object-orientated systems. It provided insight into which systems required certain constructs, but was not helpful in deriving new, detailed requirements. The event based scenario technique, used in conjunction with the evaluation of existing software, was extremely helpful. It provided a deeper understanding of which processes are required from similar systems, and which processes are used most frequently. Unfortunately, it was a very time consuming technique, and was therefore only used to expand requirements that had been derived from the use-case scenario technique. As a result of the myriad of different features the API could implement, the requirements process had to be re-evaluated quickly before it was established. Many requirements had to be removed as a result of time constraints, and these decisions could only be made by prioritising the features that could be implemented.
Chapter 4
Design

4.1
Introduction
This chapter describes the high-level design problems faced during the design stage of the project, and the design decisions made for those problems. The chapter begins with the discussion of the high-level architectural design of the system, and continues with the high-level discussion of all the other major components of the final product. The detailed descriptions of how the system was implemented, and the low-level design decisions made, are given in chapter 5. A significant change in the project must be detailed at this point. During the design stage of the persistent storage facilities, it became apparent that the final product had to be divided into two separate products. One was an API developed for the desktop platform, and the other for the mobile platform. The reasoning behind this separation is discussed in section 4.5 of this chapter. Thankfully, the APIs were able to share vast amounts of code, and hence development time was not affected critically. As a result of the rapid application development model used for this project, the design and implementation stages continued without much difficulty.
4.2
Architectural Design
The initial task in designing both APIs was to establish a high-level architectural design, to help modularise the implementation. This would in turn aid the manageability of the implementation, as well as result in a more flexible API for the user. Using the requirements specification, it was evident that both APIs could be broken down into the same set of subsystems, comprising:

1. Node Hierarchy
2. State Representation
3. Minimax Algorithms
4. Memory Facilities

The UML package diagram in figure 4.1 represents a very high-level, isolated architectural design of either of the APIs. It does not take into consideration any other external systems, such as the existing Java packages used, or the user application.
From figure 4.1 it is apparent that the Memory package is quite separate from any other package of the API. This is due to the facilities it provides being very different from any facilities provided by the other packages. Therefore, it can be implemented without any knowledge of any other package in the system. It contains all memory-orientated facilities that are supplied by the API, which include saving and loading to persistent storage, and providing undo/redo move facilities. The Minimax package is also reasonably self-contained, relative to the other packages in the system. It contains all variants of the minimax algorithm, and any constructs necessary for optimisations, such as transposition tables. Furthermore, it contains several
interfaces, through which the minimax algorithms can be used. The only external constructs required by the Minimax package are the class Node from the Node Package, and the interface HasState from the State Package. The necessity of these two constructs is detailed in chapter 5. The Node Package is highly dependent on the Minimax Package. It contains a hierarchy of nodes of varying types, each specialised to be used for a particular minimax algorithm. Due to this high amount of coupling between node types and their corresponding minimax algorithms, an early design solution was to deeply couple the minimax algorithms with their corresponding node types. This allowed the implementation to be very simple, but proved to greatly hinder the API's flexibility, which is of key importance. It would have been very difficult for a user to develop their own node variation, and utilise a provided minimax algorithm, without knowing the complete details of the API. It was decided that maintaining a high degree of flexibility of the API was essential, and thus the decoupled solution was chosen. A high-level discussion of the decoupled solution is outlined below in section 4.4. The State Package contains all constructs required to represent the state of a board game, as well as the interface HasState, which is used throughout the API. The Node hierarchy is highly dependent on the State Package, and as a result the initial decision was to bundle the State facilities into the Node Package. However, this was later changed, to maintain consistency throughout the API, and to follow the object-orientated design guidelines described in the literature review.
4.3
Minimax Package High-Level Decisions
To allow as much abstraction as possible, the Minimax Package has been developed in such a way that its implementation is highly complex, with a large number of interconnections. Due to this complexity, chapter 5 details the package to a much greater extent. Figure 4.2 is a highly abstract view of the Minimax Package. It primarily presents the groups of constructs that exist in the package, and the basic group-to-group interconnections.
From the diagram, it can be seen that four minimax classes are available. The basic minimax algorithm is provided by the class Minimax, whilst the other three classes provide different variations of the basic algorithm. MinimaxNode and DoubleMinimaxNode are of special interest, providing far superior optimisations of the minimax algorithm than those of Minimax and DoubleMinimax. Initially, two possible solutions were available to the problem of high coupling between the Minimax package and the Node package. The first solution was to bundle both sets of entities into a single package. This would have greatly reduced the API's flexibility. A user would have great difficulty creating their own type of node to use for the minimax algorithms. The second solution was to de-couple the two packages with the use of interfaces. It can be seen that four interfaces are provided by this package. Each interface corresponds to a particular minimax class. For example, the interface DoubleMinimaxable corresponds
to the class DoubleMinimax, and the interface MiniNodeable corresponds to the class MinimaxNode. Each interface ensures that the input to the corresponding algorithm is of the correct node type. To clarify this point, recall that the basic minimax algorithm requires as input a tree, where every leaf node has been assigned a value. As the algorithm analyses the tree, the values are propagated upwards. Therefore, every node in the tree must be able to store integer values. This is exactly what the interface Minimaxable ensures. Any tree of type Minimaxable is stating to the program that it is built of nodes that can store integer values. This solution provides far more flexibility to the API than the previously suggested solution. A user will be able to define their own node type, and, provided they implement the correct interface, they can use it as input for a minimax algorithm. The final class in figure 4.2 is the class TranspositionTable. It is used by both MinimaxNode and DoubleMinimaxNode to perform optimisations using a transposition table, the details of which are in section 5.6.
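As an illustrative sketch of this decoupling (the method names getValue and setValue are assumptions made for this example, not confirmed signatures of the API), a user-defined node becomes valid minimax input simply by implementing the interface:

// Sketch of the interface-based decoupling; getValue/setValue are assumed names.
public interface Minimaxable {
    int getValue();           // value read by the minimax algorithm
    void setValue(int value); // value written as results are propagated upwards
}

// A user-defined node type, usable as minimax input because it implements
// the interface, regardless of whatever else it stores.
public class CustomNode implements Minimaxable {
    private int value;

    public int getValue() {
        return value;
    }

    public void setValue(int value) {
        this.value = value;
    }
}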
4.4
Node hierarchy high-level decisions
The first decision concerning the Node Package was whether to develop a single class, which could be used for all the minimax algorithms provided, or to develop multiple classes, each associated with only particular minimax algorithms. A single class would ease the use of the API, as the user would not have to decide which type of node to use, because only one would be available. There existed two disadvantages to this approach: lack of flexibility, and lack of efficiency. If the user desired to extend the singular node class, in order to develop their own variation of node, they would automatically inherit many, possibly unneeded, methods and states. They would not have the option to extend a certain subset of the node, and would most likely opt to develop their own node type without aid from the API. Secondly, not all the minimax algorithms will require all the features provided by the monolithic node class. A single minimax tree may contain thousands of nodes, therefore it is imperative that each node requires as few resources as possible. A monolithic node type would not be the most efficient structure to use in this case. As a result of these consequences of using a singular node class, the decision was made to develop a node hierarchy. This hierarchy is depicted in figure 4.3.
The top class Node does not implement any interface from the Minimax package, hence any tree created using objects of this type cannot be used as input for any minimax algorithm provided by the API. Node contains all the required methods to build a tree and traverse a tree, but does not store any values of any kind. The reasoning behind this approach was to provide as much flexibility to the user as possible. For example, the user may wish to develop a node type which stores bits rather than integers. They will be able to simply extend Node, and their new class will inherit all the methods required to create and traverse a tree, without any extraneous methods or states. A short summary of the classes is given below.

• IntegerNode - stores a single integer value
• IntegerStateNode - stores a single integer value, and a State object
• DoubleIntegerNode - stores two integer values
• DoubleIntegerStateNode - stores two integer values, and a State object

Noticeable from figure 4.3 is the lack of a StateNode type. One of the requirements of the API is to provide only the necessary facilities to the user. A StateNode type would not be able to be used for any provided minimax algorithm, as it would not be capable of storing integer values, hence it is not implemented.
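As a sketch of the bit-storing extension mentioned above (the class BitNode and its accessors are hypothetical; only the class Node itself is part of the API):

// Hypothetical user extension of Node: a node storing a 64-bit bitboard.
// All tree-building and traversal methods are inherited unchanged from Node.
public class BitNode extends Node {
    private long bitboard;

    public long getBitboard() {
        return bitboard;
    }

    public void setBitboard(long bitboard) {
        this.bitboard = bitboard;
    }
}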
4.5
Memory Package
Most games provide some sort of save game feature, which will maintain the saves even after the application has been closed. The only way this can be done is by storing the data in non-volatile storage. Java does provide many facilities which
enable programmers to store information in non-volatile memory, but their use can become rather complicated. The primary aim for the Memory Package is to provide a set of tools to the user which will allow them to save games made using the API to persistent storage. Additionally, the user should be able to use the tools with as little knowledge about Java persistent storage management as possible. However, a problem emerged at this point in the design stage. It was found that persistent storage for mobile platforms is handled much differently by Java than on the desktop platform. The only viable solution was to divide the project into two APIs, one for the mobile platform (GameME), and one for the desktop platform (GameSE). Taking this into consideration, further requirements emerged, such as performance requirement 8, which states that both APIs should attempt to provide their facilities to the user in as similar a manner as possible. As the implementation of persistent storage was the only element which differentiated the two APIs, the design of the Memory Package had to take this requirement into greater consideration. The creation of save games, and the method signatures provided, would have to be as similar as possible, without the user requiring any knowledge of the implementation of the tools. The final high-level decision made regarding the Memory Package was that only a single class, Memory, would be a member of this package. This class would provide all the required facilities to the user. It was also decided that this class would be implemented with the expectation of not being extended by a user's class, because of the very inflexible tools that would be used from J2SE and MIDP.
4.6
Naming Conventions
The naming convention used throughout the API to name classes, variables, and methods is of great importance, as indicated in 2.1. The API is intended to hide all implementation of its functions from the user, therefore the user must be able to easily understand what functions the API provides, and what these functions do. It is desirable to maintain consistency as much as possible, hence the naming convention used throughout the API should attempt to be consistent within itself, and with other Java APIs. Looking at the design architecture of the API, the main causes for concern are the Node and Minimax structures. These two structures are linked together in multiple, complex ways. The naming between the two structures must be consistent and clear. This section outlines the initial reasoning behind the naming convention used throughout the API, in particular within these two structures.
4.7
Accessors vs. Direct Access
Accessors are methods that directly manipulate the value of variables. There are two forms of accessors: getters and setters. Getter accessors retrieve the value of a variable
and pass the value back to the caller. Setter accessors modify the value of the variable. Use of accessors improves information hiding and encapsulation. The user does not need to know how variables are implemented. The naming convention currently used for accessor methods instantly indicates to experienced Java programmers what a particular method does. It is therefore beneficial if this naming convention is deployed in the Game API. The following table briefly outlines the current naming convention used throughout Java for accessors.

    Variable type            Getter accessor    Setter accessor
    boolean field x          isX()              setX(value)
    field x of any other type   getX()          setX(value)
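For illustration, applying this convention to a class with an integer field and a boolean field (both invented for this example) gives accessors of the following form:

public class GameSettings {
    private int boardSize;
    private boolean soundEnabled;

    public int getBoardSize() {                 // getter: "get" + field name
        return boardSize;
    }

    public void setBoardSize(int boardSize) {   // setter: "set" + field name
        this.boardSize = boardSize;
    }

    public boolean isSoundEnabled() {           // boolean getter: "is" + field name
        return soundEnabled;
    }

    public void setSoundEnabled(boolean soundEnabled) {
        this.soundEnabled = soundEnabled;
    }
}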
However, accessors decrease efficiency, which may be vitally important to the user. A more efficient, but less robust, way is to access the variables directly. The API should allow both the use of accessors and direct access to variables.

Tree Naming Convention
4.8
API Specification Documentation Design
The requirements for this project state that specification documentation must be supplied along with the API. A Java API specification is documentation accompanying the API, giving relevant and useful information about all the utilities the API provides for the user. Java has a standard form for its documentation, and this standard form can be easily reproduced with the help of the Javadoc tool, which is supplied by Sun. The Javadoc tool extracts information from the source code of the API, and builds the specification. However, it is not sufficient to simply leave the API specification in this state, as no extra information has been provided to the user. The user will only see the names of the classes, methods, and fields available to them. Each class, method, and field must be accompanied by some comments, giving extra and useful information to the user, which is essential for any good documentation. This extra information is written into the source code as doc comments, which are much the same as normal code comments, but have specialised tags and parameters. Javadoc is able to identify the doc comments, and compile them into the specification. Sun has supplied a 'style guide' to API programmers for writing doc comments. One of the major requirements of this project is to attempt to develop an API which maintains consistency with other Java APIs. That is to say, it should look and act as a Java developer would expect. Consequently, the doc comments written in the source code of the Game API will follow the guide provided by Sun as closely as possible. However, there will be some adaptations to the style guide, and these are explained below.
One adaptation to the style guide will be the extensive use of code examples. Code examples are small segments of code, placed in the specification to help users understand how they may be able to use the facilities provided by the API. The J2SE API specification only contains code examples for the more complicated constructs. The J2ME API specification utilises code examples to a greater extent. The reason for this difference in code example usage is the difference in popularity of the two variations of Java. J2SE is used much more widely than J2ME, hence there is a far greater amount of literature and information about it than about J2ME. The Game API specification will have to use code examples to a much greater extent, as no external information will be available outside the specification. The programmer will not gain any help from any other source, so they must be able to fully understand how to utilise the API. The code examples will need to be small and precise, and contain no irrelevant code. The examples must have very clear variable names, indicating what their purpose is. The examples must also be accompanied by comments to explain the example fully.
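As a sketch of the intended style, a doc comment for the state-copying method described in chapter 5 might combine a description, a small code example, and the standard Javadoc tags as follows (the wording of the comment itself is illustrative only):

/**
 * Copies every cell value from sourceState into this state.
 * <p>
 * Example usage:
 * <pre>
 *     State original = new State(3, 3);
 *     State copy = new State(3, 3);
 *     copy.copyState(original); // copy now holds the same cell values as original
 * </pre>
 *
 * @param sourceState the state whose cell values are copied into this state
 */
public void copyState(State sourceState) {
    // implementation omitted from this sketch
}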
Chapter 5
Implementation

5.1
Introduction
This chapter builds upon the high-level design decisions outlined in chapter 4, and describes the low-level problems faced, and decisions made, during the implementation of the project.
5.2
Node Implementation
Every minimax algorithm requires the use of a tree, where each node in the tree may have an arbitrary number of children. For example, one node may have 3 children, while another may have 40. As stated previously, the Node class contains all the necessary implementation to create such a tree, and traverse this tree. Therefore, the initial problem was how exactly the Node class would implement such a tree structure. Four possible solutions emerged.

1. Use the J2SE 1.4.2 package javax.swing.tree
J2SE provides users with the ability to create trees where each node may have an arbitrary number of children. The class javax.swing.tree.DefaultMutableTreeNode contains all the facilities required to build a tree with an arbitrary number of children. It would be possible to extend this class, to define nodes that contain values and states as required. Unfortunately, the javax.swing package, or any equivalent package, is not available for the mobile platform.

2. Linked Lists
J2SE contains the class java.util.LinkedList, which helps users develop linked list structures. Linked lists can be used to develop the tree structure required. Each node would contain two pointers, one pointing to a child node (if one exists), and the second pointing to the next sibling (if one exists), as is depicted in the following diagram.
A leaf node is distinguished by checking whether its child pointer is null (i.e. it does not point to another node). However, the mobile platform is not supplied with any sort of linked list structure. Therefore, one would have to be developed in order to use this solution. Even so, in order to traverse the tree, additional utilities would have to be developed. In order to reach a specific child node, each sibling before it would have to be visited.

3. Use Vectors
J2SE contains the java.util.Vector class, which is also present in CLDC with the same package hierarchy naming. The required tree structure can be developed using vectors. Each node would point to a vector, which is able to hold all its child nodes. Each child node in turn is able to do the same, as depicted in the following diagram.
Vectors are able to grow whenever required, so a node may have any number of children. There is also the advantage that, in order to reach a specific child of a node, only the index of the child in the vector is required.

4. Use Arrays
Each node could hold an array of pointers, whose size is set by the user. The array would be large enough to handle the maximum possible number of children a node can have for a specific tree. For example, for the game tic-tac-toe, a node may have a maximum of 8 children at any one point. This solution provides an extremely straightforward implementation, reducing the coding time required. However, it is a very inefficient solution, which also requires the user to provide additional information.
Conclusion

One of the deciding factors when choosing between these solutions was performance requirement 8, which states that the GameME API and GameSE API should try to remain as consistent as possible. If solution 1 was chosen, a separate solution for GameME would have to be constructed, and it was unknown whether the two methods of tree construction would be consistent with one another. It would be easier for the user of the API to learn one method of tree construction, and be able to use both GameME and GameSE, rather than having to learn a different method for each API. Solution 4 was designed as a fallback, in case the implementation of the other designs failed. It was too inefficient to be considered initially. Both solutions 2 and 3 seemed promising, as developing a linked list structure would be possible. However, the vector tools were already available for use on both the desktop and mobile platforms. In order to decrease the coding time required, solution 3, using vectors, was chosen for the implementation of the node structure.
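A minimal sketch of such a vector-based node is given below; apart from getParent, which is discussed in section 5.3, the method names are assumptions made only for this illustration.

import java.util.Vector;

// Sketch of a vector-based tree node: each node keeps its children in a Vector,
// so a node may have any number of children and each child is reachable by index.
public class Node {
    private Node parent;
    private Vector children = new Vector();

    public void addChild(Node child) {
        child.parent = this;
        children.addElement(child);
    }

    public Node getChild(int index) {
        return (Node) children.elementAt(index);
    }

    public int getChildCount() {
        return children.size();
    }

    public Node getParent() {
        return parent;
    }

    public boolean isLeaf() {
        return children.isEmpty();
    }
}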
5.3
Node Casting Problem
During the development of the Node hierarchy, a method overriding problem arose. Many methods in Node would be inherited by all other classes in the hierarchy (refer to figure 4.3 from chapter 4). The problem was that many methods in Node had problematic return types. The UML diagram of the class Node below explains this problem more clearly.
The four methods indicated are all essential for every class in the hierarchy. However, the return types of these methods all add an extra level of complexity for the user of the API. The following segment of code illustrates this point.

StateNode node1 = node2.getParent(); //INVALID - REQUIRES CAST

No matter what the type of node2 is, the getParent method will return an object of type Node. Java will require the programmer to cast the object returned by the getParent method to the appropriate type.

StateNode node1 = (StateNode) node2.getParent(); //VALID

This forced cast is not desirable. The user of the API will have no knowledge of this constraint until they read the documentation supplied with the API. Two solutions presented themselves for this problem. The first was to implement the methods separately within each class, each returning its own type. However, one constraint in J2SE 1.4.2 is that a method overriding a parent method must have the exact same return type. In order to implement this solution, the classes could not be implemented in a hierarchy, resulting in a very inflexible API, which would be difficult to extend or adapt.
The second solution was to use J2SE 1.5. This version of Java allows a method that overrides a parent method to have a different (covariant) return type. However, any application developed using the API could then only be run by J2SE 1.5 or higher. This would not pose too much of a problem, as J2SE 1.5 is available for most desktop platforms. Unfortunately, this feature is not currently present in any version of J2ME. Consequently, this problem would still exist for GameME. Implementing this solution for GameSE alone would increase the inconsistency between GameSE and GameME, which this project intended to keep as minimal as possible. Taking these factors into consideration, it was decided that forcing the programmer to cast objects would be the best solution. The advantages gained from utilising the other possibilities were not great enough to implement them, especially with the time available to develop the API.
5.4
State Representation
The State class is used to create, and manipulate objects that represent game boards at an instance in time. In both APIs, states are stored as 2D integer arrays, representing a checkered game board, as depicted in the diagram below. Each cell is able to hold a single integer value.
As stated in the previous chapter, State objects will normally be used in conjunction with IntegerStateNode and DoubleIntegerStateNode, both of which allow the user to instantiate nodes containing states. State objects are vital to any board game which uses the APIs; consequently, the class State provides a variety of tools to manipulate these objects. The more interesting tools are described below.

• clearState()
This method sets every cell value in the 2D array to 0. It is useful for resetting a state without having to create a new object.
• copyState(State sourceState)
This method copies every integer value from sourceState to the corresponding cell of the state which called the method. During the initial test phase, it was found that many users would attempt something similar to the following code.

State state2 = new State(3,3);
state2 = state1;

This causes the state2 variable to point to the same object as state1, and disregard the object that has just been created. Now if state2 manipulates the object it points to, it also manipulates the object state1 points to, as is made clear in the following diagram. The user's usual intention was to have two separate states with the exact same values. This code should instead be written as:

State state2 = new State(3,3);
state2.copyState(state1);

• compareX(State originalState) and compareY(State originalState)
These two methods are extremely useful for quickly comparing the difference between two states. They return the x-position and y-position, respectively, of the first cell which does not have the exact same value in both boards.
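For example, the comparison methods could be used to locate the cell that changed between two successive states, such as the move just played. The setCell accessor used below is a hypothetical cell setter, included only to make the example complete.

State before = new State(3,3);
State after = new State(3,3);
after.copyState(before);
after.setCell(1, 2, 5);              // hypothetical: place the value 5 in cell (1, 2)

int movedX = after.compareX(before); // returns 1, the x-position of the changed cell
int movedY = after.compareY(before); // returns 2, the y-position of the changed cell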
5.5
HasState Interface
As described later, the transposition tables implemented distinguish between nodes only by their state. If the minimax tree contains nodes that do not hold states, the API would cause an error when attempting to look a node up in the table. A variety of solutions presented themselves to this problem. One solution was to divide the minimax algorithms further, into those which implement transposition tables, and those that do not. It was decided, however, that this would overly fragment the Minimax Package, and would most likely cause the user confusion as they attempt to decide which algorithm to use. Another solution was to request another input from the user during the minimax initialisation, which would indicate whether they wish to use transposition tables or not. This would require the user to have knowledge of the transposition table implementation, which is not desirable. The third, and chosen, solution was to allow the minimax algorithms to automatically check whether the nodes in the minimax tree contain states, using an interface, namely HasState. Any node type which implements this interface is indicating that it has the ability to store states, and hence transposition table optimisation can be used.
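A sketch of what this interface might look like, and how an algorithm could use it, is shown below; the method name getState is an assumption made for illustration.

// Sketch of the HasState interface: a node type implementing it signals that it
// holds a State, and may therefore be used with the transposition table.
public interface HasState {
    State getState();
}

// Inside a minimax algorithm, the check could then take the following form.
if (HasState.class.isInstance(currentNode)) {
    State state = ((HasState) currentNode).getState();
    // ... consult the transposition table using this state ...
}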
5.6
Minimax
Each minimax algorithm can be broken down into three main subroutines, as shown in the following diagram.
The maximum and minimum subroutines perform the bulk of the minimax process, as described in the literature review. The third subroutine, minimaxStart, takes in the required user input, and initialises the minimax algorithm by calling the maximum subroutine. Maximum can only call minimum, which in turn can only call maximum. After the analysis is complete, maximum returns the process back to minimaxStart. At this point, the algorithm must decide which node it wishes to select as the next move. This is where minimaxStart performs its second task, by selecting the node according to how it is implemented. It is clear it will select the node with the greatest value, but if it encounters a second node with an equal value, it needs some method of choosing between the two. The following diagram illustrates this point.
It is here where Minimax and DoubleMinimax differ. Minimax will simply choose between best moves at random. Initially the algorithm would select the first best move it encountered, but to make the AI slightly less predictable, it was decided that a random choice would be better. DoubleMinimax provides the modification to the minimax algorithm known as SUM, which is described in the literature review. Simply put, every node contains a secondary integer value, which stores a summation of how many wins (according to max) are present in that node's subtree. DoubleMinimax will then select between nodes of equal value using this secondary value. If the nodes are still exactly equal, DoubleMinimax chooses one at random. All of the above is true for the difference between MinimaxNode and DoubleMinimaxNode as well. Each maximum and minimum subroutine implements alpha-beta optimisation, to prevent the analysis of nodes that will not affect the final result. Nevertheless, it was found that most of the computation time was spent creating the minimax tree, rather than analysing it. Both Minimax and DoubleMinimax take an already built minimax tree as input, with every leaf node assigned a value. It became apparent that, rather than letting the minimax tree be created by the user, it would be far more efficient to allow minimax to create the tree as it analyses it. This would prevent the creation of nodes that are going to be pruned by alpha-beta optimisation. Not only would this save computation time, it would also be much more memory efficient. This method has been implemented in MinimaxNode and DoubleMinimaxNode. However, the initiation of these algorithms requires far different input from the user. They require the current state of the game, and also knowledge of how to create the correct node at any instance during analysis. In order to do this, the following four methods are required to be implemented by the user.

gameEnd(node)
Every board game has a finite number of states which represent final game positions. In the game checkers, for example, if a board contains only one kind of piece, and hence no pieces for the other player, the game has ended. This method takes in a node, and checks to see if the state held in that node represents an end game position of any kind. This information is used by the algorithm to check if a particular node
is a leaf node, as all end game positions are leaf nodes, and therefore have no children.

isNext(turn, originalNode, previousNode)
This method is used by the algorithm to check whether the parent node, originalNode, has any more children that have not been created. The node previousNode is the last created child of originalNode. Using this information, it is possible to determine whether originalNode has any more children that it has not yet created. If so, the method returns true.

getNext(turn, originalNode, previousNode)
This method is used in conjunction with isNext by the MinimaxNode and DoubleMinimaxNode algorithms. Once isNext has assured the algorithm that another child node exists, this method creates the appropriate child node, and returns it to the algorithm. It became apparent that isNext and getNext both perform near-identical tasks, and one of them could be removed. Unfortunately, time constraints prevented any further optimisation, and both methods are still required.

assignValue(node, player)
This method is used by the algorithm to assign a value to the node provided, according to the player provided. For games that have full trees, this method can be implemented very simply by checking if the appropriate player won the game. For more complex games, the minimax tree will likely have leaf nodes that do not represent end game positions, as a result of the cutoff bound. In these cases the user will have to analyse the state represented in the provided node, and assign a value to it using heuristics. Once these four methods are successfully implemented by the user, the algorithm only requires a single node representing the current state of the game. This form of optimisation had a tremendous impact on the efficiency of the algorithm.
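As a partial sketch of what a user might supply for 3x3 tic-tac-toe, the gameEnd and assignValue methods could take a form such as the following. The getState, getCell and setValue accessors, and the convention of 0 for an empty cell with the players identified as 1 and 2, are assumptions made purely for this illustration.

// Partial sketch of the game-description methods for 3x3 tic-tac-toe.
public class TicTacToeDescription {

    // A node is terminal when either player has a line of three,
    // or when no empty cell remains.
    public boolean gameEnd(IntegerStateNode node) {
        State s = node.getState();
        return lineOfThree(s, 1) || lineOfThree(s, 2) || boardFull(s);
    }

    // Full-tree evaluation: +1 for a win by 'player', -1 for a win by the
    // opponent, and 0 for anything else (draws and unfinished positions).
    public void assignValue(IntegerStateNode node, int player) {
        State s = node.getState();
        int opponent = (player == 1) ? 2 : 1;
        if (lineOfThree(s, player)) {
            node.setValue(1);
        } else if (lineOfThree(s, opponent)) {
            node.setValue(-1);
        } else {
            node.setValue(0);
        }
    }

    // True if player p occupies a complete row, column, or diagonal.
    private boolean lineOfThree(State s, int p) {
        for (int i = 0; i < 3; i++) {
            if (s.getCell(i, 0) == p && s.getCell(i, 1) == p && s.getCell(i, 2) == p) return true;
            if (s.getCell(0, i) == p && s.getCell(1, i) == p && s.getCell(2, i) == p) return true;
        }
        return (s.getCell(0, 0) == p && s.getCell(1, 1) == p && s.getCell(2, 2) == p)
            || (s.getCell(2, 0) == p && s.getCell(1, 1) == p && s.getCell(0, 2) == p);
    }

    // True if no cell holds the empty value 0.
    private boolean boardFull(State s) {
        for (int x = 0; x < 3; x++) {
            for (int y = 0; y < 3; y++) {
                if (s.getCell(x, y) == 0) {
                    return false;
                }
            }
        }
        return true;
    }
}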
5.7
Transposition tables
One final optimisation which has been implemented into both MinimaxNode and DoubleMinimaxNode is the use of transposition tables, which are described in the literature review. The initial problem was how to implement the table. Initially, the table was implemented using a linked list, where each node would point to the next stored node. There was no ordering involved, and storage was simply a case of first come, first stored. Therefore it was only possible to search the table linearly for a particular node. However, the transposition table was later updated to be implemented as a hash-table. Thankfully, both J2SE and J2ME provide the class java.util.Hashtable, and therefore coding time was significantly reduced. Hash-tables provide a more efficient storage and retrieval mechanism than linear search. The use of hash tables greatly optimised the minimax algorithm.
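A minimal sketch of a hash-table backed transposition table is shown below. The internals of the API's actual TranspositionTable class are not reproduced here; the string-based key derivation and the getWidth, getHeight and getCell accessors are assumptions made for this sketch.

import java.util.Hashtable;

// Sketch of a transposition table backed by java.util.Hashtable. States are
// keyed by a string built from their cell values, so previously analysed
// positions can be retrieved without a linear search.
public class TranspositionTableSketch {

    private Hashtable table = new Hashtable();

    // Stores the minimax value previously computed for the given state.
    public void put(State state, int value) {
        table.put(key(state), new Integer(value));
    }

    // Returns the stored value, or null if the state has not been seen before.
    public Integer get(State state) {
        return (Integer) table.get(key(state));
    }

    // Builds a unique string key from the cell values of the state.
    private String key(State state) {
        StringBuffer buffer = new StringBuffer();
        for (int x = 0; x < state.getWidth(); x++) {
            for (int y = 0; y < state.getHeight(); y++) {
                buffer.append(state.getCell(x, y));
                buffer.append(',');
            }
        }
        return buffer.toString();
    }
}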
5.8
Minimax Package Architecture
All of the above problems, and their corresponding solutions, have shaped the way the Minimax Package is constructed. An extremely high-level view of the package was given in subsection 4.3. The diagram presented below represents a more detailed view of the package's elements and interconnections.
This complexity is hidden from the user, and they will only need knowledge of the interfaces to utilise any of the algorithms.
5.9
Persistent storage
As described in section 4.5, the goal of the Memory class was to provide an easy, high-level way of saving games made with the API. The only construct from the API that completely represents a board game's state is a State object. The Memory class stores
the value of every cell of a state into persistent storage, where each value is converted to its binary representation. This conversion provides far greater memory efficiency, but loses the ability for the save to be human-readable. The Memory class is able to store user-defined state types, as long as they extend the class State. However, it is unable to store multiple-valued cell states, that is to say, a state type where each cell contains more than one value. GameSE's and GameME's persistent storage provide the same facilities to the user, with almost identical method signatures. The only difference occurs as a result of multiple save games, which is explained below.
5.9.1
GameME Persistent Storage
Record Management System (RMS)

As detailed in the literature review, the RMS is provided by the MIDP suite of APIs to allow Java applications on a mobile device to store data in persistent memory, whilst maintaining cross-platform ability. The Memory class was designed to allow users to quickly and easily save State objects to persistent memory. Therefore, it was implemented in such a way that the user requires absolutely no knowledge of the RMS. In the RMS, data is stored as arrays of bytes called records, held within a named record store. One characteristic of record stores is that, in order to be used, they must be open. They are automatically opened when they are created; however, in order to delete a record store, it must be closed. This can be done by using the public method closeRecordStore, situated in javax.microedition.rms.RecordStore. This alone is not sufficient to close a record store: the method closeRecordStore must be called exactly the same number of times openRecordStore has been called. Therefore, before any modification is attempted, the Memory class opens the record store, and after the modification is complete it closes the record store again. In doing so, after every manipulation the record store is fully closed, and can be deleted successfully at any time. Every record store must have a unique name assigned to it when it is initially created. The Memory class supplies two constructors to the user. One allows the user to create a new record store with a specified name, and the other uses the default name saveGameDataBase. This allows users to initiate multiple record stores, and still use the tools provided by Memory if they so wish. The user has the ability to store multiple states in a single record store, and also has the ability to overwrite specific saves. Each saved state can be represented by a single integer value, indicating which save number it is. This is only possible with states that all have the exact same dimensions. If the user wishes to save states with different dimensions, they will have to create a new record store. Finally, the user has the ability to load any saved state, by simply requesting the load position.
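A sketch of the open-modify-close pattern described above, using the MIDP RMS classes, is shown below; the way a state is serialised to bytes is omitted, and the error handling is simplified for illustration.

import javax.microedition.rms.RecordStore;
import javax.microedition.rms.RecordStoreException;

// Sketch of the open-modify-close pattern applied to every RMS operation.
public class RmsSaveSketch {

    // Appends one saved game (already serialised to a byte array) to the store.
    public void saveGame(byte[] data) {
        RecordStore store = null;
        try {
            // Opened before the modification; created if it does not yet exist.
            store = RecordStore.openRecordStore("saveGameDataBase", true);
            store.addRecord(data, 0, data.length);
        } catch (RecordStoreException e) {
            // In the real Memory class this would be reported to the caller.
        } finally {
            if (store != null) {
                try {
                    // Closed after the modification, so the store can later be deleted.
                    store.closeRecordStore();
                } catch (RecordStoreException e) {
                    // Ignored: the store is already closed.
                }
            }
        }
    }
}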
5.9.2
GameSE Persistent Storage
GameSE's Memory class utilises tools from the java.io package, such as FileOutputStream, ByteArrayOutputStream, DataOutputStream, and their corresponding input variations. Each saved game is stored in a separate file in the platform's persistent memory, with a unique name. Two constructors are available to the user: one takes in a string as input, which will be used as the name of the save file; the second constructor takes no input, and uses the default name saveGameFile. Each Memory object points to a single save file; however, a single file can be pointed to by multiple Memory objects. Furthermore, each save file can only store a single state. Consequently, in order to store multiple states, multiple Memory objects must be instantiated, each pointing to a different, uniquely named file. It is highly recommended to the user that the filename is unique to the application which uses it, as another application using the API, in the same path, can access the save files, as is depicted in the following diagram.
In order to maintain as much efficiency as possible, it was decided that each state would be stored as a set of bytes in a file in the platform's persistent memory. As stated previously, each saved state must be saved in a separate file. Recall that GameME's persistent storage facilities allow multiple states to be saved in the same record store. This inconsistency was a result of time constraints. Resolving it became less of a priority during the late stages of development, and hence it remained unresolved.
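A sketch of how a single 3x3 state might be written to, and read back from, a save file using the java.io classes named above is given below; the getCell and setCell accessors and the fixed dimensions are assumptions made for this illustration.

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

// Sketch of saving and loading a single 3x3 state as bytes in a named file.
public class FileSaveSketch {

    public void save(State state, String fileName) throws IOException {
        DataOutputStream out = new DataOutputStream(new FileOutputStream(fileName));
        try {
            for (int x = 0; x < 3; x++) {
                for (int y = 0; y < 3; y++) {
                    out.writeByte(state.getCell(x, y)); // each cell stored as a single byte
                }
            }
        } finally {
            out.close();
        }
    }

    public void load(State state, String fileName) throws IOException {
        DataInputStream in = new DataInputStream(new FileInputStream(fileName));
        try {
            for (int x = 0; x < 3; x++) {
                for (int y = 0; y < 3; y++) {
                    state.setCell(x, y, in.readByte()); // read the cells back in the same order
                }
            }
        } finally {
            in.close();
        }
    }
}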
Chapter 6
Testing

Testing Strategy

The testing strategy applied to this project was similar to the strategy applied during the requirements elicitation stage. That is to say, the strategy consists of using a few different testing techniques, none of which is used excessively. Every test technique has its own strengths and weaknesses, and the intention was to use various techniques in an attempt to gain as much benefit from testing as was permitted under the time constraints. This is reflected in the test plan described below, by the use of three different testing techniques: black box testing, white box testing, and demonstrations.
6.1
Test Plan
The test plan was divided into three separate stages, each of which was primarily aimed at a particular phase of the project. The documentation of the testing within each stage varies, as is detailed below. The test plan diagram in appendix D.1, depicts the test plan used throughout the project, and is a useful guide to follow alongside the description of each stage provided below.
6.1.1
First Stage
The first test stage encompasses the continual testing and debugging that occurred throughout the development of the APIs. The APIs were developed using an incremental approach. Certain sub-systems could not be developed until others had been fully implemented. Because of these inter-dependencies, it was crucial that newly developed sub-systems were tested to a certain degree before development could proceed. The testing consisted of two separate strategies. The first would, if possible, test the newly developed sub-system independently of any other system. The second would test the newly developed sub-system's interactions with existing sub-systems. Any errors found during testing were resolved, and testing resumed to take into account these
resolutions. As a result of the large amount of testing and debugging that would occur at this stage, the testing was restricted by time constraints. During the latter stages of development in the project, most of the implementation focused on optimisations of the API. This development still fell under this test stage; however, testing also served the purpose of indicating the effectiveness of the newly added optimisations. In some cases, testing showed that certain optimisations had a detrimental effect, such as the use of transposition tables on very small minimax trees.
6.1.2
Second Stage
The second stage of testing occurred after development was complete. At this stage, high-level black box testing was employed on every sub-system. Additionally, white box testing was performed on what were deemed to be the most critical sub-systems of the API. Attempts were made to resolve any faults found as a result of this testing, and the resolutions were themselves tested.
6.1.3
Third Stage
The third stage of testing was to develop multiple applications using the APIs, not only to test the APIs, but to demonstrate the final product. Two applications were developed using GameSE, and two using GameME, for their corresponding platforms. Furthermore, an independent programmer attempted to develop an application using the API, and provided feedback on their experience. Any improvements that could be made using this feedback were implemented and tested, and testing was then considered complete.
6.2
Testing Outcomes
During a second stage test, using a complete game tree for the game of tic-tac-toe, it was found that the AI was making erratic moves. For a complete game tree, the AI has full knowledge of every possible move in the game. Therefore, it should not choose any moves which could potentially cause it to lose, which was not the case during the anomalous test. The error could potentially have resided in any part of the Minimax Package, the Node Package, or the test data itself. The error was considered highly severe, hence it was decided that a large amount of time could be expended in order to find a resolution. Thankfully, the error was found to be associated with the method by which the minimax algorithms chose between two nodes. As stated in section 5.6, it was decided that every minimax algorithm would randomly distinguish between nodes that it otherwise could not, in order to provide a slightly more human approach. However, this implementation caused an error when used in conjunction with the implemented alpha-beta optimisations. The reason why the two conflicted is far too
The precise reason for the conflict is too detailed to explain here. Three possible solutions were available. The first was to remove the randomisation of node selection. The second was to remove the alpha-beta optimisation, and the third was to implement the alpha-beta optimisation differently, to avoid the conflict with the randomisation. The third choice would have been the ideal solution, allowing the algorithms to maintain both features. Unfortunately, as a result of time constraints, this was not possible. Alpha-beta optimisation was critical to the effectiveness of both MinimaxNode and DoubleMinimaxNode, whereas the randomisation of node selection was simply a tool used to provide variety during the analysis of a game. Therefore, it was decided to remove the randomisation completely, and maintain alpha-beta optimisation.
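To make the chosen resolution concrete, the following is a minimal sketch of an alpha-beta minimax in which the best value is tracked deterministically (ties keep the first child encountered) rather than being broken at random. This is not the GameSE source; the Node interface and its methods are hypothetical stand-ins for the project's node types.

    import java.util.List;

    // Hypothetical node shape used by this sketch.
    interface Node {
        boolean isTerminal();
        int evaluate();      // heuristic value from the maximising player's point of view
        List getChildren();  // a List of Node
    }

    final class AlphaBetaSketch {

        // Returns the minimax value of 'node', searching 'depth' plies ahead.
        static int alphaBeta(Node node, int depth, int alpha, int beta, boolean maximising) {
            if (depth == 0 || node.isTerminal()) {
                return node.evaluate();
            }
            List children = node.getChildren();
            if (maximising) {
                int best = Integer.MIN_VALUE;
                for (int i = 0; i < children.size(); i++) {
                    int value = alphaBeta((Node) children.get(i), depth - 1, alpha, beta, false);
                    if (value > best) {
                        best = value;        // strictly greater: equal values keep the earlier child,
                    }                        // so no random tie-breaking interferes with the cut-offs
                    if (best > alpha) {
                        alpha = best;
                    }
                    if (alpha >= beta) {
                        break;               // beta cut-off
                    }
                }
                return best;
            } else {
                int best = Integer.MAX_VALUE;
                for (int i = 0; i < children.size(); i++) {
                    int value = alphaBeta((Node) children.get(i), depth - 1, alpha, beta, true);
                    if (value < best) {
                        best = value;
                    }
                    if (best < beta) {
                        beta = best;
                    }
                    if (alpha >= beta) {
                        break;               // alpha cut-off
                    }
                }
                return best;
            }
        }
    }

A caller would invoke alphaBeta(root, depth, Integer.MIN_VALUE, Integer.MAX_VALUE, true) and, at the root only, remember which child produced the best value.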
6.2.1 J2ME HasState
After the implementation of transposition tables into the desktop API, an attempt was made to quickly port the implementation to the mobile API. However, a peculiar problem emerged, which has remained unsolved. As described in section 5.5, the implementation of transposition tables requires the use of the HasState interface. More specifically, the following line of code is used to detect whether a node contains a state or not:

    HasState.class.isInstance(currentNode)

However, attempting to call the method isInstance results in the J2ME compiler reporting an error that it is unable to locate a core Java class. Attempts were made first to resolve the situation, and then to work around the problem. After both attempts failed, it was decided the best solution was to simply remove the implementation of transposition tables from GameME.
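For reference, the context of the failing call looks roughly like the sketch below. Only the HasState name and the isInstance check come from the text above; the getState accessor and the surrounding method are hypothetical, not the actual GameSE or GameME code.

    // Hypothetical shape of the interface referred to above.
    interface HasState {
        Object getState();
    }

    final class StateGuardSketch {
        // Returns the node's state if it carries one, or null otherwise.
        // The reflective HasState.class.isInstance(...) check is the line the
        // J2ME tool chain rejected; under J2SE 1.4.2 it compiles and behaves
        // as expected, which is why the desktop API keeps transposition tables.
        static Object stateOf(Object currentNode) {
            if (HasState.class.isInstance(currentNode)) {
                return ((HasState) currentNode).getState();
            }
            return null;
        }
    }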
6.2.2 getNext/isNext similarity
During the third stage of testing, a potential improvement to the API was noticed as the demonstration applications were being developed. The independent test programmer also commented on this potential improvement. As stated in section 5.6, when the MinimaxNode or DoubleMinimaxNode algorithms are being utilised, four methods are required to be implemented by the user to fully describe the game. However, it was noticed that the implementations of the methods isNext and getNext are almost identical. A vast improvement to the usability of the API could be achieved if only a single method was required to perform both functions. However, as a result of time constraints, this improvement was not implemented.
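The duplication can be seen from the fact that, under assumptions like the sketch below, a legality test falls straight out of the successor function. The signatures here are purely illustrative and are not the real GameSE signatures, which are documented in the Javadoc on the accompanying disc.

    // Illustrative only -- not the actual GameSE method signatures.
    abstract class GameDescriptionSketch {

        // User-supplied game logic: the state reached by playing 'move' in
        // 'state', or null if the move is not legal in that state.
        public abstract Object getNext(Object state, Object move);

        // The legality test is then derivable from the successor function,
        // which is why requiring the user to write both is near-duplication.
        public boolean isNext(Object state, Object move) {
            return getNext(state, move) != null;
        }
    }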
6.2.3 Transposition tables effectiveness
During the later stages of development, further optimisations were developed for inclusion in the API. Each of these optimisations was tested for its effectiveness. One such optimisation was the use of transposition tables, which are described in section 2.3.4. In order to test their effect on the API, numerous tests were run on a varying number of games, the results of which can be seen in appendix D.3. To summarise the results, it was found that the larger the minimax tree, the greater the efficiency gain transposition tables provided. However, once the size of the tree fell below a certain point, the overhead caused by the use of transposition tables outweighed the gain, and caused a slight detrimental effect. Even so, the efficiency gained was far more significant, as shown in one case where the increase in efficiency was 61%, and it was clear the use of transposition tables was overall beneficial.
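As a reminder of the mechanism whose cost is being measured, the sketch below (not the GameSE source; the GameNode interface and getState accessor are hypothetical) shows a transposition table as a simple map from states to previously computed values. Every node expansion pays for a hash, an equality test and a store, which is why very small trees can come out marginally slower overall.

    import java.util.HashMap;
    import java.util.List;

    // Hypothetical node shape used by this sketch. The state object is assumed
    // to encode whose turn it is, so a cached value is unambiguous.
    interface GameNode {
        Object getState();
        boolean isTerminal();
        int evaluate();
        List getChildren();   // a List of GameNode
    }

    final class MemoisedMinimaxSketch {

        private final HashMap table = new HashMap();   // state -> Integer minimax value

        int value(GameNode node, boolean maximising) {
            Object state = node.getState();
            Integer cached = (Integer) table.get(state);     // per-node overhead: hash + equals
            if (cached != null) {
                return cached.intValue();                    // transposition: already evaluated
            }
            int result;
            if (node.isTerminal()) {
                result = node.evaluate();
            } else {
                List children = node.getChildren();
                result = maximising ? Integer.MIN_VALUE : Integer.MAX_VALUE;
                for (int i = 0; i < children.size(); i++) {
                    int v = value((GameNode) children.get(i), !maximising);
                    result = maximising ? Math.max(result, v) : Math.min(result, v);
                }
            }
            table.put(state, new Integer(result));           // per-node overhead: store
            return result;
        }
    }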
6.3 Conclusion
The outcome of the testing stage showed that, in general, both APIs were implemented successfully. GameME did contain an unresolvable problem, whose associated feature was simply removed, and hence it provides less efficiency than GameSE. Of particular significance was the development of MinimaxNode and DoubleMinimaxNode. They both provide an excellent way for developers to include AI in their board games, without requiring much knowledge of the minimax algorithm. All that is essentially required of the developer is to describe the game's legal moves, how the game ends, and an evaluation function. The implementation of the evaluation function wholly depends on the complexity of the game. For example, a simple game such as Tic-tac-toe will have an evaluation function that simply states that winning is beneficial, and losing is not. The algorithm then has all the knowledge required to create the minimax tree itself, and choose the best possible move it can play. For more complex games, which is usually the case, the evaluation function has to be more detailed. For example, the evaluation function for a game of chess may state that taking an opponent's queen is more beneficial than taking their bishop. As stated in section 2.3.2, the strength of the AI for complex games is highly dependent on the level of detail implemented into the evaluation function.
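As an illustration of the kind of evaluation function described above, the following sketch scores a Tic-tac-toe position exactly as stated: winning is beneficial, losing is not, and anything else is neutral. The class, the board representation and the method names are hypothetical, not part of the GameSE API; a chess evaluation would instead return something like a weighted material count, so that capturing the opponent's queen scores higher than capturing a bishop.

    // Sketch of a minimal Tic-tac-toe evaluation function (hypothetical types).
    final class TicTacToeEvaluationSketch {

        static final int WIN = 1;
        static final int LOSS = -1;
        static final int NEUTRAL = 0;      // draw, or game still in progress

        // board is a 3x3 grid of 'X', 'O' or ' '; aiMark is the AI's own symbol.
        static int evaluate(char[][] board, char aiMark) {
            char winner = winnerOf(board);
            if (winner == aiMark) {
                return WIN;
            }
            if (winner != ' ') {
                return LOSS;
            }
            return NEUTRAL;
        }

        private static char winnerOf(char[][] b) {
            for (int i = 0; i < 3; i++) {
                if (b[i][0] != ' ' && b[i][0] == b[i][1] && b[i][1] == b[i][2]) return b[i][0]; // row i
                if (b[0][i] != ' ' && b[0][i] == b[1][i] && b[1][i] == b[2][i]) return b[0][i]; // column i
            }
            if (b[0][0] != ' ' && b[0][0] == b[1][1] && b[1][1] == b[2][2]) return b[0][0];     // main diagonal
            if (b[0][2] != ' ' && b[0][2] == b[1][1] && b[1][1] == b[2][0]) return b[0][2];     // anti-diagonal
            return ' ';                                                                          // no winner yet
        }
    }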
As a result of the quality of MinimaxNode and DoubleMinimaxNode, it was contemplated at one point to remove the inferior Minimax and DoubleMinimax classes. However, it was decided to continue to provide them, so as not to limit the end-user in any way. The situation could arise where the end-user develops a more efficient method of constructing a minimax tree, and these algorithms would therefore be more beneficial. Finally, the independent programmer provided valuable feedback on the desktop API. They commented positively on the API, and particularly praised the ease with which MinimaxNode provided AI. However, they did state that the documentation provided with the API was not detailed enough, and should be expanded. Unfortunately, as a result of time constraints, this suggestion could not be fulfilled.
Chapter 7
Conclusion

7.1 Project Evaluation
The outcome of this project was the development of two separate Java APIs, one aimed at desktop platforms, and the other at mobile platforms. The requirements elicitation process used was relatively successful, and did result in a set of requirements against which the final products could be compared. However, the set of requirements derived was fairly abstract in some areas, as a result of multiple factors. One of these factors was the use of an iterative development model. This allowed the continuous update of the requirements when required, but also prevented some requirements from being detailed. Another factor was the change of supervisors during the project. The initial supervisor was chosen as the end-user, and this caused ambiguities once that supervisor left. The final products managed to meet all critical requirements, but failed to meet a few minor requirements. Firstly, the undo/redo move facility was never implemented, as a result of time constraints. The constraints posed on the project were taken into consideration; however, the unexpected problem of designing two final products was encountered during the design stage, and these requirements were consequently not met. The second requirement that was not entirely met was performance requirement 8, which can be found in the SRS in appendix B.1. This was again a result of the time constraints posed on the project. The design and implementation of the project was ultimately very successful. Both APIs were produced to a high quality, as demonstrated by the test applications. Of particular significance was the development of MinimaxNode and DoubleMinimaxNode. These two classes greatly increased the advantages of using the API, as discussed in section 6.3. Testing also indicated the success of additional optimisations, such as transposition tables. Additionally, the implementation of the persistent storage facilities proved to be very successful, requiring minimal knowledge of Java persistent storage management from the user. The problems faced implementing two separate, very differently implemented memory facilities could have caused severe problems for the project, but thankfully, due to the careful scheduling of the project as a whole, additional time had been reserved to solve previously unforeseen problems.
7.2 Possible Extensions
As described in the literature review, a vast number of possible extensions could be implemented for the minimax algorithm. It would have been interesting to see the results of implementing an imperfect minimax algorithm, in order to provide a more natural feel from the AI. It was also very unfortunate that the requirement to implement bitboards was removed, as a result of time constraints. The potential optimisations provided by bitboards are very large, and it would have been intriguing to see if arbitrary sized bitboards could be implemented efficiently. If the project were resumed, I would suggest that the implementation of bitboards should be a high priority. The replacement of conditional statements with logical operators was also briefly tested at one point. However, the results of the test showed very insignificant increases in efficiency. Further research could be made into the use of logical operators to replace other conventional programming constructs, such as loops. The heavy use of loops within the algorithms could benefit from the use of logical operators. A definite weak point of the final products was the inflexibility of the memory facilities provided. The tools only allowed very specific storage and retrieval of states. To remain consistent with the remainder of the API, the memory facilities could be implemented in a more flexible way, to accommodate user-defined nodes and states to a higher degree.
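Returning to the bitboard extension mentioned above, the sketch below gives a rough flavour of the idea: each Tic-tac-toe player's pieces are packed into the low nine bits of an int, so that placing a piece, testing occupancy and detecting a win all become a handful of logical operations rather than loops over a two-dimensional array. This is an illustration only, not a design for the arbitrary-sized bitboards the project would have required.

    // Bitboard sketch for Tic-tac-toe: bit i (0..8) represents square i of the board.
    final class TicTacToeBitboardSketch {

        // The eight winning lines, each encoded as a 9-bit mask.
        private static final int[] LINES = {
            0x007, 0x038, 0x1C0,   // rows
            0x049, 0x092, 0x124,   // columns
            0x111, 0x054           // diagonals
        };

        // Placing a piece is a single OR.
        static int play(int playerBoard, int square) {
            return playerBoard | (1 << square);
        }

        // A square is occupied if either player's board has its bit set.
        static boolean isOccupied(int xBoard, int oBoard, int square) {
            return (((xBoard | oBoard) >> square) & 1) != 0;
        }

        // A player has won if all three bits of some winning line are set.
        static boolean hasWon(int playerBoard) {
            for (int i = 0; i < LINES.length; i++) {
                if ((playerBoard & LINES[i]) == LINES[i]) {
                    return true;
                }
            }
            return false;
        }
    }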
7.3 Personal Evaluation
Overall, I am very pleased with the final products produced by the project. However, the time required to code two separate APIs was very large, and the decision to go ahead and produce two products could have resulted in two very weak products. Thankfully, due to the suggestion made by my initial supervisor to use an iterative model, time previously allocated to implementing advanced optimisations was reassigned to implementing the mobile API. In one sense, I was a little hesitant to go forward with this solution, as I was particularly interested in seeing the effectiveness of some of the advanced minimax optimisations, such as IDAB. This problem could have been foreseen if less time during the literature review stage had been devoted to minimax, and more to researching the differences between J2ME and J2SE. One area I was unhappy with was the unresolved problem described in section 6.2.1. It seemed a little ironic at the time that the development of my mobile API was affected by a flaw in the J2ME API. However, this problem was encountered far too late into development to be resolved. I acquired a lot of knowledge from the development of this project. The importance of being able to perceive potential problems, and of designing solutions to those problems, was a skill I greatly improved. As described in many situations throughout chapters 5 and 6, I would face a potential problem by outlining a number of possible solutions, and then selecting the solution I deemed the most appropriate. I had never really encountered this level of problem solving in a software project before, and I found it very rewarding. One decision I regret making was to treat the initial supervisor as the end-user. Once the initial supervisor left, it was very difficult for me to tell whether the progress I was making was on track or not. Thankfully, Dr. Richardson provided as much help as he could, and continued to provide support during the later stages of the project. In conclusion, I feel the project has developed a solution which many Java developers would find very useful, particularly for developing games for resource-limited platforms. I did at one point attempt to gain test data from J2ME developers, and received very interested feedback; however, no developer volunteered their time to provide test results.
Appendix A
The full source code for both GameSE and GameME is available on the compact disc provided with this dissertation. Also provided on the compact disc is the API specification generated using Javadoc.
Appendix B
B.1 Software Requirement Specification

B.1.1 Introduction
Purpose
The SRS is an essential starting point for the implementation of the project. It is used to set the objectives to be met, and to set constraints on the project. The quality of the final product is largely influenced by how well it meets the requirements stated here. The success of the final product is also determined by how consistent it is with its defined requirements. The intended audience for this SRS is the dissertation marker, and any developer who wishes to extend the project.

Scope
The project will produce two software products, the Java Board Game API Standard Edition (GameSE), and the Java Board Game API Mobile Edition (GameME). GameSE will allow users to easily develop Java games, with minimax-based artificial intelligence, for the desktop platform. GameME will allow users to easily develop Java games, with minimax-based artificial intelligence, for the mobile platform. Both APIs will also feature persistent memory storage facilities, to aid the user in implementing Save Game and Load Game features. Neither API will contain any functions to aid the user in graphically representing their games. GameSE will be usable with the J2SE 1.4.2 platform, and GameME will be usable with the J2ME 1.0 platform.

Definitions, acronyms, and abbreviations
API - Application programming interface
J2SE - Java 2 Standard Edition
GameSE - Java Board Game Application programming interface Standard Edition
GameME - Java Board Game Application programming interface Mobile Edition
Minimax - The minimax algorithm
Interface - A programming construct available in Java
SE - Standard Edition
ME - Mobile Edition
J2ME - Java 2 Platform, Micro Edition. A collection of Java APIs targeted at embedded consumer platforms such as mobile phones.
MIDP - Mobile Information Device Profile. A specification put out by Sun Microsystems for the use of Java on embedded devices, such as mobile phones. MIDP is used in conjunction with CLDC.
CLDC - Connected Limited Device Configuration. A framework for J2ME applications targeted at devices with very limited resources, such as mobile phones.
B.1.2 Overall Description
Product Perspective
GameSE is intended to be used in conjunction with J2SE; it cannot be used independently. The accompanying diagram shows a high-level view of how GameSE fits into the desktop environment; from bottom to top it comprises: the platform (the machine the board game application runs on), the Java HotSpot VM runtime, the Java HotSpot client compiler, the core Java APIs, GameSE, and finally the board game application.
GameME is a mobile-platform-specific API. It is intended to be used in conjunction with J2ME, and is only suited to the mobile platform. The corresponding diagram shows a high-level view of how GameME fits into a mobile device:
- MID represents the Mobile Information Device hardware - the actual hardware of the mobile device.
- Native System Software contains software such as the operating system, and additional libraries used by the device.
- CLDC provides the underlying Java functionality, upon which higher-level Java APIs may be built.
- MIDP is a set of Java APIs which are mobile-device specific. These APIs cannot be found in J2SE.
- OEM-Specific Classes are Java classes provided by the device, specifically for that platform. Any application using these classes cannot be ported to any other device.
- GameME contains the set of APIs to be developed by this project. It utilises functions from both CLDC and MIDP.
- MIDP Applications are applications which use the APIs provided by MIDP and CLDC.
- Board Game Applications are a subset of MIDP applications, which use the GameME API.

Memory Constraints
Any Java application is restricted by the Java VM heap size, which is set when the application is run. Obviously, the desktop platform can allow a much greater heap size for the Java VM than the mobile platform. The minimax tree can become extremely large for relatively complex games, such as chess. The tree will need to be stored in the Java VM heap, and as a result, both the GameSE and GameME APIs need to be as memory efficient as possible.
Product functions
Both GameSE and GameME will provide the user with facilities to aid them in developing board games. The user will be able to represent real-world boards as Java objects, and will be provided with functions to manipulate these boards quickly. Both APIs will also provide the user with variants of the minimax algorithm, and facilities to create inputs for these algorithms. Lastly, the APIs will provide the user with facilities to easily implement Save Game, Load Game, and Undo Move features.

User Characteristics
The intended users of GameSE and GameME are experienced Java programmers who understand the theory behind the basic minimax algorithm. More specifically, GameSE is aimed at experienced J2SE developers, while GameME is aimed at experienced J2ME developers. An experienced Java developer is one that:
- understands object-orientated programming
- understands the Java constructs
- understands how to use, and understand, Java API documentation

Constraints
Developers - The APIs will be constructed by only a single developer, which will greatly increase the amount of time required to complete the APIs, relative to a group of developers.
Time - There is a time restriction on the development of the APIs, hence the amount of functionality and the quality of the APIs will be restricted.
Money - No funding is provided for this project, so no tools which may aid the development of the APIs can be purchased.
Lack of devices - There will be a lack of devices to test the APIs upon, specifically mobile devices.
Developer tools - Only free, or already available, tools can be used to develop the APIs.
Widespread use of Java 5 - Java 5 is not compatible with all platforms at this stage. To increase compatibility, Java 1.4.2 will be the development environment upon which GameSE is built. This may restrict the API, as Java 5 contains additional utilities to enhance code.
Widespread use of MIDP 2 - MIDP 2 is becoming widespread among devices, however many mobile platforms still only provide MIDP 1.0. In order to increase compatibility, and broaden the range of devices supported by the project, GameME will be constrained to the MIDP 1.0 environment.

Assumptions and dependencies
This subsection lists any factors that could affect the requirements.
- A mobile device with MIDP 1.0 support is available, on which the GameME API is to be tested.
- Multiple types of platforms with Java SE 1.4.2 are available to test the GameSE API on.
- A platform with the Java SE 1.4.2 Software Development Kit is available.
- A J2ME SDK is available on an available desktop machine.
- The GameME API will be mostly derived from the GameSE API.
B.1.3 Specific Requirements
Functions
1 The system should aid the user in developing board games
1.1 The system should aid the user in representing real world boards, called states
1.1.1 The system should provide tools to manipulate states
2 The system should allow users to implement artificial intelligence
2.1 The system should provide AI based on the minimax algorithm
2.1.1 The system should provide tools to create a minimax tree, to be used as input for the minimax algorithm
2.1.1.1 The system should provide multiple types of nodes that can be used as input for the minimax algorithm
2.1.1.1.1 The system should provide tools to manipulate the nodes
2.1.2 The system should provide variants of the minimax algorithm
2.1.2.1 The system should ensure the correct inputs are used for the correct minimax algorithm
3 The system should provide users with a Save Game facility
3.1 The system should allow the user to implement a multiple Save Game facility
3.2 The system should allow the user to save games over existing saved games
3.3 The system should allow the user to delete saved games
4 The system should provide users with a Load Game facility
4.1 The system should allow the user to implement a multiple Load Game facility
5 The system should provide users with an Undo Move facility
5.1 The system should allow users to implement a Redo Move facility
6 The system should provide an API for desktop platforms
6.1 The system should be compatible with J2SE 1.4.2 and above
7 The system should provide an API for mobile platforms
7.1 The system should be compatible with CLDC 1.0 and above
7.2 The system should be compatible with MIDP 1.0 and above

Performance Requirements
1 The API should be relatively easy to use for Java programmers
1.1 All of the features provided by the API should be relatively easy to use by Java programmers
2 The API's architecture should be understandable
3 The API should be flexible
3.1 The API should allow developers to incorporate their own types
4 The API should be stable
4.1 The API should not contain any severe faults in any facilities it provides
4.2 The API should not contain any severe faults in the interconnections of entities within the system
5 The minimax algorithm should be efficient
6 The documentation supplied with the API should not be difficult to use
6.1 The documentation supplied with the API should be understandable
7 The API should provide a sufficient amount of facilities, but not an abundant amount
8 The APIs should strive to be as consistent between themselves as possible
Appendix C
C.1 Requirement Elicitation Diagrams
Appendix D
D.1 Test Plan
D.2 Testing Stage Three Screenshots
Connect-Four: DoubleMinimaxNode with transposition tables, using desktop API.
Tic-tac-toe: MinimaxNode with transposition tables, using desktop API.
Connect-Four: DoubleMinimaxNode, lookahead bound of 2, using mobile API.
D.3 Optimisation Analysis