School of Computing Sciences

FINAL YEAR PROJECT

Simulating Human Behaviour in a Zoo Environment

Emma Cotgrove

Year 2005/2006

Supervisor: Prof AR Forrest

Summary

This report describes the project "Simulating Human Behaviour in a Zoo Environment", covering the approaches and methods implemented and the research carried out for the project to be successful. The project simulates how humans behave in a zoo environment: humans and animals are modelled and placed together in an environment in which the humans react to the different animals. The humans also react to other events, such as rain, leaving the zoo while it rains and re-entering once it stops, and they leave the zoo when the two clocks in the zoo reach closing time. Existing methods for simulating crowd behaviour are described in the background section. Particle systems have been investigated to add rain to the simulation, and collision detection has been researched to gain a thorough understanding of the different methods for stopping the humans hitting other objects and walking through each other. These techniques have helped make the simulation realistic. The project was undertaken because it has many potential uses, such as showing how humans behave when in contact with animals. It could be extended to show how animals behave when moved from their natural environment to a very different one, and the human behaviour could be incorporated into other projects such as urban modelling; for example, humans could be placed into a university model to show student activities. The simulation runs in real time and the user controls the camera, allowing them to watch what the humans are doing in the zoo.

Acknowledgements

Thank you to Prof AR Forrest for supervising this project. For this project to come together, models had to be imported from 3dsmax into OpenGL; the code which dealt with this was provided by Paul Gasson and allowed one 3dsmax model and one animation to be imported into OpenGL. Thanks also to Dr Stephen Laycock, who helped with adapting Paul Gasson's code.


Contents

Summary
Acknowledgements

1 Introduction
   1.1 Overview
   1.2 Aims and Measurable Objectives
   1.3 Resources Used

2 Literature Survey
   2.1 Overview of Simulating Human Behaviour
   2.2 Artificial Intelligence, Collision Detection and Collision Avoidance
      2.2.1 Artificial Intelligence
      2.2.2 Collision Detection
      2.2.3 Collision Avoidance
   2.3 Character Animations
      2.3.1 Vertex / Skeleton Animations
      2.3.2 Making Animations Realistic
      2.3.3 Motion Capture
   2.4 Level of Detail
   2.5 Impostors
   2.6 Particle Systems
   2.7 EXODUS
   2.8 Critical Review

3 Theory / Design
   3.1 Initial Approaches
   3.2 Animations
   3.3 Importing Models
   3.4 Events
   3.5 Environment
   3.6 Sound
   3.7 Collision
   3.8 Animals and Human Movement

4 Implementation
   4.1 Importing Models
   4.2 Animations
      4.2.1 Creating the Models
      4.2.2 Creating the Skeletal Structures
      4.2.3 Animating
   4.3 Events
      4.3.1 Rain
      4.3.2 Closing Event
   4.4 Environment
      4.4.1 Inside the Zoo
      4.4.2 Outside the Zoo
      4.4.3 Foliage
   4.5 Sounds
   4.6 Behaviour Implemented
      4.6.1 Walking
      4.6.2 Standing at Enclosures
      4.6.3 Bumping into Each Other
      4.6.4 Animals
   4.7 Animal and Human Movement
   4.8 Collision
      4.8.1 Animal
      4.8.2 Human
      4.8.3 Camera
   4.9 Problems
   4.10 Rejected Method
   4.11 Final Solution - Paths

5 Testing and Results
   5.1 Collision
   5.2 Behaviour and Animation Sequences
   5.3 Human and Animal Movement
      5.3.1 Human
      5.3.2 Animal
   5.4 O'Rourke's Line Intersection
   5.5 Rain

6 Conclusion
   6.1 Success of Project
   6.2 Management of the Project
   6.3 Further Work

References

Appendix
   A: The Environment and Models
   B: Controls for the Simulation
   C: References of Videos Studied for the Animal Movements
   D: Classes and Header Files
   E: Work Plan Diagram

Figure List

Section 2: Literature Survey
   Figure 1: Bounding spheres showing a collision
   Figure 2: Skeletal structure of an articulated object
   Figure 3: a) 150 faces, b) 500 faces, c) 1,000 faces, d) Original, 13,546 faces
   Figure 4: The Visibility Catchment Area of a sign for two humans
   Figure 5: The observation probabilities of signs when viewed by a human

Section 3: Design / Theory
   Figure 6: The breaking down of the zoo into block sections
   Figure 7: The testing of collision detection with surrounding blocks
   Figure 8: Diagram showing O'Rourke's line intersection
   Figure 9: Equation of a line

Section 4: Implementation
   Figure 10: Human model
   Figure 11: Penguin model
   Figure 12: Human model showing the skeletal structure with IK solvers and dummy points
   Figure 13: Screenshot of the clock
   Figure 14: Dolphin pool sunken into the ground
   Figure 15: Environment outside the zoo
   Figure 16: Screenshot showing the clouds
   Figure 17: Screenshot showing the trees and bushes in the environment
   Figure 18: Overhead view of the zoo showing paths

Section 5: Testing and Results
   Figure 19: Screenshot showing the grey sky, the rain and the humans leaving the zoo
   Figure 20: Screenshot showing the humans leaving the zoo at closing time, 5:45pm
   Figure 21: The human situations and the sequences
   Figure 22: Textured polygon
   Figure 23: Using GL_LINES

1 Introduction

1.1 Overview

This project simulates how humans behave in a crowded environment. As the environment is a zoo, there are many animals for the humans to wander around and react to, as well as events such as rain and the zoo closing. The project involves creating models of the animals and humans and then putting them together in an environment with different situations. Various projects and applications have already looked into crowd behaviour and other aspects of human behaviour; research into these areas was undertaken to gain a better understanding of which ideas to include in the simulation and, more importantly, how to include them. There are many ways to go about animating a model, but there are important points to remember to make the models move as realistically as possible, which are discussed in this report. Models can be created and then animated to simulate the human and animal movements needed for the simulation. The environment is just as important as what happens within it, and needs to be just as realistic for the simulation to be complete. This involves modelling the surroundings of the zoo, not just the zoo itself, including objects such as lamp posts and trees.

1.2 Aims and Measurable Objectives

The aims of this project are as follows, and are described beneath:

- Create a realistic environment
- Model a human
- Create the human animations
- Import models into OpenGL
- Collision detection
- Animals modelled and animated
- Humans and animals moving about
- Human behaviour
- Events
- Include background sound

Creating the environment, the human model and its animations were the important first aims, as this provides a human model to work with and an environment in which to place it. There is just one human model, to save on loading many human models. The environment includes trees and lamps. This is followed by importing the model and its animations, which is very important, as without the importing the project could not come together. The next aims were to get the humans walking around the environment and then to apply collision detection against the walls of the environment and against each other, so that they do not walk through walls or through one another. The next aims were to create the animals and their animations in 3dsmax and then introduce them into the simulation, so the humans have animals to react to.

This involved getting the animals to move about their enclosures and making sure they stay within them. The next aim was to get the humans to react to the different events and animals in the zoo in a realistic way. Further aims were to include events in the simulation, such as rain and the humans leaving the zoo at a certain time, giving the humans additional events to react to. Adding sound to the simulation was another aim, making the simulation feel more realistic for the user.

1.3 Resources Used

Visual Studio .NET has been used to program the environment and to bring all the pieces of the project together to create the final simulation. 3dsmax has been used to create the animals and humans and their animations. Research has been done into how the different types of animals move and behave, as well as into how humans move when put in different situations. Multisequence, a program used to merge many different sounds into one .wav file, has been used to add dolphin, penguin, flamingo and zebra sounds to a general crowd sound.


2 Literature Survey

2.0 Project Background

To obtain an understanding of the work which has already been done on the topic of human simulation, background reading was carried out. This helped to gain a better understanding of the different ways in which such simulations have been approached in the past.

2.1 Overview of Simulating Human Behaviour

There are several issues which need to be considered when simulating human behaviour. Work has been done in the field of Artificial Intelligence, which provides this project with more scope and background knowledge. Simple collision avoidance for the humans in the zoo needed to be considered when programming their actions; the different approaches to this issue are discussed later, in section 2.2. How realistic the actions look also needed to be taken into account, as there are different ways to go about animating the models.

2.2 Artificial Intelligence, Collision Detection and Collision Avoidance

For collision avoidance to work to its full potential, a certain amount of artificial intelligence (AI) must be programmed, because the humans need some intelligence to actually avoid collisions; this is discussed in section 2.2.3. The extent to which the humans behave towards each other in the zoo project needed to be decided upon. The simulation only includes a very simple amount of AI, if any at all, as AI had not been studied prior to this project. The research into AI provides some background on how humans actually behave when confronted with each other.

2.2.1 Artificial Intelligence

Some simulations of crowd behaviour have gone into depth about the way in which the humans move and react to each other [Musse, 1997]. Musse and Thalmann apply rules which the humans must obey; when a human cannot complete one rule successfully, it must follow another set of rules which enables it to reach its overall target. Human grouping in crowd simulations has previously been modelled with the idea that a group of humans will share a general behaviour. Relationships between humans have also been studied and included in this model; these relationships in turn influence how the humans respond to each other in the simulation.

2.2.2 Collision Detection

To ensure realism in the simulation, collision detection needs to be included so that the humans do not walk into each other. A good way to detect collisions is to split the world into blocks, so that each object tests for collision only against the surrounding blocks in each direction [Edenwaith, 2003]. This is more efficient than testing against every other object in the world, especially in a world with many moving objects, because sorting the objects spatially in 2D reduces the n in O(n²). If every object is tested against every other object in the environment, the O(n²) cost is greater because more calculations are done; if the testing is done via blocks, the n value is reduced. Collision detection only needs to be tested in 2D for this project, as the human objects which move around do not move vertically, so it would be unnecessary to calculate collision detection along an axis which is never used. The paper 'Collision Detection' [Edenwaith, 2003] describes a technique which uses spheres. The idea is to put a bounding sphere around each moving object. A collision is detected when the distance between the centres of two objects falls to the sum of their bounding-sphere radii (or the gap between them reaches zero), at which point a relevant response occurs. Another way is to perform a response when the sign of the distance changes, i.e. from positive to negative; this works better, because if an object is moving at high speed the first test might miss the collision.
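As a concrete illustration, a minimal sketch of the bounding-sphere test described above is given below; the structure and function names are hypothetical, not taken from the project code. Because the humans never move vertically, the test works entirely on the ground plane, and comparing squared distances avoids a square root.

    // Two bounding circles on the ground plane overlap when the distance
    // between their centres is no more than the sum of their radii.
    struct Human {
        float x, z;      // position on the ground plane
        float radius;    // bounding-sphere radius
    };

    bool collides(const Human& a, const Human& b)
    {
        float dx = a.x - b.x;
        float dz = a.z - b.z;
        float r  = a.radius + b.radius;
        return dx * dx + dz * dz <= r * r;   // squared-distance comparison
    }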

Figure 1: Bounding spheres showing a collision.

2.2.3 Collision Avoidance

Different types of collision avoidance are described in the Musse and Thalmann paper [Musse, 1997]. One type is for one human to stop and let the other walk past before continuing; this has the flaw that, in collinear situations, the humans must decide which one should stop and let the other pass. The other type discussed [Musse, 1997] is a change in direction, so the humans pass without colliding and then return to their original paths towards their goals. To avoid calculating and performing unnecessary collision avoidance, avoidance is only carried out on humans within a specific range of the viewer. This saves time, as the computer does not need to calculate every collision in the simulation, while still giving the viewer the appearance that it is happening everywhere. This is a good way of ensuring unnecessary calculations are not carried out and of making the simulation as efficient as possible, and it was considered when performing operations such as collision detection.

The psychology-based paper by Rymill and Dodgson [Rymill, 2005] details the different types of collision which can occur in a crowd simulation. When a human is travelling in the same direction as another, positioning itself to the left or right allows possible collisions up ahead to be assessed, making the choice of when to overtake more efficient. The three types of collision described in the paper [Rymill, 2005] are called 'Towards', 'Away' and 'Glancing'. In a 'Towards' collision the humans are moving towards each other and can change their speed, their direction, or both to avoid colliding. An 'Away' collision occurs when the humans are moving in the same direction but the one behind is moving faster; it can either slow down to the speed of the other human or overtake. A 'Glancing' collision is where the paths of two humans cross; it uses the same reactions as the 'Towards' collision, but if no successful result is found the human is forced to stop and wait for the other to pass. With all of these types of collision, the human rejoins its original path once the particular action has been carried out. This study of collision provides background knowledge of the types of reactions possible when two humans collide, and gives an insight into the way humans behave when trying to reach a destination.

2.3 Character Animation

To animate a character successfully, research was needed into the best way to animate the models. Several factors need to be addressed when animating a character mesh: the mesh needs to be easy to move about, and its motions need to be realistic.

2.3.1 Vertex / Skeleton Animations

There are two ways of animating a mesh, known as the vertex method and the skeleton method. Vertex animation consists of moving the vertices, so at each key frame the vertices are moved to where they need to be for the character to be animated. There is one major problem with this type of animation: the mesh deforms, so the limbs shrink. This is because there is no structure to the mesh, and the curved trajectory of each limb is ignored during the interpolation stage [Laycock, Lapeer 2005]. Using a skeletal structure takes away the problem of the mesh deforming. The skeleton is implemented by first creating the skeleton and then attaching the vertices of the mesh to the corresponding bone or bones. When the skeleton is animated the mesh cannot deform, as each bone is rigid and the skeleton has a proper structure with limbs [Laycock, Lapeer 2005]. The skeletal structure of a human is shown below in Figure 2. The left drawing shows the character; the middle structure is the hierarchical structure, which shows the joints of the human in the left drawing; and the right drawing is the tree structure of all the links in the body of the character. The node marked X is known as the root node, and it is this node whose position is known in the global coordinate system. The positions of the other nodes in the hierarchy are relative to the root node. The nodes store the translation and rotation information for the links.

Figure 2: Skeletal Structure of an articulated Object.
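The hierarchy in Figure 2 can be sketched as follows. This is a hypothetical illustration rather than the project's code, and it assumes the fixed-function OpenGL matrix stack: only the root's transform is global, and traversing the tree stacks each node's translation and rotation on top of its parent's, so rotating one joint carries every link below it along.

    #include <GL/gl.h>
    #include <cstddef>
    #include <vector>

    struct Node {
        float tx, ty, tz;             // translation relative to the parent
        float rx, ry, rz;             // rotation about x, y, z (degrees)
        std::vector<Node*> children;  // links further down the tree
    };

    void drawSkeleton(const Node* n)
    {
        glPushMatrix();
        glTranslatef(n->tx, n->ty, n->tz);
        glRotatef(n->rx, 1.0f, 0.0f, 0.0f);
        glRotatef(n->ry, 0.0f, 1.0f, 0.0f);
        glRotatef(n->rz, 0.0f, 0.0f, 1.0f);
        // ... draw the bone for this link here ...
        for (std::size_t i = 0; i < n->children.size(); ++i)
            drawSkeleton(n->children[i]);
        glPopMatrix();
    }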

Neither animation method provides any form of collision between parts of the mesh while the model is being animated. This means care must be taken so that one part of the mesh does not pass through another, for example an arm passing through the torso.

2.3.2 Making the Animations Realistic

To get a better idea of how to approach the animations, research was done into past work. The character animation article by Michael Comet [Comet, 2002] describes how to effectively animate the actions a character performs. For a human to perform an action convincingly, there must be emotion behind what the human is doing: a child's actions will be slightly more exaggerated and energetic than an adult's. The way the actions are performed also matters; when the humans clap, for example, the speed of the clapping should increase with happiness and enjoyment. To make the actions as realistic as possible, they need to be smooth and relaxed, not rushed or symmetrical. Symmetrical actions look fake and false, as in the real world actions are not performed with exact symmetry. The build-up to an action needs to provide some anticipation, and the follow-through afterwards is just as important in making the action realistic.

2.3.3 Motion Capture

Motion capture (also known as MoCap) could be a very useful tool for seeing exactly how humans react to certain situations. It could be applied by getting people to interact with each other and recording their movements. From these recordings, the differences between male and female movements and actions could be observed, as well as how people react to being in a crowd. MoCap works by attaching sensors to the subjects at the joints; the sensors record the position and motion of each joint. It could perhaps also be applied to animals, as for this project the animals' motion also needs to be realistic [Meta Motion, 2004].

2.4 Level of Detail

The Crowd and Group Simulation paper discusses the level of detail idea: if an object is too far from the viewer, the number of polygons in the object is reduced [SIGGRAPH, 2004]. Depending on how far away the object is from the viewer, the appropriate level of detail is applied to it. As stated in S.D. Laycock's lecture [Laycock, 2005], there is "no point in rendering many polygons onto the screen if the resulting projection of the object occupies a few pixels". The paper also mentions that most parts of the human body are cylindrical, which suggests that cylinders would be a good basis for changing the level of detail [SIGGRAPH, 2004]. One way of lowering the level of detail is to apply edge collapses to the object, which removes edges from the mesh; to raise the level of detail again, edges are added back [Laycock, 2005]. Figure 3 shows an airplane at different levels of detail. The leftmost picture shows the airplane with the lowest level of detail and the rightmost picture the greatest. From left to right it shows how adding edges improves the detail of the object; from right to left, how removing edges reduces it.



Figure 3: a) 150 faces, b) 500 faces, c) 1,000 faces, d) Original, 13,546 faces [Hoppe, 2005]
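A distance-based level of detail choice can be sketched in a few lines. The mesh variants and thresholds below are hypothetical, chosen to mirror the four versions of the airplane in Figure 3; comparing squared distances once again avoids a square root.

    struct Mesh;                 // some renderable mesh

    struct LODObject {
        Mesh* levels[4];         // levels[0] = 150 faces ... levels[3] = full detail
    };

    // Pick the coarsest acceptable mesh from the squared distance to the
    // viewer; the thresholds are illustrative and would be tuned per scene.
    Mesh* selectLevel(const LODObject& obj, float distSq)
    {
        if (distSq > 400.0f * 400.0f) return obj.levels[0];
        if (distSq > 200.0f * 200.0f) return obj.levels[1];
        if (distSq > 100.0f * 100.0f) return obj.levels[2];
        return obj.levels[3];
    }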

2.5 Impostors

Impostors are defined as "... a set of transparent polygons onto which we map meaningful opaque images" [SIGGRAPH, 2004]. This idea was considered for some of the humans in the simulation, in case the simulation ran slowly, as it could be faster than rendering each human individually. The idea is to replace geometry which is some distance from the viewer in the scene with a 2D image. The lifetime of an impostor depends on three aspects. The first is the impostor coming too close to the viewer, which would make it noticeable that it is a 2D image and not the actual character. The second is the viewer moving in a direction parallel to the impostor, and the third is the viewer moving in a direction too perpendicular to it; both of these would expose the impostor from a different angle and reveal that it is not 3D. Ways to calculate when the impostor should be recreated are shown in Stephen Laycock's lecture on Virtual Environments II [Laycock, 2005]. Also discussed in that lecture [Laycock, 2005] are billboards, which are a type of impostor whose image continually faces the viewer no matter what angle it is viewed from. This is useful for effects such as smoke, explosions and fire, and has also been used for clouds. Another type of billboard is the axial billboard, where the 2D image rotates around its axis; this can only be used for roughly symmetrical objects such as trees. The problem with this type of billboard is that, viewed from above, it does not look like the object at all [Laycock, 2005].

2.6 Particle Systems

Particle systems can be used to create many effects, for example fire, explosions and rain. To create a particle system, many attributes need to be considered. The initial position is needed so that the effect has somewhere to start, and the initial velocity so that, for example, the speed of the falling particles is known. The size, colour, shape, transparency and lifetime also need to be considered so that the right effect is produced [Owen, 2000]. To make the particles move, their positions must be computed each frame; this is a simple calculation using two factors of the particle system, the speed and the direction. Another factor to consider is gravity: an acceleration such as gravity can be applied to the particle system. The lifetime of a particle can be defined so that when the particle reaches a certain position it resets back to its original position; in the case of rain, this means the particles continue to fall indefinitely [Owen, 2000].
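These attributes lend themselves to a very small update loop. The following sketch is illustrative rather than the project's code: each drop moves by its velocity, gravity is applied as the acceleration, and a drop's lifetime ends when it reaches the ground, at which point it resets to its starting height so the rain never stops.

    struct Particle {
        float x, y, z;   // position
        float vy;        // vertical velocity (negative while falling)
    };

    const float GRAVITY  = -9.8f;    // acceleration applied per second
    const float SPAWN_Y  = 50.0f;    // height at which drops (re)start
    const float GROUND_Y = 0.0f;

    void updateRain(Particle* drops, int count, float dt)
    {
        for (int i = 0; i < count; ++i) {
            drops[i].vy += GRAVITY * dt;       // gravity accelerates the drop
            drops[i].y  += drops[i].vy * dt;   // move by the current velocity
            if (drops[i].y <= GROUND_Y) {      // lifetime over: reset the drop
                drops[i].y  = SPAWN_Y;
                drops[i].vy = 0.0f;
            }
        }
    }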


2.7 EXODUS

The EXODUS project simulates how humans behave in different situations given a set of predetermined rules. It spans many areas, including the events which occurred at the World Trade Centre. The project deals with how humans behave when evacuating buildings, aircraft and boats in panic situations. It has been used to recount the events of tragedies such as the WTC and the Gustloff, to calculate how many people would escape these situations and to compare the results with the actual events [EXODUS, 2003]. In the lecture "Simulating the Interaction of Pedestrians with Wayfinding Systems", Prof Galea describes a method of modelling human behaviour using a signage system. Signs are placed throughout the environment and the humans react to them when there is an emergency. Each sign has a Visibility Catchment Area (VCA): if a sign, for example an exit sign, falls within a human's field of vision, the human will react to it, as shown in Figure 4. The figure shows that when a human is behind a wall they cannot see the sign, as they are beyond the VCA of that particular sign; it also shows another human who is not obstructed by a wall, who can indeed see the sign and can therefore react.

Figure 4: The Visibility Catchment Area of a sign for two humans

The research done while creating EXODUS examined how likely a human is to actually see a sign from different angles, shown in Figure 5. This is important because, if the sign is only just inside the human's VCA, the human may not recognise it as a sign; the sign may appear too small, or the human may not be able to make it out. If the human is directly facing the sign there is a probability of 1 that the sign is seen, but facing at an angle of 45° reduces this probability to just over 0.2.
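A hedged sketch of how such a VCA test might be implemented is given below. The function names are hypothetical and the wall-occlusion test is only stubbed, but it shows the two conditions described above: the sign must be within the human's field of view, and the line of sight must be unobstructed.

    #include <cmath>

    // Placeholder for a wall-occlusion test; a real version would intersect
    // the sight line with the zoo's walls.
    bool lineOfSightClear(float hx, float hz, float sx, float sz)
    {
        return true;
    }

    // True if the sign lies within the human's field of view and nothing
    // blocks the line of sight.
    bool canSeeSign(float hx, float hz,       // human position
                    float fx, float fz,       // facing direction (unit vector)
                    float sx, float sz,       // sign position
                    float halfFovDegrees)     // half-angle of the field of view
    {
        float dx = sx - hx, dz = sz - hz;
        float len = std::sqrt(dx * dx + dz * dz);
        if (len == 0.0f) return true;
        // Cosine of the angle between the facing direction and the sign.
        float cosAngle = (fx * dx + fz * dz) / len;
        if (cosAngle < std::cos(halfFovDegrees * 3.14159265f / 180.0f))
            return false;                     // sign is outside the VCA
        return lineOfSightClear(hx, hz, sx, sz);
    }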

Figure 5: The observation probabilities of signs when viewed by a human

2.8 Critical Review

For this project, collision avoidance has not been taken to the depth that Musse and Thalmann reached, as no prior knowledge of Artificial Intelligence was available. Creating groups of humans and applying rules and relationships to them was unnecessary, because the main idea of the project is to model how the humans move in their reactions graphically, as opposed to how they act; the zoo is also not big enough to hold many groups of people to which group rules and relationships could be applied.

The idea discussed for collision detection has improved the project, as the simulation includes many humans. Breaking the simulation down into blocks has improved its overall efficiency and also made the human reactions easier to implement, as the humans go over to the enclosures and react when they are close to them.

The research on level of detail could have been useful had the simulation run slowly; an algorithm performing this task would have kept the simulation looking consistent even while the humans furthest away were actually very basic. However, the simulation ran at a reasonable speed and none of the animation sequences or movements were affected, so this idea was not needed.

Section 2.3.1 describes the ideas behind animating a character in terms of its structure. The models in this project use the skeletal approach, as this made animating the mesh much easier and reduced the risk of producing unrealistic animations caused by the mesh shrinking. Animating the skeleton and then attaching the mesh to it has made the animations as realistic as possible, and care was taken to avoid any part of the mesh intersecting another.

Section 2.3.2 discusses how to make the animations as realistic as possible and was particularly relevant to the project, as it sets out good points to consider when creating the animations for all the characters in the simulation, which is the main basis of the project. However, MoCap could not be used, as the project does not have the resources for this type of research; applying it to animals would also take a great deal of time and might not be possible.

Impostors were considered because, depending on the number of humans in the simulation, they could have made the process much faster and more efficient: the humans could be rendered as impostors beyond a calculated distance and switch back to full objects when the viewer comes close or views them from a different angle. Because the environment is quite small, impostors were not implemented for the humans, although in a bigger environment this would definitely be a good idea. The idea was, however, used for the trees, which are crossed images that look like trees from ground level, although from above they just look like a cross. The houses are also impostors; they never change, because the camera restrictions stop the camera from getting close to them.

The project includes rain, implemented using a particle system as described in section 2.6. The rain begins above the simulation and falls vertically downwards with a speed applied to it. When the rain reaches the ground, the particles reset themselves back to their original positions, so it can rain continuously.

The ideas discussed in the EXODUS section would have been useful to implement, since the humans walk around the zoo reacting to the animals they see. The signage system could have been used to place signs around the zoo so that the humans know where to find specific animal enclosures, and the VCA idea could have been used so that a human walking past an animal notices when it is doing something worth reacting to, and reacts. Unfortunately these ideas could not be implemented due to the time restrictions.


3 Theory / Design

3.1 Initial Approaches

The initial approach to the project was to create the basic human models and a simple environment to start with; the reactions and collision detection could then be included. This section describes the ideas planned at the design stage; section 4 describes what has been implemented and how. Screenshots of the environment can be found in section A of the Appendix. Software was needed to deal with the modelling of the characters, and many packages deal with this, such as Maya and 3dsmax [3dsmax, 2006]. The decision was made to use 3dsmax because it was the more familiar package, and because the export code provided by Paul Gasson was specific to 3dsmax models. The plan was to model the humans and animals in 3dsmax by creating a mesh from splines or from a basic polyhedral shape, such as a cylinder, and then using the various 3dsmax tools to adjust the shape and add other shapes; for example, extrude and bevel, which extrude parts of the mesh to create arms and legs. To animate the modelled animals and humans, the key frame and time slider features of 3dsmax were to be used to move to certain frames, and the models would then be animated using the select-and-rotate or select-and-move tools. The models are kept simple, as modelling the animals and humans can take a lot of time, and this meant other parts of the project could be looked at sooner. As suggested in the collision detection section of the literature survey, the plan was to divide the environment into smaller blocks, as shown in Figure 6. Each human then tests for collision only against the other humans in the same block and the blocks surrounding it, as shown in Figure 7.

Figure 6: The breaking down of the Zoo into block sections

Figure 7: The testing of collision detection with surrounding blocks

These blocks are used to reduce the time complexity of the overall collision detection. By ensuring that the collision calculations are applied only where necessary, the global O(n²) problem is reduced to a localised O(n²) problem. For example, testing bounding-sphere collisions against every other human in the area is costly, as a loop of the following form would occur, with every human tested against every other:

    for (int i = 0; i < numHumans; i++)
        for (int j = 0; j < numHumans; j++)
            if (i != j)
                testCollision(human[i], human[j]);   // O(n²) tests per frame
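For contrast, a minimal sketch of the block scheme in Figures 6 and 7 is given below. The structures, names and grid sizes are hypothetical, not taken from the project code: humans are binned into grid cells each frame, and each human is then tested only against the occupants of its own cell and the eight surrounding cells, so the quadratic cost applies to the few humans in a local area rather than to all n.

    #include <vector>

    // Hypothetical grid parameters; a real zoo would size these to fit the map.
    const int   GRID_W = 10;
    const int   GRID_H = 10;
    const float CELL   = 5.0f;   // world units covered by one block

    struct Human { float x, z, radius; };

    static int clampi(int v, int lo, int hi)
    {
        return v < lo ? lo : (v > hi ? hi : v);
    }

    static bool collides(const Human& a, const Human& b)
    {
        float dx = a.x - b.x, dz = a.z - b.z, r = a.radius + b.radius;
        return dx * dx + dz * dz <= r * r;
    }

    void testBlockCollisions(const std::vector<Human>& humans)
    {
        // Rebuild the buckets each frame so the bins stay correct as the
        // humans move between blocks.
        std::vector<int> cells[GRID_W][GRID_H];
        for (int i = 0; i < (int)humans.size(); ++i) {
            int cx = clampi((int)(humans[i].x / CELL), 0, GRID_W - 1);
            int cz = clampi((int)(humans[i].z / CELL), 0, GRID_H - 1);
            cells[cx][cz].push_back(i);
        }
        // Each human is tested only against the 3x3 neighbourhood of its
        // own block instead of against every human in the zoo.
        for (int i = 0; i < (int)humans.size(); ++i) {
            int cx = clampi((int)(humans[i].x / CELL), 0, GRID_W - 1);
            int cz = clampi((int)(humans[i].z / CELL), 0, GRID_H - 1);
            for (int dx = -1; dx <= 1; ++dx) {
                for (int dz = -1; dz <= 1; ++dz) {
                    int nx = cx + dx, nz = cz + dz;
                    if (nx < 0 || nx >= GRID_W || nz < 0 || nz >= GRID_H)
                        continue;
                    const std::vector<int>& bucket = cells[nx][nz];
                    for (int k = 0; k < (int)bucket.size(); ++k) {
                        int j = bucket[k];
                        if (j > i && collides(humans[i], humans[j])) {
                            // respond to the collision here
                        }
                    }
                }
            }
        }
    }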