Proc. of the Sixth Pacific Rim International Workshop on Multi-Agents (PRIMA-2003), Seoul, Korea, pp. 81-92, November, 2003.

MMOG Player Identification: A Step toward CRM of MMOGs

Ji-Young Ho, Yoshitaka Matsumoto, and Ruck Thawonmas
Intelligent Computer Entertainment Laboratory
Department of Computer Science, Ritsumeikan University
Kusatsu, Shiga 525-8577, Japan
[email protected]

Abstract. The market for massively multiplayer online games (MMOGs) is expanding rapidly. The MMOG business can be considered a form of e-business, where CRM (Customer Relationship Management) is a key success factor. This paper discusses an effective approach to player identification, the results of which can be exploited for CRM of MMOGs. A feature extraction method is proposed that is based on the frequency of each type of game item acquired by each player. To validate the proposed method, experiments are conducted using game log data obtained from an MMOG simulator. The experimental results show that the proposed feature extraction method outperforms a method, previously proposed by the same authors, that is based on the frequency of each type of action performed by each player.

1 Introduction

These days, CRM (Customer Relationship Management) [1] is applied in many business fields. CRM is an improved database-marketing method based on Individual Marketing, One-to-One Marketing, and Relationship Marketing. We argue that a new potential application of CRM is the online-game business. The ultimate goal is to satisfy players by offering them the content that they need. To achieve this goal, we are confronted by a number of technical issues such as player identification and player profiling. In this paper, we discuss an approach for the identification of player types in massively multiplayer online games (MMOGs). We compare the identification ability of input features derived from the frequency of each type of action performed by each player with that of input features derived from the frequency of each type of item acquired by each player.

Fig. 1. Zereal architecture.

Fig. 2. Screen shot of ZerealViewer.

We recently proposed in [2] a procedure for deriving the former type of input features and discussed the need for normalization of the selected features, which makes them more robust against the decision ambiguity of players. Here, we propose a new procedure that uses the frequency of each type of item acquired by each player. We pursue our approach using a PC Cluster based MMOG simulator (in this paper, PC stands for personal computer and should not be confused with player character). In the future, we plan to apply our findings to real MMOG data.

The work presented in this paper is divided into two parts, namely, modeling and identification. In the modeling part, player agents with different characteristics are modeled using the aforementioned MMOG simulator. By player agents, we mean agents that imitate player characters in real MMOGs. The player agents reside in multiple game worlds and can migrate to adjacent game worlds, each world running on a single PC node. A game world also accommodates monsters, representing non-player characters in real MMOGs, that can kill (or be killed by) player agents. In the identification part, the task is to correctly identify the type of a given player agent merely from its log; the information on the agents from the modeling part is not used. To perform this task, we discuss the selection of input features and the selection of a classifier that assigns a given player agent to a particular type based on the selected input features.

The rest of the paper is organized as follows. Section 2 describes the PC Cluster based MMOG simulator that we use, followed by a description of agent modeling. Section 3 discusses feature selection and classifier selection. Section 4 describes the experiments and presents the experimental results. Finally, Section 5 concludes the paper by summarizing the main results and suggesting future work.

2 MMOG Simulator and Agent Modeling

We use Zereal [3] as an MMOG simulator running on a PC Cluster. Zereal is designed as a testbench for testing player models, monster intelligence, and various data-analysis techniques for MMOGs. With Zereal, one can simulate multiple agents and multiple game worlds simultaneously and can control the environment of the worlds, such as the world maps, the number of items, the vision radius of agents, and so forth.

As shown in Fig. 1, Zereal is composed of a visualization client, a single master node, and multiple world nodes (sub-nodes). Except for the visualization client, every node runs on a PC Cluster system with the pyMPI [4] programming interface for parallel computing. Each world node controls its own world, including its agents and items. The master node gathers world-status information from all world nodes and forwards it to the visualization client. The visualization client receives this information and produces log files in various formats, including log files for analysis of the games and log files for rendering the game screens. In particular, we have developed a visualization client tool called ZerealViewer using the Java programming language. The roles of this tool are to convert the received data into pre-defined formats for analysis and to allow observation of the activities in each game world. Fig. 2 shows a ZerealViewer screenshot of an area in a game world.

In the version of Zereal licensed to us by the Zereal development team, three types of player agents are provided, namely, Killer, Markov Killer, and Plan Agent. Each type was designed and implemented to have different characteristics and a different level of intelligence. The first two player agent types represent human players who see killing as the sole purpose of the game and often think that player status depends on how many creatures their avatar has killed. In Zereal, Killer and Markov Killer have similar characteristics, but Markov Killer is more intelligent.

Table 1. Relative frequencies (columnwise) of player agent actions.

PC Types        Walk  Attack  PickFood  PickPotion  PickKey  LeaveWorld
Killer          L     H       M         M           L        L
Markov Killer   M     M       H         H           M        M
Plan Agent      H     L       L         L           H        H

Table 2. Relative frequencies (columnwise) of player agent items.

PC Types        Monster  Food  Potion  Key  Door
Killer          H        M     M       L    L
Markov Killer   M        H     H       M    M
Plan Agent      L        L     L       H    H

Regarding the characteristics of Plan Agent, these agents roam around the game world(s) to discover new places, features, and items. The behaviors of the three player agent types are summarized as follows:

– Killer puts the highest priority on attacking monsters.
– Markov Killer is also keen to attack monsters, but selects its next action according to the corresponding state-transition probability.
– Plan Agent aims to find a key and leave the current game world through a door.

In this simulator, agents have six common actions, namely, Walk, Attack, PickFood, PickPotion, PickKey, and LeaveWorld. As the results of these actions, an agent can acquire items of five types, namely, Monster, Food, Potion, Key, and Door (when an agent attacks a particular monster, we consider that the agent acquires an item of Monster type; likewise, when an agent leaves the current game world through a door, the agent acquires a Door item). The main hypothesis employed in this paper is that the tendencies of agent behaviors can be found in the frequencies of the action types and item types that the agents performed and acquired, respectively. Therefore, it should be possible to identify player agents based on this kind of information. To preliminarily verify this hypothesis, we compiled the relative frequencies of each type of action and item for each player type from the log data obtained in the experiments discussed in Section 4. The results are given in Tables 1 and 2 for actions and items, respectively. In these tables, each element is compared columnwise, and H, M, and L stand for high, medium, and low, respectively.

Fig. 3. Typical game log.

Fig. 3 shows a typical game log sent to the visualization client from the master node. The first and second columns indicate the simulation-time step and the real clock time, respectively. The third column shows the agent identifier, whose uppermost digit(s) index the current world node. The fourth column gives the action of each agent (the presence of Removed indicates that the corresponding agent was killed and thus removed from the game world), and the fifth and sixth columns give the coordinates in the game world before and after the action, respectively. The last column gives the type of each agent.
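To make the column layout concrete, the following is a minimal sketch of how such a log line could be parsed. The whitespace-separated layout, the field names, and the example line are assumptions for illustration only, not the actual Zereal log format.

```python
from dataclasses import dataclass

@dataclass
class LogEntry:
    sim_step: int       # simulation-time step
    clock_time: str     # real clock time
    agent_id: int       # uppermost digit(s) index the current world node
    action: str         # e.g. Walk, Attack, PickKey, Removed
    pos_before: tuple   # coordinates before the action
    pos_after: tuple    # coordinates after the action
    agent_type: str     # e.g. Killer, MarkovKiller, PlanAgent

def parse_log_line(line: str) -> LogEntry:
    """Parse one whitespace-separated log line (hypothetical field layout)."""
    step, clock, agent_id, action, before, after, agent_type = line.split()
    to_xy = lambda s: tuple(int(v) for v in s.strip("()").split(","))
    return LogEntry(int(step), clock, int(agent_id), action,
                    to_xy(before), to_xy(after), agent_type)

# Example with a made-up line in the assumed format:
entry = parse_log_line("12 10:15:02 31007 Attack (4,7) (4,7) Killer")
print(entry.action, entry.agent_type)
```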

3 Player Identification

The task here is to identify the type of a given player agent merely from its log. In our case, though type information is already available in the log, this information is used only in the training phase.

3.1 Feature Selection

A sequence of actions can easily be obtained for each player agent from the game log shown, for example, in Fig. 3. Fig. 4 shows typical action sequences for several agents.

Fig. 4. Typical action sequences.

We recently proposed in [2] the following procedure for deriving a set of input features based on the frequency of each type of action performed by each player agent:

– Step I: For each player agent, count the number of times the agent performed each action.
– Step II: For each player agent, divide each of these counts by the total number of actions the agent performed (the sum of the Step I counts).
– Step III: For each action, divide the results of Step II by the largest Step II value of that action over all player agents.

Henceforth, the resulting features are called Action features. As discussed in [2], the introduction of Step III makes the performance of the classifier of interest more robust against noise, synthetically introduced to represent the decision ambiguity of human players, than the features obtained at Step II.

In this paper, we propose a new procedure for feature selection based on the frequency of each type of item acquired by each player agent. We first need to extract an item sequence for each player agent from the corresponding action sequence. An example of an item sequence is "Monster, Key, Monster, Potion, Door", which indicates that the agent of interest attacks a monster, finds a key, attacks another monster, then picks up a potion and leaves the current game world through a door. An item sequence shows more compactly what story is unfolding than the corresponding action sequence, because it is generated by removing insignificant and redundant portions of the action sequence, such as runs of consecutive Walks or consecutive Attacks.
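Steps I–III above can be sketched as follows. This is a minimal illustration assuming each agent's log has already been reduced to a list of action names; the same counting applies unchanged to item names when deriving Item features. The function and variable names are ours, not taken from the Zereal code base.

```python
from collections import Counter

ACTIONS = ["Walk", "Attack", "PickFood", "PickPotion", "PickKey", "LeaveWorld"]

def extract_features(sequences, categories=ACTIONS):
    """sequences maps an agent id to its list of actions (or items).
    Returns a dict mapping each agent id to its normalized feature vector."""
    # Step I: count each category per agent.
    counts = {a: Counter(seq) for a, seq in sequences.items()}
    # Step II: divide by each agent's total count (relative frequency per agent).
    rel = {a: [c[cat] / sum(c.values()) for cat in categories]
           for a, c in counts.items()}
    # Step III: divide each column by its maximum over all agents.
    col_max = [max(v[i] for v in rel.values()) or 1.0
               for i in range(len(categories))]
    return {a: [x / m for x, m in zip(v, col_max)] for a, v in rel.items()}

# Toy usage with two made-up agents:
features = extract_features({
    "killer_1": ["Walk", "Attack", "Attack", "PickFood"],
    "plan_1":   ["Walk", "Walk", "PickKey", "LeaveWorld"],
})
print(features["killer_1"])
```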

Fig. 5. Typical item sequences.

The newly proposed feature-selection procedure is the same as the one above, except that action is replaced by item and performed by acquired. The resulting features are called Item features. The proposed procedure is applied to item sequences obtained by the following preprocessing (a code sketch of these rules is given at the end of this subsection):

– For Monsters, if a player agent attacks a particular monster, add one Monster item to the item sequence. Even if the player agent attacks the same monster many times, only one Monster item is added.
– For Foods, Potions, and Keys, if a player agent picks up a food, a potion, or a key, add one Food, Potion, or Key item to the item sequence, respectively.
– For Doors, if a player agent leaves the world through a door, add one Door item to the item sequence.

Fig. 5 shows the item sequences obtained by preprocessing the action sequences in Fig. 4. In the experiments, it will be shown that Item features outperform Action features in more realistic environments. Tables 3, 4, and 5 show typical results of Steps I, II, and III, respectively, for both Action features and Item features. These tables illustrate how the same log data are represented differently depending on whether action or item information is used. Here, we used a game log taken over 200 simulation-time steps. Note that in Table 3(a) the rowwise sum for each player agent need not be 200, since a player agent may not perform any action during some simulation-time steps.
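The preprocessing rules can be sketched as follows. We assume, for illustration only, that each action record carries a target identifier so that repeated attacks on the same monster can be collapsed into a single Monster item; the actual Zereal log representation may differ.

```python
def to_item_sequence(actions):
    """actions: list of (action_name, target_id) tuples for one player agent.
    Returns the corresponding item sequence."""
    items = []
    attacked = set()    # monsters this agent has already attacked
    pick_map = {"PickFood": "Food", "PickPotion": "Potion",
                "PickKey": "Key", "LeaveWorld": "Door"}
    for action, target in actions:
        if action == "Attack":
            if target not in attacked:   # only one Monster item per monster
                attacked.add(target)
                items.append("Monster")
        elif action in pick_map:
            items.append(pick_map[action])
        # Walk contributes no item
    return items

# Toy usage with made-up monster identifiers:
seq = [("Walk", None), ("Attack", "m3"), ("Attack", "m3"),
       ("Walk", None), ("Attack", "m7"), ("PickKey", None), ("LeaveWorld", None)]
print(to_item_sequence(seq))   # ['Monster', 'Monster', 'Key', 'Door']
```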

3.2 Classifier Selection

In our work, we adopt memory-based reasoning (MBR) [5] as the classifier in the experiments. Our main reason is that the initial setting of parameters in MBR requires less effort than for other classifiers. The MBR algorithm is also transparent: for a given unknown datum, find its k nearest neighbors in input feature space among the training data.

Table 3. Typical results of Step I.

(a) Action features
PC Types          Walk  Attack  PickFood  PickPotion  PickKey  LeaveWorld
Killer 1          67    92      2         0           0        0
Killer 2          93    104     0         1           2        0
Markov Killer 1   107   1       6         2           0        0
Markov Killer 2   177   11      4         7           1        0
Plan Agent 1      113   0       0         0           8        1
Plan Agent 2      119   0       1         0           4        0

(b) Item features
PC Types          Monster  Food  Potion  Key  Door
Killer 1          8        2     0       0    0
Killer 2          11       0     1       2    0
Markov Killer 1   1        6     2       0    0
Markov Killer 2   1        4     7       1    0
Plan Agent 1      0        0     0       8    1
Plan Agent 2      0        1     0       4    0

The majority vote among these neighbors is then taken to decide the type of the unknown datum. In MBR, therefore, we need to decide only k and the type of distance measure. Fig. 6 illustrates the concept of MBR on two-dimensional data with k = 3 and the Euclidean distance. In this figure, we assume that there are two types of data, depicted by • and ◦, respectively. The classification result for the unknown datum × is that it is of type •.
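Since MBR as used here amounts to a k-nearest-neighbor majority vote, it could be sketched as follows; this is a minimal version with the Euclidean distance and makes no attempt to reproduce the exact implementation used in our experiments.

```python
import math
from collections import Counter

def mbr_classify(unknown, training, k=3):
    """training: list of (feature_vector, label) pairs.
    Returns the majority label among the k nearest neighbors of `unknown`."""
    def euclidean(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    neighbors = sorted(training, key=lambda pair: euclidean(unknown, pair[0]))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy usage with made-up feature vectors:
training = [([0.9, 0.1], "Killer"), ([0.8, 0.2], "Killer"),
            ([0.1, 0.9], "Plan Agent")]
print(mbr_classify([0.85, 0.15], training, k=3))   # 'Killer'
```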

4 Experiments

A classifier in use should be able to correctly identify unknown data not seen in the training data. This ability is called generalization ability. To approximate the generalization ability, we use the leave-one-out method [6]. In the leave-one-out method, supposing that the total number of training data is M, data number 1 is first used for testing while the other data are used for training the classifier of interest. Next, data number 2 is used for testing and the remaining data for training. The process is iterated M times in total. The averaged recognition rate over the test data is computed and used to indicate the generalization ability of the classifier.
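A minimal sketch of this leave-one-out estimate is given below, assuming a generic classify(unknown, training) function such as the MBR sketch in Section 3.2; the function names are illustrative.

```python
def leave_one_out_accuracy(data, classify):
    """data: list of (feature_vector, label) pairs.
    classify(unknown, training) must return a predicted label.
    Returns the averaged recognition rate over all M held-out data."""
    correct = 0
    for i, (features, label) in enumerate(data):
        training = data[:i] + data[i + 1:]      # leave datum i out
        if classify(features, training) == label:
            correct += 1
    return correct / len(data)

# Toy usage with a trivial 1-nearest-neighbor classifier:
def nn_classify(unknown, training):
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda pair: dist(unknown, pair[0]))[1]

data = [([0.9, 0.1], "Killer"), ([0.8, 0.2], "Killer"),
        ([0.1, 0.9], "Plan Agent"), ([0.2, 0.8], "Plan Agent")]
print(leave_one_out_accuracy(data, nn_classify))    # 1.0 on this toy data
```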

Table 4. Typical results of Step II.

(a) Action features
PC Types          Walk    Attack  PickFood  PickPotion  PickKey  LeaveWorld
Killer 1          0.4161  0.5714  0.0124    0           0        0
Killer 2          0.4650  0.5200  0         0.0050      0.0100   0
Markov Killer 1   0.9224  0.0086  0.0517    0.0172      0        0
Markov Killer 2   0.8850  0.0550  0.0200    0.0350      0.0050   0
Plan Agent 1      0.9262  0       0         0           0.0656   0.0082
Plan Agent 2      0.9597  0       0.0081    0           0.0323   0

(b) Item features
PC Types          Monster  Food    Potion  Key     Door
Killer 1          0.8000   0.2000  0       0       0
Killer 2          0.7857   0       0.0714  0.1429  0
Markov Killer 1   0.1111   0.6667  0.2222  0       0
Markov Killer 2   0.0769   0.3077  0.5385  0.0769  0
Plan Agent 1      0        0       0       0.8889  0.1111
Plan Agent 2      0        0.2000  0       0.8000  0

The objective of our experiments is to compare the generalization ability of MBR using Action features with that of MBR using Item features. In other words, we want to compare the identification ability of information on the frequency of each type of action performed by each player with that of information on the frequency of each type of item acquired by each player. For the experiments, log data were obtained by running 5 game worlds simultaneously for 200 simulation-time steps on a Fujitsu PC Cluster system with 5 CPUs. Each game world accommodated 5 player agents of each type, 5 monsters, and 5 items of each of the other game-object types. From the log data, we applied the feature-selection procedures discussed in Section 3.1 and obtained Action features and Item features for the resulting 75 player agents. This number of player agents represents a typical number of players playing simultaneously in a particular zone of a larger world map, which may host thousands of players [7], in real MMOGs.

In real MMOGs, players sometimes select different actions even when they face the same situation. To make our experiments more realistic, we added the following three levels of Gaussian noise to the above data (a sketch of this perturbation follows the list):

– N0: Gaussian noise with mean = 0 and variance = 0.001.
– N1: Gaussian noise with mean = 0 and variance = 0.1.
– N2: Gaussian noise with mean = 0 and variance = 0.2.
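For illustration, the three noise levels could be applied to a feature vector as in the following sketch; note that random.gauss expects a standard deviation, so the variances above are converted accordingly. Any clipping or renormalization of the perturbed features is omitted here.

```python
import random

NOISE_LEVELS = {"N0": 0.001, "N1": 0.1, "N2": 0.2}   # variances

def add_gaussian_noise(features, level="N1"):
    """Return a copy of a feature vector perturbed with zero-mean Gaussian noise."""
    sigma = NOISE_LEVELS[level] ** 0.5   # convert variance to standard deviation
    return [x + random.gauss(0.0, sigma) for x in features]

# Toy usage:
print(add_gaussian_noise([1.0, 0.3, 0.0, 0.0, 0.0], "N2"))
```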

Table 5. Typical results of Step III.

(a) Action features
PC Types          Walk    Attack  PickFood  PickPotion  PickKey  LeaveWorld
Killer 1          0.4336  1.0000  0.2402    0           0        0
Killer 2          0.4845  0.9100  0         0.1429      0.1525   0
Markov Killer 1   0.9612  0.0151  1.0000    0.4926      0        0
Markov Killer 2   0.9222  0.0963  0.3867    1.0000      0.0762   0
Plan Agent 1      0.9651  0       0         0           1.0000   1.0000
Plan Agent 2      1.0000  0       0.1559    0           0.4919   0

(b) Item features
PC Types          Monster  Food    Potion  Key     Door
Killer 1          1.0000   0.3000  0       0       0
Killer 2          0.9821   0       0.1327  0.1607  0
Markov Killer 1   0.1389   1.0000  0.4127  0       0
Markov Killer 2   0.0962   0.4615  1.0000  0.0865  0
Plan Agent 1      0        0       0       1.0000  1.0000
Plan Agent 2      0        0.3000  0       0.9000  0

For each experiment, we repeated the leave-one-out method 100 times and took the averaged result. Tables 6(a), 6(b), and 6(c) show the generalization ability of MBR using Action features and Item features with k = 1, 5, and 9, respectively. As can be seen from these tables, Item features outperform Action features, especially when the noise level is high.

Fig. 6. Concept of MBR on 2D data.

Our explanation for these results is as follows. The player-type information in Item features is enhanced relative to Action features through the preprocessing proposed in Section 3.1. In other words, the preprocessing discards meaningless portions of the action sequences and thus makes the resulting data more compact. For example, assume that we obtained two partial action sequences, "Walk, Walk, Walk, Attack, Attack, Attack, Walk, PickKey" for a Markov Killer agent and "Walk, Attack, Walk, Attack, Walk, Attack, Walk, PickKey" for a Killer agent. These sequences indicate that the Markov Killer agent fought with one monster while the Killer agent fought with three monsters. As explained in Section 2, Killer-type agents have offensive characteristics, so they frequently move toward monsters to attack. Markov Killer-type agents, however, also consider their own condition, implemented by transition probabilities, so they sometimes escape from a monster when their stamina is low. After preprocessing these action sequences, we obtain two item sequences, namely, "Monster, Key" for the Markov Killer agent and "Monster, Monster, Monster, Key" for the Killer agent. As a result, we get two different results at Step I for the two agents, namely, (1 Monster, 1 Key) for the Markov Killer agent and (3 Monster, 1 Key) for the Killer agent. However, these two agents would have the same results at Step I, namely, (4 Walks, 3 Attacks, 1 PickKey), if action information were used.

5 Conclusions

In this paper, we have presented an effective approach for the identification of player types in MMOGs using information on the frequency of each type of item that each player acquired (Item features). With Item features, MBR, adopted as the classifier, could successfully identify the types of unknown player agents. Moreover, as the noise level increases, the performance of MBR using these features degrades more gracefully than that of MBR using features derived from the frequency of each type of action each player performed (Action features). In future work, we plan to conduct experiments using agents with more complicated behaviors and to investigate the use of order information in either action sequences or item sequences. Eventually, we will apply our findings to real MMOG data.

Table 6. Generalization ability of MBR.

(a) k = 1
Features  N0   N1   N2
Action    97%  85%  69%
Item      96%  94%  79%

(b) k = 5
Features  N0   N1   N2
Action    96%  88%  74%
Item      97%  95%  84%

(c) k = 9
Features  N0   N1   N2
Action    93%  89%  77%
Item      98%  95%  85%

References

1. Paul Greenberg. CRM at the Speed of Light. McGraw-Hill Osborne Media, 1st Edition, 2001.
2. Ruck Thawonmas, Ji-Young Ho, and Yoshitaka Matsumoto. Identification of Player Types in Massively Multiplayer Online Games. Proc. of the 34th Annual Conference of the International Simulation And Gaming Association (ISAGA 2003), Chiba, Japan, August 25-29, 2003. Accepted for oral presentation. http://www.ice.ritsumei.ac.jp/ ruck/PAP/isaga03.pdf.
3. Amund Tveit, Oyvind Rein, Jorgen V. Iversen, and Mihhail Matskin. Zereal: A Mobile Agent-based Simulator of Massively Multiplayer Games. Submitted. http://gamemining.net/amund/publications/.
4. Patrick Miller. pyMPI - An Introduction to Parallel Python Using MPI. http://sourceforge.net/projects/pympi, September 2002.
5. Michael J. A. Berry and Gordon Linoff. Data Mining Techniques: For Marketing, Sales, and Customer Support. John Wiley & Sons, Inc., N.Y., 1997.
6. Sholom M. Weiss and Casimir A. Kulikowski. Computer Systems That Learn. Morgan Kaufmann Publishers, San Mateo, CA, 1991.
7. Alex Jarett, Jon Estanislao, Elonka Dunin, Jennifer MacLean, Brian Robbins, David Rohrl, John Welch, and Jeferson Valadares. IGDA Online Games White Paper. 2nd Edition, March 2003.
