When it comes to video games, imitation can be dangerous. If one game makes changes that players love, there’s no guarantee that players of a different game would welcome the same changes. So, do you make the change anyway? Or do you sit tight and ride out the status quo? Time to do some math.
Because players behave differently in different gaming environments, it’s not as simple as comparing apples to apples. It’s more like comparing apples to oranges. Or possibly apples to manatees. Here’s where the math comes in.
When developers – or researchers – make a change to a gaming environment, they can quantify how players respond to that change. And researchers have come up with a math-based technique to extrapolate how a similar change would play out in a different gaming environment.
“This is the first principled approach that allows us to quantify information about a new environment based on information from an old environment,” says David Roberts, an assistant professor of computer science at NC State and co-author of a paper on the research. “To an extent, developers will always rely on their intuition, but this could help them take advantage of previous mistakes or successes by themselves or other developers.” Roberts also notes that this research looks exclusively at interactive storytelling environments (like RPGs) – so its usefulness for arcade-style games is not yet known.
Roberts did the research with Fred Roberts, a professor of mathematics at Rutgers University, and essentially put a new spin on mathematical psychology concepts that have been around since the Hoover administration: models of utility and preference. These models use concise mathematical equations to describe human behavior. The paper is available here.
In these models, a person’s preference for a given alternative is a function of its “utility” – or value. In other words, the more value a person places on something, the more likely they are to select it.
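To make that concrete, here is a minimal sketch of one classic utility-based preference model (a Luce-style choice rule, in which the probability of picking an alternative grows with its utility). The function name and the utility values are illustrative, not taken from the paper:

```python
import math

def choice_probabilities(utilities):
    """Luce-style choice rule: the probability of selecting an
    alternative is proportional to the exponential of its utility."""
    weights = [math.exp(u) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical utilities a player assigns to three in-game actions.
probs = choice_probabilities([2.0, 1.0, 0.5])
# The highest-utility action comes out as the most likely choice.
```

The key property is the one the article describes: rank the alternatives by utility, and you have ranked them by how likely a player is to choose them.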
In this work, the researchers developed a technique where they evaluated baseline behaviors in one game (Game A), as well as alterations in behavior after a specific change was made to the game. This behavioral data was then used to quantify the utility (or value) of the various behaviors exhibited by players in Game A. Essentially, they analyzed how and why players behaved differently after the change.
The researchers then evaluated baseline behaviors in a second game (Game B). Researchers used Game B’s baseline data and the utility data from Game A to estimate what the utility of behaviors would be in Game B if it made the same changes that had been made to Game A. With me so far? Basically, you need to understand the essential value of doing specific things in Game B if you want to predict how its players will respond to change.
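The two paragraphs above can be sketched in code. This is an illustrative reconstruction under simple assumptions (utilities recovered as log-frequencies under the Luce rule, and the change modeled as a utility shift carried over from Game A), not the authors’ actual equations; all names and numbers are hypothetical:

```python
import math

def infer_utilities(behavior_freqs):
    """Invert the Luce choice rule: recover utilities (up to an
    additive constant) from observed choice frequencies."""
    return [math.log(f) for f in behavior_freqs]

def predict_after_change(freqs_a_before, freqs_a_after, freqs_b_before):
    """Estimate how Game B's players would respond to the change made
    in Game A: measure the utility shift the change produced in Game A,
    apply that shift to Game B's baseline utilities, and convert the
    shifted utilities back into predicted choice frequencies."""
    shift = [after - before for before, after in
             zip(infer_utilities(freqs_a_before),
                 infer_utilities(freqs_a_after))]
    shifted_b = [u + d for u, d in
                 zip(infer_utilities(freqs_b_before), shift)]
    weights = [math.exp(u) for u in shifted_b]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical data: fractions of players choosing each of three behaviors.
pred = predict_after_change(
    freqs_a_before=[0.5, 0.3, 0.2],
    freqs_a_after=[0.3, 0.3, 0.4],   # in Game A, the change boosted behavior 3
    freqs_b_before=[0.6, 0.2, 0.2],
)
# The prediction carries the boost for behavior 3 over to Game B's players.
```

The design point is the one in the text: the Game A data tells you how much value the change added to or removed from each behavior, and Game B’s baseline tells you what its players valued to begin with; combining the two yields the prediction.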
While it may sound complicated (and you should really see the equations), it works pretty well – with error rates ranging from +/-0.38 percent to +/-27 percent, depending on the quality of input data.
Let me explain what I mean by error rates. Say researchers use the new technique and predict that 72 percent of players will respond to a game change in the same way. If the error rate is +/-2 percent, that means that between 70 and 74 percent of players actually responded that way – the prediction was within 2 percentage points of the real outcome.
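The arithmetic in that example is just a symmetric interval around the prediction, which can be written in two lines (the function name is mine, not from the paper):

```python
def prediction_interval(predicted_pct, error_pct):
    """Range of actual outcomes consistent with a prediction and a
    symmetric +/- error rate, both in percentage points."""
    return (predicted_pct - error_pct, predicted_pct + error_pct)

# The example from the text: a 72 percent prediction with +/-2 percent error.
low, high = prediction_interval(72.0, 2.0)  # -> (70.0, 74.0)
```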
“We don’t think this is going to make a huge difference in the way games are designed,” David Roberts says, “but it could help take some of the guesswork out of the process.”
Roberts will be presenting the paper at the 4th International Conference on Interactive Digital Storytelling, being held Nov. 28 through Dec. 1 in Vancouver, British Columbia.