I am currently testing yet another application of the versatile “Lock in Feedback” (LiF) algorithm: Dynamic in-game Difficulty Adjustment.
Inspired by Mihaly Csikszentmihalyi’s Flow theory, dynamic difficulty adjustment (DDA), also known as dynamic game balancing (DGB), “is the process of automatically changing parameters, scenarios, and behaviors in a video game in real-time, based on the player’s ability, in order to avoid making the player bored (if the game is too easy) or frustrated (if it is too hard).” [W].
To the left is an “Inverted-U” model (also known as the Yerkes-Dodson Law), which assumes a static “zone of optimal performance”. The chart to the right represents a “Flow” model, in which the optimal level of difficulty is expected to grow over time as the player improves his or her skill.
As LiF is particularly adept at finding and tracking drifting optima, it is a potentially ideal method for locating “the zone of optimal performance” (see the chart above to the left) while keeping the player “in the zone” as he or she progresses (“the zone” being the green area in the chart to the right).
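For readers curious what that lock-in machinery looks like in practice, here is a minimal one-dimensional sketch in Python. This is not the code running in the game: the class name, the default parameter values, and the simple one-period integration window are my own simplifications. What it does capture is the general LiF idea of oscillating the controlled variable around the current estimate, correlating the observed outcome with that same oscillation, and nudging the estimate along the resulting slope signal.

```python
import math


class LiFTracker:
    """Minimal one-dimensional Lock in Feedback (LiF) sketch.

    The controlled variable x (e.g. a difficulty setting) is oscillated
    around the current estimate x0. Each observed outcome is multiplied by
    the same cosine and averaged over one full oscillation; that average is
    proportional to the local slope of the outcome curve at x0, so adding it
    (scaled by a learning rate) to x0 climbs toward, and keeps tracking,
    the optimum, even when that optimum drifts over time.
    """

    def __init__(self, x0, amplitude=0.5, learn_rate=0.2, period=10):
        self.x0 = x0                        # current estimate of the optimum
        self.a = amplitude                  # oscillation amplitude A
        self.gamma = learn_rate             # update step size gamma
        self.period = period                # samples per oscillation (T)
        self.omega = 2 * math.pi / period   # oscillation frequency omega
        self.t = 0
        self.buffer = []                    # stored y_t * cos(omega * t) samples

    def suggest(self):
        """Next value to present: x_t = x0 + A * cos(omega * t)."""
        return self.x0 + self.a * math.cos(self.omega * self.t)

    def observe(self, y):
        """Feed back the (noisy) outcome y_t obtained at the suggested x_t."""
        self.buffer.append(y * math.cos(self.omega * self.t))
        self.t += 1
        if len(self.buffer) == self.period:
            # lock-in signal: roughly (A/2) * slope of the outcome curve at x0
            y_omega = sum(self.buffer) / self.period
            self.x0 += self.gamma * y_omega
            self.buffer.clear()
```

Because the estimate is updated recursively after every oscillation window, the tracker keeps moving as the underlying curve shifts, which is exactly the property the “Flow” picture above calls for.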
A first experiment with a (very) simple Android “pop the bubble” type game seems to indicate that this is indeed the case; the GIF screencast to the left, which shows the to-be-popped bubbles below a chart tracking LiF’s inner workings, offers preliminary evidence. Now on to multi-variable optimization!
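Purely for illustration (this is not the bubble game’s actual code), the snippet below shows how such a game loop might talk to a tracker like the one sketched above: the difficulty parameter (bubble speed), the simulated player, and the engagement proxy fed back as the outcome are all assumptions on my part.

```python
import random

# assumes the LiFTracker class from the sketch above is in scope
lif = LiFTracker(x0=1.0, amplitude=0.3, learn_rate=0.5, period=8)


def play_round(speed, skill):
    """Stand-in for one round: the fraction of 10 bubbles popped,
    dropping off once bubble speed outpaces the player's skill."""
    hit_chance = max(0.0, min(1.0, 1.2 - speed / skill))
    return sum(random.random() < hit_chance for _ in range(10)) / 10


skill = 1.0
for _ in range(200):
    speed = lif.suggest()             # difficulty presented this round
    popped = play_round(speed, skill)
    # hypothetical engagement proxy: faster bubbles are worth more, but only
    # if the player still pops them, so the product peaks at an intermediate,
    # skill-dependent difficulty
    lif.observe(popped * speed)
    skill += 0.01                     # the player improves over time, so the
                                      # optimal speed slowly drifts upward

print(f"final difficulty estimate: {lif.x0:.2f}")
```

In this toy setup the optimal bubble speed rises as the simulated player’s skill grows, and the tracker follows it, which is the single-variable version of the behavior visible in the screencast’s chart.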