
Implementing “AI” for YaMeMo

Published on 9 May 2012 by in Development

In this post I'll describe some of the mechanisms used in YaMeMo to create artificial players for the versus mode. I'm no artificial intelligence expert, but I hope there are some interesting ideas to discuss here. I tried to create opponents that differ in several ways, so I'll speak of artificial players rather than artificial intelligence.

Game engine

In the solo modes, the game engine only reacts to events triggered by the user. For a game with an artificial player, or other kinds of artificial opponents (enemies in an action game, for instance), the engine must ask the artificial player to perform its actions.

For an action game with a variable number of opponents, you can imagine the engine asking each opponent to perform its action in the main game loop (basically as often as possible). In a turn-based game it's easier, since the engine only has to poll the artificial opponent when its turn comes.

YaMeMo uses two types of events to pace the game:

  • user input: user touched a card
  • end of animation (animationComplete): event triggered when a card animation is complete (card covered or card uncovered)

The game engine also uses states (CPU_TURN / PLAYER_TURN) to know whose turn is in progress, and methods that indicate whether the card selection is complete (two cards uncovered) and whether the selection wins (two identical cards). Some examples:

  • user touched card / CPU_TURN : do nothing
  • animationComplete / CPU_TURN / selection complete / selection wins : next turn, same player
  • animationComplete / CPU_TURN / selection incomplete : request another card from artificial player
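The event/state combinations above can be sketched as a small dispatch method. This is a minimal illustration, not YaMeMo's actual code; the class, field, and action names are all assumptions:

```java
// Hypothetical sketch of the event/state dispatch described above.
class GameEngine {
    enum State { PLAYER_TURN, CPU_TURN }
    enum Event { USER_TOUCHED_CARD, ANIMATION_COMPLETE }

    State state = State.CPU_TURN;
    boolean selectionComplete = false;
    boolean selectionWins = false;

    String handle(Event event) {
        if (event == Event.USER_TOUCHED_CARD && state == State.CPU_TURN) {
            return "ignore"; // user touched a card during the CPU turn: do nothing
        }
        if (event == Event.ANIMATION_COMPLETE && state == State.CPU_TURN) {
            if (!selectionComplete) {
                // selection incomplete: request another card from the artificial player
                return "requestCard";
            }
            if (selectionWins) {
                // selection wins: next turn, same player
                return "nextTurnSamePlayer";
            }
            state = State.PLAYER_TURN; // failed pair: hand the turn to the human player
            return "nextTurnOtherPlayer";
        }
        return "ignore";
    }
}
```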

Difficulty vs personality

It seems straightforward to create several difficulty levels for a "versus" mode: tuning a probability of "forgetting" known cards is probably the easiest implementation of difficulty. This parameter is relevant for adjusting difficulty, but I like it when the challenge evolves along several axes. When I made a color Tetris clone some time ago, the difficulty changed game speed, block complexity, and the number of colors. I tried to implement the same kind of mechanism here, and ended up with "personalities" rather than difficulty levels. I identified several parameters to tweak between artificial players:

  • the order used to uncover cards: different players, different approaches
  • overall strategy: YaMeMo rewards the player for chains of matching pairs. Some players will try to make pairs as soon as they can, others will try to make “combos”
  • probability of forgetting: a simple difficulty tuning parameter mentioned above
  • probability of making combos: a set of thresholds to control the number of consecutive pairs an opponent will make
  • flag to uncover cards that were uncovered by the human player: this one does not bring much, but it's still one more parameter ;)

A "Factory" is used by the game to get a preconfigured instance of ArtificialPlayer at the start of the game. Thanks to all these parameters, a single implementation of ArtificialPlayer is enough. For the order of card discovery, different iterators have been created, and the right class is injected into the player when the game starts.

The ArtificialPlayer code is unique; the configurations are centralized in the factory. The game engine itself knows nothing about the artificial player's internals; it just asks it to pick a card when needed.
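The factory wiring could look roughly like this. All class names, personality labels, and parameter values below are assumptions for illustration, not YaMeMo's actual configuration:

```java
import java.util.*;

// Hypothetical sketch of the Factory + injected-iterator wiring described above.
class ArtificialPlayerFactory {

    // Different card-discovery orders, injected as plain Iterators.
    static Iterator<Integer> rowByRow(int boardSize) {
        List<Integer> order = new ArrayList<>();
        for (int i = 0; i < boardSize; i++) order.add(i);
        return order.iterator();
    }

    static Iterator<Integer> shuffled(int boardSize, long seed) {
        List<Integer> order = new ArrayList<>();
        for (int i = 0; i < boardSize; i++) order.add(i);
        Collections.shuffle(order, new Random(seed));
        return order.iterator();
    }

    // One ArtificialPlayer implementation, many configurations.
    static ArtificialPlayer create(String personality, int boardSize) {
        switch (personality) {
            case "methodical":
                return new ArtificialPlayer(rowByRow(boardSize), 0.05, true);
            case "scatterbrain":
                return new ArtificialPlayer(shuffled(boardSize, 42), 0.40, false);
            default:
                return new ArtificialPlayer(shuffled(boardSize, 1), 0.20, false);
        }
    }
}

class ArtificialPlayer {
    final Iterator<Integer> discoveryOrder;  // strategy for the order of card discovery
    final double forgetProbability;          // difficulty tuning parameter
    final boolean comboFirst;                // overall strategy: pairs ASAP vs combos

    ArtificialPlayer(Iterator<Integer> discoveryOrder, double forgetProbability, boolean comboFirst) {
        this.discoveryOrder = discoveryOrder;
        this.forgetProbability = forgetProbability;
        this.comboFirst = comboFirst;
    }
}
```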

Possible improvements

“Natural” implementation

In the current implementation, the engine keeps track of the number of times a card was seen in the card objects themselves, and passes the whole board to the artificial player. The player then uses a simple probability to decide whether it will forget a known card. A more natural implementation would let the artificial player maintain its own map of known cards and "blur" this map on each turn. That way, when forgetting a card, the player would pick a card in the same area rather than a completely random one, which would feel more realistic.
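Such a blurred memory might be sketched like this. The grid layout, drift rule, and all names here are assumptions about how the idea could be implemented, not existing YaMeMo code:

```java
import java.util.*;

// Hypothetical sketch of the "natural" memory proposed above: the player keeps
// its own map of known cards and blurs it each turn, so a forgotten card
// drifts to a neighbouring position instead of vanishing entirely.
class BlurryMemory {
    private Map<Integer, Integer> knownCards = new HashMap<>(); // position -> card id
    private final int columns;           // board width, to compute neighbours
    private final double forgetProbability;
    private final Random random;

    BlurryMemory(int columns, double forgetProbability, long seed) {
        this.columns = columns;
        this.forgetProbability = forgetProbability;
        this.random = new Random(seed);
    }

    void remember(int position, int cardId) {
        knownCards.put(position, cardId);
    }

    // Where does the player think this card is? Null if it has no idea.
    Integer positionOf(int cardId) {
        for (Map.Entry<Integer, Integer> e : knownCards.entrySet())
            if (e.getValue() == cardId) return e.getKey();
        return null;
    }

    // Called once per turn: each memory may drift to an adjacent cell.
    void blur() {
        Map<Integer, Integer> blurred = new HashMap<>();
        for (Map.Entry<Integer, Integer> e : knownCards.entrySet()) {
            int position = e.getKey();
            if (random.nextDouble() < forgetProbability) {
                // drift one cell left/right/up/down instead of forgetting outright
                int[] offsets = {-1, 1, -columns, columns};
                int drifted = position + offsets[random.nextInt(offsets.length)];
                // a real implementation would also clamp to the board bounds
                if (drifted >= 0) position = drifted;
            }
            blurred.put(position, e.getValue());
        }
        knownCards = blurred;
    }
}
```

With forgetProbability at 0 the map is stable; raising it makes the player's recollection of card positions increasingly fuzzy rather than simply erased.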


In the combo-first strategy, the player currently waits until it knows N % of the cards before starting to make pairs. A better implementation would be to wait until it knows N pairs, make them all, then start over.


Player configurations could be stored in a configuration file, or even retrieved from the internet, but I don't really see the advantage of doing so.

Player experience

With the current implementation, it's always best for the human player to make pairs as soon as possible. I'd like to find a way to reward combo making in versus mode as well.


It was quite fun to implement the versus mode and the different opponents. I played with the Strategy and Factory patterns, and with Java iterators. I hope the implementation choices and settings give a good result and a feeling of distinct opponents. Don't hesitate to comment; I'd love feedback on this post and on how it feels in-game.
