Building a Better Brain


In the last few weeks, we've focused a lot on the player AI for our game. We've used a few more advanced tricks to help our player navigate the maze using drills. But that's come at a performance cost. The game can now get a little choppy when there are a lot of enemies, or when our player is far away from the goal. It also takes longer to run our analysis iterations than we would like.

This week, we'll improve the performance of our AI by caching the path we determine. Many of our shortest-path calculations get repeated. We can keep track of these results and, in a lot of circumstances, avoid running the BFS algorithm altogether!

This week, you should take a look at the search-caching branch on our GitHub repository for the complete code we're implementing here. We'll focus on changes in the MazeUtils.hs file.

We're also going to do a little bit of profiling for this article. Profiling your code is an important skill to learn about if you ever want to use Haskell in production. For some other useful skills, check out our Production Checklist!

Profiling Our Code

As alluded to above, we have a pretty good idea of where the performance bottleneck is for our code. But it always pays to be sure. So to double check, we're going to run our code under profiling. We'll go through some of the basics here, but you should also check out this article we did on profiling a while back.

We'll get a readout for our code that will tell us which functions are taking the most time. This will tell us where we can make the most effective improvements. It will also give us a concrete way to prove our improvement later.

To start, we'll need to rebuild our code with stack build --profile. Be warned this can take a while, since all the libraries also need to be re-built. Then we can re-run the analysis program we used last week:

stack exec -- analyze-game maze_save_2 --enemies +RTS -p

Here's the abbreviated readout in the file analyze-game.EXE.prof:

total time = 32.62 secs

COST CENTRE                                  %time
drillBFS.newParentsMap.\                     21.9
drillBFS.unvisitedNextItems.\                21.7
drillBFS.newVisitedSet                       19.4
getDrillAdjacentItems                        6.2
drillBFS                                     4.5
drillBFS.newSearchQueue                      4.0
getDrillAdjacentItems.mkItemFromResult       3.0
bfs.newParentsMap.\                          2.1
bfs.newVisitedSet                            2.0
getDrillAdjacentItems.mkItemFromResult.(...) 1.7
drillBFS.unvisitedNextItems                  1.4
bfs.unvisitedNextCells.\                     1.1
drillBFS.newParentsMap                       1.0
getDrillAdjacentItems.bounds                 1.0
bfs                                          0.6
getAdjacentLocations                         0.5

Unsurprisingly, we see that drillBFS and its helpers are the biggest culprits. They account for the top seven entries on the list and a whopping 82% of the time we spend. The enemy AI calculations come in a distant second at around 6.3% of the time. So let's focus on fixing the player algorithm.

A Basic Cache for the Player

As we try to improve our player AI, there's one big observation we can make. Perhaps some of you already noted this when reading about that AI in the first place. For the most part, our player follows a single path the whole time. We calculate the complete path from start to finish on each player move cycle, but then throw most of it away. The only time we get "blown off" this path is when we have to run away from enemies.

There are only a few circumstances where we change this path! So let's make a PlayerMemory type that will keep track of it. This should save us a ton of time!

newtype PlayerMemory = PlayerMemory (Maybe [Location])

data Player = Player
  { …
  , playerMemory :: PlayerMemory
  }

We'll add this memory to our player type. When we initialize the player from its JSON instance, the memory should start out empty. There's no need to keep track of it in a save-game file.
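
To make this concrete, here's a minimal sketch of what the parsing might look like. The JSON keys and the reduced set of Player fields are assumptions for illustration, not the exact ones from the repository. The important part is that playerMemory is never read from the file and always starts out as PlayerMemory Nothing.

{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson

-- Hypothetical parsing sketch: the keys and constructor shape are assumed.
instance FromJSON Player where
  parseJSON = withObject "Player" $ \o -> do
    location <- o .: "location"
    drillsRemaining <- o .: "drillsRemaining"
    -- The memory is never serialized; it always begins empty.
    return $ Player location drillsRemaining (PlayerMemory Nothing)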

This change will complicate our move API a little bit. It will now produce the PlayerMemory as an output:

makePlayerMove :: World -> (PlayerMove, PlayerMemory)

Using Our Memory

When it comes to making our move, we first need to put the path into memory. To start, we'll make a PlayerMemory out of the path we get from BFS.

makePlayerMove :: World -> (PlayerMove, PlayerMemory)
makePlayerMove w =
  ( PlayerMove finalMoveDirection useStun drillDirection
  , ...
  )
  where
    shortestPath = getShortestPathWithDrills …
    memoryFromMove = PlayerMemory (Just shortestPath)
    ...

In general, we'll want to return this "memory". But there's one circumstance where we'll want to invalidate it. When we have to retreat from our enemies, we'll diverge from this ideal path. In this case, we'll return Nothing. Here's what that logic looks like:

makePlayerMove :: World -> (PlayerMove, PlayerMemory)
makePlayerMove w =
  ( PlayerMove finalMoveDirection useStun drillDirection
  , if emptyCache then (PlayerMemory Nothing) else memoryFromMove
  )
  where
    (finalMoveDirection, useStun, emptyCache) = if not enemyClose
      then (shortestPathMoveDirection, False, False)
      else if canStun
        then (shortestPathMoveDirection, True, False)
        else case find (/= shortestPathMoveLocation) possibleMoves of
          Nothing -> (DirectionNone, False, True)
          Just l -> (getMoveDirection playerLoc l, False, True)

Now let's consider when we can use the cached information, as this will let us skip the BFS call altogether! We'll add one more validity check when doing this. We'll ensure that the list is non-empty and that our current location is at the head of the list. Then we can use the tail of the memory list in place of the shortest-path call!

makePlayerMove :: World -> (PlayerMove, PlayerMemory)
makePlayerMove w = ...
  where
    (useCache, cachePath) = case playerMemory currentPlayer of
      (PlayerMemory (Just (first : rest))) ->
        (first == playerLoc, rest)
      _ -> (False, [])
    shortestPath = if useCache then cachePath
      else getShortestPathWithDrills ...

The last thing we need is to ensure that the cache goes back into memory. This is a simple modification of the function that updates the world after the player's move:

modifyWorldForPlayerMove :: World -> Location -> PlayerMemory -> World
modifyWorldForPlayerMove w newLoc memory = ...
  where
    currentPlayer = worldPlayer w
    playerWithMemory = currentPlayer {playerMemory = memory}
    playerAfterMove = movePlayer newLoc playerWithMemory
    ...
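
For reference, here's a rough sketch of how these pieces might fit together in the game's update step. The wrapper function, the nextLocationForMove helper, and the playerLocation field are hypothetical names used for illustration; the point is simply that the memory returned by makePlayerMove gets threaded straight into modifyWorldForPlayerMove.

-- Hypothetical wiring of the player's turn (names assumed, not taken
-- verbatim from the repository).
updateWorldForPlayerTurn :: World -> World
updateWorldForPlayerTurn w = modifyWorldForPlayerMove w newLocation memory
  where
    -- makePlayerMove now returns the memory alongside the move itself.
    (move, memory) = makePlayerMove w
    -- Translate the chosen move into the player's new location.
    newLocation = nextLocationForMove (playerLocation (worldPlayer w)) move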

Now we can run our analysis again. We'll see that our player AI functions are still the biggest contributor. But the percentage has gone down a lot. They now only take up around 55% of our total time, instead of 82%! Meanwhile, the percentage of time from the normal BFS functions is now up to around 35%. Most importantly, the total time for the analysis declined almost five-fold. On the first run it was 32.62 seconds; now it only takes 6.79 seconds, a huge improvement!

total time = 6.79 secs

COST CENTRE                                  %time
drillBFS.unvisitedNextItems.\                14.3
drillBFS.newParentsMap.\                     14.2
drillBFS.newVisitedSet                       12.6
bfs.newParentsMap.\                          9.9
bfs.newVisitedSet                            9.2
bfs.unvisitedNextCells.\                     5.7
getDrillAdjacentItems                        4.3
drillBFS.newSearchQueue                      2.8
getAdjacentLocations                         2.8
drillBFS                                     2.6
bfs                                          2.6
getDrillAdjacentItems.mkItemFromResult       2.0
bfs.newSearchQueue                           1.8
getDrillAdjacentItems.mkItemFromResult.(...) 1.1
bfs.unwindPath                               1.1
bfs.unvisitedNextCells                       1.0
drillBFS.unvisitedNextItems                  0.9
bfs.newParentsMap                            0.7

Conclusion

Profiling is an important tool we can use for improving our code, no matter what language we're working in. When our program isn't performing how we like, we have to be sure to address the right parts of it. It may have been tempting to make a different assumption from the start. Since there are many enemy characters, it would be natural to tackle that algorithm first. But our profiling output made it clear that the player AI was the problem.

Next week, we'll start exploring different AI concepts. We'll start moving towards a kind of AI that can be machine-learned. Our code will be simpler, but our product won't be as good, at least at the start! But we'll start getting used to the way an AI can evaluate positions.

For more useful resources in improving your Haskell skills, download our Production Checklist! It has a lot of different tools and libraries to check out!
