Update: here are some links to interesting write-ups etc:
Memetix, 10th place
FlagCapper, 8th place
Xathis, contest winner
MBCook, another Clojure bot, rather different from mine
Official strategy forum for all the rest
---
So the AI challenge is over. Sadly I didn't really have enough time to properly participate; I started in earnest with less than a week remaining, so I guess mine was a last-minute entry. It turned out surprisingly well except for one large detail: completely unoptimised, the code runs fast enough on my machine with the official settings, but times out on the official game servers almost every game. This was due to a bug in the timeout code that I didn't notice until after the deadline. Oh well.
The code is immature enough that it's not worth publishing, but the general ideas are pretty interesting, so I'll say a few words about those.
The best writeup I've seen so far is the one by Memetix, the UK champion. My ideas, most of which were scavenged from the excellent forums, were similar to some extent, except of course for the whole UK champion bit.
As I see it, the two major parts of a successful ant bot are the general path finding and target assignment algorithm, and the combat engine. These are basically the macro and the micro.
Macro: path-finding routines
Starting with path finding, the one that appealed to me the most was the rubber surface attraction thing, some version or variation of which seemed to be used by everyone and their cat on this competition.
The attraction algorithm is basically just breadth-first search, with some optional tweaks. One important idea is to begin the search at each source, such as food or an unexplored tile, instead of from each of the player's ants; this way the algorithm scales fairly well when the number of visible things grows, because the time requirement doesn't depend on how many ants you have. (It might still depend on the number of food items, the number of enemy ants and so on, depending on exactly how you tweak it, but initialising the search from the sources is a good start.)
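My bot was written in Clojure, but to make the idea concrete, here's roughly what a multi-source BFS looks like sketched in Python (names and details are mine, not the actual code):

```python
from collections import deque

ROWS, COLS = 200, 200  # maximum map size in the contest

def multi_source_bfs(sources, passable):
    """Breadth-first search started from every source at once.

    sources  -- iterable of (row, col) tiles (e.g. food, unexplored tiles)
    passable -- function (row, col) -> bool, False for water
    Returns a dict mapping each reachable tile to its distance from the
    nearest source; an ant then simply steps to the neighbouring square
    with the smallest distance.
    """
    dist = {}
    queue = deque()
    for s in sources:
        dist[s] = 0
        queue.append(s)
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = (r + dr) % ROWS, (c + dc) % COLS  # the map wraps around
            if (nr, nc) not in dist and passable(nr, nc):
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist
```

The running time is proportional to the number of reachable squares, not to the number of ants, which is exactly the scaling property described above.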
Collaborative diffusion, which I guess can be described as a further variation of attraction, has also been used a lot in this competition. The scents used in diffusion are explicitly averaged for each square based on the neighbouring scents, instead of just taking the maximum attractiveness like in basic attraction. Of course, you can use this idea with attraction too. An important consequence of this averaging is that an area with, say, a lot of food gives off a stronger combined scent than an area with just one food. I'm not certain however how much of a benefit this gives (see below).
In the collaborative part of diffusion, friendly ants block some or most of the scent from proceeding further, depending on the type of scent. E.g. a food scent would be mostly blocked by an ant, since one or two ants are usually enough to harvest all the food in an area; but an enemy hill's scent would go through ants more or less intact, since we want as many ants as possible to help in razing enemy hills. Such collaboration seems like a pretty useful idea. Notably though some very highly ranked bots such as Memetix's didn't use it at all. I didn't have time myself to implement anything like this, so I can't really say.
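Since I never implemented this myself, here's only a hypothetical sketch of the blocking idea, grafted onto max-style attraction rather than the averaging variant (which would need relaxation passes instead). The scent decays by a fixed factor per step, and passing a friendly ant multiplies it by a damping factor; the per-scent-type behaviour described above would just mean different damping values for food versus enemy hills:

```python
import heapq

def attraction(sources, my_ants, passable, decay=0.9, damping=0.2):
    """Best-first propagation of scent with collaborative blocking.

    sources -- dict (row, col) -> initial scent strength
    my_ants -- set of (row, col); a friendly ant absorbs most of the
               scent that tries to pass through it
    Each step multiplies the scent by `decay`; leaving a friendly ant's
    square additionally multiplies it by `damping` (use damping close
    to 1.0 for enemy-hill scent, close to 0.0 for food scent).
    Returns dict (row, col) -> strongest scent reaching that tile.
    """
    rows = cols = 200
    best = dict(sources)
    heap = [(-v, rc) for rc, v in sources.items()]
    heapq.heapify(heap)
    while heap:
        neg, (r, c) = heapq.heappop(heap)
        v = -neg
        if v < best.get((r, c), 0.0):
            continue  # stale heap entry
        out = v * decay
        if (r, c) in my_ants:
            out *= damping  # collaboration: the ant blocks most of the scent
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = (r + dr) % rows, (c + dc) % cols
            if passable(nr, nc) and out > best.get((nr, nc), 0.0):
                best[(nr, nc)] = out
                heapq.heappush(heap, (-out, (nr, nc)))
    return best
```

With heavy damping, the scent behind an ant comes mostly from alternative routes around it, which is what makes a second ant prefer food that nobody is already heading for.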
Another thing with diffusion, which you can also do with attraction, is to use a non-linear distance function. E.g. exponential falloff is often used with diffusion, and Memetix used weight over distance squared (as in gravity). Sadly, I didn't have time to properly experiment with this either. It seems to me that neither the collaboration part nor the non-linear distance function is truly necessary for a very well performing algorithm, but this is mere speculation - hopefully some of the other top contenders will do a writeup soon so we can learn how they dealt with this.
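For reference, the three candidate falloff shapes look like this (function names are mine; only the gravity one is confirmed to be what Memetix used):

```python
def linear(strength, d):
    # basic attraction: weight falls off linearly with distance
    return strength - d

def exponential(strength, d, falloff=0.8):
    # common with diffusion: each step keeps a fixed fraction of the scent
    return strength * falloff ** d

def gravity(strength, d):
    # Memetix-style: weight over distance squared, as in gravity
    return strength / (d * d) if d else strength
```

The practical difference is how sharply a nearby target dominates a distant one: linear barely discriminates, gravity discriminates very strongly.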
One final note about diffusion: for some reason, some people implemented it by doing multiple sequential passes over the entire map and hoping that they can do enough passes to propagate the scents to every square - which takes one pass for each square traveled. Instead, the standard BFS method is of course to use a queue for the squares yet to be processed and a hash set or similar for the squares already visited. In this case one can easily replace the hash set with a plain two-dimensional boolean array, since the maximum map size is only 200 by 200. The standard queue-and-visited-set version should perform much better than the multiple-pass version, since it skips over a zillion uninteresting squares, so it's not entirely clear to me why anyone would opt for the latter.
Tweaking it
Once you have your attraction or diffusion or whatever, the endless and endlessly fun tweaking can begin. As described above, there are many variations of the basic algorithm. But there are also lots of parameters to tweak: which weight should I give to a food compared to an unseen tile? What about tiles that I've seen at some point, but not recently (and what is "recently")? Should I assign some of my ants to be food gatherers and some to be soldiers, or should I just rely on the different weights? Should I change the weights, or the assignments, as the game progresses, or as I gain or lose area or ants? Countless questions - and after all this, there's still combat to worry about!
It turns out that early exploration and food gathering is extremely important, which is why my bot was doing surprisingly well against my coworkers' bots on our private TCP server. I implemented a good food finding routine first, then a good exploration routine; then I was nearly out of time, so my combat code is hilariously primitive, and of course everything times out when run on a machine slower than my own.
To assign food gatherers, I do a simple BFS (linear distance function, no blocking of "scent" by friendly ants; just the basic vanilla thing) separately from each visible food until I either find a friendly ant or I hit a distance cutoff. The BFS doesn't consider non-visible squares. I set the cutoff at twice the sight radius (which might be too small though); the idea is that each food is obviously within the sight radius of a friendly ant, but a food is only worth going after if there are not too many obstacles on the way. After finding the closest ant for each food, I then find the closest food for each ant. This way when there's a cluster of food only one ant will get assigned to it, which seems good. A few obvious improvements to this routine would be remembering where food was last seen (at least for a few turns after seeing it) and ignoring food when an enemy ant is closer to it than we are.
For the ants that weren't assigned as food gatherers, I do another BFS, starting at each interesting non-food square simultaneously. Such interesting squares are enemy hills, enemy ants, and the squares just off the edge of visible space. I assign a very large weight (that is, a very small "distance") to the enemy hills, a medium weight to enemy ants and a small weight proportional to last-seen time to the non-visible squares.
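One way to implement "very small distances" for high-priority targets is to seed a multi-source Dijkstra search with different starting values per target type; this is a hypothetical reconstruction, with the seed values pulled out of thin air:

```python
import heapq

def weighted_targets(targets, passable):
    """Multi-source Dijkstra where each source starts at a different
    'distance'; a very negative start acts like a very large weight.

    targets -- dict (row, col) -> initial distance, e.g.
               enemy hill: -100, enemy ant: -10,
               frontier square: -k * turns_since_last_seen
    Returns dict tile -> effective distance; ants step toward
    decreasing values.
    """
    rows = cols = 200
    dist = dict(targets)
    heap = [(d, rc) for rc, d in targets.items()]
    heapq.heapify(heap)
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nxt = ((r + dr) % rows, (c + dc) % cols)
            if passable(*nxt) and d + 1 < dist.get(nxt, float("inf")):
                dist[nxt] = d + 1
                heapq.heappush(heap, (d + 1, nxt))
    return dist
```

An enemy hill seeded at -100 then outweighs a frontier square seeded at 0 for any ant within 100 steps of both, which is the prioritisation effect described above.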
The exploration part here seems to work quite nicely. When there are no enemies around, my ants will explore in all directions, and will try to bring and keep the whole map under their control. At first every non-visible edge square has the same score, so the deciding factor is the distance: after the first ant goes, say, north, the new northern edge is now farther away from the hill than the other edges, which causes the ants to spread out naturally. As the game progresses, ants that don't have anything better to do than explore will tend to run around in their own small area, chasing the edge that they've least recently seen, which maximises the visibility and map control. And of course, when new food spawns, gathering it takes priority for the ant nearest to it.
For combat targeting this doesn't work quite as well (all the combat stuff was a last-minute addition). Even though the attraction of enemy hills is very large (and my code remembers the location of currently unseen hills, and forgets hills once they're razed), the BFS doesn't have an option of going through unseen squares, so my ants are only drawn towards enemy hills in those turns when the hill is seen by at least one of my ants. In addition, when searching a path to enemy hills the search cutoff is the same as for exploration, which is probably suboptimal. Well, still better than nothing.
It seems to me that on the exploration and macro side, a few relatively simple BFS routines, when properly tweaked, would be enough for even a top ten bot. I refer again to Memetix, whose code is remarkably simple yet performs very well. A shame that I didn't have more time to polish my own routines.
Micro: combat routines
I had no time to really get into the intricacies of combat, which are many and detailed. Apparently however there are simply too many possibilities to evaluate them all: five possible moves for each of n ants leads to 5^n different combined moves on every turn. Even if you can cull most of these, you can't always cull enough.
An interesting approach then is a1k0n's random sampling, which attempts to iteratively figure out a plausible probability distribution corresponding to the best moves. Of simpler heuristic approaches, I refer again to Memetix. His experiences in general seem to show that non-complicated approaches, both to path-finding and to combat, can be sufficient to do very well on this challenge. Update: Xathis uses a traditional minimax algorithm with heavy heuristic pruning, highly polished, which obviously works very well.
My combat code is run after I've figured out the general moves with the path finding routine. The combat routine then checks these moves and, if necessary, replaces them with hopefully safer moves. For each ant I add up the number of enemy and friendly ants within potential combat range (well, actually within the square root of (combat range squared + 12), which in some cases might be almost the same). If there are as many enemy ants as friendly ants, or more, I arbitrarily choose the "first" enemy, figure out where it is, and tell my ant to go in the opposite direction. This works surprisingly well: if there's just one enemy ant, my ant will run away from it; if there are several enemies and they're surrounding my ant, it doesn't really matter where my ant will move. (Well, it does, but figuring out the best move would have taken me too long.)
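Sketched in Python (the real bot was Clojure, and details like the tie-breaking are my guesses), the heuristic is tiny; map wrapping is ignored here for brevity:

```python
def flee_move(ant, my_ants, enemy_ants, combat_radius2=5):
    """Last-minute combat heuristic: if friendly ants within the extended
    combat range don't outnumber the enemies there, run away from the
    'first' enemy; otherwise keep the move suggested by path finding.

    Returns a (dr, dc) step, or None to keep the original move.
    Uses the extended radius sqrt(combat_radius2 + 12) mentioned above.
    """
    extended2 = combat_radius2 + 12
    r, c = ant

    def within(other):
        orow, ocol = other
        return (orow - r) ** 2 + (ocol - c) ** 2 <= extended2

    enemies = [e for e in enemy_ants if within(e)]
    friends = [f for f in my_ants if f != ant and within(f)]
    if not enemies or len(friends) + 1 > len(enemies):
        return None  # safe enough: keep the original move
    er, ec = enemies[0]  # arbitrarily pick the "first" enemy
    # step in the opposite direction along the larger axis of separation
    if abs(er - r) >= abs(ec - c):
        return (-1, 0) if er > r else (1, 0)
    return (0, -1) if ec > c else (0, 1)
```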
Other stuff
In addition to the micro management of individual combat moves, there's also base defence (see e.g. Memetix - apparently static defence is a bad idea, and dynamic defence can be good enough), zone control and blockading enemy ants (see contest winner Xathis for a good example), symmetry detection etc etc. I don't have enough time to go through all of these in detail, so I recommend checking out the strategy forum for lots of good discussion on this stuff.
In summary, if only I had had more time to polish and optimise my code and implement better routines, this would have been the most awesome programming project ever. Really looking forward to the next AI challenge - hopefully I'll do better then!