In my first post on the subject, Training an AI, I described how I intend to use genetic algorithms to find the optimum priority values for each of the ten actions the strategic AI can perform.
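The post doesn't show any code, but the basic shape of such a genetic algorithm can be sketched as follows. This is a minimal illustration, not the game's actual implementation: the population size, mutation parameters, and the fitness hook are all hypothetical stand-ins.

```python
import random

NUM_ACTIONS = 10   # one priority value per strategic action
POP_SIZE = 10      # hypothetical population size, one candidate per empire slot

def random_genome():
    # a genome is simply a priority value for each of the ten actions
    return [random.random() for _ in range(NUM_ACTIONS)]

def mutate(genome, rate=0.1, scale=0.2):
    # perturb a few priorities slightly
    return [g + random.uniform(-scale, scale) if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    # uniform crossover: each priority is taken from either parent
    return [random.choice(pair) for pair in zip(a, b)]

def next_generation(population, fitness):
    # fitness(genome) -> score, e.g. from letting the AI play a full game
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    return survivors + children
```

Running `next_generation` repeatedly, with the fitness score coming from simulated games, gradually shifts the population toward priority sets that win more often.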
My ‘playground’ allows for 10 empires on the Surface and 10 in the Netherworld, which means I can train one Surface race and one Netherworld race at the same time. Humans and Greenskins are the ‘standard’ races of their respective levels, and I’ve decided to start with them and then continue with the other races.
In my first trial my aim was to run at least 20 generations, but since this was the first time I’d actually made the computer run a series of games by itself, the code crashed after the 4th generation. I selected the fittest set of priority values and restarted. This happened quite a few times before I had removed all the bugs (including an endless loop in the code for creating mountain ranges that had been nagging me for quite some time). By then I had reached generation 18, and I decided to run until I’d reached 40 generations.
Since I had stopped and restarted the first trial several times, I decided to run a second trial to verify the result. This second trial was run for 50 generations. Interestingly, the priority maps for the Greenskins developed similarly in both trials, but for the Humans the results from the first and second trials differed quite a bit.
For each of the 40 generations in the first trial I marked the three actions with the highest values, then counted how many times each action appeared in the top-three list.

Diagram 1 – After each generation I collected the three highest-priority values, and after 40 generations I plotted how often each action came out on top.
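The counting step could be sketched like this, assuming the priorities of each generation's fittest genome are stored as a mapping from action name to value (the action names here are made up for illustration):

```python
from collections import Counter

def top_three_counts(generations):
    # generations: list of dicts mapping action name -> priority value,
    # one dict per generation
    counts = Counter()
    for priorities in generations:
        # take the three actions with the highest priority this generation
        top3 = sorted(priorities, key=priorities.get, reverse=True)[:3]
        counts.update(top3)
    return counts
```

The resulting counter is exactly the kind of tally plotted in diagram 1: how often each action made it into the top three over the whole run.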
After studying the last generation from both trials I selected a set of values. This was very simple for the Greenskins, which had developed a taste for conquering settlements and upgrading their garrisons. The Humans were more difficult, since the results from the two trials differed: in the first trial they came out quite peaceful, except for a wish to raid enemy terrain improvements (see diagram 1), while in the second trial they seemed to prefer a more even set of values. I selected a set of values from the second trial that gave a higher priority to warfare than the first trial did.

Diagram 2 – The resulting priority values after selecting the fittest set of values from the two runs.
The set of values for the Greenskins is very similar to the ‘top 3’ list in diagram 1. The values for the Humans deviate a bit from diagram 1. Another thing that’s interesting to note is the big difference between the Greenskin and Human values.