Well, the results of the scoring process have been published by the contest judges. Loosy Alligator has lost the contest. That is exactly what I expected (in fact, I never expected to win). However, I have a few words to say about why it lost.
When I tested my robot before submitting it to the contest, it was indeed weak in aggressive multi-robot fights. It was, however, very good at delivering packages in single-robot games. It almost always did the best possible with the money given to it on a particular map; this was confirmed by one of the contest participants. If no one interferes with our program, we do the best.
So I was... hm... surprised by the results of the preliminary tour. The robot scored zero in 7 of the 10 preliminary games (while being alone on the map!). So I took a look at the logs my robot produced. All the lost (zero-scored) games were actually lost because the networking module threw an exception right after the first response from the server. Like this:

*!!! token  Catched: [unknown token in server response]

The server response parser generated an empty token and crashed. It seems to be a bug; all right, I agree it is a bug. But where did the bug come from, and why didn't I catch it (a bug which crashed my robot in 7 of 10 games) when I tested the robot? Here is the full log of one of those games:

************** LOOSY ALLIGATOR ******************
          by Andrey Vikt. Stolyarov
wishes good luck to the Judges Of The Contest!!!
Using address 127.0.0.1 port 30445
The map is 1 x 1
1 @
Using id = 87, having capacity = 200, money = 3
Bases are: ((1 . 1))
*** Response: [#87 X 1 Y 1]
* Packages: [1:(1.1)102] [2:(1.1)100] [3:(1.1)18] [4:(1.1)100] [5:(1.1)102]
Money 3 Score 0 Me:[(1 . 1)] Others:  Carry: NIL
* Issuing command [1 Pick 5 4 3 2 1]
*** Response: [#87 X 1 Y 1 P 5  P 3  ]

real 0m0.084s
user 0m0.020s
sys  0m0.010s
Look at the line where the server response is shown.
*** Response: [#87 X 1 Y 1 P 5  P 3  ]

See these two spaces after each "P n" command? I didn't expect them. The server the judges made available for testing never generated such responses; it always separated commands and arguments with exactly one space.
So, is it my fault? Certainly I should not have assumed there is exactly one space between tokens. But, well, software always has bugs; testing a program is necessary to catch and eliminate them. The server the judges gave us existed for exactly this purpose -- to test the robots against it. No opportunity was therefore given to me to catch this particular bug.
Furthermore, the rules never specified that there could be an arbitrary amount of whitespace between tokens in the server's response. So formally it is not even clear which one was buggy: my robot, or the server finally used for the scoring process.
And finally, this silly inconsistency never let me see whether my robot was written well or not. All the decision-making algorithms, pathfinders, data structures -- everything I made during those 3 days of the contest -- is now just wasted. Just because the judges decided to score with a server program other than the one provided for testing.
For now, I'm not sure whether I'll join the ICFP contest next year. The results left me greatly disappointed, not because I lost, but because of why I lost. I would have been better off doing several parachute jumps that weekend.
The last module is the most interesting one, because it is the player itself. It is implemented in InteLib Lisp (the file play.lsp is the real source of the module). The file is translated into a C++ module (header file play.hxx and implementation file play.cxx). Take a look at them to understand what InteLib is. Both files are quite human-readable and human-maintainable; furthermore, they could actually have been written by hand -- only the comments tell the reader that the files are generated. The code produced from the Lisp source keeps the same order and the same names and is easily recognised, although it is no longer Lisp; now it is C++.
The implemented strategy is much simpler than it could be. Unfortunately I was very short of time, so I implemented only about half of the features I had planned when the contest started. The robot keeps a list of interesting destinations, which initially consists of the list of bases. As the robot walks the map, it memorizes which packages it saw at each map position, along with all their data; once a package has been seen, the robot never forgets it. When the robot reaches a base and learns it is empty, the base is removed from the list of interesting destinations. When one of the robots (including ours) drops a package, that place is added to the list of interesting positions (unless we know the position is the destination of that package). If we don't know the dropped package's destination, we remember that a package with that particular id may be there; if we know for sure the spot is not its destination, we remember that the particular package definitely is there. This also lets us track the other players' picks, so if all the packages lying somewhere get picked up, we forget that destination. And of course, when we explore a map position holding packages and cannot take them all, we remember their list anyway.
The robot can be in one of two main states. If it is carrying at least one package, it is in the delivery state; otherwise it is in the cargo-hunting state. In both states, the robot always tries to pick up all the packages it can. No optimization was actually made here because of the shortage of time: the robot just checks whether its capacity is sufficient to pick up at least one of the packages at the current location, and if so, it issues a command to pick them all, letting the server do the rest of the job.
In the delivery state, the list of destinations to go to is the list of destinations of the packages we're carrying. In the cargo-hunting state, unexplored bases are considered the most attractive destinations, so if there is at least one unexplored base, the list of destinations is simply the list of unexplored bases. If there are no unexplored bases, the list of positions known to contain packages is used. If that is also empty, we use the list of positions that may contain packages. If there are no such positions either, we assume it's a package-respawning game, mark all the bases unexplored again, and go on.
In both states, we run the path finder on our current location and the list of destinations to go to. The algorithm always finds the destination with the shortest path. The path is memorized, so at the next move we can check whether we are still on the path (i.e., we were not pushed); if we are, and the path still ends at one of our destinations, we can continue following it without recalculating.
In the delivery state, we check every turn whether we are at a position to which at least one of our packages is to be delivered. If so, we drop the packages whose destination is our current location.
Regardless of the state, we can enter an urgent mode. This happens when there is an enemy robot within 2 moves of us. In this mode, we check whether we can get killed or can kill an enemy ourselves. If we can get killed (that is, there is a robot right next to us with water at the opposite location), we check whether we can try to kill it first; if so, we move towards it, otherwise we move to the left or to the right (away from the dangerous position). If we can't get killed but can kill someone, we try to do it. The bet is based on a self-learning technique: if we get pushed, we increase our bet, and it is reduced again while we are not being pushed. The bet-calculation technique is too dumb and stupid; I certainly should have implemented learning against each particular opponent robot. Damn shortage of time!
Last updated October 11, 2002.