A long-standing open question: is Gladiabots more complicated than chess?
Let's start somewhere and refine the answer as we go.
At the moment (beta 3.2 is the latest stable release), a condition or action node can have a huge number of possible configurations.
Let's consider an AI of 100 nodes and be very, very conservative: assume each node has only 5 sensible configurations (in reality there can be many more).
Let's also not focus too much on the arrangement of the nodes, although this matters too and further explodes the number of possible configurations. For example, consider the following two cases:
Code:

node1 (condition)
  |
node2 (condition)
  |
action
or
Code:

      connector
     /    |    \
action1 action2 action3
Those two arrangements are vastly different, yet each uses only 3 nodes. For the estimate, we focus only on the nodes themselves.
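To get a feel for how quickly the arrangements alone explode, here is a small sketch (my own illustration, not something from the game) that counts rooted binary tree shapes via Catalan numbers. Gladiabots trees are not strictly binary, so treat this as a rough stand-in for the number of possible node layouts:

```python
from math import comb

def catalan(n):
    # Catalan number C(n): counts distinct rooted binary tree
    # shapes with n nodes (each node has 0-2 ordered children)
    return comb(2 * n, n) // (n + 1)

for n in (3, 10, 50):
    print(n, catalan(n))
```

Even with only 3 nodes there are already 5 shapes, and by 50 nodes the count is astronomically large, before we even pick what each node does.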
Therefore, focusing only on 100 nodes with 5 sensible configurations each, we have an upper bound of
5^100 ≈ 7.9 * 10^69 possible configurations (whether many of those make sense is another story).
And this is likely only a fraction of the real number of possible configurations, once you account for all the options for conditions and actions and all the possible arrangements and connections.
For comparison, consider a game of chess where at each half-move a player has 5 sensible moves and games normally last around 50 moves (100 half-moves): one gets the same upper bound as Gladiabots, 5^100 ≈ 7.9 * 10^69 possible games.
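The two bounds can be checked in a couple of lines (100 and 5 are the assumptions stated above, not values taken from the game itself):

```python
# Assumptions from the text: 100 nodes (or half-moves),
# 5 sensible choices at each one
gladiabots_bound = 5 ** 100  # node configurations of a 100-node AI
chess_bound = 5 ** 100       # games of 100 half-moves, 5 moves each

print(f"{gladiabots_bound:.2e}")  # prints 7.89e+69
assert gladiabots_bound == chess_bound
```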
I wrote this mainly to frame the search space of a possible "self-configuring" AI attempt, especially machine-learning approaches with loose initial heuristics versus those with stronger fixed heuristics given by the programmer.
This also applies to claims like "the game is solved". Surely there were and are very good AIs out there, but it is very likely that, among all the possible good AIs, the ones discovered so far have just scratched the surface. The number of tiny but crucial improvements that can still be made is likely massive. Good for the game: it may be played, like chess, for hundreds of years.