

I'm arguing below that it is logically problematic to even define the ELO of God. The ELO of the strongest computers is really defined only relative to a ladder of weaker players, starting from the ELO of the best humans. A program A winning a certain percentage of points against a 2775-ELO grandmaster will be assigned ELO 2875; a program B winning the same percentage of points against A will be assigned ELO 2975; and so on, up to perhaps the 3700 ELO of AlphaZero. The gaps between adjacent rungs of the ladder have to be kept moderate: pitching AlphaZero directly against a human would lead to 100% wins in practice, and no numeric ELO value can be assigned on that basis.
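
To pin down the arithmetic of that ladder, here is a minimal sketch of the standard (logistic) ELO model; the function names, and the assumption that each rung is exactly 100 points, are my illustration, not the text's:

```python
import math

# A minimal sketch of the standard (logistic) ELO model behind the ladder
# above; the specific ratings are illustrative, not from the original text.

def expected_score(r_a, r_b):
    """Expected score of a player rated r_a against a player rated r_b."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def rating_gap(score):
    """Invert the model: the rating gap implied by an observed score."""
    return 400 * math.log10(score / (1 - score))

# The "certain percent" behind each 100-point rung (e.g. 2775 -> 2875)
# comes out to roughly 64%:
print(expected_score(2875, 2775))   # ~0.64
print(rating_gap(0.64))             # ~+100, one rung of the ladder
```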

Now, how do we assign an ELO to God? Inevitably, by measuring it against a weaker program, say W, that plays imperfectly. But since God is omniscient, it not only knows the best move in every position, it can also ideally exploit any imperfection in W's play. So either W itself plays perfectly, in which case every game will be a draw, or God will win 100% of the games. In the latter case, it is impossible to define the ELO of God relative to W. So the only way to define the ELO of God is to say that it equals the ELO of a program W that itself plays perfectly but is not necessarily omniscient.
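
The dead end in the 100% case is visible directly in the rating arithmetic: inverting the model above (a standard identity, restated here for completeness), the gap implied by a score s diverges as s approaches 1:

```latex
% Rating gap implied by an observed score s in the logistic ELO model:
\[
  R_{\text{God}} - R_W \;=\; 400\,\log_{10}\frac{s}{1-s},
  \qquad s \to 1 \;\Longrightarrow\; R_{\text{God}} - R_W \to \infty,
\]
% so a 100% score against any fixed W determines no finite rating.
```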

Such a W's ELO, again, can only be assigned relative to some weaker program V. And that ELO is determined not so much by the fact that W plays a perfect (i.e., evaluation-preserving) move in every position as by how well W exploits the fact that V plays imperfectly: two programs W and W' that both play perfectly can have different relative ELOs against V.
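
To make the W-versus-W' point concrete, here is a small self-contained simulation (entirely my construction, not from the text): random game trees with a theoretically drawn root, two evaluation-preserving "perfect" players, and one imperfect opponent V. One perfect player additionally steers toward positions where V has the fewest safe replies. Both never lose, yet they typically convert V's blunders into wins at different rates, so the ELO measured against V differs between them:

```python
import math
import random

BRANCH, DEPTH = 3, 6

def build_tree(depth):
    """Random game tree; leaf values are +1/0/-1 from the first player's view."""
    if depth == 0:
        return {"value": random.choice([-1, 0, 1]), "children": None}
    return {"children": [build_tree(depth - 1) for _ in range(BRANCH)]}

def minimax(node, maximizing):
    """Annotate every node with its game-theoretic value."""
    if node["children"] is None:
        return node["value"]
    vals = [minimax(c, not maximizing) for c in node["children"]]
    node["value"] = max(vals) if maximizing else min(vals)
    return node["value"]

def value_preserving(node):
    """Moves that keep the game-theoretic value; all of them are 'perfect'."""
    return [c for c in node["children"] if c["value"] == node["value"]]

def perfect_w(node):
    """W: any perfect move, chosen uniformly."""
    return random.choice(value_preserving(node))

def perfect_w_prime(node):
    """W': the perfect move after which the opponent has the fewest
    value-preserving replies, i.e. the most chances to blunder."""
    def safe_replies(child):
        return 0 if child["children"] is None else len(value_preserving(child))
    return min(value_preserving(node), key=safe_replies)

def imperfect_v(node, eps=0.35):
    """V: a perfect move, except a uniformly random one with probability eps."""
    if random.random() < eps:
        return random.choice(node["children"])
    return random.choice(value_preserving(node))

def play(tree, perfect_player):
    """Perfect player moves first; V answers. Returns the perfect player's score."""
    node, our_turn = tree, True
    while node["children"] is not None:
        node = perfect_player(node) if our_turn else imperfect_v(node)
        our_turn = not our_turn
    return {1: 1.0, 0: 0.5, -1: 0.0}[node["value"]]

def elo_gap(score):
    return 400 * math.log10(score / (1 - score))

random.seed(0)
tree = build_tree(DEPTH)
while minimax(tree, True) != 0:          # insist on a theoretically drawn root
    tree = build_tree(DEPTH)

games = 20000
for name, player in [("W ", perfect_w), ("W'", perfect_w_prime)]:
    s = sum(play(tree, player) for _ in range(games)) / games
    print(f"{name}: score {s:.3f} against V -> measured gap {elo_gap(s):+.0f}")
```

Since every move of either perfect player preserves a root value of 0, and V's moves can only keep or raise that value from the perfect player's perspective, neither W nor W' ever loses here; the only thing that distinguishes them, and hence the only thing their measured ELO reflects, is how often V's imperfection is converted into a win.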
