Total results of all analysed games
Top 1 Match: 385/658 (58.5%)
Top 2 Match: 509/658 (77.4%)
Top 3 Match: 563/658 (85.6%)
Top 1 Match: 368/657 (56.0%)
Top 2 Match: 461/657 (70.2%)
Top 3 Match: 525/657 (79.9%)
So, in the 1972 Fischer-Spassky match my analysis confirms that Fischer achieved results comparable to the ext ...[text shortened]... yes they can; Fischer achieved this against Spassky, over-the-board nearly 40 years ago![/b]
One other thing I think we should take into account is how the nature of the game has changed in the meantime. Computer-aided analysis is now part of any self-respecting GM's preparation (it is even a question of survival), and the playing style of chess engines has evolved too.
I'm not a chess savant, but I think the frontiers of human chess style and engine chess style are getting blurred. Rybka is a striking example, I think. People who know what they are talking about say it has a very human-like feel for positions (plus the fact that it can evaluate a gazillion positions per second), and there is the previous fact I mentioned: more and more people study games with engine help. Of course I'm aware that no human will ever match an engine's evaluation ability, but taking all this into account I can see a convergence of styles. And of course I'm not talking about the average RHP user; I'm talking about the real deal, guys like Anand, Carlsen, etc.
I mean, put the Game of the Century through your chess engine and all of the moves that Fischer found, which are considered extremely original and imaginative, just pop out in seconds.
So the next time someone shouts from the rooftops that such & such a player is clearly a cheat because they exceeded some of the thresholds of
Top 1 Match: 60.0%
Top 2 Match: 75.0%
Top 3 Match: 85.0%
and no human CC player can do that in many games with non-book moves, you can point them in the direction of this thread and say yes they can; Fischer achieved this against Spassky, over-the-board nearly 40 years ago!
Here I don't know if you are being facetious or not, but if anyone says such a thing, the poor person is just wrong. And he/she is wrong for two reasons:
1- Things should actually be done in reverse order. One should not just throw out some three numbers and declare them to be our thresholds. One should adopt the inverse methodology: study a good number of games (on the order of thousands) of the all-time greats and see how they match up against the engines. (I'd even think that this calibration should be done regularly: if the game evolves and changes, why should our calibration be static?) Only after this is done can a reasonable and rational threshold be defined.
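The calibration idea above can be sketched in a few lines. This is only an illustration: the per-game match rates below are made-up numbers, and the "mean plus three standard deviations" rule is one assumed choice, not a claim about how any site actually sets its thresholds.

```python
# Sketch of data-driven threshold calibration (illustrative only).
# historical_rates: hypothetical per-game Top-1 match rates (fraction of
# non-book moves matching the engine's first choice) from games of
# known-honest top players. Real calibration would use thousands of games.
from statistics import mean, stdev

historical_rates = [0.52, 0.61, 0.48, 0.66, 0.58, 0.71, 0.55, 0.63, 0.50, 0.59]

mu = mean(historical_rates)       # typical historical match rate
sigma = stdev(historical_rates)   # spread across games

# One possible (assumed) rule: only flag rates far above the historical norm.
threshold = mu + 3 * sigma
print(f"mean={mu:.3f}, stdev={sigma:.3f}, threshold={threshold:.3f}")
```

Re-running this calibration periodically, as suggested above, would let the threshold drift upward as engine-assisted preparation raises the honest baseline.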
2- The second wrong thing is that a match-up rate by itself can't be enough, since we are dealing with a statistical tool to detect cheats. So confidence intervals, or some other statistical indicator, must be used in order to justify one's conclusions. Of course this last step can be ignored, but then one shouldn't tell anyone that one is using statistical means to detect cheats, because that isn't statistics.
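To make the point about confidence intervals concrete, here is a minimal sketch that attaches a 95% interval to an observed match rate, treating each non-book move as a binomial trial. The counts are the Top-1 figures quoted earlier in the thread (385/658); the Wilson score interval is one standard choice, assumed here rather than taken from anyone's actual detection method.

```python
# Wilson score interval for a binomial proportion: instead of quoting a raw
# match-up rate, report the range of "true" rates consistent with the data.
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% confidence interval for the proportion successes/n."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Top-1 match figures quoted at the top of the thread: 385 of 658 moves.
lo, hi = wilson_interval(385, 658)
print(f"Top-1 match rate 58.5%, 95% CI approx [{lo:.3f}, {hi:.3f}]")
```

The interval spans several percentage points even over a whole match, which is exactly why a single game crossing a fixed threshold proves very little on its own.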
And last but not least, thank you for taking the time to do and post this analysis.
By the way, have you thought about doing the same for some other (more recent) championship matches?