Can anyone say what the difference is between a 1400 and a 1500 player? Or between a 1700 and an 1800? A 2200 and a 2300, etc.? Like in the number of positions seen without calculation, or the number of moves projected, something like that? Does anyone know of work in that regard?
I'm trying to see if there is an objective way to tell what level you can attain after X amount of study, play, training, etc., and what you have to be able to see in chess to go from one level to the next.
the elo rating system IS a quantification. a 200-point difference corresponds to the stronger player scoring roughly 75% against the weaker one. THAT is what defines a rating, NOT what specific tactics & strategies were used to achieve those wins. ratings aren't connected to any specific tactics or strategies.
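for reference, that figure falls straight out of the standard elo expected-score formula, where "score" counts a draw as half a win. a quick python sketch of the arithmetic, just to show where 75% comes from:

```python
# Expected score for the higher-rated player under the standard Elo model;
# "score" means wins plus half of draws, not a pure win percentage.
def expected_score(rating_diff: float) -> float:
    return 1.0 / (1.0 + 10.0 ** (-rating_diff / 400.0))

print(round(expected_score(100), 2))  # ~0.64
print(round(expected_score(200), 2))  # ~0.76, i.e. roughly three points out of four
print(round(expected_score(400), 2))  # ~0.91
```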
you could know all the theory in the world, and still be crap if you were very sloppy and careless.
or you could know only tactics, and nothing else, and be stronger than any engine.
Originally posted by sonhouse: "Does anyone know of work in that regard?"

I'm not aware of anything being done for every 100 points, but there has been some work done on class differences. Adriaan de Groot's "Thought and Choice in Chess" was an early attempt. It was weighted more towards masters and grandmasters, but it contained some work at strengths roughly equivalent to as low as Class C.

A recent work is Dan Heisman's "The Improving Chess Thinker", which covers strengths from Class F to beyond Expert.
Originally posted by wormwood
Yes, the rating quantifies wins, but not specifically what number of moves you see or positions you evaluate and so forth. Obviously various tradeoffs of tactics vs. strategy can achieve the same end. Couldn't a computer program work that stuff out? Analyze a set of games of person A vs. person B and see what the lower-rated player is missing in terms of tactics AND strategy? And presumably what the lower-rated player needs to do to get to the next level.
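Something along those lines is doable with off-the-shelf tools. A rough sketch, assuming the python-chess library and a UCI engine binary such as Stockfish on the path; the PGN file name, the search depth and the average-centipawn-loss measure are just illustrative choices, not a finished method:

```python
# A rough sketch only. Assumes the python-chess library and a UCI engine
# reachable as "stockfish"; the PGN file name, the search depth and the
# mate_score cap are arbitrary illustrative choices.
import chess
import chess.engine
import chess.pgn

def average_centipawn_loss(pgn_path: str, engine_path: str = "stockfish",
                           depth: int = 12) -> dict:
    """Average centipawns given up per move, per colour, over a PGN file."""
    losses = {chess.WHITE: [], chess.BLACK: []}
    engine = chess.engine.SimpleEngine.popen_uci(engine_path)
    try:
        with open(pgn_path) as pgn:
            while (game := chess.pgn.read_game(pgn)) is not None:
                board = game.board()
                for move in game.mainline_moves():
                    mover = board.turn
                    before = engine.analyse(board, chess.engine.Limit(depth=depth))
                    board.push(move)
                    after = engine.analyse(board, chess.engine.Limit(depth=depth))
                    # evaluation before vs. after the played move, both taken
                    # from the mover's point of view, in centipawns
                    before_cp = before["score"].pov(mover).score(mate_score=10000)
                    after_cp = after["score"].pov(mover).score(mate_score=10000)
                    losses[mover].append(max(0, before_cp - after_cp))
    finally:
        engine.quit()
    return {colour: sum(v) / len(v) for colour, v in losses.items() if v}

# e.g. average_centipawn_loss("games_between_A_and_B.pgn")
```

It won't say whether a mistake was "tactical" or "strategic", but it does show where, and by how much, the lower-rated player leaks points.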
I don't think chess-playing ability is that easily quantifiable. Sure, some facets lend themselves to measurement, such as the famous perceived ability to think x moves ahead, but many don't. How do you measure things like positional and strategic sense, for example?
The best we have is something like an Elo rating, which is nothing more than a statistical gauge of a player's strength, based entirely on a player's results. That is usually good for a decent approximation of how two players would likely fare if they played each other, but it is not infallible, as you can see from how some players have consistently better head-to-head records against certain opponents of the same rating.
Originally posted by sonhouse
the problem with the prevalent engine paradigm is that programs can't recognize a 'strategy'; they have absolutely no understanding of the position even in the most elementary way. everything they do regarding 'positional chess' is faked, simply by hard-coding things into the evaluation function. when you push a castled pawn, the engine is not trying to UNDERSTAND in any depth whether that's actually weakening or not. it'll instantly assign it whatever value the programmer saw fit, completely regardless of the position.

they just brute-force their way through tactical search trees and wait for the opponent to 'step out of it'. they can only mechanically follow the programmer's pre-laid instructions, and not an atomic operation more.

trying to implement positional chess into an alpha/beta pruning cruncher is like trying to teach physics to a dog. it's not equipped to grasp physics, because it fundamentally lacks the required building blocks. you can teach it to ACT as if it 'understands' to some degree, but it's really just a trick to fool the audience. an illusion of understanding.
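to make the hard-coding point concrete, here is a toy sketch (not any real engine's code; the piece values and the -30 castled-pawn penalty are invented for the example, and it assumes the python-chess library for move generation). the "positional" knowledge is literally one fixed number applied blindly, and the alpha/beta search just crunches move sequences on top of it:

```python
# A toy illustration, not any real engine's code. "Positional" knowledge is a
# fixed penalty wired into the evaluation; the search brute-forces move
# sequences on top of it. Assumes the python-chess library; the piece values
# and the castled-pawn penalty are invented for the example.
import chess

PIECE_VALUES = {chess.PAWN: 100, chess.KNIGHT: 320, chess.BISHOP: 330,
                chess.ROOK: 500, chess.QUEEN: 900, chess.KING: 0}

def evaluate(board: chess.Board) -> int:
    """Static evaluation in centipawns from White's point of view."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == chess.WHITE else -value
    # the hard-coded "positional" rule: White king castled short and the g2
    # pawn gone from its home square -> fixed penalty, regardless of whether
    # the advance actually weakens anything in this particular position.
    if board.king(chess.WHITE) == chess.G1 and board.piece_at(chess.G2) is None:
        score -= 30
    return score

def alphabeta(board: chess.Board, depth: int, alpha: int, beta: int,
              maximizing: bool) -> int:
    """Plain alpha/beta search over the toy evaluation."""
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    if maximizing:
        best = -10**9
        for move in board.legal_moves:
            board.push(move)
            best = max(best, alphabeta(board, depth - 1, alpha, beta, False))
            board.pop()
            alpha = max(alpha, best)
            if alpha >= beta:
                break  # prune: the opponent already has a better option
        return best
    best = 10**9
    for move in board.legal_moves:
        board.push(move)
        best = min(best, alphabeta(board, depth - 1, alpha, beta, True))
        board.pop()
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best

print(alphabeta(chess.Board(), 3, -10**9, 10**9, True))
```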
now, that could be changed by discarding the whole alpha/beta pruning paradigm, which itself is pretty much a corollary of the structural programming paradigm. take up a whole new way of looking at it, and actually teach the programs chess. there have actually been some tries with neural networks, but none of them very successful so far. but that is The way, IF you want a program to do things that are characteristic of a human. in effect, you want a classifier instead of a search algorithm.
and even if you were successful at that, you'd still be left outside when you tried to quantify the results of such a machine. it would understand the concepts and recognize them in totally new, unseen positions, but it couldn't give you the specific kind of answers you appear to be looking for. it would basically say "depends on the position", or something equally general. just like a human would.
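for a flavour of the "classifier instead of a search algorithm" idea, a minimal sketch: encode a position as a fixed-length vector and fit a tiny logistic-regression "concept detector" on labelled examples. the two FENs, their 0/1 labels and the concept they stand for (pushed pawn cover in front of a castled king) are pure placeholders; a serious attempt would need a large set of annotated positions and a real model, most likely a neural network:

```python
# A toy "concept classifier" over chess positions. Everything here is a
# placeholder: the two labelled FENs, the 0/1 labels, and the concept they
# are meant to stand for. Assumes numpy and the python-chess library.
import numpy as np
import chess

def encode(board: chess.Board) -> np.ndarray:
    """One-hot piece-placement encoding: 64 squares x 12 piece types."""
    planes = np.zeros((64, 12), dtype=np.float64)
    for square, piece in board.piece_map().items():
        offset = 6 if piece.color == chess.BLACK else 0
        planes[square, (piece.piece_type - 1) + offset] = 1.0
    return planes.reshape(-1)

# placeholder training data: position -> does the concept apply? (0/1)
labelled_fens = [
    ("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1", 0),
    ("rnbq1rk1/ppppppb1/5npp/8/8/8/PPPPPPPP/RNBQKBNR w KQ - 0 1", 1),
]
X = np.stack([encode(chess.Board(fen)) for fen, _ in labelled_fens])
y = np.array([label for _, label in labelled_fens], dtype=np.float64)

# bare-bones logistic regression fitted by gradient descent
w = np.zeros(X.shape[1])
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)

# probability that the concept applies to a fresh position
print(1.0 / (1.0 + np.exp(-(encode(chess.Board()) @ w))))
```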
Originally posted by Varenka
"Structured programming (circa 1970) 😕"

okay, not the best choice of words there, but it hardly makes any difference regarding my point. what I meant by it was the whole sequential type of thinking related to structured programming, and everything before it since Babbage's time.
There are many anecdotes told about different rating groups, e.g. to get to 1700 requires being able to see 2 ply perfectly.
The difference between Class A and Class B is the transition from the opening to the middle game...
When a player reaches expert (2000) they are halfway to Master (2200)
Club players play the opening like a GM, the middle game like a master and the endgame like a child.
etc.
but these are all generalizations.
Perhaps you should set up some sort of study with RHP ratings to determine these things?
Originally posted by Shallow Blue
We were discussing in the context of the paradigm (see wormwood's post), so on that basis, 1970 is correct.
Erm... not quite.
That's when they started calling a fixed set of academe-invented rules "the structured programming paradigm". Good programmers had been writing their programs in a structured way long before that.
Richard