Originally posted by Fat Lady
that's not true anymore, Fat Lady. you think of engines' way of playing chess as pure tactics, but it's not at all like that. my positional knowledge is a joke against Rybka's. I believe that's the case for the majority of masters too.
if it's just the engine's opinion about a materially equal position then it doesn't matter a fig whether it thinks it's +0.2 or +1.0 for White.
I very rarely use engines to specifically enhance my repertoire, my experience is more on annotating games for my blog and analyzing games to understand where I messed up, not only tactically, but most of the time, positionally.
I should also mention that Rybka's author is an IM himself, and he hired an additional GM to work full time to "teach" the program positional knowledge. they worked together full time for 2-3 years, I guess.
we are really talking about a serious chess entity here; I think a lot of people miss that.
Originally posted by Varenka
1) how can we know the percentage of wrong best moves, if we can't trust the evaluation? that would require some kind of absolute chess truth to compare against, which doesn't exist.
[b]what if that move is wrong?
There are people who make up high quality test sets. These positions get verified by running lots of various engines for long periods of time. Also the lines get played out using interactive analysis. If other people find mistakes in the analysis, they report it and the quality of the test set improves. After some ti ...[text shortened]... ased on results only. You don't get compensation for losing due to a tactical oversight.[/b]
also, how many positions are in those sets? if many, but not millions, it's like needles in a haystack, an insufficient sample. and even for a single position: it took hundreds of years by all the players in the world to get where we are now with the opening position. a handful of people can't possibly top that amount of analysis.
so I can't see how those positions could be studied at the level the opening position has been, hence the collective evaluation must be inferior. and if engines fail in the more accurately analyzed opening position, well, I think the conclusion is obvious.
also no matter how many different alpha-beta pruning engines you analyze it with, you're still stuck with the lack of absolute reference evaluation and the ignored branches in the search space.
2) yeah, Nalimov tablebases and simple endings in general are exceptions of course. that's why I wrote 'rich' positions, because forcing ones are hardly a challenge.
3) yep, I fully agree. I already addressed this at length. the theoretical level of chess vs. winning aren't the same thing. in practice even loads of known dubious moves score a huge amount of wins. that's why they're called cheapos.
P.P.
I really don't want to fall out with you and am not looking for a blazing row.
But getting a box to evaluate that position is pointless.
Honestly and I'm not kidding now. Chess Player to Chess Player.
There is a whole game to come.
How can that position be anything else but equal.
Anything can happen. Half the pieces are still at home.
What has the computer evaluation done there to help you
play that position?
You say that position is important to you.
Why not play six games on here.
Play 6 set games from that position 3 as White - 3 as Black.
1 day + 3 days.
Those 6 games will give you a better understanding of that position
than some box evaluation. Trust me - you learn by playing.
I'll even take one of the Whites: you play 6...b6 and I'll play 7.Bb5.
There you are - a piece for nothing.
Originally posted by philidor position
that's just Rybka sales pitch. it's an alpha-beta pruning algorithm with some hardcoded heuristics. as long as the paradigm doesn't change, neither will its repercussions. it has absolutely no positional understanding, just a set of rules which the coder coded in, assuming things about positions he will never actually see.
that's not true anymore Fat Lady. you think of engines' way of playing chess as pure tactics, but it's not at all like that. my positional knowledge is a joke against Rybka's. I believe that's the case for the majority of masters too.
I very rarely use engines to specifically enhance my repertoire, my experience is more on annotating games for ...[text shortened]... we are really talking about a serious chess entity here, I think a lot people miss that.
Originally posted by Fat Lady
that also brought to my mind what GM ziatdinov said about evaluations, which was something like: "there are three types of positions: won, drawn and unclear. everything else is rubbish. if you don't know that it's won or drawn, you should admit you don't know."
but if it's just the engine's opinion about a materially equal position then it doesn't matter a fig whether it thinks it's +0.2 or +1.0 for White.
and he wasn't even talking about engine evaluations, but evaluations like 'slight advantage' etc. 🙂
Originally posted by wormwood
how can we know the percentage of wrong best moves
We try to refute them. If we find it impossible to refute them then our confidence of the best move increases.
Supposing I post a position where, after a minute or so, an engine's evaluation jumps from around 0 to say over 5, i.e. from level to clearly winning. What are the chances that the engine is going to be wrong? Now do that for the top 10 engines. Suppose they all eventually find the same move and score it as clearly winning. Now what is the chance that they are all wrong? We're not talking about some contrived position designed to expose the weaknesses of engines. We're talking about standard positions from GM play. What do you think the odds are? I'd say something like a 0.01% chance they are all wrong.
no matter how many different alpha-beta pruning engines you analyze it with, you're still stuck with the lack of absolute reference evaluation and the ignored branches in the search space
Alpha-beta doesn't "ignore" any branches within its search depth. It only prunes those that are safe to prune. It's an optimised mini-max search.
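This equivalence is easy to check on toy game trees. Below is a minimal Python sketch (illustrative only, not any engine's actual code): plain minimax and alpha-beta over the same random trees always return the same root value, even though alpha-beta skips the branches its cutoffs have proved irrelevant.

```python
import random

def minimax(node, maximizing):
    """Exhaustive minimax: visits every branch to the full depth."""
    if isinstance(node, int):          # leaf: a static score
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Alpha-beta: cuts off branches that provably cannot affect the result."""
    if isinstance(node, int):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:          # safe cutoff: the minimizer avoids this line
                break
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
    return value

def random_tree(depth, branching=3):
    """A random game tree with integer leaf scores."""
    if depth == 0:
        return random.randint(-50, 50)
    return [random_tree(depth - 1, branching) for _ in range(branching)]

random.seed(1)
trees = [random_tree(4) for _ in range(200)]
assert all(minimax(t, True) == alphabeta(t, True) for t in trees)
print("alpha-beta agreed with minimax on all 200 trees")
```

Note the scope of the guarantee: "within its search depth" is doing the work here. The cutoffs are sound relative to a fixed-depth minimax; they say nothing about lines beyond the horizon.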
Originally posted by wormwood
Describe it as you like. Play Rybka as many games as you like and then forward those games to a GM without mentioning the players involved. The GM will generally state that the side played by Rybka played better positionally than you.
it has absolutely no positional understanding, just a set of rules which the coder coded in
You can talk all you want about how it is coded; the way the algorithms work or not; etc. The end result is still high quality positional play. What goes on inside the box matters little; it's the output we're interested in, i.e. the moves being played.
Originally posted by Varenka
+5 means it doesn't see a mate. between that +5 and mate there can be anything. maybe a piece sac, followed by a mate one half-move beyond the horizon. you can't even know if the won material it's showing is dropped clean or with compensation, not to mention sufficient compensation.
[b]how can we know the percentage of wrong best moves
We try to refute them. If we find it impossible to refute them then our confidence of the best move increases.
Supposing I post a position where after a minute or so an engine's evaluation jumps from around 0 to say over 5 ...[text shortened]... It only prunes those that are safe to prune. It's an optimised mini-max search.[/b]
and like I said, I'm not contesting the practical strength of engines, but theoretical.
claiming a pruning algorithm doesn't ignore branches is semantic at best. how do you think it gets deeper than breadth-first brute force in the same time? does it make up more cycles out of thin air to cover more area? of course not. it makes a blind guess on a branch to defer searching it, leaving vast areas of the search space uncovered; that's the whole point. that's where it gets the cycles to go deeper elsewhere. and that's also why it has a hard time finding moves like the Nf6+ in the Kasparov-Karpov game, or the famous sac of vipiu.
it's a bit like making a mathematical proof for all real numbers but proving it only for rational numbers. while in practice it might often work, it's absolutely rubbish as a proof. if you leave huge (in fact infinite) gaps in the search space, the theoretical validity of your practical solution is zip. broken.
Originally posted by wormwood
how can we make sense of this? of course it has some kind of rules coded in. I don't expect Rybka to do the housecleaning, it's software!
it has absolutely no positional understanding, just a set of rules which the coder coded in, assuming things about positions he will never actually see.
Rybka, for example, knows that, with everything else being equal, if you have two pieces against a rook and a pawn, you should try to stay in the middlegame and try to develop some kind of attack, instead of heading for the endgame.
or that a queen and knight work better than a queen and bishop. or that you should think twice before moves like Nc3 if you haven't played c4.
or that if you have a kingside pawn majority and the opponent has a queenside majority and both have castled kingside, you shouldn't be too eager for the endgame etc.
and not only such basic general knowledge. I play the symmetrical English a lot, and it totally understands the fights around d5 and d4, the f4 and f5 pawn breaks, and the potential Rb1, a3, b4 queenside play. in the French it always has an eye for c5, and in the Nimzo it understands that giving up the bishop pair isn't too bad, which, by the way, with everything else being equal, it rates pretty highly.
it has hundreds of little positional nuances like this. it has so many positional parameters, and most importantly, it can combine these in an excellent way, deciding which are relevant and which are not. the thing DOES know how to play chess. it's not just "me takes rook he takes piece mmm more material me likey". those days are history.
so I definitely think it shows excellent positional understanding of the game, perhaps much better than the average master. the fact that it isn't flesh and blood doesn't make this wrong.
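The "set of rules" being described is, at bottom, usually a weighted sum of positional features. Here is a toy Python sketch of that idea; the feature names and centipawn weights are invented purely for illustration, and a real engine like Rybka has hundreds of tuned, phase-dependent terms rather than this handful.

```python
# Toy static evaluation: a weighted sum of hand-coded positional features.
# Feature names and centipawn weights are invented for illustration only.
WEIGHTS = {
    "material": 100,         # per pawn-unit of material advantage
    "bishop_pair": 30,       # bonus for owning both bishops
    "queen_knight_duo": 15,  # Q+N often coordinates better than Q+B
    "passed_pawn": 25,       # per passed pawn
    "king_exposure": -80,    # penalty per unit of king exposure
}

def evaluate(features):
    """Return a centipawn score from White's point of view."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

# Example: White owns the bishop pair but has a slightly exposed king.
score = evaluate({"bishop_pair": 1, "king_exposure": 0.25})
print(score)  # 30 - 20 = 10 centipawns, i.e. roughly +0.10 for White
```

The interesting part of a strong engine is less any individual weight than how the search combines these terms, effectively deciding which ones are relevant in a given position, which is the point being made above.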
Originally posted by wormwood
you can't even know if that won material it's showing is dropped clean or with compensation, not to mention sufficient compensation
We're talking about a chess playing machine that is better than any human player. So don't talk as if it's clueless. Its playing ability speaks for itself.
I'm not contesting the practical strength of engines, but theoretical
Can you explain the difference and why the theoretical strength matters? The thread discussed the *practical* benefits of using an engine for analysis.
it makes a blind guess on a branch to defer searching it, leaving vast areas of search space uncovered
i) the mini-max search algorithm searches every branch up to a certain depth
ii) the alpha-beta algorithm is guaranteed to return the same result as the equivalent mini-max search; there is no pruning that risks throwing away a best move
What do you disagree with? Alpha-beta does NOT leave vast areas of search space uncovered within the given depth of search. You're confusing it with other pruning algorithms such as null-move pruning, futility pruning, etc.
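The distinction can be made concrete with a small sketch (illustrative only, not how any named engine implements these techniques): a "forward" pruning rule that discards moves on a guess can return a different value from exhaustive search, whereas an alpha-beta cutoff never can.

```python
def minimax(node, maximizing):
    """Exhaustive search: the reference answer."""
    if isinstance(node, int):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

def forward_pruned(node, maximizing, beam=1):
    """Unsound forward pruning: only the first `beam` moves at each node
    are searched. Unlike an alpha-beta cutoff, this can throw away the
    true best move."""
    if isinstance(node, int):
        return node
    values = [forward_pruned(child, not maximizing, beam)
              for child in node[:beam]]
    return max(values) if maximizing else min(values)

# The strongest root move is the *second* one, so a beam of 1 misses it:
tree = [[1, 2], [9, 8]]  # leaf scores from the maximizer's viewpoint
print(minimax(tree, True))         # 8: second move, against best defence
print(forward_pruned(tree, True))  # 1: the guess discarded the best move
```

This is the sense in which alpha-beta is "safe" while selective schemes trade soundness for depth.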
Originally posted by greenpawn34
There is a whole game to come.
How can that position be anything else but equal.
Anything can happen. Half the pieces are still at home.
I've played this countless times. I'm inclined to believe white is actually better in this position: not that there's a forced win, but black has to be VERY careful and accurate, while white keeps making natural moves and they work. the only reasons I'm insisting on playing the line are that I kind of want to learn how to defend, and that I'm planning a complete rebuild of my openings. but that will be a big project for me and I don't have enough time to undertake it at the moment. until then, I'll just stick to what I know.
What has the computer evaluation done there to help you
play that position?
lots. I annotated one of my games in this opening (well, not exactly the same line) using Rybka, see: http://blog.chess.com/philidor_position/how-not-to-defend-against-flank-attacks