Originally posted by sh76
As much as I love Nate, this line has me scratching my head:
[quote]Our method of evaluating pollsters has typically involved looking at all the polls that a firm conducted over the final three weeks of the campaign, rather than its very last poll alone. The reason for this is that some polling firms may engage in “herding” toward the end of the campaign, ch ...[text shortened]... llster was inaccurate because it predicted Obama underperforming with 2 weeks to go seems silly.
This might be valid if the comparison is to the "consensus polls" of that period.
For example:
Three weeks before the election, the average poll gives candidate A a 10-point advantage, while polling firm X gives him a 15-point advantage.
On election day, the average poll gives candidate A a 5-point advantage, which is also the actual margin in the election. Polling firm X also gives him a 5-point advantage.
In this case, you could argue that the polling average was accurately tracking voter intentions. If so, firm X's poll three weeks before the election was off by 5 points, and that should be reflected in the pollster's "report card". It gets a bit more complicated if the polls as a whole are shown to have had a bias in that election, and of course there is the chance that this bias changed over the election cycle. Maybe the true margin three weeks out really was 15 points, and a bias in the polling average dissipated before election day, in which case firm X was right all along.
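To make the scoring question concrete, here is a minimal sketch in Python of the two ways of grading firm X (all names and numbers are hypothetical, taken from the toy example above; this is an illustration, not Nate's actual method):

[code]
# Hypothetical "report card" sketch: grade a firm's polls against the
# contemporaneous polling average, versus grading only its final poll
# against the result. Numbers come from the example above (margins in
# points for candidate A); not FiveThirtyEight's actual methodology.

polls_firm_x = {21: 15, 0: 5}   # days before election -> firm X's margin
consensus    = {21: 10, 0: 5}   # days before election -> polling average
actual_margin = 5               # margin in the actual result

def error_vs_consensus(firm_polls, avg_polls):
    """Mean absolute gap between the firm's polls and the polling
    average taken at the same point in time."""
    gaps = [abs(m - avg_polls[day]) for day, m in firm_polls.items()]
    return sum(gaps) / len(gaps)

def error_last_poll(firm_polls, actual):
    """Absolute error of the firm's latest poll against the result."""
    latest = min(firm_polls)    # smallest "days before" = last poll
    return abs(firm_polls[latest] - actual)

print(error_vs_consensus(polls_firm_x, consensus))   # (5 + 0) / 2 = 2.5
print(error_last_poll(polls_firm_x, actual_margin))  # |5 - 5| = 0
[/code]

Graded only on its last poll, firm X looks perfect; graded against the consensus at each point in time, it carries the early 5-point miss. Which score is fairer depends on whether the polling average really was tracking voter intentions, which is exactly the bias caveat above.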