Mar 29, 2017

The other day, someone on Twitter suggested that the polls at the upper margin of my latest model-based estimates of AfD support were conducted by the “notoriously AfD-friendly INSA company”.

This view is not uncommon in Germany. INSA, who do polling for Germany’s premier tabloid Bild, want to prop up support for the AfD via a bandwagon effect, because there is an alleged business and ideological link between their CEO and the AfD’s notorious Thuringia chapter (in German) – or so the story goes. A more innocent explanation is that INSA polls are based on internet access panels, and that the AfD is probably the most internet-savvy party in Germany and hence overly popular with heavy internet users.

But is there any evidence of overreporting? As you can see here, INSA’s average estimate of AfD support is just one percentage point above the grand mean of all measurements – not such a huge difference. Also, they have only contributed seven polls to the total pool of 43 major polls whose results have been published since January.

Pollster     n   AfD % (average)
Allensbach   2   10
Dimap        6   12
Emnid       11   10
Forsa       11   10
GMS          2   10
INSA         7   11
FGW (Poba)   4    7
All         43   10

And perhaps these polls were conducted earlier in the year, before the model suggested that the AfD’s support fell by a couple of points? But no, this is not the case. INSA polling goes back to the first week of February. (In fact, there are even some INSA polls from January, which are not in the database because of a glitch in the script that I use to pull the data from the interwebs.) More importantly, every one of the seven INSA polls is well above the credible interval around the model-based estimate:

[Figure: the seven INSA polls plotted against the model-based estimate of AfD support and its credible interval (afd-insa-2017-03-29.png)]

So yes, INSA’s readings of AfD support are clearly unusually positive. But so are the Dimap polls, whereas FGW (Poba) are taking a distinctly bearish view.

Avid readers will remember that the model tries to account for party-specific house effects, under the overly optimistic assumption that these sum to zero across all firms. Given the small overall number of polls, and given that Allensbach and GMS have published only two polls each so far, I wouldn’t put too much trust in these estimates, but just for fun I’ve pulled them out of the mountain of simulated draws. According to the model, INSA overreports AfD support by 2 points (CI: 1.5 to 2.6) and Dimap by 1.8 points (CI: 1.1 to 2.5), whereas FGW underreports AfD support by 2.5 points (CI: −3.2 to −1.8). It will be interesting to see how these numbers develop over the coming months, and whether the estimates square with the actual result. For the time being, take them with an appropriately large dose of salt.
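To illustrate the idea behind the sum-to-zero constraint, here is a naive back-of-the-envelope version, using the per-pollster averages from the table above. This is emphatically not the Bayesian model (which is estimated from simulated posterior draws and accounts for timing), just a sketch of how centring deviations enforces the constraint by construction:

```python
# Average AfD readings per pollster, from the table above.
avg = {
    "Allensbach": 10, "Dimap": 12, "Emnid": 10, "Forsa": 10,
    "GMS": 10, "INSA": 11, "FGW": 7,
}

# Centre each firm's average on the mean across firms, so that
# the resulting "house effects" sum to zero by construction.
grand = sum(avg.values()) / len(avg)
house = {firm: a - grand for firm, a in avg.items()}

print(house["INSA"])           # prints 1.0
print(sum(house.values()))     # prints 0.0
```

The naive numbers (INSA +1, Dimap +2, FGW −3) differ from the model’s estimates, which is exactly why the full model is worth the trouble.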

Nov 13, 2016

The one and only Philip Schrodt has written what I think is the perfect seven-take-home-messages rant on that election and its likely outcomes. Skip all the self-flagellation/yes-but posts and read this instead:

Then again, there is one thing that does not get enough coverage in there, and that is the whole polling/prediction disaster. So you should read this, too:

There. Your Sunday sorted out.

Jul 10, 2013
Used Punchcard (photo: BinaryApe / Foter / CC BY)

In a recent paper, we derive various multinomial measures of bias in public opinion surveys (e.g. pre-election polls). Put differently, with our methodology, you may calculate a scalar measure of survey bias in multi-party elections.

Thanks to Kit Baum over at Boston College, our Stata add-on surveybias.ado is now available from Statistical Software Components (SSC). The add-on takes as its arguments the name of a categorical variable and said variable’s true distribution in the population. For what it’s worth, the program tries to be smart about scaling: surveybias vote, popvalues(900000 1200000 1800000), surveybias vote, popvalues(0.2307692 0.3076923 0.4615385), and surveybias vote, popvalues(23.07692 30.76923 46.15385) should all give the same result.
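That scaling behaviour amounts to normalising whatever you pass in, whether counts, proportions, or percentages, to the same distribution. A quick sketch in Python (an illustration of the idea, not the ado-file’s actual code):

```python
def normalize(values):
    """Rescale counts, proportions, or percentages to proportions."""
    total = sum(values)
    return [v / total for v in values]

# The three equivalent ways of passing the population distribution:
a = normalize([900000, 1200000, 1800000])
b = normalize([0.2307692, 0.3076923, 0.4615385])
c = normalize([23.07692, 30.76923, 46.15385])

# All three coincide (up to rounding in the published figures).
assert all(abs(x - y) < 1e-6 for x, y in zip(a, b))
assert all(abs(x - y) < 1e-6 for x, y in zip(b, c))
```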

If you don’t have access to the raw data but want to assess survey bias evident in published figures, there is surveybiasi, an “immediate” command that lets you do stuff like this:  surveybiasi , popvalues(30 40 30) samplevalues(40 40 20) n(1000). Again, you may specify absolute values, relative frequencies, or percentages.
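For the surveybiasi example just given, the kind of scalar measure involved can be sketched as follows. I am assuming here that the statistic is the average absolute log odds ratio of sample versus population shares, in the spirit of the B measure from the paper; the real surveybiasi does more than this (weighted variants, inference using n), so treat it as a toy version:

```python
from math import log

def b_measure(popvalues, samplevalues):
    """Average absolute log odds ratio of sample vs population shares.
    A sketch of a scalar multinomial bias measure, not a verbatim
    port of surveybias.ado / surveybiasi."""
    p = [v / sum(popvalues) for v in popvalues]
    s = [v / sum(samplevalues) for v in samplevalues]
    # Category-wise log odds ratios: 0 where the survey is spot-on.
    a_prime = [log((si / (1 - si)) / (pi / (1 - pi)))
               for si, pi in zip(s, p)]
    return sum(abs(a) for a in a_prime) / len(a_prime)

print(round(b_measure([30, 40, 30], [40, 40, 20]), 3))  # prints 0.327
```

An unbiased survey scores exactly zero, since every category’s odds ratio is then one.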

If you want to go ahead and measure survey bias, install surveybias.ado and surveybiasi.ado on your computer by typing ssc install surveybias in your net-aware copy of Stata. And if you use and like our software, please cite our forthcoming Political Analysis paper on the New Multinomial Accuracy Measure for Polling Bias.

Update April 2014: New version 1.1 available