Nov 22 2013
 

Measuring Survey Bias

In our recent Political Analysis paper (ungated authors’ version), Jocelyn Evans and I show how Martin, Traugott, and Kennedy’s two-party measure of survey accuracy can be extended to the multi-party case (which is slightly more relevant for comparativists and other people interested in the world outside the US). This extension leads to a series of party-specific measures of bias as well as to two scalar measures of overall survey bias.

Moreover, we demonstrate that our new measures are closely linked to the familiar multinomial logit model (just as the MTK measure is linked to the binomial logit). This demonstration is NOT an exercise in Excruciatingly Boring Algebra. Rather, it leads to a straightforward derivation of standard errors and facilitates the implementation of our methodology in standard statistical packages.
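If you have not seen the original measure, the two-party intuition is easy to state (the exact multi-party definitions of the party-specific measures and of the scalar summaries $B$ and $B_w$ are in the paper): MTK’s $A$ is simply the log of the odds ratio that compares the survey’s division of the two-party vote with the actual result,

$A = \ln\left(\frac{p_1/p_2}{\pi_1/\pi_2}\right)$

where $p_1$ and $p_2$ are the two parties’ shares in the survey and $\pi_1$ and $\pi_2$ their true shares. A value of zero indicates a perfectly accurate poll, and the sign tells you which of the two parties is over-represented.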

Voter poll
Those Were the Days / Foter.com / CC BY-SA

An Update to Our Free Software

We have programmed such an implementation in Stata, and it should not be too difficult to implement our methodology in R (any volunteers?). Our Stata code has been on SSC for a couple of months now but has recently been significantly updated. The new version 1.0 includes various bug fixes to the existing commands surveybias.ado and surveybiasi.ado, slightly better documentation, two toy data sets that should help you get started with the methodology, and a new command surveybiasseries.ado.

surveybiasseries facilitates comparisons across a series of (pre-election) polls. It expects a data set in which each row corresponds to the margins (predicted vote shares) from one survey. Such a dataset can quickly be constructed from published sources; access to the original data is not required. surveybiasseries calculates the accuracy measures for each poll and stores them in a set of new variables, which can then be used as dependent variable(s) in a model of poll accuracy.
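If you are wondering what such a data set could look like, here is a minimal, entirely made-up sketch (three hypothetical polls on a three-candidate-plus-other race; the figures are invented and the variable and option names simply mimic the examples below):

* made-up miniature: one row per published poll
clear
input str10 pollster fh ns mp other fhtrue nstrue mptrue othertrue N
"PollsterA" 29 26 18 27 28 27 18 27 1000
"PollsterB" 31 25 17 27 28 27 18 27  950
"PollsterC" 28 28 16 28 28 27 18 27  900
end
surveybiasseries, popvariables(fhtrue-othertrue) samplevariables(fh-other) nvar(N) gen(toy)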

Getting Started with Estimating Survey Bias

The new version of surveybias for Stata should appear on SSC over the next couple of weeks or so (double-check the version number, which was 0.65 and should now be 1.0, and the release date), but you can install it right now from this website:

net from https://www.kai-arzheimer.com/stata 
net install surveybias

To see the new command in action, try this

use fivefrenchsurveys, clear

will load information from five pre-election polls taken during the 2012 French presidential campaign into memory. The vote shares refer to the eight candidates who competed in the first round.
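Before calculating anything, it cannot hurt to have a quick look at how the toy data are organised (plain Stata, nothing specific to our package):

describe
list in 1/2, abbreviate(12)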

surveybiasseries in 1/3, popvariables(*true) samplevariables(fh-other) nvar(N) gen(frenchsurveys)

will calculate our accuracy measures and their standard errors for the first three surveys over the full set of candidates.

surveybiasseries in 4/5, popvariables(fhtrue-mptrue) samplevariables(fh-mp) nvar(N) gen(threeparty)

will calculate bias with respect to the three-party vote (i.e. Hollande, Sarkozy, Le Pen) for surveys no. 4 and 5 (vote shares are automatically rescaled to unity, no recoding required). The new variable names start with “frenchsurveys” and “threeparty” and should be otherwise self-explanatory (i.e. threepartybw is $B_w$ for the three-party case, and threepartysebw the corresponding standard error). Feel free to plot and model to your heart’s content.
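For a first look (assuming the $B_w$ variables for the first block are indeed called frenchsurveysbw and frenchsurveyssebw, in line with the naming scheme just described), something along these lines plots the estimates with a rough 95% band against the survey number:

generate surveyno = _n
generate lower = frenchsurveysbw - 1.96*frenchsurveyssebw
generate upper = frenchsurveysbw + 1.96*frenchsurveyssebw
twoway (rcap lower upper surveyno) (scatter frenchsurveysbw surveyno)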

Jul 10 2013
 
Used Punchcard
BinaryApe / Foter / CC BY

In a recent paper, we derive various multinomial measures of bias in public opinion surveys (e.g. pre-election polls). Put differently, with our methodology, you may calculate a scalar measure of survey bias in multi-party elections.

Thanks to Kit Baum over at Boston College, our Stata add-on surveybias.ado is now available from Statistical Software Components (SSC). The add-on takes as its argument the name of a categorical variable and said variable’s true distribution in the population. For what it’s worth, the program tries to be smart, and all of the following should give the same result:

surveybias vote, popvalues(900000 1200000 1800000)
surveybias vote, popvalues(0.2307692 0.3076923 0.4615385)
surveybias vote, popvalues(23.07692 30.76923 46.15385)

If you don’t have access to the raw data but want to assess survey bias evident in published figures, there is surveybiasi, an “immediate” command that lets you do stuff like this:

surveybiasi , popvalues(30 40 30) samplevalues(40 40 20) n(1000)

Again, you may specify absolute values, relative frequencies, or percentages.
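For instance, the call above could just as well be fed with (made-up) counts instead of percentages and should return exactly the same estimates:

surveybiasi , popvalues(300 400 300) samplevalues(400 400 200) n(1000)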

If you want to go ahead and measure survey bias, install surveybias.ado and surveybiasi.ado on your computer by typing ssc install surveybias in your net-aware copy of Stata. And if you use and like our software, please cite our forthcoming Political Analysis paper on the New Multinomial Accuracy Measure for Polling Bias.
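In full, installation plus a first look at the documentation boils down to:

ssc install surveybias
help surveybias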

Update April 2014: New version 1.1 available

Jun 23 2013
 

All surveys deviate from the true distributions of the variables, but some more so than others. This is particularly relevant in the context of election studies, where the true distribution of the vote is revealed on election night. Wouldn’t it be nice if one could quantify the bias exhibited by pollster X in their pre-election survey(s) with one single number? Heck, you could even model bias in polls, using RHS variables such as time to election, sample size or sponsor of the survey, coming up with an estimate of the infamous “house effect”.
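To make that concrete: once each poll in your collection has been scored (for instance with the surveybiasseries command discussed above), the “house effect” model is just a regression of the poll-level accuracy score on poll characteristics. A bare-bones sketch, in which b stands for the accuracy score and daystoelection, logsamplesize and sponsor are hypothetical variables you would have to code yourself:

regress b daystoelection logsamplesize i.sponsor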

Jocelyn Evans and I have developed a method for calculating such a figure by extending Martin, Traugott and Kennedy’s measure A to the multi-party case. Being the very creative chaps we are, we call this new statistic [drumroll] B. We also derive a weighted version of this measure, B_w, and statistics to measure bias in favour of/against any single party (A'). Of course, our measures can be applied to the sampling of any categorical variable whose distribution is known.

We fully develop all these goodies (and illustrate their usefulness by analysing bias in French pre-election polls) in a paper that (to our immense satisfaction) has just been accepted for publication in Political Analysis (replication files to follow).

Our module surveybias is a Stata ado file that implements these methods. It should become available from SSC over the summer, giving you convenient access to them. I’ll keep you posted.

Oct 29 2012
 

Like social networks, multilevel data structures are everywhere once you start thinking about it. People live in neighbourhoods, neighbourhoods are nested in municipalities, which make up provinces – well, you get the picture. Even if we have no substantive interest in their effects, it often makes sense to control for structures in our data to get more realistic standard errors.

Now the good folks over at the European Social Survey have reacted and spent the Descartes Prize money on compiling multilevel information and merging it with their own data. So far, the selection is a little bit disappointing in some respects. Homicide rates, for instance, are reported on the national level only. But there are some pleasant surprises (I guess due to Eurostat, who collect such things): We get unemployment, GDP growth and even student numbers at the NUTS-3 level. Since you asked, NUTS is the Nomenclature of Territorial Units for Statistics, and level 3 is the lowest level for which comparative data are normally published.

Regrettably, the size and number of level 3 units are not necessarily comparable across countries: for Germany, level 3 corresponds to about 400 local government districts, while metropolitan France is divided into 96 départements. But if you need to combine top-notch survey data with small(ish) regional data, it’s a start, and not a bad one.
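To give a rough idea of what combining the two looks like in practice: stfdem (satisfaction with democracy) and cntry are standard ESS variables, while nuts3, unemployment and gdpgrowth below are placeholders for whatever the regional identifier and context variables are called in the merged file (check the ESS multilevel documentation). A three-level random-intercept model in Stata would then be something like

mixed stfdem c.unemployment c.gdpgrowth || cntry: || nuts3: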

Aug 07 2011
 

Fails and Pierce’s 2010 article in Political Research Quarterly is easily the most interesting paper I have read during the last academic year (btw, here are my lecture notes). Ever since the 1950s, mainstream political science has claimed that mass attitudes on democracy matter for the stability of democracy, while the intellectual history of the concept is even older, going back at least to de Tocqueville. But, as Fails and Pierce point out, hardly anyone has ever bothered to test the alleged link between mass attitudes and the quality and stability of democracy. This is exactly what they set out to do, regressing levels of democratic attitudes compiled from dozens of surveys on previous and succeeding polity scores. As it turns out, levels of democratic attitudes do not explain much, while they seem to follow changes in the polity scores. If these results hold, the Political Culture paradigm would have to be thoroughly modified, to say the least: it’s the elites, stupid.

My students poured a lot of primarily methodological criticism on these findings (I can see my bad influence on them), and I’m not sure that the interpretation of the last model (a first-differences-on-first-differences regression) is conclusive. But nonetheless, this is fascinating stuff. I wonder whether the big shots will have anything interesting to say about it, or whether they will just ignore the work of two annoying PhD students.

May 19 2010
 

I’m teaching a lecture course on Political Sociology at the moment, and because everyone is so excited about social capital and social network analysis these days, I decided to run a little online experiment with and on my students. The audience is large (at the beginning of this term, about 220 students had registered for this lecture series) and quite diverse, with some students still in their first year, others in their second, third or fourth and even a bunch of veterans who have spent most of their adult lives in university education.

Which of my students are most likely to gang up against me?

Who knows whom in a large group of learners?

Fortunately, I had a list of full names plus email addresses for everyone who had signalled interest in the lecture before the beginning of term, so I created a short questionnaire in limesurvey and asked them a very simple question: whom do you know in this group? Given the significant overcoverage of my list – in reality, there are probably not more than 120 students who regularly turn up for the lecture – the response rate was somewhere in the high 70s. If you want to collect network data with limesurvey, the “array with flexible labels” question type is your friend, but keying in 220 names plus unique ids would have been a major pain. Thankfully, one can program the question with a single placeholder name, then export it as a CSV file. Next, simply load the file into Emacs and insert the complete list, then re-import it in limesurvey.

Getting a data matrix from Stata into Pajek is not necessarily a fun exercise, so I decided to give the networkx module for Python a go, which is simply superb. Networkx has data types for representing social networks, so you can read in a rectangular data matrix (again as CSV), construct the network in Python and export the whole lot to Pajek with a few lines of code:


# Some boring stuff omitted: 'knoten' holds the node labels from the survey,
# in the same order as the answer columns of the limesurvey CSV export
import csv
import networkx as nx

netreader = csv.reader(open('lecture_network.csv'))   # file name is just a placeholder
# create the directed network and initialise all 220 nodes
Lecture = nx.DiGraph()
for i in range(1, 221):
    Lecture.add_node(i, stdg="0")
for line in netreader:
    sender = int(line[-1])                    # sender ID is stored at the very end
    edges = line[6:216]                       # the 'whom do you know' answer columns
    Lecture.node[sender]['stdg'] = line[-8]   # study programme of the sender (networkx 1.x syntax)
    for index in range(len(edges)):
        # answer codes '2' and '3' become ties of weight 2 and 3 (Python 2: filter() returns a string here)
        if edges[index] == '2':
            Lecture.add_edge(sender, int(filter(str.isdigit, repr(knoten[index]))), weight=2)
        elif edges[index] == '3':
            Lecture.add_edge(sender, int(filter(str.isdigit, repr(knoten[index]))), weight=3)
nx.write_pajek(Lecture, 'file.net')

As it turns out, a lecture hall rebellion seems not very likely. About one third of all relationships are not reciprocated, and about a quarter of my students do not know a single other person in the room (at least not by name), so levels of social capital are pretty low. There is, however, a small group of 10 mostly older students who form a tightly-knit core, and who know many of the suckers in the periphery. I need to keep an eye on these guys.

Which of my students are most likely to gang up against me?

260 reciprocated ties within the same group

Finally, the second graph also shows that those relatively few students who are enrolled in our new BA programs (red, dark blue) are pretty much isolated within the larger group, which is still dominated by students enrolled in the old five-year programs (MA yellow, State Examination green) that are being phased out. Divide et impera.

Apr 09 2010
 

The other day, a (rather clever) student told me that she had no real need for all these stats classes, because she is going to be a journalist. I told her that the world would be a better place if all journalists underwent compulsory numeracy classes. Here is the proof from my favourite newspaper. How long does it take you to spot the glitch?

Young people in the East Midlands were the most down-to-earth of those surveyed, expecting an annual salary of £33,468 by the time they reached their mid-thirties. However, even this figure is still around £4,000 higher than the average.

Two-thirds of respondents also thought they would own a house by the time they were 25. In reality, only 14% of homeowners are aged 25 or under.

With the rising cost of higher education hitting students hard, recent figures suggest young people will be left with more than £20,000 of debt by the end of their courses. But the poll shows today’s school children do not realise how out of pocket they will actually be: the average expected figure was just half the reality.

http://www.guardian.co.uk/money/2010/mar/30/teenagers-expect-earnings-51000

Mar 23 2010
 

Sixteen months ago, we started the Political Science Peer-Review Survey. This week, the input form was shut down. That is about three quarters of a year later than expected, but then again, I underestimated the fallout of my move back to Germany. Moreover, until a few weeks ago there was still a tiny trickle of replies coming in. So far, we have found few major problems with the data. The RA has spotted two instances where a respondent somehow managed to save their data at various stages of the interview, thereby inflating the number of respondents. Moreover, it’s amazing how many political scientists read ‘percent’ and give absolute numbers 😉

Right now, the RA is enjoying his well-deserved holiday. He’ll be back in four weeks’ time, and we hope to have a data set ready for distribution by June.