This is me, about once per year, when I bemoan my lack of R-coolness whilst simultaneously enjoying my Stata-efficiency.
This morning, I came across
an ~~outrageously funny~~ moderately amusing video involving Shaggy’s early 2000s classic, some seriously revamped lyrics, and the man himself (btw, is this blond-hairing an act of cultural appropriation?). Cheap laughs, and the almost heart-warming idea that the FBI could end this, and everything would go back to normal. And yes, they manage to squeeze a lot of legalese into these lyrics.
Which then reminded me (yes, I’m old enough to remember both the outrage over Iraq and the euphoria of Blair coming to power in 1997) of a cartoon video featuring Tony Blair, Michael Howard, and other politicians of the day, happily dancing to the same song (“I was told that there were weapons hidden underneath the sand”). I tried to google it, but it is gone, a victim of the death of Flash.
What is it about this song and wildly unpopular politicians? Is there something about this song that could be coaxed into a paper (“Pseudo-Rap as Liberalism. A Conceptual Sketch and Some Applications”)? Most certainly not, so let’s just post the latest video.
Personal blogs are so 1990s, yes?
This is not the late 1990s. Hey, it’s not even the early Naughties, and has not been for a while. I have had my own tiny corner of the Internet (then hosted on university Web space, as was the norm in those days) since Mosaic came under pressure from Netscape and the NYT experimented with releasing content as (I kid you not) postscript files, because PDF had not yet caught on. I did this mostly because I liked computers, because it was new, and because it provided an excellent distraction from the things I should have been doing. By and large, not much has changed over 25 years.
Later (that was before German universities had repositories or policies for such things), my webspace became a useful resource for teaching-related material. Reluctantly and with a certain resentment, I have copied slides and handouts from one site to the next, adding layers of disclaimers instead of leaving them behind, because some of this stuff carries hundreds of decade-old backlinks and gets downloaded / viewed dozens of times each day.
And of course, I started posting pre-publication versions of my papers, boldly ignoring / blissfully ignorant of the legal muddle surrounding the issue back in the day. Call me old-fashioned, but making research visible and accessible is what the Web was invented for.
In summer 2008, I set up my own domain on a woefully underpowered shared webspace (since replaced by an underpowered virtual server). A bit earlier in the same year, already late to the party, I had started my own “Weblog” on wordpress.com, writing and ranting about science, politics, methods, and all that. A year down the road, I converted www.kai-arzheimer.com to wordpress, moved my blog over there, and have
~~never looked back~~ continuously wondered why I kept doing this.
Why keep blogging?
In those days of old, we had trackbacks and pingbacks & stuff (now a distant memory), and social media was the idea of having a network of interlinking personal blogs, whose authors would comment on each other’s posts. Even back in 2008 on wordpress, my blog was not terribly popular, but for a couple of years, there was a bunch of people who had similar interests, with whom I would interact occasionally.
Then, academically minded multi-author blogs came along, which greatly reduced fragmentation and aimed at making social science accessible for a much bigger audience whilst removing the need to set up and maintain a site. For similar reasons, Facebook and particularly Twitter became perfect outlets for
~~ranting~~ “microblogging”, while Medium bypasses the fragmentation issue for longer texts and is far more aesthetically pleasing and faster than anything any of us could run by ourselves.
It is therefore only rational that many personal academic blogs died a slow death. People I used to read left Academia completely, gave up blogging, or moved on to the newer platforms. Do you remember blogrolls? No, you wouldn’t. Because I’m a dinosaur, I still get my news through an RSS reader (and you should, too). While there are a few exceptions (Chris Blattman and Andrew Gelman spring to mind), most of the sources in my “blog” drawer are run by collectives / institutions (the many LSE blogs, the Monkey Cage, the Duck etc.). I recently learned that I made it into an only slightly dubious looking list of the top 100 political science blogs, but that is surely because there are not many individual political science bloggers left.
So why am I still rambling in this empty Platonic man-cave? Off the top of my head, I can think of about five reasons:
- Total editorial control. I have written for the Monkey Cage, The Conversation, the LSE, and many other outlets. Working with their editors has made my texts much better, but sometimes I am not in the mood for clarity and accessibility. I want to rant, and be quick about it.
- Pre-prints. I like to have pre-publication versions of my work on my site, although again, institutional hosting makes much more sense. Once I upload them, I’m usually so happy that I want to say something about it.
- For me, my blog is still a bit like an open journal. If I need to remember some sequence of events in German or European politics for the day job, it’s helpful if I have blogged about it as it happened. Similarly, sometimes I work out the solution to some software issue but quickly forget the details. Five months later, a blog post is a handy reference and may help others.
- Irrelevance. Often, something annoys or interests me so much that I need to write a short piece about it, although few other people will care. I would have a better chance of finding an audience at Medium, but then again, on my own wordpress-powered site, I have a perfectly serviceable CMS which happens to have blogging functionality built in.
- Ease of use. I do almost all of my writing in Emacs and keep (almost) all my notes in orgmode files. Thanks to org2blog, turning a few paragraphs into a post is just a few hard-to-remember keystrokes away.
Bonus track: the five most popular posts in 2017
As everyone knows, I’m not obsessed with numbers, thank you very much. I keep switching between various types of analytic software and have no idea how much (or rather little) of an audience I actually have. Right now I’m back to the basic wordpress statistics and have been for over a year, so here is the list of the five posts that were the most popular in 2017.
- #5 nlcom and the Delta Method. This is a short explainer of the Delta Method and its implementation in a Stata command. It was written in the summer of 2013, presumably when we were working on surveybias, as a note-to-future-self post. It was viewed 343 times in 2017. Not too shabby for an oldie.
- #4 State of the German polls: The Schulz effect was real. Part of my 2017 poll-pooling exercise, this post demonstrates that the bounce for the SPD early in the campaign was real but short-lived. Much like this post: it got 620 views, but most of them (559) in March, right when it was published.
- #3 Similarly, Five Quick takes on the German election was viewed 869 times, but almost exclusively on election night and on the following day. Which is a pity, because some of it is still relevant (I think).
- #2 I looked up the AfD’s women’s organisation on Facebook. You will not believe what I found Posted only on December 30, this one got 878 views in the few remaining hours of the old year, almost bringing down my server in the process. Traffic was driven by Twitter, thanks to the click-baity title and the incredible image. You will not believe what I saw until you see it.
- #1 Me at the Margins: Average Marginal Effects, Marginal Effects at the Mean, and Stata’s margins command. Another short explainer involving statistical stuff and Stata. It was written in March 2011, but was viewed 1010 times in 2017. It was also the most popular post in 2016 and 2015. In all likelihood, it is the most popular thing on this blog, ever. Go figure.
Women don’t like the AfD (and why would they?)
The AfD is not particularly attractive for women. Survey data suggest that only one in three AfD voters is a woman. The new national executive has 14 members. Just two are of the female persuasion. This amounts to a cool 14%, even less than the female share of the AfD’s total membership (16%). The share of female AfD MPs in the new Bundestag is yet again lower at just over ten per cent, half of the already very low figures for the Liberals and the Christian Democrats.
This is hardly surprising. While some Radical Right parties in Western Europe at least aim to give the impression that they have modernised their stances on gender politics (cf the Netherlands, Norway), the AfD’s radicalisation over the last three years has brought them closer to traditional right-wing positions (see e.g. Jasmin Siri’s work on this), or perhaps these positions have become more visible.
Sex and loathing
Two “cheeky” 2017 campaign posters marked a new low on this front. One showed the behinds of a pair of scantily clad young women who allegedly “preferred Bikinis over Burqas”, the other used a picture of a massive baby bump to cajole Germans into “making new Germans instead of relying on immigration” (incidentally, the belly in question came from a stock photo of a Brazilian model).
This is the cutesy version of Höcke’s rambling about the “expansive African fertility type” that threatens to take over Germany. The obsession with the number of pure-blooded German babies and the means of their production, the Muslim as a sexual predator, the fear (and envy?) of the hyper-sexual Black that will take away our blonde daughters, wives, and mistresses – the nice middle class veneer over the familiar right-wing extremist tropes is wearing pretty thin.
Female Facebook Friends
The AfD does not have an officially recognised women’s organisation. But a couple of weeks ago, Christiane Christen (the AfD deputy leader in Rhineland-Palatinate) and Janin Klatt-Eberle, a rank-and-file member from Saxony, set up a Facebook community called “AfD-politics for women”. So far, some 600 people have liked it.
The page is not meant to co-ordinate or strengthen the positions of women within the AfD (where did that thought come from?). Its mission statement says that it will serve “to explain the AfD’s policies with respect to us women”, because the AfD is the only party that defends liberty and security for women. Hm.
The posts so far are what you would expect. They exploit the New Year’s Eve attacks on women in Cologne in 2015 and a recent jealousy killing where the perpetrator was a youth from Afghanistan and the victim an equally young German girl. They are similar to what can be found on the AfD’s official channels, but executed in a much more amateurish way. What really surprised me, however, even given that level of amateurishness, was their logo, a – variation? – on the party’s official and already awkward design. This 👇
In my book, this beggars belief, so I preserve it for posterity here before they change it. I’m old enough to qualify as a dirty old man, so I just summarise the gist of the comments on the page:
- No money for a designer? Seriously?
- Pitch-perfect illustration of the party’s gender politics
- This must be a satirical page.
It’s not. It’s real.
Bonus track, because it is almost 2018: Link to one of my favourite older posts on a related subject.
At the tender age of 84, Ian Wachtmeister has died. In the early 1990s, he co-founded New Democracy, a short-lived and at times rather entertaining Swedish Radical Right outfit. Wachtmeister’s Wikipedia bio is here. Later in life (quite late in his case), he was somewhat close to the Sweden Democrats.
The Wikipedia article on New Democracy is also quite interesting (for us nerds). Even better, Jens Rydgren has put a PDF of his 2005 book on New Democracy on the interwebs. In the original Swedish, of course.
As some of you might have noticed, I have recently made some changes to my site. The idea was to simplify its administration and to streamline its design. Predictably, the only thing that really took off was the number of 404 errors. To quote the central theorem of policy analysis, all innovations make things worse, always. To repeat the mantra of system administration, never change a running system. Never.
But (and this is a big but) I have finally managed to revive the Extreme Right Bibliography after a mere week of tinkering, and have thrown in a few new titles for good measure. As always, comments and additions are most welcome. Enjoy!
A mere 2.75 years after the fact, the Definitive Volume (TM) on the German Federal Election of 2009 is almost (almost!) ready to go to the printers’. And so is our chapter on East-West differences in German voting behaviour, which is vintage before it is even out (Pirate party, anyone?). Obviously, the details are becoming more and more blurry, so going through the proofs actually made for a pleasant read.
Political Science is the magpie amongst the social sciences, which borrows heavily from other disciplines. These days, many political scientists are actually failed economists (even more failed economists are actually economists, however). I used to think of myself as a failed sociologist, but reading the proofs it dawned on me that I might actually aspire to become a failed geographer.
One particularly nice map, which should have been discussed more thoroughly in the paper, shows the local deviation from regional voting patterns. Yes, you read that right: I calculate an index (basically Pedersen’s) that summarises local (i.e. district-level) deviations from the regional (East vs West) result and roll that into a choropleth. This way, it is easy to see how heterogeneous the two regions really are. Most striking (in my view) is the difference between Bavaria and the other Western Länder, which is of course a result of the CSU’s still relatively strong position. The PDS/Left party’s dominance in the eastern districts of Berlin is clearly visible, too.
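The index itself is easy to sketch: a Pedersen-style dissimilarity is half the sum of absolute differences between local and regional vote shares. Here is a minimal Python illustration; the function and all numbers are mine (a made-up Bavarian district against hypothetical Western shares), not figures from the chapter:

```python
def dissimilarity(local_shares, regional_shares):
    """Pedersen-style index: half the sum of absolute differences
    between local and regional vote shares (percentage points).
    0 means identical distributions, 100 means no overlap at all."""
    parties = set(local_shares) | set(regional_shares)
    return 0.5 * sum(abs(local_shares.get(p, 0) - regional_shares.get(p, 0))
                     for p in parties)

# Hypothetical shares: a Bavarian district vs the West as a whole
west = {"CDU/CSU": 34, "SPD": 28, "FDP": 16, "Greens": 11, "Left": 8, "Other": 3}
district = {"CDU/CSU": 48, "SPD": 20, "FDP": 15, "Greens": 9, "Left": 5, "Other": 3}
print(dissimilarity(district, west))  # 14.0
```

Computing this for every district and shading each polygon by its score is all the choropleth needs.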
For our piece on distance effects in English elections we geocoded the addresses of hundreds of candidates. For the un-initiated: geocoding is the fine art of converting addresses into geographical coordinates (longitude and latitude). Thanks to Google and some other providers like OpenStreetMap, this is now a relatively painless process. But when one needs more than a few addresses geocoded, one does not rely on pointing-and-clicking. One needs an API, i.e. a programmatic interface that makes the service accessible from R, Python, or some other programming language.
The upside is that I learned a bit about the wonders of Python in general and the charms of geopy in particular. The downside is that writing a simple script that takes a number of strings from a Stata file, converts them into coordinates and gets them back into Stata took longer than I ever thought possible. Just now, I’ve learned about a possible shortcut (via the excellent data monkey blog): geocode is a user-written Stata command that takes a variable containing address strings and returns two new variables containing the latitude/longitude information. Now that would have been a bit of a time-saver. You can install geocode by typing
net from http://www.stata-journal.com/software/sj11-1
net install dm0053
There is, however, one potential drawback: Google limits the number of free queries per day (and possibly per minute). Via Python, you can easily stagger your requests, and you can also use an API key that is supposed to give you a bigger quota. Geocoding a large number of addresses from Stata in one go, on the other hand, will probably result in an equally large number of parsing errors.
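The core of such a staggered batch script is easy to sketch. This is not my original script: the function and names are mine, and the geocoder is passed in as a plain callable (stubbed below; in practice it would wrap geopy’s Nominatim or the Google geocoder), so the rate-limiting logic stays visible:

```python
import time

def geocode_addresses(addresses, geocode, pause=1.0):
    """Geocode a list of address strings one by one.

    `geocode` is any callable mapping an address string to a
    (latitude, longitude) tuple or None. Requests are staggered by
    `pause` seconds to stay within the provider's quota. Failures
    become (None, None) so the output aligns with the input row by
    row, which makes merging the results back into Stata painless.
    """
    coords = []
    for i, address in enumerate(addresses):
        if i > 0:
            time.sleep(pause)
        try:
            result = geocode(address)
        except Exception:
            result = None
        coords.append(result if result is not None else (None, None))
    return coords

# Usage with a stub standing in for the real geocoding service:
fake = {"10 Downing St, London": (51.5034, -0.1276)}
print(geocode_addresses(["10 Downing St, London", "nowhere"],
                        fake.get, pause=0))
# [(51.5034, -0.1276), (None, None)]
```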
I’m more and more intrigued by the potential spatial data hold for political science. Once you begin to think about it, concepts like proximity and clustering are basic building blocks for explaining social phenomena. Even better, since the idea of open data has gone mainstream, more and more spatially referenced information becomes available, and when it comes to free, open source software, we are spoilt for choice or, at least in my case, up to and beyond the point of utter confusion.
For our paper on the effect of spatial distance between candidates and their prospective voters, we needed a choropleth map of English Westminster constituencies that shows how many of the mainstream candidates live within the constituency’s boundaries. Basically, we had three options (not counting the rather few user-contributed packages for Stata): GRASS, a motley collection of Python packages, and a host of libraries for R.
GRASS is a full-blown open source GIS, whose user interface is perfect for keyboard aficionados and brings back happy memories of the 1980s. While GRASS can do amazing things with raster and vector maps, it is suboptimal for dealing with rectangular data. In the end, we used only its underrated cartographic ps.map module, which reliably creates high-resolution postscript maps.
Python has huge potential for social scientists, both in its own right and as a kind of glue that binds various programs together. In principle, a lot of GIS-related tasks could be done with Python alone. We used the very useful geopy toolbox for converting UK postcodes to LatLong co-ordinates, with a few lines of code and a little help from Google.
The real treasure trove, however, is R. The quality of packages for spatial analysis is amazing, and their scope is a little overwhelming. Applied Spatial Data Analysis with R by Roger Bivand, who wrote much of the relevant code, provides much-needed guidance.
Counting the number of mainstream candidates living in a constituency is a point-in-polygon problem: each candidate is a co-ordinate enclosed by a constituency boundary. Function overlay from package sp carries out the relevant operation. Once I had located it, I was seriously tempted to loop over constituencies and candidates. Just in time, I remembered the R mantra of vectorisation. Provided that points (candidates) and polygons (constituencies) have been transformed to the same projection, all that is needed is a single overlay call.
This works because candpos1 is a vector of points that represent the spatial positions of all Labour candidates. These are tested against all constituency boundaries. The result is another vector of indices, i.e. sequence numbers of the constituencies the candidates are living in. Put differently, overlay takes a list of points and a bunch of polygons and returns a list that maps the former to the latter. With a bit of boolean logic, a vector of zeros (candidate outside constituency) and ones (candidate living in their constituency) ensues. Summing up the respective vectors for Labour, Tories, and LibDems then gives the required count that can be mapped. Result!
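The original R one-liner is not reproduced here, so as a self-contained illustration of the same point-in-polygon idea, here is a sketch in Python instead: a simple ray-casting test rather than sp’s overlay, with toy coordinates and names (constituencies, candidates) that are mine, not the paper’s data:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: count how often a horizontal ray from (x, y)
    crosses the polygon's edges; an odd count means 'inside'."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Toy data: two square 'constituencies' and three 'candidates'
constituencies = {"A": [(0, 0), (1, 0), (1, 1), (0, 1)],
                  "B": [(1, 0), (2, 0), (2, 1), (1, 1)]}
candidates = [(0.5, 0.5), (1.5, 0.2), (0.9, 0.9)]

# Count candidates living inside each constituency
counts = {name: sum(point_in_polygon(x, y, poly) for x, y in candidates)
          for name, poly in constituencies.items()}
print(counts)  # {'A': 2, 'B': 1}
```

Real map data would of course use proper polygons and a common projection, but the mapping from points to enclosing polygons is the same.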