A friend sent me the link to this very short article in Perspectives on Psychological Science that uses precious journal space to highlight, in creative ways, a number of rather disturbing parallels between (social) science and Dante’s Inferno. It would seem that we are all sinners, which, on second thought, is hardly news. For once, the article is not behind a paywall, which reminds me of a glaring omission in the piece: there is no mention of the 99 extra circles reserved for predatory publishers.
The story has now been picked up by just about every news outlet on the planet: A German law professor was supposed to review a monograph on European constitutional law for a learned journal. He soon discovered that various pages were not properly referenced, to say the least. The twist: This monograph is based on Karl-Theodor zu Guttenberg’s PhD thesis. And that man happens to be the German defence minister. The review has not yet been published, but the proofs have been leaked. From what you can read there, you would think that the minister cannot have been in his right mind.
While this is, strictly speaking, a scholarly debate, the internet has of course exploded. I’m not sure how far we can trust the wisdom of the crowd, but it would seem that even the introduction bears an uncanny resemblance to some old editorials and even an essay by an anonymous student, all readily available online. That looks very bad.
But do normal people care? How can you explain that copying text verbatim is very bad while copying text verbatim and adding a name, a year and a page is absolutely ok? How can you explain that rephrasing someone else’s ideas and adding a name, year and page is even better?
Another, not totally unrelated question: If the rules of academia are so opaque to normal people, why is so much social status attached to a doctorate? Why should people who have no ambition to do research (inside or outside academia) strive for a higher degree?
At any rate, zu Guttenberg has done a lot of harm to German science: too many of us have already wasted too much of our time, er, researching the affair on Facebook and Twitter instead of producing stuff that could at least potentially be plagiarised.
- German ‘plagiarism’ minister Guttenberg drops doctorate (nowpublic.com)
- German minister given deadline in plagiarism row (telegraph.co.uk)
This is a true gem of interdisciplinary research: a recent article in the British Medical Journal demonstrates that the crisis may have toppled major banks and halved the value of your assets, but it did not stop children (the silly little buggers) from happily swallowing coins at a constant rate.
Almost exactly three years ago, a major political science journal asked me to review a manuscript. I recommended rejecting the paper on the grounds that a) its scope was extremely limited and b) it largely ignored the huge body of existing political science literature on its topic. The editors followed my suggestion (presumably, the other reviewers did not like the piece either). A couple of days ago, an obscure national journal sent me the very same (though slightly updated and upgraded) manuscript for review. Is this sad or funny? How often did the authors have to downgrade their ambitions in their search for a decent outlet? And how common is this?
Thanks to the all new, all shiny political science peer-review survey, there is at least an answer to the last question: about 30 per cent of our respondents say that they would submit a rejected manuscript to a less prestigious journal. But what really strikes me is the proportion of reviewers who have reviewed (and rejected?) the same manuscript for at least two different journals: 29 per cent. This squares nicely with my personal experience (sometimes I seem to hit the same wall twice or more) and points to the fact that political science is a small world. Too small perhaps.
The survey is still open, so if you are an active political scientist, please, please participate and share your experience with us! We will publish preliminary results of the peer review survey online and will eventually put the data into the public domain.
If you edit, review or author manuscripts for political science journals, the peer-review process is at the centre of your professional life. Unfortunately, for most of us the process is largely a black box. While everyone has heard (or lived through) tales from the trenches, there is very little hard evidence on how the process actually works. This is why a number of colleagues and I started the peer-review survey project that aims at collecting information on the experience of authors, reviewers and editors of political science journals.
If you are an active political scientist, this survey is for you: we need your expertise, and your input is greatly appreciated. Filling in the form is fun and will typically take less than ten minutes of your time. It is also a great way to let off some steam 🙂
Ready? Then proceed to the Political Science Peer-Review Survey.
We also put some (very) preliminary results of the political science peer-review survey online and will release further findings and eventually the data set in the future.
If you think this is worthwhile (and who wouldn’t?), please spread the word. To make this easier, we have created short URLs for the survey (http://tinyurl.com/peer-review-survey) and the results (http://tinyurl.com/peer-review-results) that you can forward to your colleagues. Thanks again for your support. It is greatly appreciated.
However, during the summer break I had a little spare time and decided that it was time to move my stuff to a domain of my own. This is what I did:
- I registered my own domain kai-arzheimer.com and rented 250 MB of webspace from a small but very keen provider for less than 18 Euros per year. Crucially, they give me ssh access to the server and a handy set of tools (bash, textutils, emacs, perl, python and even gcc)
- I carefully read the advice on moving to a new domain that Google gives on its webmaster blog. I registered both the old and the new site with them and installed their tool for generating sitemaps.
- I copied everything to the new site without making any changes.
- I brushed up my knowledge on generating 301 redirects. A “301” means that whatever content was available at a given URL has moved permanently to another URL. Most browsers take you to the new address in the blink of an eye without you ever realising that the URL has changed. And Google will eventually update its index and will interpret any links pointing to the old URL as pointing to the new one. At least this is what they promise.
- I found out that I was extremely lucky because my old institution runs Apache with the mod_rewrite module enabled and gives ordinary users access to this machine via .htaccess files. This is obviously techno-babble, but the upshot is this: I put a file named .htaccess in the top-level directory of my old site (www.politik.uni-mainz.de/kai.arzheimer/) and changed its content to
RewriteRule (.*) http://www.kai-arzheimer.com/$1 [R=301,L]
This instructs the server at Mainz to do a search&replace operation on URLs that refer to my old site and rewrite them into redirects to my new site. This works for PDFs, powerpoints, single pages, pictures, anything. That also means that external links to duly forgotten working papers on other people’s sites which have (just like the working papers) not been updated since 1999 still work. The object does not even have to exist: if you ask for http://www.politik.uni-mainz.de/kai.arzheimer/meaning-of-life.html you will be served a 404-page from my new site. How neat is that?
- Finally, I found a perl-oneliner that would correct the absolute references to the old site that might or might not be buried deep in the HTML code of ancient pages:
perl -pi.bak -e 's!www.politik.uni-mainz.de/kai.arzheimer!www.kai-arzheimer.com!ig' *.htm*

There is probably a cleverer way to do this, but I applied the same changes in the lower-level directories by changing the last few characters to */*.htm*, */*/*.htm* and so on. Rather amazingly, the same trick worked for PDF files: I simply applied the patch to them, too.
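For the curious, here is what the rewrite rule effectively does, mimicked in a few lines of Python (just a toy sketch for illustration, not what Apache actually runs):

```python
import re

# Mimic: RewriteRule (.*) http://www.kai-arzheimer.com/$1 [R=301,L]
# Everything after the old site root is captured and appended to the new domain.
OLD_ROOT = "http://www.politik.uni-mainz.de/kai.arzheimer/"
NEW_ROOT = "http://www.kai-arzheimer.com/"

def redirect_target(url: str) -> str:
    """Return the 301 target for a URL on the old site."""
    match = re.match(re.escape(OLD_ROOT) + "(.*)", url)
    if not match:
        raise ValueError("not an old-site URL")
    return NEW_ROOT + match.group(1)

print(redirect_target(OLD_ROOT + "meaning-of-life.html"))
# http://www.kai-arzheimer.com/meaning-of-life.html
```

Note that the rule does not care whether the captured path actually exists on either server, which is why even requests for long-gone pages end up at the new site.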
On the next day, results from the new site began very slowly to replace the pages from the old site. For a couple of days, pages from the new site would disappear and re-appear, but this doesn’t really matter because thanks to the redirect, people find you either way. Three weeks on, the transition seems to be mostly complete. So far, it has been a surprisingly painless experience.
Weird, sad but apparently true: at Nottingham University, a PhD student who works on Islamic terrorism and an administrator were arrested (though released without charges) because they were in possession of an al-Qaeda manual downloaded from the internet. The twist: the manual was part of an MA dissertation and had been re-submitted as part of a PhD application. Now this is clandestine. THE has the full story, and boing boing has lots of comments on it. All of a sudden, the whole point of urging students to provide proper references and go back to the sources seems rather moot.
Our project on social (citation and collaboration) networks in British and German political science involves networks with hundreds and thousands of nodes (scientists and articles). At the moment, our data come from the Social Science Citation Index (part of the ISI web of knowledge), and we use a bundle of rather eclectic (erratic?) scripts written in Perl to convert the ISI records into something that programs like Pajek or Stata can read. Some canned solutions (Wos2pajek, network workbench, bibexcel) are available for free, but I was not aware of them when I started this project, did not manage to install them properly, or was not happy with the results. Perl is the Swiss Army Chainsaw (TM) for data pre-processing, incredibly powerful (my scripts are typically less than 50 lines, and I am not an efficient programmer), and every time I want to do something in a slightly different way (i.e. I spot a bug), all I have to do is to change a few lines in the scripts.
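To give a flavour of what this pre-processing step looks like, here is a minimal sketch in Python rather than Perl. It assumes the Web of Science plain-text export conventions (an AU field listing authors, indented continuation lines, and ER marking the end of a record) and emits a co-authorship edge list in Pajek’s .net format; the toy record and all function names are made up for illustration, and our actual scripts do considerably more:

```python
from itertools import combinations

# Toy ISI/WoS plain-text export: AU lines list authors, indented lines
# continue the field, ER ends a record. (Hypothetical data.)
RAW = """\
AU Smith, J
   Jones, K
ER

AU Smith, J
   Miller, P
ER
"""

def coauthor_edges(raw):
    """Collect one co-authorship pair per author pair per record."""
    edges, authors = [], []
    for line in raw.splitlines():
        if line.startswith("AU") or line.startswith("   "):
            authors.append(line.removeprefix("AU").strip())
        elif line.startswith("ER"):
            edges.extend(combinations(sorted(authors), 2))
            authors = []
    return edges

def to_pajek(edges):
    """Write the edge list in Pajek's .net format."""
    names = sorted({name for edge in edges for name in edge})
    index = {name: i + 1 for i, name in enumerate(names)}  # Pajek counts from 1
    lines = [f"*Vertices {len(names)}"]
    lines += [f'{i} "{name}"' for name, i in index.items()]
    lines.append("*Edges")
    lines += [f"{index[a]} {index[b]}" for a, b in edges]
    return "\n".join(lines)

print(to_pajek(coauthor_edges(RAW)))
```

This is exactly the kind of job where a scripting language shines: when the coding rules change (say, how name variants are merged), only a line or two needs to be touched.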
After trying a lot of other programs available on the internet, we have chosen Pajek for doing the analyses and producing those intriguing graphs of cliques and inner circles in Political Science. Pajek is closed source but free for non-commercial use and runs on Windows or (via wine) Linux. It is very fast, can (unlike many other programs) easily handle very large networks, produces decent graphs and does many standard analyses. Its user interface may be slightly less than straightforward but I got used to it rather quickly, and it even has basic scripting capacities.
The only thing that is missing is a proper manual, but even this is not really a problem since Pajek’s creators have written a very accessible introduction to social network analysis that doubles up as documentation for the program (order from amazon.co.uk, amazon.com, or amazon.de). However, Pajek has been under constant development since the 1990s (!) and has acquired a lot of new features since the book was published. Some of them are documented in an appendix, others are simply listed in the very short document that is the official manual for Pajek. You will want to go through the many presentations which are available via the Pajek wiki.
Of course, there is much more software available, often at no cost. If you program in Java or Python (I don’t), there are several libraries available that look very promising. Amongst the stand-alone programs, visone stands out because it can easily produce very attractive-looking graphs of small networks. Even more software has been developed in the context of other sciences that have an interest in networks (chemistry, biology, engineering etc.)
Here is a rather messy collection of links to sna software. Generally, though, you will want something more systematic and informative. Ines Mergel has recently launched a bid to create a comprehensive software list on wikipedia. The resulting page on social network analysis software is obviously a work in progress but provides very valuable guidance.
Udo Voigt, the leader of the NPD, has been charged with inciting racial hatred. During the 2006 World Cup, the party published a pamphlet that questioned the right of non-white players in the squad to represent Germany in the tournament. The NPD is the oldest of the three relevant extreme right parties in Germany. Founded in the early 1960s, the party was successful in a number of Land elections but could not overcome the five per cent threshold in the general election of 1969. For more than three decades, the party that once had tens of thousands of members and even set up its own student organisation barely survived as a political sect and played no role in electoral politics. If you can read German, here is a chapter on extremist parties and their voters, with lots of fascinating details on Germany, that I wrote for a handbook on electoral behaviour.
Voigt was elected as party leader in 1996 and quickly modernised the party. His aggressive and dynamic stance persuaded the Federal government to apply for a ban of the party in the Federal Constitutional Court in 2003. The case was thrown out on procedural grounds, and for the first time in 40 years, the party managed to win seats in two state elections in 2004 and 2006.
However, the charges against Voigt are just the latest political blow for the party and its current leadership. After 2006, there have been no more electoral successes. Moreover, the party is involved in dubious financial transactions. The party treasurer was taken into custody in February, and the party must repay huge amounts of money it had claimed under Germany’s state-sponsored party-funding scheme. Voigt stands for re-election as party leader in May, and there might well be a leadership contest.
The basic assumptions of the theory of economic voting are very simple:
- voters care about unemployment, inflation, and growth
- voters blame the government for adverse economic conditions
- voters use the ballot to punish the government.
Unfortunately, the size of this effect is not constant over time and across countries, which is slightly embarrassing. In their recent book, van der Brug et al. do not claim to have solved this puzzle, but they maintain that they have taken the discussion one step further. According to them, previous research has looked at the wrong variable, i.e. (dichotomous or multinomial) vote intentions. This is hardly surprising: for the last decade or so, these authors and their associates have campaigned for an alternative measure, namely the subjective probability of voting for each single party. However, their measure (which has been implemented in the European Election Studies) is not uncontroversial. First, analysts must account for the clustering of these ratings (while we might look at 4,000 or 6,000 ratings, we still have only 1,000 truly independent cases, i.e. persons). Second, if a respondent does not rate a party, is that a missing value or a zero probability? Third, comparisons across political systems (especially between two-party and multiparty systems) are at least as dodgy as comparisons of the traditional variable. And finally, while counting votes or vote intentions obviously discards valuable information about the individual calculus behind the decision, subjective probabilities are closer to party sympathies than to the real thing. Nonetheless, an interesting read.
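The clustering point is easy to illustrate: once respondent-by-party probability ratings are stacked into long format, the number of rows multiplies, but the number of independent units does not. A toy sketch with entirely hypothetical data:

```python
# Stacking respondent-by-party probability ratings into "long" format:
# rows multiply, independent respondents do not. (Hypothetical data.)
ratings = {  # respondent id -> {party: probability of ever voting for it}
    1: {"A": 0.9, "B": 0.3, "C": 0.1},
    2: {"A": 0.2, "B": 0.8, "C": 0.5},
}

stacked = [
    (rid, party, p)
    for rid, parties in ratings.items()
    for party, p in parties.items()
]

print(len(stacked))                          # 6 rows in the stacked data ...
print(len({rid for rid, _, _ in stacked}))   # ... but only 2 independent respondents
```

With 1,000 respondents rating four or six parties, this is exactly how 4,000 or 6,000 ratings arise from 1,000 persons, which is why the standard errors need some form of cluster correction.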