But how bad is it really? In a recent chapter (author’s version, not paywalled), I argue that communication in Radical Right studies still works. Texts using all 50 shades of “Right” are still cited together, indicating that later scholars realised they were all talking about (more or less) the same thing.
I have written a number of short blogs about the change in terminology over time, the extraction of the co-citation network, and the interpretation of the findings. But sometimes, all this reading is getting a bit much, and so I tried something different: using some newfangled software for noobs, I turned my findings into a short video. Have a look for yourself and tell me what you think.

Reprise: The co-citation network in European Radical Right studies
In the last post, I tried to reconstruct the co-citation network in European Radical Right studies and ended up with this neat graph.
The titles are arranged in groups, with the “Extreme Right” camp on the right, the “Radical Right” group in the lower-left corner, and a small number of publications that are committed to neither in the upper-left corner. The width of the lines represents the number of co-citations connecting the titles.
What does the pattern look like? The articles by Knigge (1998) and Bale et al. (2010) are both in the “nothing in particular” group, but are never cited together, at least not in the data that I extracted. One potential reason is that they are twelve years apart and address quite different research questions.
Apart from this gap, the network is complete, i.e. every title in the top 20 is co-cited with every other. This is already rather compelling evidence against the idea of a split into two incompatible strands. Intriguingly, there are even some strong ties that bridge alleged intellectual cleavages, e.g. between Kitschelt’s monograph and the article by Golder, or between Lubbers, Gijsberts and Scheepers on the one hand and Norris and Kitschelt on the other.
While the use of identical terminology seems to play a minor role, the picture also suggests that co-citations are chiefly driven by the general prominence of the titles involved. However, network graphs can be notoriously misleading.
Modelling the number of co-citations in European Radical Right studies
Modelling the number of co-citations provides a more formal test of this intuition. The counts of co-citations amongst the top 20 titles range from 0 to 5,476, with a mean of 695 and a variance of 651,143. Because the variance is so much bigger than the mean, a regression model that assumes a negative binomial distribution, which can accommodate such overdispersion, is more adequate than one built around a Poisson distribution. “General prominence” is operationalised as the sum of external co-citations of the two titles involved. Here are the results.
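As a sketch of how such a model can be fitted in R: `glm.nb()` from the MASS package estimates a negative binomial regression. The data below are simulated (variable names and values are purely illustrative, not the actual data set), with parameters roughly matching the estimates reported in the table.

```r
library(MASS)  # for glm.nb()

set.seed(42)
n <- choose(20, 2)                    # 190 dyads among the top-20 titles
external  <- runif(n, 4000, 12000)    # hypothetical external co-citation sums
same_term <- rbinom(n, 1, 0.5)        # 1 = both titles use the same label

# simulate overdispersed counts roughly in line with the estimates below
mu <- exp(2.852 + 0.0004 * external + 0.424 * same_term)
y  <- rnbinom(n, size = 1.5, mu = mu)

m <- glm.nb(y ~ external + same_term)
summary(m)
```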
Variable | Coefficient | S.E. | p |
---|---|---|---|
External co-citations | 0.0004 | 0.00002 | <0.05 |
Same terminology | 0.424 | 0.120 | <0.05 |
Constant | 2.852 | 0.219 | <0.05 |
The findings show that, controlling for general prominence (operationalised as the sum of co-citations outside the top 20), using the same terminology (coded as “extreme” / “radical” / “unspecific or other”) does have a positive effect on the expected number of co-citations. But what do the numbers mean?
The model is additive in the logs. To recover the counts (and transform the model into its multiplicative form), one needs to exponentiate the coefficients. Accordingly, the effect of using the same terminology translates into a factor of exp(0.424) = 1.53.
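In R, this back-transformation is a one-liner; with the point estimate and standard error from the table above, one can also bracket the multiplicative effect with an approximate 95 per cent confidence interval:

```r
b  <- 0.424   # coefficient for "same terminology"
se <- 0.120   # its standard error

exp(b)                        # multiplicative effect: ~1.53
exp(b + c(-1.96, 1.96) * se)  # approximate 95% CI: ~1.21 to ~1.93
```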
What do these numbers mean?
But how relevant is this in practical terms? Because the model is non-linear, it’s best to plot the expected counts for equal/unequal terminology, together with their confidence intervals, against a plausible range of external co-citations.

Effect of external co-citations and use of terminology on predicted number of co-citations within top 20
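The point predictions underlying such a plot can be reproduced directly from the coefficients in the table above (the confidence bands would additionally require the full variance-covariance matrix, which is not reported here):

```r
b0 <- 2.852; b1 <- 0.0004; b2 <- 0.424  # estimates from the table above

ext <- seq(6000, 12000, by = 1000)      # plausible range of external co-citations
pred_diff <- exp(b0 + b1 * ext)         # dyads using different terminology
pred_same <- exp(b0 + b1 * ext + b2)    # dyads using the same terminology

round(cbind(ext, pred_diff, pred_same))
```

Note that whatever the level of external co-citations, the “same terminology” curve sits a constant factor of exp(0.424) ≈ 1.53 above the other one; the visual divergence in the plot comes from the exponential scale, not from an interaction.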
As it turns out, terminology has only a small effect on the expected number of co-citations for works that have between 6,000 and 8,000 external co-citations. From this point on, the expected number of co-citations grows somewhat more quickly for dyads that share the same terminology. However, over the whole range of 6,000 to 12,000 external co-citations, the confidence intervals overlap and so this difference is not statistically significant.
Unless two titles have a very high number of external co-citations, the probability of them being both cited in a third work does not depend on the terminology they use. Even for the (few) heavily cited works, the evidence is insufficient to reject the null hypothesis that terminology makes no difference.
While the analysis is confined to the relationships between just 20 titles, these titles matter most, because they form the core of European Radical Right studies. If we cannot find separation here, that does not necessarily mean that it does not happen elsewhere, but if it happens elsewhere, it is much less relevant. So: no two schools. Everyone is citing the same prominent stuff, whether the respective authors prefer “Radical” or “Extreme”. Communication happens, which seems good to me.
Are you surprised?
Go to the first part of this mini series, or read the full article on concepts in European Radical Right research here:
- Arzheimer, Kai. “Conceptual Confusion is not Always a Bad Thing: The Curious Case of European Radical Right Studies.” Demokratie und Entscheidung. Eds. Marker, Karl, Michael Roseneck, Annette Schmitt, and Jürgen Sirsch. Wiesbaden: Springer, 2018. 23-40. doi:10.1007/978-3-658-24529-0_3

Research question
How to turn citations into data
Short of training a hypercomplex and computationally expensive neural network (i.e. a grad student) to look at the actual content of the texts, analysing citation patterns is the most straightforward way to address the research question. Because I needed citation information, I harvested the Social Science Citation Index (SSCI) instead of my own bibliography. The Web of Science interface to the SSCI lets you save records as plain text files, which is all that was required. The key advantage of the SSCI data is that all the sources that each item cites are recorded, too, and can be exported with the title. This includes (most) items that are themselves not covered by the SSCI, opening up the wonderful world of monographs and chapters. To identify the two literatures, I simply ran queries for the phrases “Extreme Right” and “Radical Right” for the 1980-2017 period. I used the “TS” operator to search in titles, abstracts, and keywords. These queries returned 596 and 551 hits, respectively. Easy.
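The queries looked roughly like this (a sketch of Web of Science’s advanced-search syntax, where the “TS” tag searches titles, abstracts, and keywords; the 1980–2017 timespan was set separately in the interface):

```
TS=("extreme right")
TS=("radical right")
```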
But how far separated are the two strands of the literature? To find out, I first looked at the overlap between the two, i.e. items that use both phrases. This applies to 132 pieces, or just under 12 per cent of the whole stash. So communication is not completely absent, yet by this criterion alone, it would seem that there are indeed two relatively distinct literatures. But what I’m really interested in are (co-)citation patterns. How could I beat two long plain text lists of articles and the sources they cite into a usable data set?
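For the record, the arithmetic behind the “just under 12 per cent” figure (the denominator is the raw sum of hits returned by both queries):

```r
hits_extreme <- 596
hits_radical <- 551
both         <- 132

both / (hits_extreme + hits_radical)  # ~0.115, i.e. just under 12 per cent
```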
When you are asking this kind of question, usually “there is an R package for that”™, unless the question is too silly. In my case, the magic bullet for turning information from the SSCI into crunchable data is the wonderful bibliometrix package. Bibliometrix reads saved records from Web of Science/SSCI (in bibtex format) and converts them into data frames. It also provides functions for extracting bibliometric information from the data. Before I move on to co-citations, here’s the gist of the code that reads the data and generates a handy list of the 10 most-cited titles:
```r
library(bibliometrix)

D <- readFiles("savedrecs-all.bib")
M <- convert2df(D, dbsource = "isi", format = "bibtex")

# remove some obviously unrelated items
M <- M[-c(65, 94, 96, 97, 104, 105, 159, 177, 199, 457, 459,
          497, 578, 579, 684, 685, 719, 723), ]
M <- M[-c(659, 707), ]
M <- M[-c(622), ]

results <- biblioAnalysis(M, sep = ";")
S <- summary(object = results, k = 10, pause = FALSE)

# citations
CR <- citations(M, field = "article", sep = ". ")
CR$Cited[1:10]
```
So what are the most cited titles in Extreme/Radical Right studies?
Source | Number of times cited |
---|---|
Mudde (2007) | 160 |
Kitschelt (1995) | 147 |
Betz (1994) | 123 |
Lubbers et al. (2002) | 97 |
Norris (2005) | 90 |
Golder (2003) | 86 |
R.W. Jackman & Volpert (1996) | 77 |
Carter (2005) | 66 |
Arzheimer & Carter (2006) | 65 |
Brug et al. (2005) | 65 |
Getting to the co-citation network: are the Extreme / Radical Right literatures separated from each other?
If the two communities were indeed still talking to each other, the literature should display a low degree of separation between users of both labels. Looking for co-citation patterns is a straightforward operationalisation for (lack of) separation. A co-citation occurs when two publications are both cited by some later source. By definition, co-citations reflect a view of the older literature as it is expressed in a newer publication. So when two titles from the “Extreme Right” and “Radical Right” literatures are co-cited, this is a small piece of evidence that the literature has not split into two isolated streams. The SSCI aims at recording every source that is cited, even if the source itself is not in the SSCI. This makes for a very large number of publications that could be candidates for co-citations (18,255), even if most of them are peripheral to European Radical Right studies, and a whopping 743,032 actual co-citations.
To get a handle on this, I extracted the 20 publications with the biggest total number of co-citations and their interconnections. They represent something like the backbone of the literature. How did I reconstruct this network from textual data? Once more, R and its packages came to the rescue and helped me to produce a reasonably nice plot (after some additional cleaning up).
```r
library(igraph)
library(RColorBrewer)

NetMatrix <- biblioNetwork(M, analysis = "co-citation",
                           network = "references", sep = ". ")

# Careful: we are not interested in loops, nor in separate (multiple)
# connections between nodes. We convert the latter to weights
g <- graph.adjacency(NetMatrix, mode = "max", diag = FALSE)

# Extract the top 20 most co-cited items
f <- induced_subgraph(g, degree(g) > quantile(degree(g),
                                              probs = (1 - 20 / length(V(g)))))

# Now build a vector of relevant terms (requires knowledge of these titles)
# 1: extreme, 2: radical, 3: none/other
# Show all names
V(f)$name
term <- c(3, 2, 1, 1, 2, 1, 1, 2, 1, 2, 3, 2, 2, 2, 3, 1, 1, 1, 1, 1)
mycolours <- brewer.pal(3, "Greys")
V(f)$term <- term
V(f)$color <- mycolours[term]

# a basic plot; the published graph involved additional manual polishing
plot(f, vertex.color = V(f)$color)
```
Co-citation analysis: results
So, what are the results? First, here is the top 20 of co-cited items in the field of Extreme/Radical Right studies:
Source | Co-citations within top 20 | Total co-citations |
---|---|---|
Kitschelt (1995) | 745 | 7700 |
Mudde (2007) | 740 | 8864 |
Lubbers et al. (2002) | 600 | 5212 |
Norris (2005) | 568 | 5077 |
Golder (2003) | 564 | 4687 |
Betz (1994) | 542 | 6151 |
R.W. Jackman & Volpert (1996) | 477 | 4497 |
Brug et al. (2005) | 462 | 3523 |
Arzheimer & Carter (2006) | 460 | 3551 |
Knigge (1998) | 445 | 3487 |
Carter (2005) | 389 | 3291 |
Arzheimer (2009) | 376 | 3301 |
Ignazi (2003) | 344 | 2876 |
Ivarsflaten (2008) | 334 | 3221 |
Ignazi (1992) | 331 | 3230 |
Rydgren (2007) | 300 | 3353 |
Bale (2003) | 297 | 3199 |
Brug et al. (2000) | 276 | 2602 |
Meguid (2005) | 246 | 2600 |
Bale et al. (2010) | 134 | 2449 |
Many of these titles are familiar, because they also appear in the top ten of most cited titles and are classics to boot. And here is another nugget: for each title, a substantial share of about 10 per cent of all co-citations happen within this top twenty. This is exactly the (sub)network of co-citations I’m interested in. So here is the plot I promised:
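A quick sanity check of that roughly-10-per-cent figure, with both columns taken straight from the table above:

```r
# co-citations within the top 20 and total co-citations, per title
within <- c(745, 740, 600, 568, 564, 542, 477, 462, 460, 445,
            389, 376, 344, 334, 331, 300, 297, 276, 246, 134)
total  <- c(7700, 8864, 5212, 5077, 4687, 6151, 4497, 3523, 3551, 3487,
            3291, 3301, 2876, 3221, 3230, 3353, 3199, 2602, 2600, 2449)

summary(within / total)  # the shares cluster around 0.10
```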
But what does it all mean? Read the second part of this mini series, or go to the full article (author’s version, no paywall):
- Arzheimer, Kai. “Conceptual Confusion is not Always a Bad Thing: The Curious Case of European Radical Right Studies.” Demokratie und Entscheidung. Eds. Marker, Karl, Michael Roseneck, Annette Schmitt, and Jürgen Sirsch. Wiesbaden: Springer, 2018. 23-40. doi:10.1007/978-3-658-24529-0_3
Citations and co-publications are one important indicator of scientific communication and collaboration. By studying patterns of citation and co-publication in four major European Political Science journals (BJPS, PS, PVS and ÖZP), we demonstrate that, compared to the conduits of communication in the natural sciences, these networks are rather sparse. British Political Science, however, is clearly less fragmented than its German-speaking counterpart.
Last Saturday, we presented our ongoing work on collaboration and citation networks in Political Science at the 4th UK Network conference, held at the University of Greenwich. For this conference, we created a presentation on Knowledge Networks in European Political Science that summarises most of our findings on political science in Britain and Germany and provides some additional international context. The picture on the right shows a subnetwork of about 320 scientists who mutually cite each others’ work. Watch out for the dense IR/methods cluster and the lack of (mutual) connections between the dispersed political sociology and formal methods camps.
After trying a lot of other programs available on the internet, we have chosen Pajek for doing the analyses and producing those intriguing graphs of cliques and inner circles in Political Science. Pajek is closed source but free for non-commercial use and runs on Windows or (via wine) Linux. It is very fast, can (unlike many other programs) easily handle very large networks, produces decent graphs and does many standard analyses. Its user interface may be slightly less than straightforward but I got used to it rather quickly, and it even has basic scripting capacities.
The only thing that is missing is a proper manual, but even this is not really a problem, since Pajek’s creators have written a very accessible introduction to social network analysis that doubles up as documentation for the program (order from amazon.co.uk, amazon.com, or amazon.de). However, Pajek has been under constant development since the 1990s (!) and has acquired a lot of new features since the book was published. Some of them are documented in an appendix, others are simply listed in the very short document that is the official manual for Pajek. You will also want to go through the many presentations which are available via the Pajek wiki.
Of course, there is much more software available, often at no cost. If you program in Java or Python (I don’t), there are several libraries available that look very promising. Amongst the stand-alone programs, visone stands out because it can easily produce very attractive-looking graphs of small networks. Even more software has been developed in the context of other sciences that take an interest in networks (chemistry, biology, engineering, etc.).
Here is a rather messy collection of links to sna software. Generally, though, you will want something more systematic and informative. Ines Mergel has recently launched a bid for creating a comprehensive software list on wikipedia. The resulting page on social network analysis software is obviously work in progress but provides very valuable guidance.
