## Reprise: The co-citation network in European Radical Right studies

In the last post, I tried to reconstruct the co-citation network in European Radical Right studies and ended up with this neat graph.

The titles are arranged in groups, with the “Extreme Right” camp on the right, the “Radical Right” group in the lower-left corner, and a small number of publications that are committed to neither in the upper-left corner. The width of the lines represents the number of co-citations connecting the titles.

What does the pattern look like? The articles by Knigge (1998) and Bale et al. (2010) are both in the “nothing in particular” group, but are never cited together, at least not in the data that I extracted. One potential reason is that they are twelve years apart and address quite different research questions.


Apart from this gap, the network is complete, i.e. every title in the top 20 is co-cited with every other. This is already rather compelling evidence against the idea of a split into two incompatible strands. Intriguingly, there are even some strong ties that bridge alleged intellectual cleavages, e.g. between Kitschelt’s monograph and the article by Golder, or between Lubbers, Gijsberts and Scheepers on the one hand and Norris and Kitschelt on the other.

The picture suggests that co-citations are chiefly driven by the general prominence of the titles involved, while the use of identical terminology seems to play only a minor role. However, network graphs can be notoriously misleading.

## Modelling the number of co-citations in European Radical Right studies

Modelling the number of co-citations provides a more formal test of this intuition. The counts of co-citations amongst the top 20 titles range from 0 to 5,476, with a mean of 695 and a variance of 651,143. Because the variance is so much bigger than the mean, a regression model that assumes a negative binomial distribution, which can accommodate such overdispersion, is more adequate than one built around a Poisson distribution. “General prominence” is operationalised as the sum of *external* co-citations of the two titles involved. Here are the results.

| Variable | Coefficient | S.E. | p |
|---|---|---|---|
| external co-citations | 0.0004 | 0.00002 | <0.05 |
| same terminology | 0.424 | 0.120 | <0.05 |
| Constant | 2.852 | 0.219 | <0.05 |
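As a quick sanity check on the overdispersion argument above, the mean–variance gap can be computed directly from the figures reported in the text (a sketch only; the raw dyad counts are not reproduced here):

```python
# Summary figures for co-citation counts among the top 20 titles,
# as reported in the text.
mean_count = 695
variance = 651_143

# Under a Poisson model the variance would equal the mean; the
# dispersion ratio shows how badly that assumption fails here.
dispersion_ratio = variance / mean_count
print(f"dispersion ratio: {dispersion_ratio:.0f}")  # ≈ 937, far above 1

# A negative binomial model instead allows variance = mu + alpha * mu**2,
# so it can absorb this extra-Poisson variation.
```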

The findings show that, controlling for general prominence (operationalised as the sum of co-citations outside the top 20), using the same terminology (coded as “extreme” / “radical” / “unspecific or other”) *does* have a positive effect on the expected number of co-citations. But what do the numbers mean?

The model is additive in the logs. To recover the counts (and transform the model into its multiplicative form), one needs to exponentiate the coefficients. Accordingly, the effect of using the same terminology translates into a factor of exp(0.424) = 1.53.
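The back-transformation can be sketched in a few lines of Python, using the (rounded) coefficients from the table above, so the results are approximate:

```python
import math

# Coefficients from the negative binomial model (see table above)
b_const = 2.852      # intercept
b_external = 0.0004  # sum of external co-citations
b_same = 0.424       # same terminology (0/1 dummy)

# Exponentiating a coefficient gives its multiplicative effect
# (incidence rate ratio) on the expected count.
irr_same = math.exp(b_same)
print(f"same-terminology factor: {irr_same:.2f}")  # ≈ 1.53

# Example: expected co-citation counts for a dyad with 8,000
# external co-citations, with and without shared terminology.
mu_diff = math.exp(b_const + b_external * 8_000)
mu_same = math.exp(b_const + b_external * 8_000 + b_same)
print(f"{mu_diff:.0f} vs {mu_same:.0f} expected co-citations")
```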

## What do these numbers mean?

But how relevant is this in practical terms? Because the model is non-linear, it’s best to plot the expected counts for equal/unequal terminology, together with their confidence intervals, against a plausible range of external co-citations.
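The confidence bands cannot be reproduced without the model’s covariance matrix, which is not reported here, but the point predictions over the relevant range can be sketched (again assuming the rounded coefficients from the table):

```python
import math

# Rounded coefficients from the negative binomial model (table above)
b_const, b_external, b_same = 2.852, 0.0004, 0.424

def expected_cocitations(external: int, same_terminology: bool) -> float:
    """Predicted co-citation count under the negative binomial model."""
    eta = b_const + b_external * external + (b_same if same_terminology else 0.0)
    return math.exp(eta)

# Point predictions across a plausible range of external co-citations
for external in range(6_000, 12_001, 2_000):
    diff = expected_cocitations(external, False)
    same = expected_cocitations(external, True)
    print(f"{external:>6}: {diff:7.0f} (different) vs {same:7.0f} (same)")
```

Because the model is multiplicative, the gap between the two curves widens in absolute terms as external co-citations grow, even though the ratio between them stays fixed at exp(0.424) ≈ 1.53.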

As it turns out, terminology has only a small effect on the expected number of co-citations for works that have between 6,000 and 8,000 external co-citations. From this point on, the expected number of co-citations grows somewhat more quickly for dyads that share the same terminology. However, over the whole range of 6,000 to 12,000 external co-citations, the confidence intervals overlap and so this difference is not statistically significant.

Unless two titles have a very high number of external co-citations, the probability of them being both cited in a third work does not depend on the terminology they use. Even for the (few) heavily cited works, the evidence is insufficient to reject the null hypothesis that terminology makes no difference.

While the analysis is confined to the relationships between just 20 titles, these titles matter most, because they form the core of ERRS. If we cannot find separation here, that does not necessarily mean that it does not happen elsewhere, but if it happens elsewhere, it is much less relevant. So: no two schools. Everyone is citing the same prominent stuff, whether the respective authors prefer “Radical” or “Extreme”. Communication happens, which seems good to me.

Are you surprised?

Go to the first part of this mini series, or read the full article on concepts in European Radical Right research here:

- Arzheimer, Kai. “Conceptual Confusion is not Always a Bad Thing: The Curious Case of European Radical Right Studies.” Demokratie und Entscheidung. Eds. Marker, Karl, Michael Roseneck, Annette Schmitt, and Jürgen Sirsch. Wiesbaden: Springer, 2018. 23-40. doi:10.1007/978-3-658-24529-0_3

```bibtex
@InCollection{arzheimer-2018,
  author    = {Arzheimer, Kai},
  title     = {Conceptual Confusion is not Always a Bad Thing: The Curious Case of European Radical Right Studies},
  booktitle = {Demokratie und Entscheidung},
  editor    = {Marker, Karl and Roseneck, Michael and Schmitt, Annette and Sirsch, Jürgen},
  publisher = {Springer},
  address   = {Wiesbaden},
  pages     = {23-40},
  year      = 2018,
  doi       = {10.1007/978-3-658-24529-0_3},
  url       = {https://www.kai-arzheimer.com/conceptual-confusion-european-radical-right-studies.pdf},
  html      = {https://www.kai-arzheimer.com/conceptual-confusion-european-radical-right-studies}
}
```