Luhmann was a Large Language Model

The other day, I read an interesting article about GPT-4’s lack of proper “understanding” by a grizzled evolutionary theorist aimed at dummies like me. The author said that the “thing” (his word) is (still) clueless, because its answers are not based in experience and models of the world. It’s just “finding correlations across words” that sound meaningful to us.

This gave me an instant flashback to my PhD days of yore, when I was forced to take yet another course in philosophy. Because it seemed social-science adjacent, and because I had already suffered through two or three heavy doses of Parsons as an undergrad, I went for the Luhmann option. Even (or especially) as a native speaker of German, this was not an experience I would recommend. In fact, I would not wish it upon my worst enemy.

I vividly remember one seminar when the philosophy professor (a lovely, if somewhat unworldly man) picked out the phrase “male bias” from the chapter we were trying to make sense of, mistook it for Latin, and tried to connect it to the work of Meister Eckhart, the medieval theologian and, yes, mystic. Incidentally, I got a term paper out of this course that I later managed to get published as my only ever theory paper, in a pretty obscure journal that subsequently tanked.

This, via “correlation”, brings me back to Luhmann, ChatGPT, and the hordes of young men who have aged before their time, earnestly stroking their beards whilst extolling the alleged virtues of his Zettelkasten method and the endless iterations and permutations of important-sounding stuff the master produced using it. Just like ChatGPT.

And so I had to ask the thing what Luhmann’s last words to his creation might have been:


In the complexity of modern societies, a network of communication unfolds that creates unpredictable interdependencies and is always looking for new forms of order.

ChatGPT, channeling Luhmann

Yeah, whatever. Sounds about right, although I like its Inglehart and Goodwin incarnations better. Probably a matter of training data.
