Attention conservation notice: Notice of a fairly advanced course in a discipline you don't study at a university you don't attend. Combines a trendy subject matter near the peak of its hype cycle with a stodgy, even resentful, focus on old ideas.
This fall will be the beginning of my 21st year at CMU. I should know better than to volunteer to do a new prep --- but I don't.
All of this, especially the topical outline, is subject to revision as we get closer to the semester actually starting.
Manual trackback: Brad DeLong.
ChatGPT's interpretation of the course description, as a painting in the style of the Futurist Gino Severini. The fact that it managed to spell "statistical inference", but not "debate", is a nice touch.
Posted at April 22, 2025 12:55 | permanent link
Collecting posts related to this course (36-467/36-667).
Posted at April 21, 2025 21:17 | permanent link
Attention conservation notice: Almost 3900 words of self-promotion for an academic paper about large language models (of all ephemeral things). Contains self-indulgent bits trimmed from the published article, and half-baked thoughts too recent to make it in there.
For some years now, I have been saying to anyone who'll listen that the best way to think about large language models and their kin is due to the great Alison Gopnik: to regard them as cultural technologies. All technologies, of course, are cultural in the sense that they are passed on from person to person, generation to generation. In the process of leaping from mind to mind, cultural content always passes through some external, non-mental form: spoken words, written diagrams, hand-crafted models, demonstrations, interpretive dances, or just examples of some practice carried out by the exemplifier's body [1]. A specifically cultural technology is one that modifies that very process of transmission, as with writing or printing or sound recording. That is what LLMs do; they are not so much minds as a new form of information retrieval.
I am very proud to have played a part in giving Gopnikism [2] proper academic expression:
What follows is my attempt to gloss and amplify some parts of our paper. My co-authors are not to be blamed for what I say here: unlike me, they're constructive scholars.
To put things more bluntly than we did in the paper: the usual popular and even academic debate over these models is, frankly, conducted on the level of myths (and not even of mythology). We have centuries of myth-making about creating intelligences and their consequences [3], Tampering with Forces Man Was Not Meant to Know, etc. Those myths have hybridized with millennia of myth-making about millenarian hopes and apocalyptic fears [4]. This is all an active impediment to understanding.
LLMs are parametric probability models of symbol sequences, fit to large corpora of text by maximum likelihood. By design, their fitting process strives to reproduce the distribution of text in the training corpus [5]. (The log likelihood is a proper scoring rule.) Multi-modal large models are LLMs yoked to models of (say) image distributions; they try to reproduce the joint distribution of texts and images. Prompting is conditioning: the output after a prompt is a sample from the conditional distribution of text coming after the prompt (or the conditional distribution of images that accompany those words, etc.). All these distributions are estimated with a lot of smoothing: parts of the model like "attention" (a.k.a. kernel smoothing) tell the probability model when to treat different-looking contexts as similar (and how closely similar), and so when to give them similar conditional distributions. This smoothing, what Andy Gelman would call "partial pooling", is what lets the models respond with sensible-looking output, rather than NA, to prompts they've never seen in the training corpus. It also, implicitly, tells the model what to ignore, what distinctions make no difference. This is part (though only part) of why these models are lossy.
(The previous paragraph, like our paper, makes no mention of neural networks, of vector embeddings as representations of discrete symbols, of LLMs being high-order Markov chains, etc. Those are important facts about current models. [That Markov models, of all things, can do all this still blows my mind.] But I am not convinced that these are permanent features of the technology, as opposed to the first things tried that worked. What I really think we should know more about is how well other techniques for learning distributions of symbol sequences would work if given equivalent resources. I really do want to see someone try Large Lempel-Ziv. I also have a variety of ideas for combining Infini-gram with old distribution-learning procedures which I think would at the very least make good student projects. [I'm being a bit cagey because I'd rather not be scooped; get in touch if you're interested in collaborating.] I am quite prepared for all of these Mad Schemes to work less well than conventional LLMs, but then I think the nature and extent of the failures would be instructive. In any case, the argument we're making about these artifacts does not depend on these details of their innards.)
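To make the statistical picture concrete, here is a minimal sketch in Python of the sort of object the last two paragraphs describe, at toy scale: a character-level Markov model fit by maximum likelihood (i.e., counting), with crude add-one smoothing standing in for the partial pooling that attention provides, and with prompting implemented as conditioning. Nothing here is how production LLMs are actually built, and "corpus.txt" is a placeholder file name.

```python
# Toy character-level model: fit by maximum likelihood (counting),
# smoothed, and prompted by conditioning. Pedagogical sketch only.
import random
from collections import Counter, defaultdict

def fit(corpus, order=4, alpha=1.0):
    """Estimate P(next character | previous `order` characters)."""
    counts = defaultdict(Counter)
    for i in range(order, len(corpus)):
        counts[corpus[i - order:i]][corpus[i]] += 1
    vocab = sorted(set(corpus))
    def conditional(context):
        c = counts[context[-order:]]
        total = sum(c.values()) + alpha * len(vocab)
        # Smoothing: never-seen contexts still get a (near-uniform)
        # distribution, rather than an NA.
        return {ch: (c[ch] + alpha) / total for ch in vocab}
    return conditional

def generate(conditional, prompt, n=200):
    """Prompting is conditioning: sample the continuation, one symbol
    at a time, from the estimated conditional distribution."""
    text = prompt
    for _ in range(n):
        dist = conditional(text)
        chars, probs = zip(*dist.items())
        text += random.choices(chars, weights=probs)[0]
    return text

corpus = open("corpus.txt").read()  # any large plain-text file
model = fit(corpus)
print(generate(model, prompt="The "))
```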
What follows from all this?
I was (I suspect) among the last cohorts of students who were routinely taught how to use paper library card catalogs. Those, too, were technologies for bringing inquirers into contact with the works of other minds. You can worry, if you like, that LLMs and their kin are going to grow into uncontrollable artificial general intelligences, but it makes about as much sense as if I'd had nightmares about card catalogs going feral.
ChatGPT's output for the phrase "feral library card catalogs". (I suspect the image traces back to photos of Doe Library at Berkeley, but that may just be nostalgia on my part.) Click to embiggen.
Back when all this was beginning, in the spring of 1956, Allen Newell and Herbert Simon thought that "complex information processing" was a much better name than "artificial intelligence" [9]:
The term "complex information processing" has been chosen to refer to those sorts of behaviors --- learning, problem solving, and pattern recognition --- which seem to be incapable of precise description in any simple terms, or perhaps, in any terms at all. [p. 1]
Even though our language must still remain vague, we can at least be a little more systematic about what constitutes a complex information process. [p. 6]
- A complex process consists of very large numbers of subprocesses, which are extremely diverse in their nature and operation. No one of them is central or, usually, even necessary.
- The elementary component processes need not be complex; they may be simple and easily understood. The complexity arises wholly from the pattern in which these processes operate.
- The component processes are applied in a highly conditional fashion. In fact, large numbers of the processes have the function of determining the conditions under which other processes will operate.
If "complex information processing" had become the fixed and common name, rather than "artificial intelligence", there would, I think, be many fewer myths to contend with. To use a technical but vital piece of meta-theoretical jargon, the former is "basically pleasant bureaucrat", the latter is "sexy murder poet" (at least in comparison).
Newell and Simon do not mention it, explicitly, in their 1956 paper, but the component processes in a complex information-processing system can be hard-wired machines, or flexible programmed machines, or human beings, or any combination of these. Remember that Simon was, after all, a trained political scientist whose first book was Administrative Behavior and who had, in fact, worked as a government bureaucrat, helping to implement the Marshall Plan. Even more, the first time Newell and Simon ran their Logic Theorist program (described in that paper), they ran it on people, because the electronic computer was back-ordered. I will let Simon tell the story:
Al [Newell] and I wrote out the rules for the components of the program (subroutines) in English on index cards, and also made up cards for the contents of the memories (the axioms of logic). At the GSIA [= Graduate School of Industrial Administration] building on a dark winter evening in January 1956, we assembled my wife and three children together with some graduate students. To each member of the group, we gave one of the cards, so that each person became, in effect, a component of the LT computer program --- a subroutine that performed some special function, or a component of its memory. It was the task of each participant to execute his or her subroutine, or to provide the contents of his or her memory, whenever called by the routine at the next level above that was then in control. So we were able to simulate the behavior of LT with a computer constructed of human components. Here was nature imitating art imitating nature. The actors were no more responsible for what they were doing than the slave boy in Plato's Meno, but they were successful in proving the theorems given them. Our children were then nine, eleven, and thirteen. The occasion remains vivid in their memories.
[Models of My Life, ch. 13, pp. 206--207 of the 1996 MIT Press edition.]
The primal scene of AI, if we must call it that, is thus one of looking back and forth between a social organization and an information-processing system until one can no longer tell which is which.
Lots of social technologies can be seen as means of effectively making people smarter. Participants in a functioning social institution will act better and more rationally because of those institutions. The information those participants get, the options they must choose among, the incentives they face, all of these are structured --- limited, sharpened and clarified --- by the institutions, which helps people think. Continued participation in the institution means facing similar situations over and over, which helps people learn. Markets are like this; bureaucracies are like this; democracy is like this; scientific disciplines are like this [10]. And cultural traditions are like this.
Let me quote from an old book that had a lot of influence on me:
Intellect is the capitalized and communal form of live intelligence; it is intelligence stored up and made into habits of discipline, signs and symbols of meaning, chains of reasoning and spurs to emotion --- a shorthand and a wireless by which the mind can skip connectives, recognize ability, and communicate truth. Intellect is at once a body of common knowledge and the channels through which the right particle of it can be brought to bear quickly, without the effort of redemonstration, on the matter in hand.
Intellect is community property and can be handed down. We all know what we mean by an intellectual tradition, localized here or there; but we do not speak of a "tradition of intelligence," for intelligence sprouts where it will.... And though Intellect neither implies nor precludes intelligence, two of its uses are --- to make up for the lack of intelligence and to amplify the force of it by giving it quick recognition and apt embodiment.
For intelligence wherever found is an individual and private possession; it dies with the owner unless he embodies it in more or less lasting form. Intellect is on the contrary a product of social effort and an acquirement.... Intellect is an institution; it stands up as it were by itself, apart from the possessors of intelligence, even though they alone could rebuild it if it should be destroyed....
The distinction becomes unmistakable if one thinks of the alphabet --- a product of successive acts of intelligence which, when completed, turned into one of the indispensable furnishings of the House of Intellect. To learn the alphabet calls for no great intelligence: millions learn it who could never have invented it; just as millions of intelligent people have lived and died without learning it --- for example, Charlemagne.
The alphabet is a fundamental form to bear in mind while discussing ... the Intellect, because intellectual work here defined presupposes the concentration and continuity, the self-awareness and articulate precision, which can only be achieved through some firm record of fluent thought; that is, Intellect presupposes Literacy.
But it soon needs more. Being by definition self-aware, Intellect creates linguistic and other conventions, it multiplies places and means of communication....
The need for rules is a point of difficulty for those who, wrongly equating Intellect with intelligence, balk at the mere mention of forms and constraints --- fetters, as they think, on the "free mind" for whose sake they are quick to feel indignant, while they associate everything dull and retrograde with the word "convention". Here again the alphabet is suggestive: it is a device of limitless and therefore "free" application. You can combine its elements in millions of ways to refer to an infinity of things in hundreds of tongues, including the mathematical. But its order and its shapes are rigid. You cannot look up the simplest word in any dictionary, you cannot work with books or in a laboratory, you cannot find your friend's telephone number, unless you know the letters in their arbitrary forms and conventional order.

--- Jacques Barzun, The House of Intellect (New York: Harper, 1959), pp. 3--6
A huge amount of cultural and especially intellectual tradition consists of formulas, templates, conventions, and indeed tropes and stereotypes. To some extent this is to reduce the cognitive burden on creators: this has been extensively studied for oral culture, such as oral epics. But formulas also reduce the cognitive burden on people receiving communications. Scientific papers, for instance, within any one field have an incredibly stereotyped organization, as well as using very formulaic language. One could imagine a world where every paper was supposed to be a daring exploration of style as well as content, but in reality readers want to be able to check what the reagents were, or figure out which optimization algorithm was used, and the formulaic structure makes that much easier. This is boiler-plate and ritual, yes, but it's not just boiler-plate and ritual, or at least not pointless ritual [11].
Or, rather, the formulas make things easier to create and to comprehend once you have learned the formulas. The ordinary way of doing so is to immerse yourself in artifacts of the tradition until the formulas begin to seep in, and to try your hand at making such artifacts yourself, ideally under the supervision of someone who already has grasped the tradition. (The point of those efforts was not really to have the artifacts, but to internalize the forms.) Many of the formulas are not articulated consciously, even by those who are deeply immersed in the tradition.
Large models have learned nearly all of the formulas, templates, tropes and stereotypes. (They're probability models of text sequences, after all.) To use Barzun's distinction, they will not put creative intelligence on tap, but rather stored and accumulated intellect. If they succeed in making people smarter, it will be by giving them access to the external forms of a myriad traditions.
None of this is to say that large models are an unambiguously good thing:
These are just a few of the very real issues which surround these technologies. (There are plenty more.) Spinning myths about superintelligence will not help us deal with them; seeing them for what they are will.
[1]: I first learned to appreciate the importance, in cultural transmission, of the alternation between public, external representations and inner, mental content by reading Dan Sperber's Explaining Culture. ^
[2]: I think I coined the term "Gopnikism" in late 2022 or early 2023, but it's possible I got it from someone else. (The most plausible source, however, is Henry, and he's pretty sure he picked it up from me.) ^
[3]: People have been telling the joke about asking a supercomputer "Is there a god?" and it answering "There is now" since the 1950s. Considering what computers were like back then, I contend it's pretty obvious that some part of (some of) us wants to spin myths around these machines. ^
[4]: These myths have also hybridized with a bizarre conviction that "this function increases monotonically" implies "this function goes to infinity", or even "this function goes to infinity in finite time". When this kind of reasoning grips people who, in other contexts, display a perfectly sound grasp of pre-calculus math, something is up. Again: mythic thinking. ^
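(A blackboard counterexample, for the record:
\[ f(t) = 1 - e^{-t}, \qquad f^{\prime}(t) = e^{-t} > 0 \text{ for all } t, \qquad \text{yet } f(t) < 1 \text{ for all } t . \]
Monotone increase forever, divergence never.)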
[5]: Something we didn't elaborate on in the paper, and I am not going to do justice to here, is that one could deliberately not match the distribution of the training corpus --- one can learn some different distribution. Of course to some extent this is what reinforcement learning from human feedback (and the like) aims at, but I think the possibilities here are huge. Nearly the only artistically interesting AI image generation I've seen is a hobbyist project with a custom model generating pictures of a fantasy world, facilitated by creating a large artificial vocabulary for both style and content, and (by this point) almost exclusively training on the output of previous iterations of the model. In many ways, the model itself, rather than its images, is the artwork. (I am being a bit vague because I am not sure how much attention the projector wants.) Without suggesting that everything needs to be, as it were, postcards from Tlön, the question of when and how to "tilt" the distribution of large training corpora to achieve specific effects seems at once technically interesting and potentially very useful. ^
[6]: For values of "we" which include "the sort of people who pirate huge numbers of novels" and "the sort of people who torrent those pirated novel collections onto corporate machines". ^
[7]: If the RLHF workers are, like increasing numbers of online crowd-sourced workers, themselves using bots, we get a chain of technical mediations, but just a chain and not a loop. ^
[8]: I imprinted strongly enough on cybernetics that part of me wants to argue that an LLM, as an ergodic Markov chain, does have a goal after all: to forget the prompt entirely and sample forever from its invariant distribution. On average, every token it produces brings it, bit by bit, back towards that equilibrium. This is not what people have in mind. ^
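(A minimal numerical sketch of that equilibrium-seeking, with a made-up three-state chain that has nothing to do with any actual language model: start the chain from a point mass --- the "prompt" --- and watch its distribution relax to the invariant one.)

```python
# Made-up 3-state ergodic Markov chain, purely for illustration.
import numpy as np

P = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])   # row-stochastic transition matrix

# The invariant distribution is the left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
v = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = v / v.sum()

dist = np.array([1.0, 0.0, 0.0])   # point mass: the "prompt"
for _ in range(50):
    dist = dist @ P                # one step of the chain
print("after 50 steps:", dist)     # ~ pi, whatever the starting state
print("invariant:     ", pi)
```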
[9]: Strictly speaking, they never mention the phrase "artificial intelligence", but they do discuss the work of John McCarthy et al., so I take the absence of that phrase to be meaningful. Cf. Simon's The Sciences of the Artificial, "The phrase 'artificial intelligence' ... was coined, I think, right on the Charles River, at MIT. Our own research group at Rand and Carnegie Mellon University have preferred phrases like 'complex information processing' and 'simulation of cognitive processes.' ... At any rate, 'artificial intelligence' seems to be here to stay, and it may prove easier to cleanse the phrase than to dispense with it. In time it will become sufficiently idiomatic that it will no longer be the target of cheap rhetoric." (I quote from p. 4 of the third edition [MIT Press, 1996], but the passage dates back to the first edition of 1969.) Simon's hope has, needless to say, not exactly been achieved. ^
[10]: Markets, bureaucracies, democracies and disciplines are also all ways of accomplishing feats beyond the reach of individual human minds. I am not sure that cultural traditions are, too; and if large models are, I have no idea what those feats might be. (Maybe we'll find out.) ^
[11]: Incorporated by reference: Arthur Stinchcombe's When Formality Works, which I will write about at length one of these decades. ^
Update, later the same day: Fixed a few annoying typos. Also, it is indeed coincidence that Brad DeLong posted ChatGPT, Claude, Gemini, & Co.: They Are Not Brains, They Are Kernel-Smoother Functions the same day.
Update, 28 April 2025: Since there were some questions about this, the use of the words "intelligence" and "intellect" in my quotation from Barzun is not due to any translator. While he was a native speaker of French, by the time he wrote The House of Intellect he had been living and teaching in America, and writing in English, for some decades.
Self-Centered; Enigmas of Chance; The Collective Use and Evolution of Concepts; Minds, Brains, and Neurons
Posted at April 16, 2025 12:25 | permanent link
Attention conservation notice: Middle-aged dad contemplating "aut liberi, aut libri" on April 1st.... and why I am not going to write them.
Many social questions about inequality, injustice and unfairness are, in part, questions about evidence, data, and statistics. This class lays out the statistical methods which let us answer questions like "Does this employer discriminate against members of that group?", "Is this standardized test biased against that group?", "Is this decision-making algorithm biased, and what does that even mean?" and "Did this policy which was supposed to reduce this inequality actually help?" We will also look at inequality within groups, and at different ideas about how to explain inequalities between and within groups.

The idea is to write a book which could be used for a course on inequality, especially in the American context where we're obsessed by between-group inequalities, for quantitatively-oriented students and teachers, without either pandering, or pretending that being STEM-os lets us clear everything up easily. (I have heard too many engineers and computer scientists badly re-inventing basic sociology and economics in this context...)
Posted at April 01, 2025 00:30 | permanent link
Attention conservation notice: I have no taste, and no qualification to opine on pure mathematics, sociology, or adaptations of Old English epic poetry. Also, most of my reading this month was done at odd hours and/or while chasing after a toddler, so I'm less reliable and more cranky than usual.
Books to Read While the Algae Grow in Your Fur; Mathematics; Automata and Calculating Machines; Enigmas of Chance; Commit a Social Science; The Dismal Science; The Collective Use and Evolution of Concepts; The Commonwealth of Letters
Posted at March 31, 2025 23:59 | permanent link
Attention conservation notice: I have no taste, and no qualification to opine on ancient history, the anthropology of the transition to literacy, or even on feminist science fiction. Also, most of my reading this month was done at odd hours and/or while chasing after a toddler, so I'm less reliable and more cranky than usual.
Let us recapitulate the educational experience of the Homeric and post-Homeric Greek. He is required as a civilised being to become acquainted with the history, the social organisation, the technical competence and the moral imperatives of his group. This group will in post-Homeric times be his city, but his city in turn is able to function only as a fragment of the total Hellenic world. It shares a consciousness in which he is keenly aware that he, as a Hellene, partakes. This over-all body of experience (we shall avoid the word 'knowledge') is incorporated in a rhythmic narrative or set of narratives which he memorises and which is subject to recall in his memory. Such is poetic tradition, essentially something he accepts uncritically, or else it fails to survive in his living memory. Its acceptance and retention are made psychologically possible by a mechanism of self-surrender to the poetic performance, and of self-identification with the situations and the stories related in the performance. Only when the spell is fully effective can his mnemonic powers be fully mobilised. His receptivity to the tradition has thus, from the standpoint of inner psychology, a degree of automatism which however is counter-balanced by a direct and unfettered capacity for action, in accordance with the paradigms he has absorbed. 'His not to reason why.' [ch. 11, pp. 198--199; any remaining glitches are, for once, due to OCR errors in the ProQuest electronic version and not my typing]

Elsewhere, Havelock repeatedly speaks of "hypnotism". This is all, supposedly, what Plato is reacting against.
Books to Read While the Algae Grow in Your Fur; Writing for Antiquity; Scientifiction and Fantastica; Afghanistan and Central Asia
Posted at January 31, 2025 23:59 | permanent link
Attention conservation notice: An overly-long blog comment, at the unhappy intersection of political theory and hand-wavy social network theory.
Henry Farrell has a recent post on how "We're getting the social media crisis wrong". I think it's pretty much on target --- it'd be surprising if I didn't! --- so I want to encourage my readers to become its readers. (Assuming I still have any readers.) But I also want to improve on it. What follows could have just been a comment on Henry's post, but I'll post it here because I feel like pretending it's 2010.
Let me begin by massively compressing Henry's argument. (Again, you should read him, he's clear and persuasive, but just in case...) The real bad thing about actually-existing social media is not that it circulates falsehoods and lies. Rather it's that it "creates publics with malformed collective understandings". Public opinion doesn't just float around like a glowing cloud (ALL HAIL) rising nimbus-like from the populace. Rather, "we rely on a variety of representative technologies to make the public visible, in more or less imperfect ways". Those technologies shape public opinion. One way in particular they can shape public opinion is by creating and/or maintaining "reflective beliefs", lying somewhere on the spectrum between cant/shibboleths and things-you're-sure-someone-understands-even-if-you-don't. (Dan Sperber, as an heir of the French Enlightenment, drew many of his original examples of such "reflective beliefs" from Catholic dogmas like transubstantiation; I will more neutrally say that I have a reflective belief that botanists can distinguish between alders and poplars, but don't ask me which tree is which.) Now, at this point, Henry references a 2019 article in Logic magazine rejoicing in the title "My Stepdad's Huge Data Set", and specifically the way it distinguishes between those who merely consume Internet porn, and the customers who actually fork over money, who "convert". To quote the article: "Porn companies, when trying to figure out what people want, focus on the customers who convert. It's their tastes that set the tone for professionally produced content and the industry as a whole." To quote Henry: "The result is that particular taboos ... feature heavily in the presentation of Internet porn, not because they are the most popular among consumers, but because they are more likely to convert into paying customers. This, in turn, gives porn consumers, including teenagers, a highly distorted understanding of what other people want and expect from sex, that some of them then act on...."
To continue quoting Henry:
Something like this explains the main consequences of social media for politics. The collective perspectives that emerge from social media --- our understanding of what the public is and wants --- are similarly shaped by algorithms that select on some aspects of the public, while sidelining others. And we tend to orient ourselves towards that understanding, through a mixture of reflective beliefs, conformity with shibboleths, and revised understandings of coalitional politics.
At this point, Henry goes on to contemplate some recent grotesqueries from Elon Musk and Mark Zuckerberg. Stipulating that those are, indeed, grotesque, I do not think they get at the essence of the problem Henry's identified, which I think is rather more structural than a couple of mentally-imploding plutocrats. Let me try to lay this out sequentially.
Conclusion: Social media is a machine for "creat[ing] publics with malformed collective understandings".
The only way I can see to avoid reaching this end-point is if what we prolific weirdos write about tends to be a matter of deep indifference to almost everyone else. I'd contend that in a world of hate-following, outrage-bait and lolcows, that's not very plausible. I have not done justice to Henry's discussion of the coalitional aspects of all this, but suffice it to say that reflective beliefs are often reactive, we're-not-like-them beliefs, and that people are very sensitive to cues as to which socio-political coalition's output they are seeing. (They may not always be accurate in those inferences, but they definitely draw them ***.) Hence I do not think much of this escape route.
--- I have sometimes fantasized about a world where social media are banned, but people are allowed to e-mail snapshots and short letters to their family and friends. (The world would, un-ironically, be better off if more people were showing off pictures of their lunch, as opposed to meme-ing each other into contagious hysterias.) Since, however, the technology of the mailing list with automated sign-on dates back to the 1980s, and the argument above says that it alone would be enough to create distorted publics, I fear this is another case where Actually, "Dr. Internet" Is the Name of the Monsters' Creator.
(Beyond all this, we know that the people who use social media are not representative of the population-at-large. [ObCitationOfKithAndKin: Malik, Bias and Beyond in Digital Trace Data.] For that matter, at least in the early stages of their spread, online social networks spread through pre-existing social communities, inducing further distortions. [ObCitationOfNeglectedOughtToBeClassicPaper: Schoenebeck, "Potential Networks, Contagious Communities, and Understanding Social Network Structure", arxiv:1304.1845.] As I write, you can see this happening with BlueSky. But I think the argument above would apply even if we signed up everyone to one social media site.)
*: Define "impressions" as the product of "number of posts per unit time" and "number of followers". If those both have power-law tails, with exponents \( \alpha \) and \( \beta \) respectively, and are independent, then impressions will have a power-law tail with exponent \( \alpha \wedge \beta \), i.e., slowest decay rate wins. (To see this, set \( Z = XY \) so \( \log{Z} = \log{X} + \log{Y} \), and the pdf of \( \log{Z} \) is, by independence, the convolution of the pdfs of \( \log{X} \) and \( \log{Y} \). But those both have exponential tails, and the slower-decaying exponential gives the tail decay rate for the convolution.) The argument is very similar if both are log-normal, etc., etc. --- This does not account for amplification by repetition, algorithmic recommendations, etc. ^
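(To spell out the convolution step, as a sketch under the convention that "tail exponent \( \alpha \)" means the pdf of \( \log{X} \) decays like \( e^{-\alpha v} \):
\[ p_{\log Z}(u) = \int p_{\log X}(v) \, p_{\log Y}(u - v) \, dv . \]
For large \( u \), the integrand matters either where \( v = O(1) \) and \( u - v \) is large, contributing \( \propto e^{-\beta u} \), or where \( u - v = O(1) \) and \( v \) is large, contributing \( \propto e^{-\alpha u} \); if \( \alpha < \beta \) the latter dominates, so \( p_{\log Z}(u) \approx C e^{-\alpha u} \) and \( \Pr{(Z > z)} \approx C^{\prime} z^{-\alpha} \). When \( \alpha = \beta \), the two contributions merge and one picks up a logarithmic correction, \( p_{\log Z}(u) \propto u e^{-\alpha u} \).)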
**: Someone sufficiently flame-proof could make a genuinely valuable study of this point by scraping various public fora for written erotica and doing automated content analysis. I'd bet good money that the right tail of prolificness is dominated by authors with very niche interests. [Or, at least, interests which were niche at the time they started writing.] But I could not, in good conscience, advise anyone reliant on grants to actually do this study, since it'd be too cancellable from too many directions at once. ^
***: As a small example I recently overheard in a grocery store, "her hair didn't used to be such a Republican blonde" is a perfectly comprehensible statement. ^
Actually, "Dr. Internet" Is the Name of the Monsters' Creator; Kith and Kin
Posted at January 22, 2025 15:00 | permanent link