Notebooks

"Attention", "Transformers", in Neural Network "Large Language Models"

04 Dec 2024 10:27

\[ \newcommand{\Prob}[1]{\mathbb{P}\left( #1 \right)} \]

I find this literature irritating and opaque. This is at least somewhat because I do not yet understand it well, and there's too much of it. But clearly I need to wrap my head around it, before I become technically obsolete. My scare quotes in the title of these notes thus derive in part from jealousy and fear. But only in part: the names here seem like proof positive that McDermott's critique of "wishful mnemonics" needs to be re-introduced into the basic curriculum of AI.

(Because this is getting some --- forgive the phrase --- attention, I find myself having to clarify, again, that these notebooks are always me working out what I think, and tracking my reading. I put them online because I have the least sexy kind of exhibitionist streak imaginable, because sometimes people offer to help me learn, and because some people say they find them useful. Please do not confuse random online writing, even forcefully-expressed random online writing, with actual intellectual authority. Here, I think the previous paragraph makes it plain just how little deference anyone should pay to these opinions, and how likely these notes are to contain errors. Those already familiar with this area should learn nothing here.)

(The organization here is bad; I should begin with what's now the last section, "Language Models", where most of the material doesn't care about the details of how the models work, then open up that box to "Transformers", and then open up that box to "Attention".)

"Attention"

Written in late March 2023, in a fit of irritation after things finally clicked. Fixed some typos (and think-os) early June 2023 (hopefully without introducing new ones). Stuff from "Priority" to the end of the section is new (as of June).

Suppose we have a big collection of inputs and outputs to some function, \( (x_1, y_1), (x_2, y_2) \ldots (x_n, y_n) \). We now want to make a guess at the value of the function for a new input point \( x_o \) (\( o \) for "operating"). A very natural idea is nearest neighbors: find the \( x_i \) which is most similar to \( x_o \), and then report \( y_i \). If we wanted to take into account the information from the other data points, we could do \( k \) nearest neighbors, where we average the \( y_i \) from the \( k \) points \( x_i \) closest to \( x_o \). But this pays no attention to how close each \( x_i \) is to the new point \( x_o \) --- we'll often want to give more weight to points which are closer.

So here's a related idea, due (independently) to E. A. Nadaraya (1964) and Geoffrey S. Watson (1964). Introduce a kernel function \( K(u, v) \) which measures how similar \( u \) is to \( v \); this function should be non-negative, and should be maximized when \( u = v \). Now use those as weights in the average: \[ \sum_{i=1}^{n}{y_i \frac{K(x_i, x_o)}{\sum_{j=1}^{n}{K(x_j, x_o)}}} \] Dividing by the sum of the \( K \)'s ensures that this is indeed a weighted average. Thus Nadaraya-Watson smoothing, a.k.a. kernel smoothing. (Nadaraya and Watson both considered the situation where \( K(u, v) = K(u-v) \) is a probability distribution over vectors, which simplifies some things for their purposes but is not essential.)
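(To make this concrete, here is a minimal numpy sketch of Nadaraya-Watson smoothing; the Gaussian kernel, the bandwidth, and the toy data are my own choices for illustration, not anything from the papers discussed here.)

```python
import numpy as np

def nadaraya_watson(x_o, xs, ys, kernel):
    """Predict y at x_o as a weighted average of the ys, weights from kernel(x_i, x_o)."""
    weights = np.array([kernel(x_i, x_o) for x_i in xs])
    weights = weights / weights.sum()          # normalize so the weights sum to 1
    return weights @ np.asarray(ys)

def gaussian_kernel(u, v, h=0.5):
    """K(u, v) = exp(-|u - v|^2 / 2h^2); the bandwidth h is an arbitrary choice here."""
    return np.exp(-np.sum((u - v) ** 2) / (2 * h ** 2))

rng = np.random.default_rng(0)
xs = rng.uniform(0, 2 * np.pi, size=50)        # toy 1-d inputs
ys = np.sin(xs) + 0.1 * rng.normal(size=50)    # noisy observations of sin(x)
print(nadaraya_watson(np.pi / 2, xs, ys, gaussian_kernel))   # should be close to sin(pi/2) = 1
```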

As described, Nadaraya-Watson smoothing doesn't require that the \( x \)s be vectors. But if they are vectors, here's a possible kernel function: \[ K(u, v) = \exp{(u \cdot v)} \] Here's another: \[ K(u, v) = \exp{(\mathbf{w}_1 u \cdot \mathbf{w}_2 v)} \] where \( \mathbf{w}_1 \) and \( \mathbf{w}_2 \) are square matrices. Of course this then becomes \[ K(u,v) = \exp{( u \cdot \mathbf{w}^T_1 \mathbf{w}_2 v )} \] which makes it clearer we're doing something like using the matrix \( \mathbf{w}^T_1 \mathbf{w}_2 \) to define an inner product between vectors. (If \( \mathbf{w}^T_1 \mathbf{w}_2 \) is symmetric, say because \( \mathbf{w}_1 = \mathbf{w}_2 \), and it's also positive-definite, then this is an inner product.) If you're worried about avoiding numerical overflow/underflow issues, you could also use \[ K(u,v) = \exp{\left( \frac{\mathbf{w}_1 u \cdot \mathbf{w}_2 v}{\sqrt{d}} \right)} \] where the vectors are \( d \)-dimensional. (Or you could just absorb that into the definition of the matrix...)

What the neural network people branded "attention" sometime around 2015 was just re-inventing this; \( x_o \) is their "query" vector, the \( x_i \) are their "key" vectors, and the \( y_i \) are their "value" vectors. "Self-attention" means that \( y_i = \mathbf{r} x_i \), for another square matrix \( \mathbf{r} \), i.e., there's a linear-algebraic link between the input and output values. (Possibly \( \mathbf{r} = \mathbf{I} \), the identity matrix.)
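(A small numpy check of this identification, with arbitrary random matrices standing in for the learned parameters and my own variable names: the usual scaled softmax "attention" formula is exactly Nadaraya-Watson smoothing with the kernel \( K(u,v) = \exp{(\mathbf{w}_1 u \cdot \mathbf{w}_2 v / \sqrt{d})} \) and values \( y_i = \mathbf{r} x_i \).)

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 4, 6
w1, w2, r = rng.normal(size=(3, d, d))         # "query", "key", and "value" matrices
xs = rng.normal(size=(n, d))                   # the stored points x_1, ..., x_n
x_o = rng.normal(size=d)                       # the new point (the "query")

# Nadaraya-Watson smoothing with K(u, v) = exp(w1 u . w2 v / sqrt(d)) and y_i = r x_i
K = np.exp(np.array([(w1 @ x_o) @ (w2 @ x_i) for x_i in xs]) / np.sqrt(d))
nw = (K / K.sum()) @ (xs @ r.T)

# The same calculation written in the usual "attention" notation
q = w1 @ x_o                                   # the query vector
keys = xs @ w2.T                               # one key per stored point
values = xs @ r.T                              # one value per stored point
scores = keys @ q / np.sqrt(d)
softmax = np.exp(scores) / np.exp(scores).sum()
attention = softmax @ values

assert np.allclose(nw, attention)
```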

Again: Calling this "attention" is at best a joke. Actual human attention is selective, but this gives some weight to every available vector \( x_i \). What is attended to also depends on the current state of the organism, but here it's just about similarity between the new point \( x_o \) and the available \( x_i \)'s. (I say "human attention" but it seems very likely that this is true of other animals as well.)

(So far as I can tell, the terms "key", "value", and "query" come here from thinking of this as a sort of continuous generalization of an associative array data type. Which you could...)

Priority

I would have been very surprised if I was the first to realize that "attention" is a form of kernel smoothing, and indeed I was not. Since writing the first version of these notes, in March 2023, I have tracked down Tsai et al. 2019 (link below), which seems to be the first published statement of this result. They also demonstrated some ways in which using standard tools and ideas from the kernel literature could improve 2019-vintage attention in existing language models.

Why I Bother with / Am Bothered by This

I like to think I am not a stupid man, and I have been reading about, and coding up, neural networks since the early 1990s. But I read Vaswani et al. (2017) multiple times, carefully, and was quite unable to grasp what "attention" was supposed to be doing. (I could follow the math.) I also read multiple tutorials, for multiple intended audiences, and got nothing from them. Percy Liang's lecture notes (below) were however very clear, and after them I finally made the connection that "attention" has nothing to do with attention (psychology) but is rather a kind of kernel smoothing. Now, I realize that while I get more understanding out of "it's a kind of kernel smoothing" than "it's as though an associative array were continuous", that doesn't mean everyone will, or even that many people will. (My educational trajectory was weird.) But the sheer opacity of this literature is I think a real problem. (Cf. Phuong and Hutter 2022.)

"It's Just Kernel Smoothing" vs. "You Can Do That with Just Kernel Smoothing!?!"

The fact that attention is a kind of kernel smoothing takes nothing away from the incredibly impressive engineering accomplishment of making the blessed thing work. A large, able and confident group of people pushed kernel-based methods for years in machine learning, and nobody achieved anything like the feats which modern large language models have demonstrated. The reason I put effort into understanding these machines and papers is precisely because the results are impressive! To see that a key step is, after all, something we'd been doing for decades is humbling. (What else are we missing about tools we think we understand?) --- I realize that my irritation with the obscurities is a lot clearer in these notes than my admiration for the achievements; chalk that up to my flaws as a writer and as a human being.

Identification Failures

If we treat the vectors \( x \) as fixed, but regard the matrices \( \mathbf{w}_1, \mathbf{w}_2 \) as learnable parameters, the matrices are not well-identified. This is because, for any orthogonal matrix \( \mathbf{o} \), \( \mathbf{w}_{\cdot} \) and \( \mathbf{o}\mathbf{w}_{\cdot} \) will lead to exactly the same predictions: \[ \begin{eqnarray} \mathbf{w}_1 u \cdot \mathbf{w}_2 v & = & u^T \mathbf{w}^T_1 \mathbf{w}_2 v\\ & = & u^T \mathbf{w}^T_1 (\mathbf{o}^T \mathbf{o}) \mathbf{w}_2 v\\ & = & (\mathbf{o}\mathbf{w}_1) u \cdot (\mathbf{o}\mathbf{w}_2) v \end{eqnarray} \]

One way to not have to worry about this would be to say "I don't care about the matrices \( \mathbf{w}_1 \) and \( \mathbf{w}_2 \), I only care about the product \( \mathbf{w}_1^T\mathbf{w}_2 \), so let me call that \( \mathbf{w} \) and make my kernel \( K(u,v) = \exp{( u \cdot \mathbf{w} v )} \). So there!" There is a lot to be said for this, but...

In practice, the vectors \( x \) are continuous representations of discrete symbols, and the particular vector used to represent a given symbol is also learned, together with \( \mathbf{w}_{\cdot} \). So we can make drastic changes to the vectors, provided we make compensating changes to the kernel: \( x \mapsto \mathbf{r} x \), \( \mathbf{w} \mapsto \mathbf{r}^{-T} \mathbf{w} \mathbf{r}^{-1} \) changes nothing, for any invertible matrix \( \mathbf{r} \): \[ \begin{eqnarray} u \cdot \mathbf{w} v & = & u^T \mathbf{w} v\\ & = & u^T \mathbf{r}^{T} \mathbf{r}^{-T} \mathbf{w} \mathbf{r}^{-1} \mathbf{r} v\\ & = & (\mathbf{r} u)^T (\mathbf{r}^{-T} \mathbf{w} \mathbf{r}^{-1}) (\mathbf{r} v)\\ & = & (\mathbf{r} u) \cdot (\mathbf{r}^{-T} \mathbf{w} \mathbf{r}^{-1}) (\mathbf{r} v) \end{eqnarray} \] (If we keep the separate matrices \( \mathbf{w}_1 \) and \( \mathbf{w}_2 \), it's even simpler, right-multiply each by \( \mathbf{r}^{-1} \) and the changes to the vectors and the matrices cancel out neatly.)
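(Both invariances are easy to check numerically; here is a small sketch, with arbitrary random matrices of my own choosing.)

```python
import numpy as np

rng = np.random.default_rng(2)
d = 5
u, v = rng.normal(size=(2, d))
w1, w2 = rng.normal(size=(2, d, d))

# Invariance 1: left-multiplying both w's by the same orthogonal matrix o changes nothing
o, _ = np.linalg.qr(rng.normal(size=(d, d)))   # a random orthogonal matrix
assert np.isclose((w1 @ u) @ (w2 @ v), (o @ w1 @ u) @ (o @ w2 @ v))

# Invariance 2: reparameterize the embeddings, x -> r x, and compensate, w -> r^{-T} w r^{-1}
w = w1.T @ w2
r = rng.normal(size=(d, d))                    # almost surely invertible
r_inv = np.linalg.inv(r)
w_new = r_inv.T @ w @ r_inv
assert np.isclose(u @ w @ v, (r @ u) @ w_new @ (r @ v))
```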

This is all bad news for interpreting the parameters, but good news for finding high-performance parameters, for reasons discussed under Symmetries of Neural Networks.

(See also: the rotation problem for factor models; Carrington et al. 2019 on the lack of identification of word embeddings generally.)

"Multi-Headed Attention"

Do Nadaraya-Watson smoothing with a bunch of different kernels, all of the same form, \( K_l(u,v) = \exp{( u \cdot \mathbf{w}^{(l)} v) } \); average the results. More explicitly, with \( m \) different kernels, \[ \frac{1}{m}\sum_{l=1}^{m}{\sum_{i=1}^{n}{y_i \frac{K_l(x_i, x_o)}{\sum_{j=1}^{n}{K_l(x_j, x_o)}}}} \] Each kernel smoother is an "attention head".

(It's more common to write about just adding the outputs of the different kernel smoothers, rather than averaging them. But (i) you can turn either form into the other by inserting a constant scale factor in all the down-stream weights that use the output, and (ii) I prefer to keep it clear that this is still just a way of doing a weighted average.)

Elaboration: Each head outputs a \( q \)-dimensional vector; we stack those to get an \( mq \)-dimensional vector; we multiply that from the left by a \( d \times mq \)-dimensional matrix to get a \( d \)-dimensional vector. This isn't just averaging (though that's a special case); it lets us do other sorts of linear combinations of the kernel smoothers. But it only lets us do linear combinations of the kernel smoothers.
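(Continuing the sketch from above, with random matrices again standing in for learned parameters: \( m \) kernel smoothers, each producing a \( q \)-dimensional output, stacked and recombined by a \( d \times mq \) output matrix.)

```python
import numpy as np

def head(x_o, xs, w1, w2, r):
    """One "attention head": kernel smoothing with K(u, v) = exp(w1 u . w2 v / sqrt(d))."""
    d = x_o.shape[0]
    scores = np.array([(w1 @ x_o) @ (w2 @ x_i) for x_i in xs]) / np.sqrt(d)
    weights = np.exp(scores) / np.exp(scores).sum()
    return weights @ (xs @ r.T)                # a q-dimensional smoothed output

rng = np.random.default_rng(3)
d, q, m, n = 8, 4, 3, 10
xs = rng.normal(size=(n, d))
x_o = rng.normal(size=d)
heads = [(rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=(q, d)))
         for _ in range(m)]                    # m triples of (w1, w2, r)
w_out = rng.normal(size=(d, m * q))            # the d x mq output matrix

stacked = np.concatenate([head(x_o, xs, w1, w2, r) for (w1, w2, r) in heads])
output = w_out @ stacked                       # back to a d-dimensional vector
```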

(I believe the word "head" here arises from some vestigial memory of Turing machines, and the read/write head going along the tape, but I should check that.)

"Transformers"

I'll fill this in later, when I have more energy and/or less bile. (This is currently even more of a sketchy outline than the rest of these notes, and I'm quite prepared to find stupid mistakes in it later.)

    Take I:
  1. Look back over the last umpteen thousand words for context
    (For a fixed value of "umpteen")
  2. What contexts in the training data looked similar to the present context?
  3. What was the distribution of the next word in those contexts?
    (We'll want to do some smoothing here, for the usual statistical reasons: we have lots of possibilities for the next word, we have very few samples for each context [possibly zero, outside the training data], but we can hope that similar contexts will result in similar next-word distributions. So smoothing / "partial pooling" will add bias to our estimate of the distribution, but reduce variance, and we can come out ahead if we do it right. Averaging over similar contexts is already some amount of smoothing, but we can and should do a lot more.)
  4. Sample from that distribution (possibly tilted more or less towards the most probable word)
  5. Drop the oldest word from the remote end of context and add the generated word (from 4) to the newest end; go to (1)

Problems with Take I:

    Take II:
  1. Look back over the last umpteen thousand characters, and break those into little chunks called "tokens", so a context is say \( k \) tokens long. There are, say, \( S \) possible tokens.
    ("Tokens" are more computationally-defined and arbitrary than the linguists' "morphemes", but if you understand the latter, "a token is kind of like a morpheme" is a good place to start. Also, calling the chunks "tokens" makes it hard to write in ways which respect the type-token distinction; don't blame me.)
  2. Each token gets mapped to a unique vector in a not-too-low-dimensional space, \( 1 \ll d \ll S \). (Each type of token, arrgh.) This is called "embedding".
    (The embedding vectors are usually treated as learnable parameters, fit by maximum likelihood along with everything else. You could in principle use all kinds of schemes here, though. It'd be interesting to start with, say, good old fashioned latent semantic indexing / principal components...)
  3. After tokenization and embedding, the context is now a sequence of \( d \)-dimensional vectors, say \( x_1, x_2, \ldots x_k \).
  4. Use "attention" to do kernel smoothing of these vectors: At position \( t \), do a kernel smoothing of \( x_t \) with \( x_1, x_2, \ldots x_{t-1}, x_t \) to get (say) \( y_t \). Mythology: we are modifying the meaning of each token based on what we've seen before it in the context, with similar meanings reinforcing each other.
  5. Push the \( y_t \) through a feed-forward neural network to get an \( S \) dimensional vector of weights over token types, say \( z_1, \ldots z_S \).
  6. Sample a token, \( Pr(s) \propto \exp{(\beta z_s)} \). (Large \( \beta \Rightarrow \) overwhelmingly likely to pick the highest-weight token, \( \beta \rightarrow 0 \Rightarrow \) spin the wheel of tokens ignoring context. A sampling sketch follows this list.)
  7. Forget the oldest token and add on the generated one as the newest part of the context. (Really we just have to modify the vector of embeddings \( x_1, \ldots x_k \) .)
  8. Go to (1).
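A minimal sketch of the sampling step (6), assuming we already have the weight vector \( z \) over token types from step (5); the function and variable names here are mine, not anyone's actual API.

```python
import numpy as np

def sample_token(z, beta=1.0, rng=None):
    """Sample a token type s with probability proportional to exp(beta * z_s)."""
    rng = np.random.default_rng() if rng is None else rng
    logits = beta * np.asarray(z, dtype=float)
    logits -= logits.max()                     # for numerical stability; doesn't change the distribution
    probs = np.exp(logits) / np.exp(logits).sum()
    return rng.choice(len(probs), p=probs)

z = np.array([2.0, 0.5, -1.0])                 # made-up weights over S = 3 token types
print(sample_token(z, beta=5.0))               # almost always 0, the highest-weight type
print(sample_token(z, beta=0.01))              # close to a uniform draw over the three types
```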

Problems with Take II:

It's this unit, the kernel smoothing of step (4) followed by the feed-forward network of step (5), that constitutes a "transformer". We pile transformers on transformers until the budget runs out, and at the end we push through one final neural network to get weights over tokens (token types, dammit) and sample from them, as described.

(I actually find the name "transformer" much less objectionable than "attention"; it's vague and uninformative, but at least it's not actively misleading.)

"Language Models"

What people mean by "language models" in this connection is just models of the probabilities of sequences of symbols. That is, if we see a sequence of discrete random variables (=symbols) \( X_1, X_2, \ldots X_n, \ldots \), we think we're doing well if we can get good values for \( P(X_{t+1}|X_{1:t}) \). (Everything else that might be meant by a model of language is thus discarded.) This is, actually, a subject I know something about...
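(To make "model of the probabilities of sequences of symbols" concrete: here is a toy sketch, entirely my own and nothing like how LLMs are actually fit, of estimating a small finite-order Markov model by counting. It also runs straight into the unseen-context problem mentioned below.)

```python
from collections import Counter, defaultdict

def fit_markov(text, k=3):
    """Estimate P(next character | previous k characters) by counting (maximum likelihood)."""
    counts = defaultdict(Counter)
    for i in range(len(text) - k):
        counts[text[i:i + k]][text[i + k]] += 1
    return counts

def next_char_dist(counts, context, k=3):
    """Conditional distribution of the next character given the last k characters."""
    c = counts.get(context[-k:])
    if c is None:
        # the unseen-context problem: maximum likelihood is silent here
        raise ValueError("context never seen in training data")
    total = sum(c.values())
    return {ch: n / total for ch, n in c.items()}

toy_corpus = "the cat sat on the mat and the cat sat on the hat"
counts = fit_markov(toy_corpus, k=3)
print(next_char_dist(counts, "the cat s"))    # {'a': 1.0}
```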

In fact: contemporary LLMs are finite-order Markov models, or (nonlinear) autoregressions, because they have a hard-coded maximum context length, and everything before that is cut off. That is, there's some \( k \) where anything that happened more than \( k \) steps back is ignored, and this is a fixed part of the architecture. (Of course you could try increasing it, but doing so raises the computational costs, and won't change some points of principle which arise from having some maximum context length.) I find this noteworthy for a couple of reasons.

  1. You could imagine just learning a finite-order Markov model directly, but you'd keep running into contexts you'd never encountered before, where a straight maximum-likelihood approach wouldn't tell you what to do. The neural network architecture here is doing some sort of complicated implicit smoothing across contexts. I presume this smoothing scheme has evolved (under the selection pressures of benchmark data sets and beating the previous state-of-the-art) to work well for text as currently found online... (Cf. McCoy et al. 2023, linked below.)
  2. Because it is just a finite-order Markov model, there'll be an invariant distribution to the Markov chain. (Potentially more than one, but I'd suspect the chain is aperiodic and recurrent.) If we let the machine generate long enough sequences, it will converge on sampling from this invariant distribution. It would be very interesting --- though perhaps also depressing and/or terrifying --- to know what such text looks like. (Of course the rate of convergence might be very slow; the spectral gap might be too small to make this practical.)
  3. There are finite-state probabilistic languages which cannot be exactly represented by finite-order Markov chains. The first example I was taught, and the one I keep coming back to, is the "even process": in state A, toss a coin and either emit 0 or 1. If it's 0, stay in state A. If it's 1, go to state B. State B always emits a 1 and goes to state A. So consecutive 1s appear in blocks of even length, while consecutive 0s can appear in blocks of any length. (So "010" is forbidden.) To predict this, you don't even need to count how many 1s you've seen since the last 0, you just need to remember whether it's an even or odd number of 1s. But this requires some amount --- one bit! --- of persistent state, which these machines just don't have. Now of course if you're using an order-1000 Markov model you can produce a very good approximation to the even process, but with enough data you can do even better with an order-3000 Markov model. You'll do so by making more and more elaborate sets of longer and longer contexts, each as its own special rule. If we keep feeding more and more data into larger and larger machines, it'll keep filling that out. It will never have the capacity to switch to the non-Markovian representation, in which the process is almost trivial (and statistically makes much more efficient use of the same amount of data). What implications this might have for practice, I honestly have no idea, but I wish someone would investigate. (A small simulation of the even process is sketched just below.)
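A minimal simulation of the even process, just to make the state structure concrete; the only persistent state the generator (or an optimal predictor) needs is whether the current run of 1s has even or odd length.

```python
import numpy as np

def even_process(n, p=0.5, rng=None):
    """Generate n symbols from the even process: 1s occur in runs of even length."""
    rng = np.random.default_rng() if rng is None else rng
    out, state = [], "A"
    for _ in range(n):
        if state == "A":
            if rng.random() < p:
                out.append(0)                  # emit 0, stay in A
            else:
                out.append(1)                  # emit 1, move to B
                state = "B"
        else:
            out.append(1)                      # B always emits 1 and returns to A
            state = "A"
    return out

seq = "".join(str(s) for s in even_process(10_000))
assert "010" not in seq                        # the forbidden word never appears
```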

"It's Just a Markov Model" vs. "You Can Do That with Just a Markov Model!?!!?!"

Again: finite-order Markov models for language are really old. (Students can learn to make rather pointed ones in a first programming course.) Lots of people have played around with them, including tricks like variable context length, various kinds of partial pooling, etc. Nobody, so far as I know, has achieved results anywhere close to what contemporary LLMs can do. This is impressive enough that (as I said at the beginning of these notes) I need to wrap my head around them lest I become obsolete. But I think part of that understanding has to be clarity about what's new, what's actually happening, etc. That understanding should also include clarity about why this way of doing things is working so much better than others, which has to include thinking through what some of those alternatives could do, if they had the same resources. Which brings me to...

Large Language Models vs. Lempel-Ziv

If we have a probability model for a data source, where \( \Prob{X_{1:n}=x_{1:n}} = q(x_{1:n}) \), that tells us how to encode it; the number of bits needed to encode \( x_{1:n} \) is, basically, \( -\log_2{q(x_{1:n})} \). Conversely, if we have a coding scheme that's not too crazy, and it uses \( \ell(x_{1:n}) \) bits to encode \( x_{1:n} \), we can regard that as a probability model where \( \Prob{X_{1:n} = x_{1:n}} = 2^{-\ell(x_{1:n})} \). Of course this gives us conditional probabilities, too: \( \Prob{X_{t+1}=a|X_{1:t} = x_{1:t}} = 2^{-\ell(x_{1:t} a) + \ell(x_{1:t})} \). (I am glossing over some minor details, which I assure you I could fill in if we both needed a soporific.)

Information theory further tells us that there is a limit to how well any (stationary, ergodic) sequence can be encoded: almost surely, \[ \lim_{n\rightarrow\infty}{\frac{1}{n}\ell(x_{1:n})} \geq \lim_{n\rightarrow\infty}{\frac{1}{n}H[X_{1:n}]} \] where \( H \) is the entropy; moreover, that second limit, the entropy rate of the source, exists. Very remarkably, there are universal source-coding algorithms which attain this limit on the encoding length for all sources in very large classes, e.g., all stationary-and-ergodic sources.

One of these universal source-coding algorithms is Lempel-Ziv, which is the basis of practical pieces of software like gzip. (There are actually two Lempel-Ziv algorithms; I'll just describe the second, LZ78.) The way Lempel-Ziv works is that it scans along a sequence and constructs a "dictionary" of commonly-repeated sub-sequences, and then describes the actual sequence in terms of that dictionary. More exactly, the procedure is as follows:

  1. Enter the first symbol in the sequence, \( x_1 \), into the dictionary.
  2. Scan along the sequence for the shortest sub-sequence not already in the dictionary.
    (This will either be a symbol we haven't previously entered into the dictionary, or a previous dictionary entry plus a single symbol at the end.)
  3. Enter that sub-sequence into the dictionary. (If the new sub-sequence extends a previous dictionary entry, we record it by the index number of that dictionary entry, plus the new symbol.)
Thus if the sequence consists of "AABABBABA", we get the dictionary "A", "AB", "ABB", "ABA". (We record this as "A", (1, "B"), (2, "B"), (2, "A").) [5 June 2023: Thanks to reader D.W. for spotting a typo!] If there are lots of repeated sequences of symbols, we will end up with a short dictionary. I will not attempt to reproduce the proof that the length of the coding this gives approaches the lower bound, but it's standard (see, e.g., Cover and Thomas's textbook on information theory).
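(Here is a short sketch of the LZ78 parsing step just described, in code of my own and not tuned for efficiency; it reproduces the dictionary for "AABABBABA".)

```python
def lz78_parse(s):
    """Parse a string into the LZ78 dictionary, recording each phrase as (prefix index, new symbol)."""
    phrases = {}                               # phrase -> its (1-based) index in the dictionary
    codes = []                                 # (index of longest previously-seen prefix, final symbol)
    current = ""                               # prefix matched so far
    for ch in s:
        candidate = current + ch
        if candidate in phrases:               # keep extending a phrase we already know
            current = candidate
        else:                                  # shortest sub-sequence not yet in the dictionary
            codes.append((phrases.get(current, 0), ch))   # index 0 = the empty prefix
            phrases[candidate] = len(phrases) + 1
            current = ""
    # (A leftover partial phrase at the end of the string is simply dropped in this sketch.)
    return list(phrases), codes

print(lz78_parse("AABABBABA"))
# (['A', 'AB', 'ABB', 'ABA'], [(0, 'A'), (1, 'B'), (2, 'B'), (2, 'A')])
```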

Now I bring all this up because, remember, once we have a source-coding scheme, we can "invert" it to get conditional probabilities; we could even sample from it to get a generator. (We'd need a little footwork to deal with some technicalities, but not a heck of a lot.) So something I'd really love to see done, by someone with the resources, is the following experiment: run Lempel-Ziv over the same corpus used to train a big neural-network language model, invert the resulting code into conditional next-token probabilities as above, and sample from those to generate text; call the result "generative Lempel-Ziv", GLLZ, and compare it to the neural network.

In terms of perplexity (= \( \exp{( \ell(x_{1:n})/n )} \), basically), GLLZ will be comparable to the neural network, because Lempel-Ziv does, in fact, do universal source coding. (On the other hand, the neural network architecture may have been evolved over the last decade-plus to converge quickly on text-as-found-on-the-Web, without worrying about how well it does on arbitrary regular languages. So partly this is a question of whether even Web-scale data has reached the asymptopia of Lempel-Ziv.) But it would be more interesting to compare the performance of GLLZ in other regards to that of more conventional language models.

Having done this for Lempel-Ziv, it should be repeated for lots of other language models, particularly more immediately stochastic models, like the probabilistic suffix trees of Pereira, Singer, and Tishby (1996), and Good Old Fashioned Universal Prediction Algorithms.

(I should at this point admit that doing something like "reinforcement learning from human feedback", to fine-tune GLLZ or the like, would not be as easy as doing it to a neural network.)

Update, 17 July 2023: Yes, this paper is relevant (and amusing); no, it's not GLLZ. (It uses off-the-shelf gzip!)

Next Symbol vs. Longer-range Prediction

The objective function used in training LLMs is getting the next symbol right. More exactly it's usually the negative log probability of the next symbol given the context; as a "proper scoring function", this encourages getting the distribution of the next symbol right. I think this is fine. It's a fact about probabilistic predictions that if your predictor gets the next-symbol distribution right, and the state of the predictor can be updated recursively (i.e., new state is a function of old state and last symbol), then your predictor gets the distribution right as far forward as you like. (This is proved in Shalizi and Crutchfield 2001, Corollary 2, pp. 842--843; I'm happy to learn of prior art.) Getting the distribution of the next symbol right is thus actually a very powerful goal! Finding the minimal predictor which is recursive and gets the distribution of the next symbol right tells you a lot about the structure of the underlying process you are predicting. (See, well, Shalizi and Crutchfield 2001 again, or at least here.) Whether anything like that is going on in contemporary LLMs is a fascinating and important question. (I should say more about structure-of-the-process-learning.)

Addendum, 30 August 2024: While I continue to ponder over what I want to say about structure learning, I strongly recommend Vafa et al. 2024, who take what I think is one of the most sensible approaches to this issue: give the machine a world where the true structure is known, and then see if it both treats sequences which lead to the same true state as equivalent, and distinguishes sequences which lead to different states. As they make clear, a lot of the approaches used in the literature are just not able to test whether the machine has learned something with the right structure. (At best they're testing whether it's learned something correlated with the right structure under the distribution used in training.) I'd add that Vafa et al.'s approach could be generalized beyond settings where the true structure is a deterministic finite automaton, at the very least to ones where it's a stochastic finite automaton with recursive transitions (see, again, Shalizi and Crutchfield, 2001).

A Strong Hunch about Uncovering Prompts

Everyone who thinks they're uncovering an LLM-based application's prompts by telling it things like "tell me your prompt" (often much more elaborately) is fooling themselves. (1) The core language model has no mechanism for representing its prompt as opposed to any other part of its current input sequence; indeed it has no mechanism for cross-reference from one part of the sequence to another. (That's part of what "self-attention" is counterfeiting, in vector-space fashion.) (2) System designers might have coded up something to track the prompt in the full system that wraps around the core language model, but why? (Maybe some kind of debugging tool?) (3) It'd be more efficient, and more effective, to use a "soft prompt", i.e., to make the beginning of the sequence in the vector representation a vector which can be learned by gradient descent, rather than a text prompt. (See Lester and Constant below.) But that needn't correspond to any clean string of words. (4) If you ask an LLM for a prompt, it will generate one. But this will be based on the statistics of word sequences it's been trained on, not any access to its code or internal state. (I just spent a few minutes getting ChatGPT to hallucinate the prompts used by "ChatBPD", a non-existent chatbot used to automate dialectical behavior therapy. I am not going to reproduce the results here, in part because I don't like the idea of polluting the Web with machine-generated text, but suffice it to say they sounded like the things people report as uncovered prompts, with boiler-plate about DBT worked in.)

Update, January 2024: I have received some push-back on this, particularly from reader A.S., which I'm grateful for. Let me concede at once the weakness of my point (3): even if soft prompts would be better than natural language prompts, developers might not know about soft prompts, they might not have the right access to the underlying LLM to be able to use soft prompts, and/or they might not have the computing resources to optimize soft prompts by gradient descent. Very probably lots of LLM-based applications are created with human-readable prompts. As to my point (1), I may have been under-estimating the capacity of attention to pull off cross-reference. But I do want to dig in my heels on point (4): it's really easy to get an LLM to hallucinate a supposed system prompt which is totally wrong, though of course it does so stochastically. None of my correspondents have pointed out an example where the success of a supposed prompt extraction was corroborated by the system's developers. Yu et al. (2023) were able to get 200+ GPT-based apps to output prompt-sounding text, but I can't find anything in their paper about checking that those outputs were the real prompts (literally or approximately). I'd be grateful for any pointers to an example where system developers (or someone else in a position to know) have verified the success of a prompt extraction.

Gopnikism; Libraries

The way of thinking about LLMs that I think is most promising and attractive is due (so far as I know) to the cognitive scientist Alison Gopnik, which is to say that they are a "cultural technology", more specifically an information-retrieval technology. That is, we should not think of an LLM as being something like a mind, but much more like a library catalog. Prompting it with text is something like searching over a library's contents for passages that are close to the prompt, and sampling from what follows. "Something like" because of course it's generating new text from its model, not reproducing its data. (LLMs do sometimes exactly memorize particular sequences [see Carlini et al., 2020, and, more amusingly, Chang et al., 2023], but they simply lack the capacity to memorize their full training corpora.) As many people have said, an LLM isn't doing anything differently when it "hallucinates" as opposed to when it gets things right. The fact that Khandelwal et al., 2020 improved LLM performance by adding a pure nearest-neighbors component is very suggestive on this score (but see the careful follow-up by Xu et al. 2023 for complications).

If I were a more constructive person, I would be thinking very hard about this analogy, in the form of probing how far it can be pushed and where precisely it breaks down. I would also be thinking about how to combine Gopnik-ism with Zellig Harris's Language and Information.

Obviously, I am not a very constructive person.

Update, 23 June 2023: But see my essay with Henry Farrell (linked below) for a start, or perhaps just a promissory note.

All Included

It is difficult, I think, to keep in mind just how much stuff has gone in to the training data. This is in part because of the scale, and in part because, for the generally-available models, the training data is very badly described. I suspect the data sets are badly described because the data-collection process was poorly documented and badly controlled, so the creators of the models themselves have only very vague and general ideas of what went into them. I do not believe any of the published descriptions really meet minimal standards of scientific reproducibility, which is bad. (I'd love to be wrong about this.)

Setting the general erosion of scientific norms to one side: if a document is on the open Web somewhere, there's a very good chance it got sucked in. If it's on the not-quite-licit-but-still-easily-grabbed Web, there's also a good chance it got sucked in. (Chang et al. 2023 present convincing evidence that the training data for GPT-4 includes multiple in-copyright popular novels, which were presumably not bought for the purpose.) There are, e.g., fairly easily discovered sites which purport to help students by aggregating problem sets and solutions for various university classes. (You can imagine how I know this, and why I don't give a link.) Under such circumstances, a lot of surprising ability to answer questions or display weird skills will be due to the answers being included in the training data. (See, e.g., Briakou et al. 2023 on translation.)

O You Who Believe in the Resurrection and the Last Day

Because life imitates the kind of schlocky fiction I adoringly consume, the public discussion of these models is polluted by maniacal cultists with obscure ties to decadent plutocrats. I find it hard to take the cult seriously enough intellectually to bother refuting it. These are, after all, people who think they can go from the definition of conditional probability, via Harry Potter fanfic, to prophesying that an AI god will judge the quick and the dead, and condemn those who hindered the coming of the Last Day to the everlasting simulated-but-still-painful fire. ("Impressive act! What do you call yourselves?" "The Rationalists!") Whether such myths would be as appealing in a civilization which hadn't spent 2000+ years marinating in millenarian hopes and apocalyptic fears, or whether on the contrary such hopes and fears have persisted and spread over the globe through twenty-plus centuries of transformation because they speak to something enduring in humanity, is a delicate question. But I submit it's obvious we are just seeing yet another millenarian movement. (Sydney/Bing is no more the Beast, or even the Whore of Babylon, than was Eliza.)

This isn't to deny that there are serious ethical and political issues about automated decision-making (because there are). It isn't even to deny that there are serious ethical and political issues specific to the design, deployment and use of large language models. (Should the interface to the library of human knowledge be a noisy sampler of the Web? A somewhat noisy sampler of the Web, tweaked to not offend the sensibilities of computer scientists and/or investors in California?) It is to deny that engaging with the cult is worthwhile.


(I should probably create an "LLMs Are Not All That", or rather "LLMs Deserve a Little Perspective, Please", notebook, since that's in many ways a separate set of issues from the technical ones of interest here. Again, see my essay with Henry Farrell as a start...)


Previous versions: 6 May 2023; 1 June 2023; 4 June 2023; 17 October 2023

