The Bactra Review: Occasional and eclectic book reviews by Cosma Shalizi   171

The Postmodern Condition

A Report on Knowledge

by Jean-François Lyotard

Translated by Geoff Bennington and Brian Massumi, with a foreword to the translation by Fredric Jameson

Minneapolis: University of Minnesota Press, 1984 (as La Condition postmoderne: rapport sur le savoir, Paris: Les Editions de Minuit, 1979)

Comment dit-on "Alvin Toffler's The Third Wave (1980)" en français?

Or, a work of vintage late-1970s futurology, looking ahead towards times like ours, seen back through its adoption into a different culture's disputes in the mid-1990s

This little book was very much adopted by The Other Side in the Science Wars. (Or at least, what mostly felt like the Other Side at the time. In retrospect there was an element of "squabbling while Sauron gathered his forces in Mordor", as Krugman put it in a related context later.) When I read this back in the 1990s, in that context, it was hard for me to get what the point was. Now, in 2020, I think that this book's Anglophone reception in the 1980s and 1990s was mostly based on mis-understandings, and that what Lyotard meant is mostly acceptable, and actually kind of commonplace. What follows is mostly an attempt to pry away the layers of ornamentation to display the banalities. Whether, in the context of late-1970s Parisian intellectual life, what seem to me like excrescences were necessities, I couldn't presume to say.

To begin by clearing up a needless confusion, Lyotard did not introduce the term "postmodern", as he makes plain right at the start ("The word is in current use on the American continent among sociologists and critics" [p. xxiii]; "since at least the end of the 1950s" ... "cultures [have] enter[ed] what is known as the postmodern age" [p. 3, with the very first end-note giving citations to earlier uses of the term]). For the purposes of most discussion back in the 1980s and 1990s, The Postmodern Condition was summarized by a single line that appears in the introduction: "Simplifying to the extreme, I define postmodern as incredulity toward metanarratives" (p. xxiv; all italics in quotations from Lyotard are his). One might even say that this became a sound bite that substituted for the book itself.

Now, by "metanarrative" (the French original is "grands récits", literally "big [or great] stories") Lyotard does not mean narratives about narratives. (He's not saying nobody can sit still for Calvino's Invisible Cities any more.) Rather, the grand stories are for him specifically stories about scientific knowledge. They "legitimate" science in the sense of purporting to say why the pursuit of science is valuable, why science should be pursued --- not in the sense of saying why science is a reliable source of knowledge. These legitimating stories are about science, but they are not themselves scientific. Indeed, subjected to scrutiny by the canons of the sciences themselves, the grand stories prove to be more or less myths.

Lyotard only discusses two grand stories in any detail. One is about how scientific knowledge is part of the self-liberation of The People (the "humanistic" narrative), and the other is about how scientific knowledge is about the increasing self-understanding of The Spirit (the "speculative" narrative). The first, supposedly, was used to organize the intellectual life of France since the Enlightenment (and especially since the Revolution), and the second that of Germany since the founding of the University of Berlin, 1807--1810 (pp. 32ff). (While Lyotard rightly notes in passing that "the young countries of the world" [p. 32], especially the US, largely copied the organization of the then-new German research universities, he does not explore whether they also copied early-1800s German Idealist philosophy, or even the ideal of Bildung, "development" or "cultivation". Britain is ignored, never mind the rest of the world.) "Marxism has wavered between the two models of narrative legitimation" (pp. 36--37).

The incredulity he alleges is towards stories like that, not towards the sciences themselves, or even towards socio-political ideologies in general. The explanation he gives for this incredulity is not technological change, or even a broad culture of skepticism, but simply "a product of progress in the sciences" (p. xxiv). Elaborating on this in section 10, "Delegitimation" (especially pp. 37--40), Lyotard leans very heavily on (a) "the truth requirement of science being turned back against itself" (p. 39), and (b) the is-ought distinction (without calling it that): "There is nothing to prove that if a statement describing a real situation is true, it follows that a prescriptive statement based upon it (the effect of which will necessarily be a modification of that reality) will be just" (p. 40).

As to (a), Lyotard seems to be ambiguous about what exactly he means by "the truth requirement of science being turned back against itself". One possibility is the one I alluded to earlier, realizing that the legitimating grand stories do not themselves meet the usual scientific and scholarly standards of rigor, evidence, etc. Science undermining philosophical myths used to legitimate the pursuit of science might be ironic or socially uncomfortable, but doesn't actually impugn the truth or reliability of science itself. This possibility fits with the way Lyotard says that the first people to really wrestle with de-legitimation were Viennese intellectuals before WWII.

In places, however, Lyotard does seem to be getting at the idea that science really is self-undermining, in a way we can now express in image-macro format:

(Socio-cultural explanations of belief face the same contradiction as physical or biological ones, but I can't find them being argued out by cartoon animals. [1]) This worry was at least a century old when Lyotard was writing [2], and can by now be fairly considered a traditional element of the western tradition.

But (b) is even more important:

If this "delegitimation" is pursued in the slightest and if its scope is widened... the road is then open for an important current of postmodernity: science plays its own game; it is incapable of legitimating the other language games. The game of prescription, for example, escapes it. But above all, it is incapable of legitimating itself.... [p. 40]

This is, of course, the is-ought distinction, and the idea that science can tell us all sorts of things about how the world is and how it works, but is incapable of telling us what to do. At most, it can advise about the consequences of pursuing certain goals by certain means, and help us select more efficient means to goals. Distinctions like this are things which tough-minded defenders of science and reason are usually very happy to endorse: Max Weber, Bertrand Russell, John Dewey [3], Karl Popper, the Logical Positivists, Jacques Monod, and so on, down to the overwhelming majority of those on my side in the Science Wars, would all sign on to some version of this.

Now, since this is primarily what Lyotard meant by "delegitimation", and by seeing science as one language game alongside others, it's actually much less exciting than either party in the Science Wars made it sound. It's a pale echo of Max Weber warning students not to expect their scientific researches to underwrite their values --- and, conversely, not to let their values crumble when research doesn't support them. Some people, of course, are not happy with the prospect of not having an integrated world-view that in one stroke embraces metaphysics, science, ethics, aesthetics, politics, etc. But they're the ones who have the quarrel with Lyotard on this score, not people like me.

The occasion for all these Deep Thoughts was, in fact, remarkably mundane:

The text that follows is an occasional one. It is a report on knowledge in the most highly developed societies and was presented to the Conseil des Universités of the government of Quebec at the request of its president. (p. xxv)

The reason the university council wanted a "report on knowledge" was that it was the late 1970s, and people other than DARPA-funded weirdos (and their Soviet counterparts) and a few science fiction writers were beginning to get a sense that ubiquitous, inter-connected computers might Become A Big Thing. That, in these circumstances, the Québécois university authorities turned to an ex-Marxist Parisian philosopher sounds like the set-up for a very academic joke, but here we are.

After 40 or 50 pages on the nature of science (surprisingly influenced by the great P. B. Medawar) [4], the contrast between "learning" and "science" on the one hand, and "narratives" on the other, with narratives being exemplified by traditional oral epics (and ideas about myth explicitly drawn from Eliade), tracing the distinction between science and narrative back to Plato, along with the need to "legitimate" science, expositions of the legitimating "meta-narratives", achingly abstract quarrels with Habermas and Luhmann about whether modern society is really a functional system, the account of "delegitimation" we've already looked at, etc., etc., Lyotard does, indeed, finally get around to saying something about what computerization might mean for universities, and for higher education and for research more generally. I will quote extensively from sec. 12, "Education and Its Legitimation through Performativity", specifically pp. 50--53:

What is transmitted in higher learning? In the case of professional training, and limiting ourselves to a narrowly functionalist point of view, an organized stock of established knowledge is the essential thing that is transmitted. The application of new technologies to this stock may have a considerable impact on the medium of communication. It does not seem absolutely necessary that the medium be a lecture delivered in person by a teacher in front of silent students, with questions reserved for sections or "practical work" sessions run by an assistant. To the extent that learning is translatable into computer language and the traditional teacher is replaced by memory banks, didactics can be entrusted to machines linking traditional memory banks (libraries, etc.) and computer data banks to intelligent terminals placed at the students' disposal.

Pedagogy would not necessarily suffer. The students would still have to be taught something: not contents, but how to use the terminals. On the one hand, that means teaching new languages and on the other, a more refined ability to handle the language game of interrogation --- where should the question be addressed, in other words, what is the relevant memory bank for what needs to be known? How should the question be formulated to avoid misunderstandings? etc. From this point of view, elementary training in informatics, and especially telematics, should be a basic requirement in universities, in the same way that fluency in a foreign language is now, for example.

It is only in the context of the grand narratives of legitimation --- the life of the spirit and/or the emancipation of humanity --- that the partial replacement of teachers by machines may seem inadequate or even intolerable. But it is probable that these narratives are already no longer the principal driving force behind interest in acquiring knowledge. If the motivation is power, then this aspect of classical didactics ceases to be relevant. The question (overt or implied) now asked by the professionalist student, the State, or institutions of higher education is no longer "Is it true?" but "What use is it?" ...

This creates the prospect for a vast market for competence in operational skills. Those who possess this kind of knowledge will be the object of offers or even seduction policies. Seen in this light, what we are approaching is not the end of knowledge --- quite the contrary. Data banks are the Encyclopedia of tomorrow. They transcend the capacity of each of their users. They are "nature" for postmodern man.

It should be noted, however, that didactics does not simply consist in the transmission of information; and competence, even when defined as a performance skill, does not simply reduce to having a good memory for data or having easy access to a computer. It is a commonplace that what is of utmost importance is the capacity to actualize the relevant data for solving a problem "here and now", and to organize that data into an efficient strategy.

As long as the game is not a game of perfect information, the advantage will be with the player who has knowledge and can obtain information. ... But in games of perfect information, the best performativity cannot consist in obtaining additional information in this way. It comes rather from arranging the data in a new way, which is what constitutes a "move" properly speaking. This new arrangement is usually achieved by connecting together series of data that were previously held to be independent. This capacity to articulate what used to be separate can be called imagination. Speed is one of its properties. It is possible to conceive the world of postmodern knowledge as governed by a game of perfect information, in the sense that the data is in principle accessible to any expert: there is no scientific secret. Given equal competence (no longer in the acquisition of knowledge, but in its production), what extra performativity depends on in the final analysis is "imagination", which allows one either to make a new move or change the rules of the game.

If education must not only provide for the reproduction of skills, but also for their progress, then it follows that the transmission of knowledge should not be limited to the transmission of information, but should also include training in all of the procedures that can increase one's ability to connect the fields jealously guarded from one another by the traditional organization of knowledge....

The idea of an interdisciplinary approach is specific to the age of delegitimation and its hurried empiricism. The relation to knowledge is not articulated in terms of the realization of the life of the spirit or the emancipation of humanity, but in terms of the users of a complex conceptual and material machinery and those who benefit from its performance capabilities. They have at their disposal no metalanguage or metanarrative in which to formulate the final goal and correct use of that machinery. But they do have brainstorming to improve its performance....

It will be observed that this orientation is concerned more with the production of knowledge (research) than its transmission. To separate them completely ... is probably counterproductive even within the framework of functionalism and professionalism. And yet the solution toward which the institutions of knowledge all over the world are in fact moving consists in dissociating these two aspects of didactics... This is being done by earmarking entities of all kinds --- institutions, levels or programs within institutions, groupings of institutions, groupings of disciplines --- either for the selection and reproduction of professional skills, or for the promotion and "stimulation" of "imaginative" minds. The transmission channels to which the first category is given access can be simplified and made available on a mass scale. The second category has the privilege of working on a smaller scale in conditions of aristocratic egalitarianism. It matters little whether the latter are officially a part of the universities.

But one thing that seems certain is that in both cases the process of delegitimation and the predominance of the performance criterion are sounding the knell of the age of the Professor: a professor is no more competent than memory bank networks in transmitting established knowledge, no more competent than interdisciplinary teams in imagining new moves or new games.

When Lyotard (via his translators) uses the word "performativity" in this book, he does not mean it in the way in which J. L. Austin introduced it as a technical concept in the philosophy of language. (This is exemplified, to use Lyotard's own example, by a university rector pronouncing "The university is open" at a convocation, and, by making that very utterance, opening the university [p. 9].) Rather, as he himself admits (note 30 on p. 88), "[I]n this book, the concept [of a performative utterance] will reappear in association with the term performativity (in particular, of a system) in the new current sense of efficiency measured according to an input/output ratio". Lots of what Lyotard says about postmodern knowledge looks much more radical if one reads "performative" or "performativity" in Austin's sense, let alone Judith Butler's, let alone the now-contemporary sense in which one speaks of, say, "performative outrage". (I strongly suspect that this was lost on many readers.)

At this point I think Lyotard equivocates. On the one hand, we postmodern people, who have "lost the nostalgia for the lost narrative", all know that "legitimation can only spring from [our] own linguistic practice and communicational interaction" (p. 41). On the other hand, since "the end of the eighteenth century, with the first industrial revolution", we have turned towards legitimation of knowledge by technology, and more specifically by what he calls "performativity", i.e., technical efficiency. This is why he goes on at such length about "performativity" in my extended quotation above. (This is, of course, very close to the "airplanes fly, dammit!" defense of the reliability of scientific knowledge.) Perhaps he does not regard "science makes technological power possible" as a legitimating narrative about science.

Lyotard concludes (p. 67) by saying that there are two possible outcomes for computerization. One is total control, "governed exclusively by the performativity principle", which would "inevitably involve the use of terror". Alternatively, computerization could "aid groups discussing metaprescriptives by supplying them with the information they usually lack for making knowledgeable decisions". Achieving the second outcome "is, in principle, quite simple: give the public free access to the memory and data banks".

The fact that the conclusion of all this discourse is to sound a lot like (spoiler for a much better book) the end of John Brunner's The Shockwave Rider is no accident, comrades. What we have here is fundamentally a fairly standard work of c. 1980 futurology, which gets some things right (computers will matter! videoconferencing will be routine!), some things very wrong [5], and has no inkling of huge swathes of what actually happened. You can, of course, set this book beside other monuments of Theory, but in many ways it's equally productive to contrast it to other early views of network society, of which I happen to have read too many. So, yes, set this beside Fredric Jameson, but also beside John Wicklein's The Electronic Nightmare: The New Communications and Freedom (1981). In fact, if I were feeling really masochistic, I'd also re-read Alvin Toffler's The Third Wave (1980), and write a comparison of how two ex-Marxists prophesied the coming future of computerization, linking it to rather grandiose historical schemes, hopes about decentralization, flexibility, etc., and perhaps even try to figure out why they were both drawn to Ilya Prigogine's ideas on non-equilibrium thermodynamics. I do not want to admit that I am that masochistic, and I also do not have that much free time. If you actually want to see our present anticipated from the 1970s, you are probably better off reading Brunner's novel. If, on the other hand, you want an insightful vintage book about the condition of knowledge in advanced societies, informed by philosophy, anthropology and a view of the world that embraces more than France, Germany and a somewhat-imaginary America, without wood-paneled futurology, read Ernest Gellner's Legitimation of Belief. But leave poor Lyotard alone.

[1]: One could, of course, hope for a kind of "fixed point" escape from this vicious circle, a mechanistic explanation for why, mechanically, we must find the truth. There are some interpretations of Marxism in which Marxism is, itself, such a fixed point: all general ideas about society express the interest of a class, but Marxism expresses the interest of the proletariat, whose self-interest is, uniquely, the same as the universal interest of humanity. (How convenient.) Some strands of naturalized and especially evolutionary epistemology work similarly (thus Quine: "Creatures inveterately wrong in their inductions have a pathetic but praiseworthy tendency to die before reproducing their kind"), but rarely squarely confront the possibility of adaptive errors. ^

[2]: See, e.g., Leszek Kolakowski's The Alienation of Reason / History of Positivist Thought from Hume to the Vienna Circle and his Husserl and the Search for Certitude. (Lots of people have made this point, I just happen to like Kolakowski's presentations of it.) ^

[3]: I think Dewey would register a caveat here, on the grounds that understanding the consequences of pursuing certain ends may lead us to consider whether we really want those ends. (Cf.) But the obvious counter-argument is that this reconsideration takes place in light of still higher-level values, etc. ^

[4]: As explained on pp. 23--24: Someone who makes a scientific claim "is supposed to be able to provide proof of what he says, and on the other hand he is supposed to be able to refute any opposing or contradictory statements"; scientific claims are evaluated by a scientific community whose members are in principle equals (and the point of pedagogy is to turn students into equals); "as long as I can produce proof, it is permissible to think that reality is the way I say it is"; "the same referent cannot supply a plurality of contradictory or inconsistent proofs"; "Not every consensus is a sign of truth; but it is presumed that the truth of a statement necessarily draws a consensus". Etc. The Medawar influence shows up later, in some sound remarks about how differently scientists talk when working out ideas as opposed to presenting results, and how one of the important ways a scientific idea can be good is that it is fruitful, and leads to new ideas. ^

[5]: Lyotard appears to think that the computerization of knowledge would mean representing knowledge in ways computers could process, in some genuinely-AI-ish way. (I admit the text is a bit ambiguous here, but see e.g. pp. 4--5, especially the text to endnotes 12--15, and those notes.) This was a common expectation at the time, so it's hardly a distinctive failure of foresight on his part, but it's very interesting that it's not what happened. Of course relational databases exist, for storing structured information (especially arising from administrative contexts), and computers can and do perform powerful logical operations on them. But it's proved possible to have computers store and retrieve text, images, etc., without their representing any of the semantic properties, and often not even the syntactic ones, just the raw sequence of characters, pixels, etc. Information retrieval works, largely, by exploiting the distribution of word tokens, and/or the pattern of hyperlinks. ^
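That last point can be made concrete with a toy sketch (mine, not Lyotard's or the book's; the tiny corpus and all names here are invented for illustration): rank documents against a query using nothing but the distribution of word tokens, a bare-bones TF-IDF score, with no representation of meaning anywhere in the program.

```python
import math
from collections import Counter

def tokenize(text):
    # Crude tokenizer: lowercase, split on whitespace. No syntax, no semantics.
    return text.lower().split()

def tfidf_scores(query, docs):
    """Score each document against the query: raw term frequency times
    inverse document frequency. Only token counts are used."""
    n = len(docs)
    doc_tokens = [Counter(tokenize(d)) for d in docs]
    # Document frequency: in how many documents does each token appear?
    df = Counter()
    for counts in doc_tokens:
        df.update(set(counts))
    scores = []
    for counts in doc_tokens:
        s = 0.0
        for tok in tokenize(query):
            if tok in counts:
                # Rare tokens (small df) get a larger log weight.
                s += counts[tok] * math.log(n / df[tok])
        scores.append(s)
    return scores

docs = [
    "the condition of knowledge in computerized societies",
    "the condition of the working class in england",
    "a report on knowledge and its legitimation",
]
scores = tfidf_scores("knowledge in computerized societies", docs)
best = max(range(len(docs)), key=lambda i: scores[i])
```

Here the first document wins simply because it shares more (and rarer) tokens with the query; the program never "knows" what any of the words mean, which is exactly the contrast with the AI-ish expectation described above.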


Cultural Criticism; Education; The Information Society; Philosophy
xxvi + 111 pp., including notes, index.
Drafted July 2020, slight revisions and posting, 27 March 2021