January 07, 2019

Data Over Space and Time

Collecting posts related to this course (36-467/36-667).

Posted at January 07, 2019 18:59 | permanent link

December 31, 2018

Books to Read While the Algae Grow in Your Fur, December 2018

Attention conservation notice: I have no taste. I also have no qualifications to discuss poetry or leftist political theory. I do know something about spatiotemporal data analysis, but you don't care about that.

Gidon Eshel, Spatiotemporal Data Analysis
I assigned this as a textbook in my fall class on data over space and time, because I needed something which covered spatiotemporal data analysis, especially principal components analysis, for students who could be taking linear regression at the same time, and was cheap. This met all my requirements.
The book is divided into two parts. Part I is a review or crash course in linear algebra, building up to decomposing square matrices in terms of their eigenvalues and eigenvectors, and then the singular value decomposition of arbitrary matrices. (Some prior acquaintance with linear algebra will help, but not very much is needed.) Part II is about data analysis, covering some basic notions of time series and autocorrelation, linear regression models estimated by least squares, and "empirical orthogonal functions", i.e., principal components analysis, i.e., eigendecomposition of covariance or correlation matrices. As for "cheap", while the list price is (currently) an outrageous $105, it's on JSTOR, so The Kids had free access to the PDF through the university library.
In retrospect, there were strengths to the book, and some serious weaknesses --- some absolute, some just for my needs.
The most important strength is that Eshel writes like a human being, and not a bloodless textbook. His authorial persona is not (thankfully) much like mine, but it's a likeable and enthusiastic one. This is related to his trying really, really hard to explain everything as simply as possible, and with multitudes of very detailed worked examples. I will probably be assigning Part I of the book, on linear algebra, as refresher material to my undergrads for years.
He is also very good at constantly returning to physical insight to motivate data-analytic procedures. (The highlight of this, for me, was section 9.7 [pp. 185ff] on when and why an autonomous, linear, discrete-time AR(1) or VAR(1) model will arise from a forced, nonlinear, continuous-time dynamical system.) If this had existed when I was a physics undergrad, or starting grad school, I'd have loved it.
Turning to the weaknesses, some of them are, as I said, merely ways in which he didn't write the book to meet my needs. His implied reader is very familiar with physics, and not just the formal, mathematical parts but also the culture (e.g., the delight in complicated compound units of measurement, saying "ensemble" when other disciplines say "distribution" or "population"). In fact, the implied reader is familiar with, or at least learning, climatology. But that reader has basically no experience with statistics, and only a little probability (so that, e.g., they're not familiar with rules for algebra with expectations and covariances*). Since my audience was undergraduate and masters-level statistics students, most of whom had only the haziest memories of high school physics, this was a mis-match.
Other weaknesses are, to my mind, a bit more serious, because they reflect more on the intrinsic content.
  • A trivial but real one: the book is printed in black and white, but many figures are (judging by the text) intended to be in color, and are scarcely comprehensible without it. (The first place this really struck me was p. 141 and Figure 9.4, but there were lots of others.) The electronic version is no better.
  • The climax of the book (chapter 11) is principal components analysis. This is really, truly important, so it deserves a lot of treatment. But it's not a very satisfying stopping place: what do you do with the principal components once you have them? What about the difference between principal components / empirical orthogonal functions and factor models? (In the book's terms, the former does a low-rank approximation to the sample covariance matrix $\mathbf{v} \approx \mathbf{w}^T \mathbf{w}$, while the latter treats it as low-rank-plus-diagonal-noise $\mathbf{v} \approx \mathbf{w}^T\mathbf{w} + \mathbf{d}$, an importantly different thing.) What about nonlinear methods of dimensionality reduction? My issue isn't so much that the book didn't do everything, as that it didn't give readers even hints of where to look. (A small numerical illustration of the EOF-versus-factor-model contrast appears just after this list.)
  • There are places where the book's exposition is not very internally coherent. Chapter 8, on autocorrelation, introduces the topic with an example where $x(t) = s(t) + \epsilon(t)$, for a deterministic signal function $s(t)$ and white noise $\epsilon(t)$. Fair enough; this is a trend-plus-noise representation. But it then switches to modeling the autocorrelations as arising from processes where $x(t) = \int_{-\infty}^{t}{w(u) x(u) du} + \xi(t)$, where again $\xi(t)$ is white noise. (Linear autoregressions are the discrete-time analogs.) These are distinct classes of processes. (Readers will find it character-building to try to craft a memory kernel $w(u)$ which matches the book's running signal-plus-noise example, where $s(t) = e^{-t/120}\cos{\frac{2\pi t}{49}}$.)
  • I am all in favor of physicists' heuristic mathematical sloppiness, especially in introductory works, but there are times when it turns into mere confusion. The book persistently conflates time or sample averages with expectation values. The latter are ensemble-level quantities, deterministic functionals of the probability distribution. The former are random variables. Under various laws of large numbers or ergodic theorems, the former converge on the latter, but they are not the same. Eshel knows they are not the same, and sometimes talks about how they are not the same, but the book's notation persistently writes them both as $\langle x \rangle$, and the text sometimes flat-out identifies them. (For one especially painful example among many, p. 185.) Relatedly, the book conflates parameters (again, ensemble-level quantities, functions of the data-generating process) and estimators of those parameters (random variables).
  • The treatment of multiple regression is unfortunate. $R^2$ does not measure goodness of fit. (It's not even a measure of how well the regression predicts or explains.) At some level, Eshel knows this, since his recommendation for how to pick regressors is not "maximize $R^2$". On the other hand, his prescription for picking regressors (sec. 9.6.4, pp. 180ff) is rather painful to read, and completely at odds with his stated rationale of using regression coefficients to compare alternative explanations (itself a bad, though common, idea). Very strikingly, the terms "cross-validation" and "bootstrap" do not appear in his index**. Now, to be clear, Eshel isn't worse in his treatment of regression than most non-statisticians, and he certainly understands the algebra backwards and forwards. But his advice on the craft of regression is, to be polite, weak and old-fashioned.
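As flagged in the principal-components bullet above, here is a minimal R sketch of the EOF-versus-factor-model contrast, on simulated data; the setup and all the variable names are my own illustration, not the book's:

    ## Simulated data: n observations of p correlated series (purely illustrative)
    set.seed(1)
    n <- 200; p <- 6
    f <- rnorm(n)                                   # one common factor
    lam <- runif(p, 0.5, 1)                         # made-up loadings
    x <- f %o% lam + matrix(rnorm(n * p, sd = 0.5), n, p)
    colnames(x) <- paste0("x", 1:p)

    v <- cov(x)                                     # sample covariance matrix

    ## EOFs / PCA: eigendecompose v and keep the leading component
    eig <- eigen(v)
    w.pca <- eig$vectors[, 1] * sqrt(eig$values[1])
    v.pca <- w.pca %o% w.pca                        # rank-one approximation to v

    ## One-factor model: a rank-one part plus a diagonal "uniqueness" matrix
    fa <- factanal(x, factors = 1)
    w.fa <- fa$loadings[, 1] * apply(x, 2, sd)      # rescale from correlation units
    v.fa <- w.fa %o% w.fa + diag(fa$uniquenesses * apply(x, 2, var))

    ## The factor model matches the diagonal of v; the rank-one PCA fit falls short
    round(diag(v) - diag(v.pca), 2)
    round(diag(v) - diag(v.fa), 2)

The one-factor fit reproduces the diagonal of the sample covariance matrix essentially exactly, while the rank-one eigendecomposition necessarily under-shoots it by the idiosyncratic variances.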
Summing up, the linear-algebra refresher/crash-course of Part I is great, and I even like the principal components chapters in Part II, as far as they go. But it's not ideal for my needs, and there are a bunch of ways I think it could be improved for anyone's needs. What to assign instead, I have no idea.
*: This is, I think, why he doesn't explain the calculation of the correlation time and effective sample size in sec. 8.2 (pp. 123--124), just giving a flat statement of the result, though it's really easy to prove with those tools. I do appreciate finally learning the origin of this beautiful and practical result --- G. I. Taylor, "Diffusion by Continuous Movements", Proceedings of the London Mathematical Society, series 2, volume 20 (1922), pp. 196--212 (though the book's citing it with the wrong year, confusing series number with an issue number, and no page numbers was annoying). ^
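(To spell out the calculation Eshel states without proof, in my notation rather than his: for a stationary sequence with variance $\sigma^2$ and autocorrelation function $\rho(k)$, the algebra of covariances gives $\mathrm{Var}(\bar{x}_n) = \frac{1}{n^2}\sum_{t=1}^{n}{\sum_{s=1}^{n}{\mathrm{Cov}(x_t, x_s)}} \approx \frac{\sigma^2}{n}\left(1 + 2\sum_{k=1}^{\infty}{\rho(k)}\right)$ for large $n$, provided the autocorrelations are summable. The sample mean is thus about as variable as it would be with $n_{\mathrm{eff}} = n/\left(1 + 2\sum_{k=1}^{\infty}{\rho(k)}\right)$ independent observations, and the factor in the denominator is, up to convention, the correlation time.)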
**: The absence of "ridge regression" and "Tikhonov regularization" from the index is all the more striking because they appear in section 9.3.3 as "a more general, weighted, dual minimization formalism", which, compared to ordinary least squares, is described as "sprinkling added power ... on the diagonal of an otherwise singular problem". This is, of course, a place where it would be really helpful to have a notion of cross-validation, to decide how much to sprinkle.^
Nick Srnicek and Alex Williams, Inventing the Future: Postcapitalism and a World Without Work
It's --- OK, I guess? They have some good points against what they call "folk politics", namely, that it has conspicuously failed to accomplish anything, so doubling down on more of it seems like a bad way to change the world. And they really want to change the world: the old twin goals of increasing human power over the world, and eliminating human power over other humans, are very much still there, though they might not quite adopt that formula. To get there, their basic idea is to push for a "post-work world", one where people don't have to work to survive, because they're entitled to a more-than-subsistence basic income as a matter of right. They realize that making that work will require lots of politics and pushes for certain kinds of technological progress rather than others. This is the future they want --- to finally enter (in Marx's words) "the kingdom of freedom", where we will be able to get on with all the other problems, and possibilities, confronting us.
As for getting there: like a long, long line of leftist intellectuals from the 1960s onwards, Srnicek and Williams are very taken with the idea, going back to Gramsci, that the key to achieving socialism is to first achieve ideological "hegemony". To put it crudely, this means trying to make your ideas such broadly-diffused, widely-accepted, scarcely-noticed common notions that when madmen in authority channel voices from the air, they channel you. (In passing: Occupy may have done nothing to reduce economic inequality, but Gramsci's success as a strategist may be measured by the fact that he wrote in a Fascist prison.) Part of this drive for hegemony is pushing for new ideas in economics --- desirable in itself, but they are sure in advance of what inquiry should find*. Beyond this, and saying that many tactics will need to be tried out by a whole "ecology" of organizations and groups, they're pretty vague. There's some wisdom here --- who could propound a detailed plan to get to post-work post-capitalism? --- but also more ambiguity than they acknowledge. Even if a drive for a generous basic income (and all that would go with it) succeeds, the end result might not be anything like the sort of post-capitalism Srnicek and Williams envisage, if only because what we learn and experience along the way might change what seems feasible and desirable. (This is a Popperian point against Utopian plans, but it can be put in other language quite easily**.) I think Srnicek and Williams might be OK with the idea that their desired future won't be realized, so long as some better future is, and that the important point is to get people on the left not to prefigure better worlds in occasional carnivals of defiance, but to try to make them happen. Saying that doing this will require organization, concrete demands, and leadership is pretty sensible, though they do disclaim trying to revive the idea of a vanguard party.
Large portions of the book are, unfortunately, given over to insinuating, without ever quite saying, that post-work is not just desirable and possible, but a historical necessity to which we are impelled by the inexorable development of capitalism, as foreseen by the Prophet. (They also talk about how Marx's actual scenario for how capitalism would develop, and end, not only has not come to pass yet, but is pretty much certain to never come to pass.) Large portions of the book are given over to wide-ranging discussions of lots of important issues, all of which, apparently, they grasp through the medium of books and articles published by small, left-wing presses strongly influenced by post-structuralism --- as it were, the world viewed through the Verso Books catalog. (Perry Anderson had the important advantage, as a writer and thinker, of being formed outside the rather hermetic subculture/genre he helped create; these two are not so lucky.) Now, I recognize that good ideas usually emerge within a community that articulates its own distinctive tradition, so some insularity can be all to the good. In this case, I am not all that far from the authors' tradition, and sympathetic to it. But still, the effect of these two (overlapping) writerly defects is that once the book announced a topic, I often felt I could have written the subsequent passage myself; I was never surprised by what they had to say. Finishing this was a slog.
I came into the book a mere Left Popperian and market socialist, friendly to the idea of a basic income, and came out the same way. My mind was not blown, or even really changed, about anything. But it might encourage some leftist intellectuals to think constructively about the future, which would be good.
Shorter: Read Peter Frase's Four Futures instead.
*: They are quite confident that modern computing lets us have an efficient planned economy, a conclusion they support not by any technical knowledge of the issue but by citations to essays in literary magazines and collections of humanistic scholarship. As I have said before, I wish that were the case, if only because it would be insanely helpful for my own work, but I think that's just wrong. In any case, this is an important point for socialists, since it's very consequential for the kind of socialism we should pursue. It should be treated much more seriously, i.e., rigorously and knowledgeably, than they do. Fortunately, a basic income is entirely compatible with market socialism, as are other measures to ensure that people don't have to sell their labor power in order to live.
**: My own two-minute stab at making chapter 9 of The Open Society and Its Enemies sound suitable for New Left Review: "The aims of the progressive forces, always multifarious, develop dialectically in the course of the struggle to attain them. Those aims can never be limited by the horizon of any abstract, pre-conceived telos, even one designated 'socialism', but will always change and grow through praxis." (I admit "praxis" may be a bit behind the times.) ^
A. E. Stallings, Like: Poems
Beautiful stuff from one of my favorite contemporary poets. "Swallows" and "Epic Simile" give a fair impression of what you'll find. This also includes a lot of the poems discussed in Cynthia Haven's "Crossing Borders" essay.

Books to Read While the Algae Grow in Your Fur; Enigmas of Chance; Data over Space and Time; The Progressive Forces; The Commonwealth of Letters

Posted at December 31, 2018 23:59 | permanent link

December 28, 2018

Data over Space and Time: Self-Evaluation and Lessons Learned

Attention conservation notice: Academic navel-gazing, about a class you didn't take, in a subject you don't care about, at a university you don't attend.

Well, that went better than it could have, especially since it was the first time I've taught a new undergraduate course since 2011.

Some things that worked well:

  1. The over-all choice of methods topics --- combining descriptive/exploratory techniques and generative models and their inference. Avoiding the ARIMA alphabet soup as much as possible both played to my prejudices and avoided interference with a spring course.
  2. The over-all kind and range of examples (mostly environmental and social-historical) and the avoidance of finance. I could have done some more economics, and some more neuroscience.
  3. The recurrence of linear algebra and eigen-analysis (in smoothing, principal components, linear dynamics, and Markov processes) seems to have helped some students, and at least not hurt the others.
  4. The in-class exercises did wonders for attendance. Whether doing the exercises, or that attendance, improved learning is hard to say. Some students specifically praised them in their anonymous feedback, and nobody complained.

Some things did not work so well:

  1. I was too often late in posting assignments, and too many of them had typos when first posted. (This was a real issue with the final. To any of the students reading this: my apologies once again.) I also had a lot of trouble calibrating how hard the assignments would be, so the opening problem sets were a lot more work than the later ones.
    (In my partial defense about late assignments, there were multiple problem sets which I never posted, after putting a lot of time into them, either because my initial idea proved much too complicated for this course when fully executed, or because I was, despite much effort, simply unable to reproduce published papers*. Maybe next time, if there is a next time, these efforts can see the light of day.)
  2. I let the grading get really, really behind the assignments. (Again, my apologies.)
  3. I gave less emphasis to spatial and spatio-temporal models in the second, generative half of the course than they really deserve. E.g., Markov random fields and cellular automata (and kin) probably deserve at least a lecture each, perhaps more.
  4. I didn't build in enough time for review in my initial schedule, so I ended up making some painful cuts. (In particular, nonlinear autoregressive models.)
  5. My attempt to teach Fourier analysis was a disaster. It needs much more time and preparation than I gave it.
  6. We didn't get very much at all into how to think your way through building a new model, as opposed to estimating, simulating, predicting, checking, etc., a given model.
  7. I have yet to figure out how to get the students to do the readings before class.

If I got to teach this again, I'd keep the same over-all structure, but re-work all the assignments, and re-think, very carefully, how much time I spent on which topics. Some of these issues would of course go away if there were a second semester to the course, but that's not going to happen.

*: I now somewhat suspect that one of the papers I tried to base an assignment on is just wrong, or at least could not have done the analysis the way it says it did. This is not the first time I've encountered something like this through teaching... ^

Data over Space and Time

Posted at December 28, 2018 11:22 | permanent link

November 30, 2018

Books to Read While the Algae Grow in Your Fur, November 2018

Attention conservation notice: I have no taste. I also have no qualifications to discuss the history of photography, or of black Pittsburgh.

Cheryl Finley, Laurence Glasco and Joe W. Trotter, with an introduction by Deborah Willis, Teenie Harris, Photographer: Image, Memory, History
A terrific collection of Harris's photos of (primarily) Pittsburgh's black community from the 1930s to the 1970s, with good biographical and historical-contextual essays.
Disclaimer: Prof. Trotter is also on the faculty at CMU, but I don't believe we've ever actually met.
Ben Aaronovitch, Lies Sleeping
Mind candy: the latest installment in the long-running supernatural-procedural mystery series, where the Folly gets tangled up with the Matter of Britain.
Charles Stross, The Labyrinth Index
Mind candy: the latest installment in Stross's long-running Lovecraftian spy-fiction series. I imagine a novel about the US Presidency being taken over by a malevolent occult force seemed a lot more amusing before 2016, when this must have been mostly written. It's a good installment, but only suitable for those already immersed in the story.
Anna Lee Huber, The Anatomist's Wife and A Brush with Shadows
Mind-candy, historical mystery flavor. These are the first and sixth books in the series, because I couldn't lay hands on 2--5, but I will.

Books to Read While the Algae Grow in Your Fur; Scientifiction and Fantastica; Pleasures of Detection, Portraits of Crime; Tales of Our Ancestors; Cthulhiana; Heard About Pittsburgh, PA

Posted at November 30, 2018 23:59 | permanent link

November 13, 2018

Data over Space and Time, Lecture 20: Markov Chains


(.Rmd)

Data over Space and Time

Posted at November 13, 2018 16:50 | permanent link

November 12, 2018

Course Announcement: Advanced Data Analysis (36-402/36-608), Spring 2019

Attention conservation notice: Announcement of an advanced undergraduate course at a school you don't attend in a subject you don't care about.

I will be teaching 36-402/36-608, Advanced Data Analysis, in the spring.

This will be the seventh time I'll have taught it, since I took it over and re-vamped it in 2011. The biggest change from previous iterations will be in how I'll be handling class-room time, by introducing in-class small-group exercises. I've been doing this in this semester's class, and it seems to at least not be hurting their understanding, so we'll see how well it scales to a class with four or five times as many students.

(The other change is that by the time the class begins in January, the textbook will, inshallah, be in the hands of the publisher. I've finished adding everything I'm going to add, and now it's a matter of cutting stuff, and fixing mistakes.)

Advanced Data Analysis from an Elementary Point of View

Posted at November 12, 2018 14:51 | permanent link


November 03, 2018

In Memoriam Joyce Fienberg

I met Joyce through her late husband Stephen, my admired and much-missed colleague. I won't pretend that she was a close friend, but she was a friend, and you could hardly hope to meet a kinder or more decent person. A massacre by a deluded bigot would be awful enough even if his victims had been prickly and unpleasant individuals. But that he murdered someone like Joyce --- five blocks from where I live --- makes it especially hard to take. I am too sad to have anything constructive to say, and too angry at living in a running morbid joke to remember her the way she deserves.

Posted at November 03, 2018 14:25 | permanent link

November 01, 2018

Data over Space and Time, Lecture 17: Simulation

Lecture 16 was canceled.


(.Rmd)

Data over Space and Time

Posted at November 01, 2018 13:00 | permanent link

October 31, 2018

Books to Read While the Algae Grow in Your Fur, October 2018

Attention conservation notice: I have no taste. I also have no qualifications to discuss corporate fraud.

John Carreyrou, Bad Blood: Secrets and Lies in a Silicon Valley Startup
This is a deservedly-famous story, told meticulously. It says some very bad things about the culture around Silicon Valley which made this fraud (and waste) possible. (To be scrupulously fair, investment companies with experience in medical devices and the like don't seem to have bought in.) It also says some very bad things about our elites more broadly, since lots of influential people who were in no position to know anything useful about whether Theranos could fulfill its promises endorsed them, apparently on the basis of will-to-believe and their own arrogance. (I hereby include by reference Khurana's book on the charisma of corporate CEOs, and Xavier Marquez's great post on charisma.)
The real heroes here are, of course, the people who quietly kept following through on established procedures and regulations, and refused to bend to considerable pressure.
Luca D'Andrea, Beneath the Mountain
Mind candy: in which a stranger investigates the secrets of a small, isolated community's past, for multiple values of "past".
Walter Jon Williams, Quillifer
Misadventures of a rogue in a fantasy world whose technology level seems to be about the 1500s in our world. Quillifer has some genuinely horrible things happen to him, and brings others on himself, but keeps bouncing back, and keeps his eye on various main chances (befitting the only law clerk I can think of in fantasy literature who isn't just cannon-fodder). I didn't like him, exactly, but I was definitely entertained.

Books to Read While the Algae Grow in Your Fur; Pleasures of Detection, Portraits of Crime; Scientifiction and Fantastica

Posted at October 31, 2018 23:59 | permanent link


October 18, 2018

Revised and Extended Remarks at "The Rise of Intelligent Economies and the Work of the IMF"

Attention conservation notice: 2700+ words elaborating a presentation from a non-technical conference about AI, where the conversation devolved to "blockchain" within an hour; includes unexplained econometric jargon. Life is short, and you should have more self-respect.

I got asked to be a panelist at a November 2017 symposium at the IMF on machine learning, AI and what they can do to/for the work of the Fund and its sister organizations, specifically the work of its economists. What follows is an amplification and rationalization of my actual remarks. It is also a reconstruction, since my notes were on an only-partially-backed-up laptop stolen in the next month. (Roman thieves are perhaps the most dedicated artisans in Italy, plying their trade with gusto on Christmas Eve.) Posted now because reasons.

On the one hand, I don't have any products to sell, or even much of a consulting business to promote, so I feel a little bit out of place. But against that, there aren't many other people who work on machine learning who read macro and development economics for fun, or have actually estimated a DSGE model from data, so I don't feel totally fraudulent up here.

We've been asked to talk about AI and machine learning, and how they might impact the work of the Fund and related multi-lateral organizations. I've never worked for the Fund or the World Bank, but I do understand a bit about how you economists work, and it seems to me that there are three important points to make: a point about data, a point about models, and a point about intelligence. The first of these is mostly an opportunity, the second is an opportunity and a clarification, and the third is a clarification and a criticism --- so you can tell I'm an academic by taking the privilege of ending on a note of skepticism and critique, rather than being inspirational.

I said my first point is about data --- in fact, it's about what, a few turns of the hype cycle ago, we'd have called "big data". Economists at the Fund typically rely for data on the output of official statistical agencies from various countries. This is traditional, this sort of reliance on the part of economists actually pre-dates the Bretton Woods organizations, and there are good reasons for it. With a few notable exceptions, those official statistics are prepared very carefully, with a lot of effort going in to making them both precise and accurate, as well as comparable over time and, increasingly, across countries.

But even these official statistics have their issues, for the purposes of the Fund: they are slow, they are noisy, and they don't quite measure what you want them to.

The issue of speed is familiar: they come out annually, maybe quarterly or monthly. This rate is pretty deeply tied to the way the statistics are compiled, which in turn is tied to their accuracy --- at least for the foreseeable future. It would be nice to be faster.

The issue of noise is also very real. Back in 1950, the great economist Oskar Morgenstern, the one who developed game theory with John von Neumann, wrote a classic book called On the Accuracy of Economic Observations, where he found a lot of ingenious ways of checking the accuracy of official statistics, e.g., looking at how badly they violated accounting identities. To summarize very crudely, he concluded that lots of those statistics couldn't possibly be accurate to better than 10%, maybe 5% --- and this was for developed countries with experienced statistical agencies. I'm sure that things are better now --- I'm not aware of anyone exactly repeating his efforts, but it'd be a worthwhile exercise --- maybe the error is down to 1%, but that's still a lot, especially to base policy decisions on.

The issue of measurement is the subtlest one. I'm not just talking about measurement noise now. Instead, it's that the official statistics are often tracking variables which aren't quite what you want[1]. Your macroeconomic model might, for example, need to know about the quantity of labor available for a certain industry in a certain country. But the theory in that model defines "quantity of labor" in a very particular way. The official statistical agencies, on the other hand, will have their own measurements of "quantity of labor", and none of those need to have exactly the same definitions. So even if we could magically eliminate measurement errors, just plugging the official value for "labor" into your model isn't right; that's just an approximate, correlated quantity.

So: official statistics, which is what you're used to using, are the highest-quality statistics, but they're also slow, noisy, and imperfectly aligned with your models. There hasn't been much to be done about that for most of the life of the Fund, though, because what was your alternative?

What "big data" can offer is the possibility of a huge number of noisy, imperfect measures. Computer engineers --- the people in hardware and systems and databases, not in machine learning or artificial intelligence --- have been making it very, very cheap and easy to record, store, search and summarize all the little discrete facts about our economic lives, to track individual transactions and aggregate them into new statistics. (Moving so much of our economic lives, along with all the rest of our lives, on to the Internet only makes it easier.) This could, potentially, give you a great many aggregate statistics which tell you, in a lot of detail and at high frequency, about consumption, investment, employment, interest rates, finance, and so on and so forth. There would be lots of noise, but having a great many noisy measurements could give you a lot more information. It's true that basically none of them would be well-aligned with the theoretical variables in macro models, but there are well-established statistical techniques for using lots of imperfect proxies to track a latent, theoretical variable, coming out of factor-analysis and state-space modeling. There have been some efforts already to incorporate multiple imperfect proxies into things like DSGE models.

I don't want to get carried away here. The sort of ubiquitous recording I'm talking about is obviously more advanced in richer countries than in poorer ones --- it will work better in, say, South Korea, or even Indonesia, than in Afghanistan. It's also unevenly distributed within national economies. Getting hold of the data, even in summary forms, would require a lot of social engineering on the part of the Fund. The official statistics, slow and imperfect as they are, will always be more reliable and better aligned to your models. But, wearing my statistician hat, my advice to economists here is to get more information, and this is one of the biggest ways you can expand your information set.

The second point is about models --- it's a machine learning point. The dirty secret of the field, and of the current hype, is that 90% of machine learning is a rebranding of nonparametric regression. (I've got appointments in both ML and statistics so I can say these things without hurting my students.) I realize that there are reasons why the overwhelming majority of the time you work with linear regression, but those reasons aren't really about your best economic models and theories. Those reasons are about what has, in the past, been statistically and computationally feasible to estimate and work with. (So they're "economic" reasons in a sense, but about your own economies as researchers, not about economics-as-a-science.) The data will never completely speak for itself; you will always need to bring some assumptions to draw inferences. But it's now possible to make those assumptions vastly weaker, and to let the data say a lot more. Maybe everything will turn out to be nice and linear, but even if that's so, wouldn't it be nice to know that, rather than to just hope?

There is of course a limitation to using more flexible models, which impose fewer assumptions, which is that it makes it easier to "over-fit" the data, to create a really complicated model which basically memorizes every little accident and even error in what it was trained on. It may not, when you examine it, look like it's just memorizing; it may seem to give an "explanation" for every little wiggle. It will, in effect, say things like "oh, sure, normally the central bank raising interest rates would do X, but in this episode it was also liberalizing the capital account, so Y". But the way to guard against this, and to make sure your model, or the person selling you their model, isn't just BS-ing, is to check that it can actually predict out-of-sample, on data it didn't get to see during fitting. This sort of cross-validation has become second nature for (honest and competent) machine learning practitioners.
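As a cartoon of what that out-of-sample check looks like in practice, here is a short R sketch on fabricated data (the data-generating process and the names are mine); a linear fit and a basic nonparametric smoother are both trained on part of the data and then scored on the held-out rest.

    ## Out-of-sample check: does the flexible model actually predict better?
    set.seed(7)
    n <- 300
    d <- data.frame(x = runif(n, 0, 10))
    d$y <- sin(d$x) + 0.1 * d$x + rnorm(n, sd = 0.3)   # mildly nonlinear truth

    train <- sample(n, 200)                            # fit on 200 points, hold out 100
    fit.lin  <- lm(y ~ x, data = d[train, ])
    fit.flex <- smooth.spline(d$x[train], d$y[train])  # a basic nonparametric smoother

    mse <- function(obs, pred) mean((obs - pred)^2)
    mse(d$y[-train], predict(fit.lin, newdata = d[-train, ]))   # linear, held-out error
    mse(d$y[-train], predict(fit.flex, d$x[-train])$y)          # flexible, held-out error

On data like these, the smoother's held-out error should sit close to the noise level while the straight line's is substantially larger; with genuinely linear data the linear model would win or roughly tie, which is exactly the sort of thing the check is there to tell you.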

This is also where lots of ML projects die. I think I can mention an effort at a Very Big Data Indeed Company to predict employee satisfaction and turn-over based on e-mail activity, which seemed to work great on the training data, but turned out to be totally useless on the next year's data, so its creators never deployed it. Cross-validation should become second nature for economists, and you should be very suspicious of anyone offering you models who can't tell you about their out-of-sample performance. (If a model can't even predict well under a constant policy, why on Earth would you trust it to predict responses to policy changes?)

Concretely, going forward, organizations like the Fund can begin to use much more flexible modeling forms, rather than just linear models. The technology to estimate them and predict from them quickly now exists. It's true that if you fit a linear regression and a non-parametric regression to the same data set, the linear regression will always have tighter confidence sets, but (as Jeffrey Racine says) that's rapid convergence to a systematically wrong answer. Expanding the range and volume of data used in your economic modeling, what I just called the "big data" point, will help deal with this, and there's a tremendous amount of on-going progress in quickly estimating flexible models on truly enormous data sets. You might need to hire some people with Ph.D.s in statistics or machine learning who also know some economics --- and by coincidence I just so happen to help train such people! --- but it's the right direction to go, to help your policy decisions be dictated by the data and by good economics, and not by what kinds of models were computationally feasible twenty or even sixty years ago.

The third point, the most purely cautionary one, is the artificial intelligence point. This is that almost everything people are calling "AI" these days is just machine learning, which is to say, nonparametric regression. Where we have seen breakthroughs is in the results of applying huge quantities of data to flexible models to do very particular tasks in very particular environments. The systems we get from this are really good at that, but really fragile, in ways that don't mesh well with our intuition about human beings or even other animals. One of the great illustrations of this is what are called "adversarial examples", where you can take an image that a state-of-the-art classifier thinks is, say, a dog, and by tweaking it in tiny ways which are imperceptible to humans, you can make the classifier convinced it's, say, a car. On the other hand, you can distort that picture of a dog into something unrecognizable by any person while the classifier is still sure it's a dog.

If we have to talk about our learning machines psychologically, try not to describe them as automating thought or (conscious) intelligence, but rather as automating unconscious perception or reflex action. What's now called "deep learning" used to be called "perceptrons", and it was very much about trying to do the same sort of thing that low-level perception in animals does, extracting features from the environment which work in that environment to make a behaviorally-relevant classification[2] or prediction or immediate action. This is the sort of thing we're almost never conscious of in ourselves, but is in fact what a huge amount of our brains are doing. (We know this because we can study how it breaks down in cases of brain damage.) This work is basically inaccessible to consciousness --- though we can get hints of it from visual illusions, and from the occasions where it fails, like the shock of surprise you feel when you put your foot on a step that isn't there. This sort of perception is fast, automatic, and tuned to very, very particular features of the environment.

Our current systems are like this, but even more finely tuned to narrow goals and contexts. This is why they have such alien failure-modes, and why they really don't have the sort of flexibility we're used to from humans or other animals. They generalize to more data from their training environment, but not to new environments. If you take a person who's learned to play chess and give them a 9-by-9 board with an extra rook on each side, they'll struggle but they won't go back to square one; AlphaZero will need to relearn the game from scratch. Similarly for the video-game learners, and just about everything else you'll see written up in the news, or pointed out as a milestone in a conference like this. Rodney Brooks, one of the Revered Elders of artificial intelligence, put this nicely recently, saying that the performances of these systems give us a very misleading idea of their competences[3].

One reason these genuinely-impressive and often-useful performances don't indicate human competences is that these systems work in very alien ways. So far as we can tell[4], there's little or nothing in them that corresponds to the kind of explicit, articulate understanding human intelligence achieves through language and conscious thought. There's even very little in them of the un-conscious, in-articulate but abstract, compositional, combinatorial understanding we (and other animals) show in manipulating our environment, in planning, in social interaction, and in the structure of language.

Now, there are traditions of AI research which do take inspiration from human (and animal) psychology (as opposed to a very old caricature of neurology), and try to actually model things like the structure of language, or planning, or having a body which can be moved in particular ways to interact with physical objects. And while these do make progress, it's a hell of a lot slower than the progress in systems which are just doing reflex action. That might change! There could be a great wave of incredible breakthroughs in AI (not ML) just around the corner, to the point where it will make sense to think about robots actually driving shipping trucks coast to coast, and so forth. Right now, not only is really autonomous AI beyond our grasp, we don't even have a good idea of what we're missing.

In the meanwhile, though, lots of people will sell their learning machines as though they were real AI, with human-style competences, and this will lead to a lot of mischief and (perhaps unintentional) fraud, as the machines get deployed in circumstances where their performance just won't be anything like what's intended. I half suspect that the biggest economic consequence of "AI" for the foreseeable future is that companies will be busy re-engineering human systems --- warehouses and factories, but also hospitals, schools and streets --- so as to better accommodate their machines.

So, to sum up:

  • The "big data" point is that there's a huge opportunity for the Fund, the Bank, and their kin to really expand the data on which they base their analyses and decisions, even if you keep using the same sorts of models.
  • The "machine learning" point is that there's a tremendous opportunity to use more flexible models, which do a better job of capturing economic, or political-economic, reality.
  • The "AI" point is that artificial intelligence is the technology of the future, and always will be.

Manual trackback: New Savanna; Brad DeLong

The Dismal Science; Enigmas of Chance


  1. Had there been infinite time, I like to think I'd have remembered that Haavelmo saw this gap very clearly, back in the day. Fortunately, J. W. Mason has a great post on this.^

  2. The classic paper on this, by, inter alia, one of the inventors of neural networks, was called "What the frog's eye tells the frog's brain". This showed how, already in the retina, the frog's nervous system picked out small-dark-dots-moving-erratically. In the natural environment, these would usually be flies or other frog-edible insects.^

  3. Distinguishing between "competence" and "performance" in this way goes back, in cognitive science, at least to Noam Chomsky; I don't know whether Uncle Noam originated the distinction.^

  4. The fact that I need a caveat-phrase like this is an indication of just how little we understand why some of our systems work as well as they do, which in turn should be an indication that nobody has any business making predictions about how quickly they'll advance.^

Posted at October 18, 2018 23:30 | permanent link

Data over Space and Time, Lectures 9--13: Filtering, Fourier Analysis, African Population and Slavery, Linear Generative Models

I have fallen behind on posting announcements for the lectures, and I don't feel like writing five of these at once (*). So I'll just list them:

  1. Separating Signal and Noise with Linear Methods (a.k.a. the Wiener filter and seasonal adjustment; .Rmd)
  2. Fourier Methods I (a.k.a. a child's primer of spectral analysis; .Rmd)
  3. Midterm review
  4. Guest lecture by Prof. Patrick Manning: "African Population and Migration: Statistical Estimates, 1650--1900" [PDF handout]
  5. Linear Generative Models for Time Series (a.k.a. the eigendecomposition of the evolution operator is the source of all knowledge; .Rmd)
  6. Linear Generative Models for Spatial and Spatio-Temporal Data (a.k.a. conditional and simultaneous autoregressions; .Rmd)

*: Yes, this is a sign that I need to change my workflow. Several readers have recommended Blogdown, which looks good, but which I haven't had a chance to try out yet.

Data over Space and Time

Posted at October 18, 2018 22:49 | permanent link

September 30, 2018

Books to Read While the Algae Grow in Your Fur, September 2018

Attention conservation notice: I have no taste. I also have no qualifications to discuss geography, the alt-right, 19th century American history, political philosophy, or the life and works of Joseph Conrad.

Gilbert Seldes, The Stammering Century
A sympathetic, at times even loving, account of selected 19th century American cranks, and crank movements, tracing them all back to Jonathan Edwards, both in the inflection he gave to Calvinism, and his cultivating outbreaks of enthusiasm. Strongly recommended to those interested in weird Americana, and, of course, psychoceramics.
Stanley Fish, Save the World on Your Own Time
A plea to university faculty to teach their subject matter, and just teach their subject matter, rather than use our teaching to try to "save the world". I am very sympathetic, but I don't think Fish is really fair to some fairly obvious counter-arguments:
  • Sometimes, the consensus of a discipline on a key subject matter runs smack in to a current political or cultural controversy --- e.g., evolutionary biology or climatology. To refuse to engage that is to fail in teaching our disciplines. To (as Fish suggests) "academicize" the point by studying the controversy itself fails to convey crucial points of our disciplines. (And anyway biologists and climatologists aren't sociologists or historians, and would be operating outside their domain of expertise.)
  • We may have options available to us in our teaching which are equally good from a disciplinary standpoint, but carry very different connotations. If I am teaching time series analysis, from a purely statistical viewpoint it doesn't matter whether I draw my examples from finance or from environmental toxicology, but it'd be (faux) naive to pretend that this choice wouldn't carry connotations to the students.
    Of course, what my students would make of those connotations is another matter. One of Fish's sounder points is that the way our students understand our lessons, especially the subtler aspects of them, is so far beyond our control, and so idiosyncratic from student to student, that it's futile to aim at changing their attitudes in the way some of us profess to do. (Fish didn't originate the line about "how am I supposed to indoctrinate my students when I can't even make them do the reading?", but I'm pretty sure he'd endorse it.) I might please myself by using environmental examples in my time-series class, and I might even fulfill a legitimate pedagogical purpose of showing the students something about the range of applicability of the methods, but I shouldn't fool myself that I am raising their consciousness.
  • At least since the medieval universities were founded to train professionals in medicine, law and theology, higher education has always had practical aims. American higher education was certainly never intended as the self-justifying pursuit of inutility which Fish longs for. So why not ask "useful for what?" (Cf.)
Now, this is a short book, and one can forgive a pamphlet for not being a comprehensive treatise, and in particular for not considering all possible ramifications and objections. I become less forgiving, however, when a short book has a lot of space given over to, among other things,
  • An account of what sounds like its author's nervous breakdown after he gave up being a dean;
  • A loving description of the author's frankly-eccentric approach to teaching composition and syntax by making his students invent an artificial language (not much burdened by knowledge of linguistics);
  • A disquisition on how, because Milton wrote poems, he couldn't also have been trying to make political or theological points in his poetry, because (you guessed it) poetry is a self-referential, self-justifying activity [*].
and so on.
I feel like Fish probably has it in him to write a better-proportioned book on these themes, which engaged better with objections; I'd be interested to read it.
*: This is a frankly astonishing argument from someone of Fish's obvious erudition; I can't decide whether it's more rhetorically or historically ill-informed. If poetry can be used to write astronomy textbooks, it can be used to score theological points.
Maya Jasanoff, The Dawn Watch: Joseph Conrad in a Global World
Part biography of Conrad, part exposition of his most important novels, part an effort to portray him as a prophet of a newly-globalizing world, and so connect him to our own time. I think it really works quite well on all fronts.
Daniel Dorling, Mark Newman and Anna Barford, The Atlas of the Real World: Mapping the Way We Live
A collection of interesting (if not always very uplifting) cartograms. Since Mark is a friend and collaborator, and once upon a time we wrote something using his cartogram-making technique, I won't pretend to objectivity, but I will say this is fascinating and I wish it could be perpetually updated. (Posted now because of my policy / compulsion of not recommending books until I've read them cover to cover.)
(I am, however, puzzled by the international-trade cartograms that use net exports or imports by industry; this seems very misleading when a lot of countries both export and import substantially in the same category.)
Mike Wendling, Alt-Right: From 4Chan to the White House
No great revelations, but a decent, straightforward journalistic account of the movement, or rather collection of more-or-less related and overlapping movements and tendencies, and some of the principal ideologues/grifters.
Owing to the vagaries of publication, this basically ends with Charlottesville, and with the conclusion that the movement is on its way to implosion. I suspect this is right for whatever attempt there was at a coherent movement of (sort-of) younger, (pseudo-) sophisticated people. As events since then have amply shown, however, there is no shortage online of disorganized people spread somewhere on a spectrum from paranoia to frothing hatred, and encouraging each other to ever more elaborate delusions.
(Written before one of those fuckheads shot up my neighborhood and killed someone I cared about.)
Amy Gutmann, Identity in Democracy
This is calm and sensible, and a bit depressing to still be discussing a decade and a half later, when a lot of the topical examples are very dated. Curiously, from my point of view, the book takes which identities are politically relevant as given, rather than as endogenous to the political-cultural process.

Books to Read While the Algae Grow in Your Fur; Kith and Kin; Learned Folly; The Running-Dogs of Reaction; The Progressive Forces; The Commonwealth of Letters; Commit a Social Science; Writing for Antiquity; The Beloved Republic; Psychoceramica; Philosophy

Posted at September 30, 2018 23:59 | permanent link
