Blogging will stay sporadic while I struggle to get enough ahead of the 96 students in ADA that I can devote myself to contemplating the mysteries of the universe and helping young minds develop their own powers. In the meanwhile, if you want more:
For example, one open question now is: How can an Artificial Intelligence do statistics? In the old-fashioned view of Bayesian data analysis as inference-within-a-supermodel, it's simple enough: an AI (or a brain) just runs a Stan-like program to learn from the data and make predictions as necessary. But in a modern view of Bayesian data analysis, iterating the steps of model-building, inference-within-a-model, and model-checking, it's not quite clear how the AI works. It needs not just an inference engine, but also a way to construct new models and a way to check models. Currently, those steps are performed by humans, but the AI would have to do them itself, without the aid of a "homunculus" to come up with new models or check the fit of existing ones. This philosophical quandary points to new statistical methods, for example a language-like approach to recursively creating new models from a specified list of distributions and transformations, and an automatic approach to checking model fit, based on some way of constructing quantities of interest and evaluating their discrepancies from simulated replications.

I also don't know how to teach a computer to do applied statistics, obviously, or else I'd be doing it. (Or trying to, teaching permitting.) My guess is that chunks of the come-up-with-new-models part can be done through evolutionary processes. (What's Bayesian updating itself, after all?) As for model checking, there are (at least) two highly non-trivial issues. One is deciding which aspects of the model to check: strategy, or at least tactics, in the choice of test. This somehow feels like the most important piece, but also the one where I have the least notion of how to articulate what a good data analyst does. The other, perhaps less vital, direction for automating model checking is devising tests which don't just say "the model is broken" (when, and only when, it is broken), but at least hint at how to fix the model. It's this which attracts me both to Neyman smooth tests (for distributions) and to the PC algorithm (for conditional independence graphs). In both cases, when they tell you to junk your model, they give you a very strong indication of how it can be improved. We tried for something similar in CSSR; with what success, it's not for me to say.
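To make the "language-like approach" a bit more concrete, here is a minimal sketch of what recursively creating models from a specified list of distributions and transformations might look like: a tiny depth-limited grammar that grows model expressions at random. Everything here (the function name, the particular inventory of distributions and transformations) is a hypothetical illustration, not anyone's actual proposal.

```python
import random

# Illustrative building blocks, not a canonical inventory: primitive
# distributions and transformations for combining sub-models.
DISTRIBUTIONS = ["normal(mu, sigma)", "lognormal(mu, sigma)", "t(nu, mu, sigma)"]
TRANSFORMS = ["log({x})", "exp({x})", "{x} + {y}", "{x} * {y}"]

def grow_expression(depth, rng):
    """Recursively build a model expression: with some probability stop
    at a primitive distribution, otherwise wrap sub-expressions in a
    transformation. The depth limit guarantees generation terminates."""
    if depth == 0 or rng.random() < 0.4:
        return rng.choice(DISTRIBUTIONS)
    template = rng.choice(TRANSFORMS)
    return template.format(x=grow_expression(depth - 1, rng),
                           y=grow_expression(depth - 1, rng))

rng = random.Random(42)
for _ in range(3):
    print(grow_expression(depth=3, rng=rng))
```

An evolutionary process of the kind gestured at above would then amount to mutating and recombining such expressions, scored by how well they survive model checking.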
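And a sketch of the automatic-checking side: draw replicated data sets from the fitted model, compute a quantity of interest on each, and see how the observed value compares. This is an ordinary posterior-predictive-style check; the Gaussian model, the lognormal data, and the choice of skewness as the test quantity are all assumptions made for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Observed data: secretly lognormal, so a Gaussian model should fail
# on skewness even while matching the mean and variance.
y = rng.lognormal(mean=0.0, sigma=0.5, size=200)

# "Fitted" Gaussian model (plug-in estimates stand in for posterior draws).
mu_hat, sigma_hat = y.mean(), y.std(ddof=1)

# Quantity of interest: sample skewness. Choosing *which* quantity to
# check is exactly the hard, human part discussed above.
reps = rng.normal(mu_hat, sigma_hat, size=(1000, y.size))
rep_skew = stats.skew(reps, axis=1)

# Predictive p-value: how extreme is the observed skewness among replications?
p = np.mean(rep_skew >= stats.skew(y))
print(f"observed skewness {stats.skew(y):.2f}, p ~ {p:.3f}")
```

The mechanical part (simulate, compute, compare) automates easily; what does not automate easily is the strategy of picking the discrepancy in the first place.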
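The appeal of Neyman smooth tests is that the statistic decomposes: transform the data through the model's CDF, expand against orthogonal polynomials on [0, 1], and each component flags a different kind of misfit (roughly location, scale, skewness, and so on). A sketch under the textbook simple-null setup, using normalized shifted Legendre polynomials; calibrating against a chi-squared distribution assumes the null is fully specified, so plugging in estimated parameters, as the example does, is a simplification.

```python
import numpy as np
from scipy import stats, special

def smooth_test(y, cdf, k=4):
    """Neyman smooth test of the simple null y ~ F, with F given by `cdf`.
    Under H0, u = F(y) is uniform on [0, 1]; the statistic is the sum of
    squared sample coefficients of the first k orthonormal shifted
    Legendre polynomials, asymptotically chi^2 with k degrees of freedom."""
    u = cdf(y)
    n = len(u)
    # phi_j(u) = sqrt(2j + 1) * P_j(2u - 1) is orthonormal on [0, 1].
    comps = np.array([
        np.sqrt(2 * j + 1) * special.eval_legendre(j, 2 * u - 1).sum() / np.sqrt(n)
        for j in range(1, k + 1)
    ])
    psi2 = np.sum(comps ** 2)
    return comps, psi2, stats.chi2.sf(psi2, df=k)

rng = np.random.default_rng(1)
y = rng.lognormal(sigma=0.5, size=500)
# Test against a (wrong) normal null with matched mean and sd.
comps, psi2, p = smooth_test(y, lambda x: stats.norm.cdf(x, y.mean(), y.std()))
print("components:", np.round(comps, 2), " Psi^2 =", round(psi2, 1), " p =", p)
# A large third component points at skewness, i.e. at *how* the model fails,
# which is the "hint at how to fix it" property praised above.
```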
The goal of this workshop is to bring together mathematicians, physicists, and social, information, and computer scientists to explore the dynamics of social learning and cultural evolution. Of particular interest will be ways of using data from social media and online experiments to address questions of interest, which include but are not limited to: How do individual attributes and cognitive constraints affect the dynamics and evolution of social behavior? How does network structure both within and between groups (including online networks and communities) affect social learning and cultural evolution? What are the similarities and differences between cultural and genetic evolution? How do social norms emerge and evolve? What are the main mechanisms driving social learning and the evolution of culture?

This will happen in January 2014, which currently seems like part of the same mythical future as fusion power, human-like AI, and Brazilian world domination, but unlike them all will actually arrive...
Manual trackbacks (of sorts): The Browser; Brad DeLong; Radio Free Europe/Radio Liberty (!); Marginal Revolution
Self-centered; Bayes, anti-Bayes; The Collective Use and Evolution of Concepts; The Progressive Forces
Posted at February 18, 2013 15:30 | permanent link