Upcoming Gigs: Bristol
I am giving two talks in Bristol next week about (not so coincidentally) my
two latest papers.
- "The Computational Structure of Spike Trains"
- Bristol Centre for Complexity Sciences,
SM2 in the School of Mathematics, 2 pm on Tuesday 9 February
- Abstract: Neurons perform computations, and convey the results of
those computations through the statistical structure of their output spike
trains. Here we present a practical method, grounded in the
information-theoretic analysis of prediction, for inferring a minimal
representation of that structure and for characterizing its
complexity. Starting from spike trains, our approach finds their causal state
models (CSMs), the minimal hidden Markov models or stochastic automata capable
of generating statistically identical time series. We then use these CSMs to
objectively quantify both the generalizable structure and the idiosyncratic
randomness of the spike train. Specifically, we show that the expected
algorithmic information content (the information needed to describe the spike
train exactly) can be split into three parts describing (1) the time-invariant
structure (complexity) of the minimal spike-generating process, which describes
the spike train statistically; (2) the randomness (internal entropy rate) of
the minimal spike-generating process; and (3) a residual pure noise term not
described by the minimal spike-generating process. We use CSMs to approximate
each of these quantities. The CSMs are inferred nonparametrically from the
data, making only mild regularity assumptions, via the causal state splitting
reconstruction algorithm. The methods presented here complement more
traditional spike train analyses by describing not only spiking probability and
spike train entropy, but also the complexity of a spike train's structure. We
demonstrate our approach using both simulated spike trains and experimental
data recorded in rat barrel cortex during vibrissa stimulation.
- Joint work with Rob Haslinger and Kristina Lisa Klinkner. (A schematic
version of the three-part decomposition is sketched below this list.)
- "Dynamics of Bayesian updating with dependent data and misspecified models"
- Statistics seminar, Department of Mathematics,
Seminar Room SM3, 2:15 pm on Friday 20 February
- Abstract: Much is now known about the consistency of Bayesian
nonparametrics with independent or Markovian data. Necessary conditions for
consistency include the prior putting enough weight on the right neighborhoods
of the true distribution; various sufficient conditions further restrict the
prior in ways analogous to capacity control in frequentist nonparametrics. The
asymptotics of Bayesian updating with misspecified models or priors, or
non-Markovian data, are far less well explored. Here I establish sufficient
conditions for posterior convergence when all hypotheses are wrong, and the
data have complex dependencies. The main dynamical assumption is the asymptotic
equipartition (Shannon-McMillan-Breiman) property of information theory. This,
plus some basic measure theory, lets me build a sieve-like structure for the
prior. The main statistical assumption concerns the compatibility of the prior
and the data-generating process, bounding the fluctuations in the
log-likelihood when averaged over the sieve-like sets. In addition to posterior
convergence, I derive a kind of large deviations principle for the posterior
measure, extending in some cases to rates of convergence, and discuss the
advantages of predicting using a combination of models known to be
wrong.
- (More on this paper; a schematic statement of the convergence result is
also sketched below.)
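For those who want the first abstract's decomposition in symbols: schematically,
and with notation that is mine here rather than necessarily the paper's (C for
the statistical complexity of the causal state model, h for its internal
entropy rate, r for the residual noise rate, and K(X_1^T) for the algorithmic
information content of a spike train of length T),
\[
  \mathbb{E}\!\left[ K(X_1^T) \right] \;\approx\; C \;+\; T\,h \;+\; T\,r ,
\]
i.e., a time-invariant structural term plus two terms growing linearly with the
length of the recording. This is only a gloss on the abstract, ignoring error
terms and regularity conditions.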
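Similarly for the second talk: writing d(θ) for the asymptotic per-observation
divergence rate of hypothesis θ from the data-generating process (which exists
under the Shannon-McMillan-Breiman assumption), and Π_T for the posterior
distribution after T observations, the headline result says, schematically,
that for reasonable sets A of hypotheses
\[
  \frac{1}{T} \log \Pi_T(A) \;\longrightarrow\;
  - \left( \operatorname*{ess\,inf}_{\theta \in A} d(\theta)
  \;-\; \operatorname*{ess\,inf}_{\theta \in \Theta} d(\theta) \right) ,
\]
so the posterior probability of A shrinks (or survives) exponentially fast,
according to how much faster the hypotheses in A diverge from the truth than
the best available alternatives do, even when every hypothesis on offer is
wrong. Again, the notation, and the suppression of the sieve construction and
the regularity conditions, are mine; see the paper for the real statement.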
I'll also be lecturing about prediction, self-organization and filtering to
the BCCS students.
I presume that I will not spend the whole week talking about
statistics, or working on the next round of papers and lectures; is there, I
don't know, someplace in Bristol to hear music or something?
Update, 8 February: canceled at the last minute, unfortunately, though with
some hope of rescheduling.
Self-centered;
Enigmas of Chance;
Complexity;
Minds, Brains, and Neurons
Posted at February 04, 2010 13:48 | permanent link