
Yet Another Information Matrix

27 Feb 2017 16:30

Suppose I have a stochastic process, \( \{X_t\} \), whose measure is (let's be imaginative) \( \mu \). Suppose further that I consider various models of this process, with measures \( p_{\theta} \), where \( \theta \in \Theta \), some nice index set. I am interested in the relative entropy rate, which is defined as \[ d(\theta,\mu) \equiv \lim_{n\rightarrow\infty}{\mathbf{E}_{\mu}\left[\log{\frac{\mu(X_n|X^{n-1}_1)}{p_{\theta}(X_n|X^{n-1}_1)}}\right]} \] I can assume this exists; what I'm interested in, for my purposes, is the relative entropy rate's matrix of second partial derivatives with respect to \( \theta \). Since the \( \mu \) term in the log ratio does not depend on \( \theta \), this is \[ \frac{\partial^2}{\partial \theta_i \partial \theta_j} d(\theta,\mu) = -\frac{\partial^2}{\partial \theta_i \partial \theta_j}\lim_{n\rightarrow\infty}{\mathbf{E}_{\mu}\left[\log{p_{\theta}(X_n|X^{n-1}_{1})}\right]} \]
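
For concreteness, here is a minimal numerical sketch of the object above, under simplifying assumptions of my own: the data are i.i.d. (so the rate reduces to the ordinary relative entropy per observation), \( \mu \) is a Student-t distribution, and \( p_{\theta} \) is a misspecified Gaussian family with \( \theta = \) (mean, log-variance). The matrix is then estimated by Monte Carlo under \( \mu \) plus central finite differences in \( \theta \).

```python
# A sketch under the assumptions stated above; the example distributions
# and all names here are illustrative, not part of the original note.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = stats.t.rvs(df=5, size=200_000, random_state=rng)  # i.i.d. draws from mu

def neg_mean_loglik(theta):
    """Monte Carlo estimate of -E_mu[log p_theta(X)].  The mu term in
    d(theta, mu) does not depend on theta, so it drops out of the Hessian."""
    mean, log_var = theta
    return -np.mean(stats.norm.logpdf(x, loc=mean, scale=np.exp(0.5 * log_var)))

def hessian_fd(f, theta, h=1e-3):
    """Central finite-difference Hessian of f at theta."""
    k = len(theta)
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            ei, ej = h * np.eye(k)[i], h * np.eye(k)[j]
            H[i, j] = (f(theta + ei + ej) - f(theta + ei - ej)
                       - f(theta - ei + ej) + f(theta - ei - ej)) / (4 * h**2)
    return H

# Evaluate at a rough pseudo-truth: a Gaussian matching the data's moments.
theta = np.array([x.mean(), np.log(x.var())])
print(hessian_fd(neg_mean_loglik, theta))
```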

If the expectation were taken with respect to \( p_{\theta} \), and not \( \mu \), then this would just be the Fisher information matrix, or rather the asymptotic increment to the Fisher information matrix with each new observation, which I've heard called the "Fisher information rate". (Well, I'd also have to be willing to move the differentiation inside the limit and the expectation; I can assume that things are regular enough for that to work.) But, as you can see, it is not the Fisher information rate, because I'm evaluating the likelihood under one measure but taking expectations with respect to another. So, what is this called? I can't have invented it! Halbert White's Estimation, Inference and Specification Analysis, while it fills me with interesting ideas about mis-specified models, is not illuminating on this matter.
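
To spell out why swapping the measures matters: under the usual regularity conditions, the information identity \[ -\mathbf{E}_{p_{\theta}}\left[\frac{\partial^2}{\partial \theta_i \partial \theta_j}\log{p_{\theta}(X_n|X^{n-1}_1)}\right] = \mathbf{E}_{p_{\theta}}\left[\frac{\partial \log{p_{\theta}(X_n|X^{n-1}_1)}}{\partial \theta_i}\,\frac{\partial \log{p_{\theta}(X_n|X^{n-1}_1)}}{\partial \theta_j}\right] \] equates the expected Hessian of the log-likelihood with the expected outer product of the scores. Taking expectations under \( \mu \neq p_{\theta} \) breaks this identity, so the two matrices have to be tracked separately; unless I misremember, they are the \( A(\theta) \) and \( B(\theta) \) of White's sandwich covariance \( A^{-1} B A^{-1} \).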


Previous versions: 2004-02-12 14:18

