Notebooks

Information in Games and Decision-Making

27 Feb 2017 16:30

The sense in which "information" is used in decision theory and game theory, and so in economics, seems to be quite different from the way it's used in information theory as such. In the latter, "information" is a numerical property of a random variable, or of a relationship between random variables, tied to coding or forecasting --- how much the value of one variable could be used to shorten the encoding of another, or to tighten the best-achievable prediction interval for it. In decision problems, "information" is used in a sense closer to what someone knows --- into which of several possible alternatives does the state of the world fall? This seems to be most generally formalized not in terms of entropy or related quantities, but rather in terms of a sigma algebra. (I am not going to explain sigma algebras here.) That is, the agent is taken to condition the value of all random variables on some sigma algebra or another, and if the sigma algebra grows, then the agent knows more. Thus the agent knows exactly the value of any function which is measurable w.r.t. that algebra, and has a certain conditional distribution for all the others. Since not all sigma algebras are comparable, it may not be possible to say which of two agents knows more.
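To fix ideas, here is the toy finite version of this, where every sigma algebra is generated by a partition of the state space and conditioning just means averaging within cells; all the numbers and names below are invented purely for illustration.

    # A toy illustration (finite case, made-up numbers): sigma algebras are
    # generated by partitions, and conditioning on a finer partition pins
    # down more functions of the state exactly.
    states = [0, 1, 2, 3]                      # possible states of the world
    prob   = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}  # prior over states

    coarse = [{0, 1}, {2, 3}]    # the agent only learns "low" vs. "high"
    fine   = [{0}, {1}, {2, 3}]  # a refinement: strictly more information

    def cond_exp(f, partition, s):
        """E[f | partition] at state s: average f over the cell containing s."""
        cell = next(c for c in partition if s in c)
        p_cell = sum(prob[t] for t in cell)
        return sum(prob[t] * f(t) for t in cell) / p_cell

    f = lambda s: s ** 2   # some random variable, i.e., a function of the state

    for s in states:
        print(s, f(s), cond_exp(f, coarse, s), cond_exp(f, fine, s))
    # Under the fine partition the agent knows f exactly on the singleton
    # cells {0} and {1}; under the coarse one it only knows cell averages.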

All well and good, but I wonder what the relationships are between this sort of algebraic information and entropic information. After all, quantities like the Kullback-Leibler divergence (relative entropy) play an important role in problems like hypothesis testing, which is equally a decision-theoretic problem. I'd also like to know what we could intelligibly say about the value of information in a decision problem. Given a change from one conditioning algebra to another, we can imagine computing, for any given random variable, the relative entropy between its old and new conditional distributions. Could we somehow bound this over all variables, and use that bound to give a more quantitative idea of how much information the agent has acquired? Again, a natural way to talk about the value of information in a decision problem would be to examine the distribution of future rewards conditional on the informational algebra. Could one show that there is always some strategy under which the expected reward is non-decreasing as the algebra grows? Could the change in expected reward be related to the KL divergence between the old and new conditional distributions of the reward?
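As a sanity check on the next-to-last question, here is a toy computation in the finite case (states, actions, and rewards all made up): the optimal expected reward under a refined partition should be at least the optimal expected reward under the coarser one, since any strategy available with the coarse partition is still available with the fine one. This says nothing about the KL part of the question, of course.

    # A small numerical check (finite states and actions, made-up rewards)
    # of the claim that the best achievable expected reward cannot fall
    # when the conditioning partition is refined.
    states  = [0, 1, 2, 3]
    prob    = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}
    actions = ["a", "b"]
    reward  = {("a", 0): 1.0, ("a", 1): 0.0, ("a", 2): 2.0, ("a", 3): 0.5,
               ("b", 0): 0.0, ("b", 1): 1.5, ("b", 2): 0.0, ("b", 3): 2.0}

    coarse = [{0, 1}, {2, 3}]
    fine   = [{0}, {1}, {2, 3}]   # refines coarse

    def best_expected_reward(partition):
        """In each cell, pick the action with the highest conditional expected
        reward; then average the resulting rewards over cells."""
        total = 0.0
        for cell in partition:
            p_cell = sum(prob[s] for s in cell)
            best = max(sum(prob[s] * reward[(a, s)] for s in cell) / p_cell
                       for a in actions)
            total += p_cell * best
        return total

    print(best_expected_reward(coarse))  # 1.1 with these numbers
    print(best_expected_reward(fine))    # 1.2: refinement can only help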

These have the sound of questions people worked out the answers to a long time ago...

