Intuitions on problems from Elements of Information Theory Chapter 2

This post is not about sharing solutions to the problems in the book. As the title indicates, it is rather about sharing intuitions or interpretations of some results mentioned or alluded to in the problems listed at the end of Chapter 2 of the book “Elements of Information Theory” (second edition). The chapter deals with the notions of entropy and mutual information.

Some of my colleagues and I were reading a rather large paper which draws on basic ideas from many fields, one of them being the notion of mutual information. We only had a grasp of its special case, the entropy of a random variable, and since the general notion was new to us, I decided to read at least the first few chapters of “Elements of Information Theory”.

1. meaning of conditional mutual information

Given three (discrete) random variables X, Y, Z, the conditional mutual information I(X; Y | Z) intuitively measures how much X tells you about Y (or, equivalently, how much Y tells you about X) once the value of Z is already known. Try to use this intuition to make sense of the chain rule for mutual information and the data-processing inequality.
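
For reference, the two statements read

    I(X_1, X_2, \dots, X_n; Y) = \sum_{i=1}^{n} I(X_i; Y \mid X_1, \dots, X_{i-1})

    X \to Y \to Z \text{ (a Markov chain)} \implies I(X; Z) \le I(X; Y)

With the intuition above, the chain rule says that what X_1, \dots, X_n jointly tell you about Y is the sum of what each X_i adds once the earlier ones are already known, and the data-processing inequality says that Z, being produced from Y without access to X, cannot tell you more about X than Y itself does.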

For intuitions about its special case, conditional entropy, see the previously linked post “Shannon’s entropy of random variables and partitions”.

2. comments on some selected problems from Chapter 2

2.1. Problem 2.6

The problem asks for examples where the mutual information between two random variables increases or decreases when we condition on a third one. The point is that I(X;Y | Z) is not monotone in Z. One should contrast this with the following two facts (a small numerical check of the non-monotonicity follows the list):

  • Conditional entropy never increases as you condition on more: H(X | Y, Z) \le H(X | Y).
  • Conditional mutual information never decreases as you enlarge either of the first two arguments: I(X; Y, W | Z) \ge I(X; Y | Z).
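
To make the non-monotonicity concrete, here is a small Python check (the helper functions and variable names are mine, not from the book). With X, Y independent fair bits and Z = X XOR Y, conditioning raises the mutual information from 0 to 1 bit; with X = Y = Z a single fair bit, conditioning lowers it from 1 bit to 0.

    from collections import defaultdict
    from math import log2

    def H(p):
        """Entropy in bits of a pmf given as a dict value -> probability."""
        return -sum(q * log2(q) for q in p.values() if q > 0)

    def marginal(joint, idx):
        """Marginal pmf of the coordinates listed in idx, from a joint pmf over tuples."""
        out = defaultdict(float)
        for outcome, q in joint.items():
            out[tuple(outcome[i] for i in idx)] += q
        return out

    def mutual_information(joint):
        """I(X;Y) for a joint pmf over pairs (x, y)."""
        return H(marginal(joint, (0,))) + H(marginal(joint, (1,))) - H(joint)

    def conditional_mutual_information(joint):
        """I(X;Y|Z) for a joint pmf over triples (x, y, z)."""
        return (H(marginal(joint, (0, 2))) + H(marginal(joint, (1, 2)))
                - H(joint) - H(marginal(joint, (2,))))

    # Example 1: X, Y independent fair bits, Z = X xor Y.
    xor = {(x, y, x ^ y): 0.25 for x in (0, 1) for y in (0, 1)}
    print(mutual_information(marginal(xor, (0, 1))))   # I(X;Y)   = 0.0
    print(conditional_mutual_information(xor))         # I(X;Y|Z) = 1.0

    # Example 2: X = Y = Z, a single fair bit copied three times.
    copy = {(b, b, b): 0.5 for b in (0, 1)}
    print(mutual_information(marginal(copy, (0, 1))))  # I(X;Y)   = 1.0
    print(conditional_mutual_information(copy))        # I(X;Y|Z) = 0.0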

2.2. Problem 2.14

Given two independent (discrete) random variables X and Y, the entropy of the sum X+Y is at least \max\{H(X),H(Y)\}. This is perhaps related to why the central limit theorem works. If you keep adding i.i.d. random variables one by one, the entropy of the sum never decreases, so aside from special cases we can expect the entropy to get higher and higher. High entropy of the sum alone may not tell you much about its probability distribution, but the mean and the variance of the sum are also known, and IIRC the normal distribution maximizes entropy under constraints on the mean and variance. This does not amount to a proof of the central limit theorem, but I think it illustrates the relation between large numbers of degrees of freedom (as in statistical mechanics) and maximum-entropy distributions.
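
One way to see the inequality, assuming X and Y are independent: conditioning never increases entropy, and once Y = y is fixed, the map x \mapsto x + y is an injective relabeling of the values of X, so

    H(X+Y) \ge H(X+Y \mid Y) = H(X \mid Y) = H(X)

and by symmetry H(X+Y) \ge H(Y), which together give H(X+Y) \ge \max\{H(X),H(Y)\}.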

Once you prove that the entropy of the independent sum X+Y is at least \max\{H(X),H(Y)\}, you can also see that the entropy of the independent sum taken modulo some number, say 6, is at least \max\{H(X),H(Y)\} as well: the same argument goes through, because adding a fixed y and reducing modulo 6 is still an injective relabeling of the values of X. If you have a biased die, you may rightfully expect that you can simulate a less biased die by throwing the same die several times and summing the outcomes modulo 6, and the entropy argument is one way to justify this expectation.
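
Here is a quick numerical illustration of the biased-die claim in Python (the particular bias below is made up for the example): the entropy of the sum modulo 6 of k independent throws increases with k toward \log_2 6 \approx 2.585 bits, the entropy of a fair die.

    from math import log2

    # A made-up biased die: face i+1 has probability bias[i].
    bias = [0.4, 0.2, 0.1, 0.1, 0.1, 0.1]

    def entropy(p):
        return -sum(q * log2(q) for q in p if q > 0)

    def add_mod6(p, q):
        """Distribution of (U + V) mod 6 for independent U ~ p, V ~ q on residues 0..5."""
        out = [0.0] * 6
        for i, pi in enumerate(p):
            for j, qj in enumerate(q):
                out[(i + j) % 6] += pi * qj
        return out

    dist = bias  # shifting faces 1..6 to residues 0..5 does not change any entropy
    for k in range(1, 9):
        print(k, round(entropy(dist), 4))
        dist = add_mod6(dist, bias)
    # The printed entropies increase monotonically toward log2(6) ≈ 2.585.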

2.3. Problem 2.23

The problem deals with n binary random variables X_1, X_2, \dots, X_n related by one linear equation X_1 + \dots + X_n = 0 \pmod 2. The given joint probability distribution is in some sense the fairest distribution under this constraint: uniform over all binary strings with an even number of ones.

The results are consistent with our intuition in that:

  • knowing X_1 tells you nothing about X_2
  • and given that you already know X_1, knowing X_2 tells you nothing about X_3
  • and at the last step when you know X_1 to X_{n-2}, knowing X_{n-1} tells you everything about X_n (because of the linear equation).

The results can be different for other probability distributions satisfying the same linear equation. For example, if we take a biased joint probability distribution, then it is possible that knowing X_1 tells you something about X_2, in the sense that I(X_1; X_2) > 0. In fact, you can even choose a distribution so that knowing X_1 tells you everything about X_2, in the sense that I(X_1 ; X_2) = H(X_2) = 1. That is the case when we choose the fairest distribution under the constraint X_1 + X_2 = 1, X_3 = X_4 = \dots = X_n = 0.
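
Here is a small numerical check with n = 4 (the code and its helper functions are my own, not from the book): for the fair even-parity distribution, I(X_1;X_2) = 0 while I(X_3;X_4 \mid X_1,X_2) = 1 bit, and for the biased distribution concentrated on X_1 + X_2 = 1, X_3 = X_4 = 0 we get I(X_1;X_2) = 1 = H(X_2).

    from collections import defaultdict
    from itertools import product
    from math import log2

    def H(p):
        return -sum(q * log2(q) for q in p.values() if q > 0)

    def marginal(joint, idx):
        out = defaultdict(float)
        for outcome, q in joint.items():
            out[tuple(outcome[i] for i in idx)] += q
        return out

    def I(joint, a, b, cond=()):
        """I(X_a; X_b | X_cond) from the joint pmf; tuple indices are 0-based (0 means X_1)."""
        return (H(marginal(joint, a + cond)) + H(marginal(joint, b + cond))
                - H(marginal(joint, a + b + cond)) - H(marginal(joint, cond)))

    n = 4
    # Fair distribution: uniform over binary strings with even parity.
    fair = {x: 2.0 ** -(n - 1) for x in product((0, 1), repeat=n) if sum(x) % 2 == 0}
    print(I(fair, (0,), (1,)))          # I(X1;X2)         = 0.0
    print(I(fair, (2,), (3,), (0, 1)))  # I(X3;X4 | X1,X2) = 1.0

    # Biased distribution: uniform over strings with X1 + X2 = 1 and X3 = X4 = 0.
    biased = {(1, 0, 0, 0): 0.5, (0, 1, 0, 0): 0.5}
    print(I(biased, (0,), (1,)))        # I(X1;X2)         = 1.0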

2.4. Problem 2.27

The problem asks for a proof of the grouping rule for entropy. It can be verified by a direct calculation, but it may be more instructive to prove it by introducing an auxiliary random variable and using the identities established so far.
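
If I remember the statement correctly, the rule says that merging two outcomes of a distribution p = (p_1, \dots, p_m) loses exactly the entropy of the within-group split:

    H(p_1, \dots, p_m) = H(p_1, \dots, p_{m-2}, p_{m-1} + p_m) + (p_{m-1} + p_m)\, H\!\left( \frac{p_{m-1}}{p_{m-1}+p_m}, \frac{p_m}{p_{m-1}+p_m} \right)

One auxiliary-variable argument: let X have distribution p and let Y be X with the last two outcomes merged into a single symbol. Then Y is a function of X, so H(X) = H(X, Y) = H(Y) + H(X \mid Y), and H(X \mid Y) is exactly the second term, since X is determined by Y except when Y is the merged symbol.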

2.5. Problem 2.41

One wishes to identify a random object X. Two i.i.d. random questions Q_1, Q_2 about the object are asked, eliciting answers A_1 = A(X, Q_1) and A_2 = A(X, Q_2). The problem asks us to show that the two questions are less valuable than twice a single question.
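
As I recall, “less valuable than twice a single question” is formalized as

    I(X; Q_1, A_1, Q_2, A_2) \le 2\, I(X; Q_1, A_1)

i.e. the information the two question–answer pairs give about X is at most twice the information a single pair gives.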

At first one might incorrectly expect that two questions are worth exactly twice a single question, but consider asking millions of questions about the outcome of a single die roll. One million questions is worth almost the same as two million questions: by the time you ask the millionth question, you are almost certain of the outcome, and the next million questions contribute very little. The amount of information that can be extracted with questions cannot keep growing linearly with the number of questions; in fact it is bounded above by H(X).
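
A rough simulation of the die example (the question model and the estimator here are my own choices, not from the book): each question asks whether the outcome lies in a uniformly random subset of the faces, and we estimate how much information about the outcome k such independent questions extract. The estimate saturates near H(X) = \log_2 6 \approx 2.585 bits instead of growing linearly with k.

    import random
    from math import log2

    random.seed(0)
    FACES = range(1, 7)

    def extracted_bits(k, trials=10000):
        """Monte Carlo estimate of I(X; Q_1, A_1, ..., Q_k, A_k) in bits, where X is a
        fair die and each question asks 'is X in S?' for a uniformly random subset S."""
        remaining = 0.0
        for _ in range(trials):
            x = random.choice(list(FACES))
            # k independent random subset-questions and their answers for this x.
            questions = [frozenset(f for f in FACES if random.random() < 0.5) for _ in range(k)]
            answers = [x in q for q in questions]
            # Faces still consistent with all answers; the posterior is uniform on them.
            consistent = [f for f in FACES if all((f in q) == a for q, a in zip(questions, answers))]
            remaining += log2(len(consistent))
        return log2(6) - remaining / trials

    for k in (1, 2, 4, 8, 16):
        print(k, round(extracted_bits(k), 3))
    # The values approach log2(6) ≈ 2.585 rather than growing like k times the k = 1 value.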

Some more thought reveals that it is because of independence (rather than in spite of it) that two i.i.d. questions are less valuable than twice a single question. Suppose X is the random position of a specific chess piece. If the first (random) question asked for the row, then the optimal next question would ask for the column. Choosing optimal follow-up questions requires depending on the previous questions and answers, so independent questions are less efficient than adaptive ones, as anyone who has played Twenty Questions can attest.
