Information & Entropy
The Information
Measures “Uncertainty” or “Surprise” (or lack thereof). If $p(x)$ is the probability of a single event,

$$I(x) = -\log p(x)$$
The log base can be $2$ (bits) or a Natural Log $e$ (nats). Don’t start saying things like “oh it’s the number of bits it takes to encode a system” without knowing what you’re talking about. This goes deep. TODO: Feynman lecture, explain why.
Why the Negative Log?
The question is “What do we want this function to do?” Four things come to mind (mine at least):
- We want to increase ‘Surprise’ when events are rare
- We want to decrease ‘Surprise’ when they are not (extreme case should be ‘zero surprise’)
- If two independent events happen, we want their ‘Surprises’ to add up: $I(x, y) = I(x) + I(y)$ (evidence accumulates linearly)
- We don’t want negative information. Doesn’t make too much sense.
The negative log function satisfies all of these criteria. QED (if you’re a Serious Statistician/Mathematician, calm your tits, I’m dumber than you.)
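A quick sanity check of those four criteria in plain Python (base-2 log; the function name is mine):

```python
import math

def surprise(p: float) -> float:
    """Self-information in bits: I = -log2(p)."""
    return -math.log2(p)

# Rare events are more surprising...
assert surprise(0.01) > surprise(0.5)
# ...a certain event carries zero surprise...
assert surprise(1.0) == 0.0
# ...surprises of independent events add up, since p(x, y) = p(x) * p(y)...
p_x, p_y = 0.25, 0.125
assert math.isclose(surprise(p_x * p_y), surprise(p_x) + surprise(p_y))
# ...and surprise is never negative for a valid probability.
assert surprise(0.999) >= 0.0
```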
Entropy
What a concept!
My greatest concern was what to call it. I thought of calling it ‘information,’ but the word was overly used, so I decided to call it ‘uncertainty.’ When I discussed it with John von Neumann, he had a better idea. Von Neumann told me, ‘You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, no one really knows what entropy really is, so in a debate you will always have the advantage.’
— Claude Shannon
In the 1950’s Jaynes told Wigner that physical entropy is a measure of information and Wigner thought that was absurd, because the information one person possesses differs from that of another, whereas entropy can be measured with thermometers and calorimeters.
— Source
Whatever. Here, we’re taking the information $I(x) = -\log p(x)$ and defining Shannon Entropy as the Weighted Average of the Information:

$$H(p) = -\sum_x p(x)\log p(x)$$
This is the exact same thing as the Expected Value of the Information! Note that we use LOTUS here:

$$H(p) = \mathbb{E}_{x \sim p}[I(x)] = \sum_x p(x)\,I(x)$$
TLDR: You multiply probabilities with their $\log p(x)$, sum them up, negate, and SURPRISE! Or not…
- High $-\log p(x)$ → Rare Event → Lots of Surprise/Uncertainty!
- Low $-\log p(x)$ → Meh, expected → Low Surprise/Uncertainty → Not very Surprised.

Low Entropy happens when the distribution is peaked. You wouldn’t be surprised if a loaded die kept giving you 6’s 🎲🎲🎲. High Entropy happens when you have a flat distribution like $p = (1/6, \ldots, 1/6)$. Big Number → Big Uncertainty.
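A minimal entropy sketch, with made-up numbers for the loaded die:

```python
import math

def entropy(probs) -> float:
    """Shannon entropy in bits: H(p) = -sum p * log2(p), skipping p = 0 terms."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform_die = [1/6] * 6           # flat: anything can happen
loaded_die = [0.02] * 5 + [0.90]  # peaked on 6: you already know the ending

# Flat distribution -> maximum uncertainty; for 6 outcomes that's log2(6) bits.
assert math.isclose(entropy(uniform_die), math.log2(6))
assert entropy(uniform_die) > entropy(loaded_die)
# A certain outcome carries zero entropy.
assert entropy([1.0]) == 0.0
```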
Okay so that’s a single system. What if you wanted to compare systems? The World and your model? Or two models?
Cross Entropy
Let’s say the data-generating distribution in the Real World™ is $p(x)$. Your model is $q(x)$. Cross-Entropy then is

$$H(p, q) = -\sum_x p(x)\log q(x)$$

What does it look like?
Okay, but why? What are we saying with that?
We’re asking: “Given the Real World™, how surprised on average are you by the model?” In the Weighted Average view, we’re weighting the model’s surprise by what we see in the Real World™! If you just did $-\sum_x q(x)\log q(x)$ you’re just averaging according to your own beliefs, and that’s nice and all, but Mama Nature will kick you in the plums, guaranteed. So,
- $p(x)$ is how often event $x$ really occurs
- $q(x)$ is how well your model predicts it
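Here’s that weighting in Python (distributions are made-up for illustration):

```python
import math

def cross_entropy(p, q) -> float:
    """H(p, q): the model's surprise, -log2 q(x), averaged under the real p."""
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.25, 0.25]   # how often each event really occurs (made-up)
q = [1/3, 1/3, 1/3]     # a model that just shrugs: "all equally likely"

# You're never less surprised than reality itself: H(p, q) >= H(p, p) = H(p).
assert cross_entropy(p, q) >= cross_entropy(p, p)
```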
KL Divergence
Okay, let’s do some rearranging. For giggles, let’s subtract $H(p)$:

$$H(p, q) - H(p) = -\sum_x p(x)\log q(x) + \sum_x p(x)\log p(x) = \sum_x p(x)\log\frac{p(x)}{q(x)}$$

Or,

$$H(p, q) = H(p) + \sum_x p(x)\log\frac{p(x)}{q(x)}$$
That last part is called the Kullback–Leibler Divergence, “a type of statistical distance: a measure of how much an approximating probability distribution Q is different from a true probability distribution P”. BOOM!
What’s an easy reading guaranteed to piss off Statisticians and Mathematicians?
Very important (and you can see why): $D_{KL}(p \,\|\, q) \ge 0$. Also $D_{KL}(p \,\|\, q) \ne D_{KL}(q \,\|\, p)$.
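A sketch of the decomposition and those two properties, with made-up distributions:

```python
import math

def entropy(p):
    """H(p) = -sum p * log2(p)."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def cross_entropy(p, q):
    """H(p, q) = -sum p * log2(q)."""
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

def kl_divergence(p, q):
    """D_KL(p || q) = sum p * log2(p / q)."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.25, 0.25]   # "true" distribution (made-up numbers)
q = [0.7, 0.2, 0.1]     # an approximating model

# The rearrangement: H(p, q) = H(p) + D_KL(p || q).
assert math.isclose(cross_entropy(p, q), entropy(p) + kl_divergence(p, q))

# Non-negative, zero when the distributions match, and NOT symmetric.
assert kl_divergence(p, q) >= 0
assert kl_divergence(p, p) == 0
assert kl_divergence(p, q) != kl_divergence(q, p)
```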
Sadness in The Real World™
You’ll never know the true $p(x)$. What do you do if you want Cross-Entropy?
You can approximate it using the sample average over draws $x_i \sim p$:

$$H(p, q) \approx -\frac{1}{N}\sum_{i=1}^{N}\log q(x_i)$$
Huh. Consider the Likelihood Function

$$\mathcal{L}(q) = \prod_{i=1}^{N} q(x_i)$$

Take its negative log and divide by $N$:

$$-\frac{1}{N}\log\mathcal{L}(q) = -\frac{1}{N}\sum_{i=1}^{N}\log q(x_i)$$
WELL HOW ABOUT THAT.
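A small Monte Carlo sketch of that estimate (a made-up loaded die for $p$, a deliberately wrong model for $q$):

```python
import math
import random

random.seed(0)

p = [0.02] * 5 + [0.90]   # the Real World: a loaded die (made-up numbers)
q = [0.10] * 5 + [0.50]   # your model: suspects loading, underestimates it

# Sample from the Real World...
samples = random.choices(range(6), weights=p, k=100_000)

# ...and average the model's surprise over the samples. This is the
# average negative log-likelihood of the data under q.
nll = -sum(math.log2(q[x]) for x in samples) / len(samples)

# It converges to the true cross-entropy H(p, q).
true_ce = -sum(pi * math.log2(qi) for pi, qi in zip(p, q))
assert abs(nll - true_ce) < 0.05
```

So minimizing the average NLL of your data is minimizing an estimate of cross-entropy.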
Sweet. We can build optimizers now. 🆒