Maximize the entropy of your information sources
Information sources which are predictable are not very valuable because the maximum information gain from reading such sources is limited.
Think about the information gain or surprise (Information is surprisal) from a certain information source as the ratio of certainty to your a priori prediction:

gain = 1 / p

where p is the confidence of your a priori prediction.
For example, if I can predict what the New York Times is going to say on some particular issue with 90% accuracy, then I don’t gain much from finding out what they actually said. That news source is not surprising. I go from being 90% sure to 100% sure: 1 / 0.9 ≈ 1.11, a mere 11% gain.
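As a sketch, the arithmetic above can be computed directly, along with the closely related surprisal (−log₂ p) of the predicted outcome; the 90% figure is the example’s assumed prior:

```python
import math

prior = 0.90  # assumed confidence in predicting what the source will say

# Relative gain from confirming the prediction: ratio of certainty to prior.
gain = 1 / prior - 1  # ~0.111, the "mere 11% gain"

# Surprisal of the expected outcome, in bits: low because the source is predictable.
surprisal_bits = -math.log2(prior)  # ~0.152 bits

print(f"gain: {gain:.1%}, surprisal: {surprisal_bits:.3f} bits")
```

A source you could only predict with 50% confidence would carry a full bit of surprisal per confirmed prediction, roughly six times more than the 90%-predictable one.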
One can think of the information gained from a news source as the return on your attention for that source. If the information gain is low, then your return on attention is low: you are rarely surprised by what the source has to say. Information is surprisal, so by maximizing surprisal you maximize your information gain, and with it your return on attention (Maximize your return on attention).
This is related to the idea of Residualize your information sources. Imagine you have a model for a certain news source that can be framed as:

y = f(x) + ε

where x represents everything you know about this news source, f is a function representing your internal mental model of that news source which converts what you know about it to your prediction of it, y is the actual information generated by the news source, and ε is the error, or residual, of your predictions.
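A minimal sketch of this decomposition, with hypothetical sources, invented numbers, and a deliberately naive mental model f:

```python
# Toy version of y = f(x) + e: x is what you know, f is your mental model,
# y is what the source actually said, e is the residual.
def mental_model(known_stance: float) -> float:
    # Naive internal model: predict the source will say what you already expect.
    return known_stance

# (what you know, what the source actually said) -- hypothetical values
sources = {
    "PredictablePaper": (0.90, 0.92),
    "SurprisingBlog": (0.50, 0.95),
}

# Residual |y - f(x)| for each source.
residuals = {
    name: abs(actual - mental_model(known))
    for name, (known, actual) in sources.items()
}

# Spend attention where your model fails most: the largest residual.
most_informative = max(residuals, key=residuals.get)
print(most_informative)  # -> SurprisingBlog
```

The design choice here is that attention is allocated purely by residual magnitude; a well-modeled source contributes almost nothing and can safely be skipped.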
If you have a good model for a news source, you don’t need to actually take the time to find out what it actually said. You could guess ahead of time, take comfort in the accuracy of your predictions, and move on with your life.
Focus on the sources of information for which your internal model can only generate poor predictions with large errors or residuals (Focus on the residuals).
- In general, information gain correlates with large residuals: a large residual implies you are bad at predicting this source and would benefit significantly from learning the truth (Information is surprisal).