Nate Silver’s The Signal and the Noise is a forecasting book with broad appeal. Read it if you want to understand decision-making, statistics and predictive analytics without having to mine a textbook. The book is richly researched, well organized and packed with engaging examples. It is especially valuable for digital analytics professionals and marketing executives who are facing pressure to deliver more predictions.
Readability: 4 out of 5 stars. The text is at the level of the Financial Times, which is to say about 11th grade: lots of compound-complex sentences, footnotes and roughly 50-100 reference notes per chapter.
Impact: 5 out of 5 stars. If you read this book, it will change the way you look at major world events and think about prediction. You will also feel smarter (isn’t that great?). The potential impact is high.
Speed read pattern: To get the main idea without digesting the full book, I recommend hitting the conclusion first, then the introduction, then chapters 2, 4, 8 and 10 in that order. A word of warning, though: you may find yourself drawn into the book and end up starting at the beginning anyway. I did.
The book is organized into four main sections. For each section below, I have summarized the chapters with a few insights I found useful. There are many more insights in the book.
Failures of Prediction. Chapters 1-3
Examples of how noise was mistaken for signal.
1. Financial crisis. We focus on signals that tell us how we would LIKE things to be, not how they really are. This creates major blind spots in our models. When the system fails, these blind spots finally come to light (Moneyball, the financial meltdown). There is a very good chart about accuracy vs. precision at the end of this chapter. Accurate and precise equals a good forecast.
2. Politics. Hedgehog vs. fox thinking. Hedgehogs are fixed, overly confident, weak forecasters (e.g. political pundits). Political TV pundits make terrible predictions, no better than random guesses. Their goal is to entertain, and they are not penalized for being wrong. Foxes, on the other hand, continuously adapt their theories and are cautious, modest, better forecasters. Foxes qualify and equivocate a lot, which makes for less dramatic TV by people who are more likely to be correct.
3. Moneyball. Statistics have not replaced talent scouts altogether. Prediction is always an art and a science.
Dynamic Systems of Prediction. Chapters 4-7
How dynamic systems make forecasting even more difficult.
4. Weather. Prediction has improved due to highly sophisticated, large-scale supercomputers. However, humans still improve the accuracy of precipitation models by 25% over computers alone and temperature forecasts by 10%. Weather is a nonlinear, dynamic system in which tiny errors in the initial conditions compound exponentially into huge differences in outcomes. This explains why it makes sense to express forecasts as a range of probabilities (95% likely, 50% likely) rather than certainties.
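That sensitivity to initial conditions can be seen in a few lines of code. This is my own toy sketch, not an example from the book: the logistic map is a standard stand-in for chaotic systems like weather, and here two starting points differing by one part in a million quickly end up nowhere near each other.

```python
# Toy illustration of chaos (my sketch, not from the book): iterate two
# trajectories of the logistic map x -> r*x*(1-x) at r=4, a classic
# chaotic system, and track how far apart they drift.

def max_divergence(x0, y0, r=4.0, steps=30):
    """Run two trajectories side by side; return the largest gap seen."""
    x, y = x0, y0
    worst = abs(x - y)
    for _ in range(steps):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        worst = max(worst, abs(x - y))
    return worst

# Starting values differ by only 0.000001, yet within 30 steps the gap
# has grown by orders of magnitude.
gap = max_divergence(0.400000, 0.400001)
print(gap)
```

Since a point forecast from slightly wrong inputs is worthless this far out, probability ranges are the honest way to report the outcome.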
5. Earthquakes. We have almost no ability to predict earthquakes, but we know that some regions are more earthquake prone. The random noise scientists have historically used to predict earthquakes is an example of overfitting a model (fitting the noise rather than the underlying structure).
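Overfitting is easy to demonstrate with made-up numbers (mine, not the book's). Below, the data follow a simple trend, roughly y = 2x plus noise. A straight line fitted by least squares captures the structure; a polynomial threaded exactly through every point "explains" the noise too, and falls apart when asked to predict a new value.

```python
# Overfitting sketch (my own illustration; the data are invented).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.2, 1.7, 4.1, 5.8, 8.3]  # roughly 2x, with hand-picked noise

def fit_line(xs, ys):
    """Ordinary least squares for a straight line y = slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def interpolate(xs, ys, x):
    """Lagrange polynomial through every point -- a perfect fit to the noise."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

slope, intercept = fit_line(xs, ys)
true_value = 2 * 5.0  # the underlying signal at x = 5
line_error = abs(slope * 5.0 + intercept - true_value)
overfit_error = abs(interpolate(xs, ys, 5.0) - true_value)
print(line_error, overfit_error)
```

The line misses the new point by about a tenth; the "perfect" polynomial misses by several whole units, because what it memorized was noise.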
6. Economics. The exponential growth of things to measure will not yield more signal, only more noise. The danger in big data is losing sight of the underlying story the data tells.
7. Disease. Self-fulfilling predictions can be caused by the sheer act of releasing the prediction. For example, when news about H1N1 flu is broadcast, more people go to doctors and more H1N1 is identified. Self-cancelling predictions can also occur. Navigation systems show where the least traffic is but simultaneously invalidate the route by sending all traffic there en masse.
Prediction Solutions. Chapters 8-10
How to use Bayes Theorem to think probabilistically.
8. Gambling. Bayes’ Theorem is a powerful tool that leads to vast predictive insights. It allows us to use probability (“the waypoint between ignorance and knowledge,” Silver says) to get closer and closer to the truth as we gather more evidence. Again, predictions are MORE prone to failure in the era of big data because there are exponentially more hypotheses to test and yet the number of meaningful relationships does not increase.
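The mechanics of a Bayesian update fit in a few lines. This is a hedged sketch with invented numbers, not an example from the book: start with a prior, observe a piece of evidence, and compute the revised probability; then feed the result back in as the new prior when more evidence arrives.

```python
# Bayes' theorem in one function (my illustration; the numbers are made up).

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """P(hypothesis | evidence), given the prior and the two likelihoods."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Suppose a hypothesis starts out unlikely (1% prior), and a test confirms
# it 90% of the time when true but also false-alarms 8% of the time.
after_one = bayes_update(0.01, 0.90, 0.08)       # one positive result
after_two = bayes_update(after_one, 0.90, 0.08)  # a second positive result
print(after_one, after_two)
```

One positive result only lifts the 1% prior to about 10%, because false alarms dominate when the prior is small; a second independent result lifts it past 50%. Each piece of evidence moves the estimate along Silver's waypoint between ignorance and knowledge.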
9. Chess. Simplified models or heuristics (e.g. always run away from danger) are used in chess. These necessarily produce biases and blind spots. Observe – hypothesize – predict – test helps us converge toward the truth. Beware absolute truths which are untestable. Computers are great calculators but they still have trouble coming up with creative ideas to test.
10. Poker. There is a “water level” in some fields where getting the first 80% right is easy and the remaining 20% is hard. Poker was such a field at one time. Overconfidence is rampant here. We must accept the fallibility of our judgments if we want to come to more accurate predictions.
Hardest to Predict Problems. Chapters 11-13
How to make the world a little safer.
11. Stock market. Consistency produces superior results, but most data ranges are too small to demonstrate this. It is nearly impossible to beat the market; the proof is that no model has been able to beat it reliably over time.
12. Climate change. Very few scientists doubt greenhouse gases cause global warming. Temperature data is quite noisy which makes scientists uncertain about the details. Estimating uncertainty is essential. The further you move away from consensus, the stronger the evidence must be.
13. Terrorist attacks. We failed to predict both Pearl Harbor and September 11th as a result of “unknown unknowns.” Logarithmic scales can help us overcome these blind spots.
Humans like simplicity and despise uncertainty. This makes it easy for us to jump at quick answers or predictions.
For digital marketers this means:
- Testing is the rule not the exception
- Be prepared to have your hypotheses proven wrong, a lot. The noise is growing exponentially.
- When asked to predict the future, put it in Bayesian terms. “There is a 1 in 10 chance this test will succeed.”
Nate Silver’s book encourages all of us to slow down, consider the imperfections and look for hypotheses to test that eventually bring us closer to the truth.