
5 Paths to Truth in a World of Fluctuating Certainty


Adam Kucharski is a Professor of Epidemiology at the London School of Hygiene & Tropical Medicine and an award-winning science writer. His book, The Rules of Contagion, was a Book of the Year in The Times, Guardian, and Financial Times. A mathematician by training, his work on global outbreaks has included Ebola, Zika, and COVID. He has advised multiple governments and health agencies. His writing has appeared in Wired, Observer, and Financial Times, among other outlets, and he has contributed to several documentaries, including BBC Horizon.

What’s the big idea?

In all arenas of life, there is an endless hunt to find certainty and establish proof. We don’t always have the luxury of “being sure,” and many situations demand that decisions be made even when there is insufficient evidence to choose confidently. Every field—from mathematics and tech to law and medicine—has its own methods for proving truth, and its own ways of proceeding when proof is out of reach. Professionally and personally, it is important to understand what constitutes proof and how to proceed when facts falter.

Below, Adam shares five key insights from his new book, Proof: The Art and Science of Certainty. Listen to the audio version—read by Adam himself—in the Next Big Idea App.


1. It is dangerous to assume something is self-evident.

In the first draft of the U.S. Declaration of Independence, the Founding Fathers wrote that “we hold these truths to be sacred and undeniable, that all men are created equal.” But shortly before it was finalized, Benjamin Franklin crossed out the words “sacred and undeniable,” because they implied divine authority. Instead, he replaced them with the famous line, “We hold these truths to be self-evident.” The term “self-evident” was borrowed from mathematics—specifically from Greek geometry. The idea was that there could be a universal truth about equality on which a society could be built.

This idea of self-evident, universal truths had shaped mathematics for millennia. But the assumption ended up causing a lot of problems, both in politics and mathematics. In the 19th century, mathematicians started to notice that certain theorems that had been declared “intuitively obvious” didn’t hold up when we considered things that were infinitely large or infinitely small. It seemed “self-evident” didn’t always mean well-evidenced.

Meanwhile, in the U.S., supporters of slavery were denying what Abraham Lincoln called the national axioms of equality. In the 1850s, Lincoln (himself a keen amateur mathematician) increasingly came to think of equality as a proposition rather than a self-evident truth. It was something that would need to be proven together as a country. Similarly, mathematicians during this period would move away from assumptions that things were obvious and instead work to find sturdier ground.

2. In practice, proof means balancing too much belief and too much skepticism.

If we want to get closer to the truth, there are two errors we must avoid: we don’t want to believe things that are false, and we don’t want to discount things that are true. It’s a challenge that comes up throughout life. But where should we set the bar for evidence? If we’re overly skeptical and set it too high, we’ll ignore valid claims. But if we set the bar too low, we’ll end up accepting many things that aren’t true.

In the 1760s, the English legal scholar William Blackstone argued that we should work particularly hard to avoid wrongful convictions. As he put it: “It is better that ten guilty persons escape than that one innocent suffer.” Benjamin Franklin would later be even more cautious. He suggested that “it is better 100 guilty persons should escape than that one innocent person should suffer.”

“We don’t want to believe things that are false, and we don’t want to discount things that are true.”

But not all societies have agreed with this balance. Some communist regimes in the 20th century declared it better to kill a hundred innocent people than let one truly guilty person walk free.

Science and medicine have also developed their own traditions around setting the bar for evidence. Clinical trials are typically designed in a way that penalizes a false positive four times more than a false negative. In other words, we don’t want to say a treatment doesn’t work when it does, but we really don’t want to conclude it works when it doesn’t.
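To make that asymmetry concrete: a typical trial caps the false-positive rate at 5 percent while tolerating a false-negative rate of 20 percent, which is where the four-to-one ratio comes from. The sketch below is an illustration of that convention, not something from the book; the effect size and the sample-size formula are standard textbook assumptions.

```python
# Illustrative sketch of the usual clinical-trial convention: the false-positive
# rate (alpha) is capped at 5%, the false-negative rate (beta) at 20%, so a
# false positive is treated as roughly four times worse. The 0.5 effect size
# is an invented, illustrative value.
from statistics import NormalDist

alpha = 0.05        # acceptable false-positive rate (two-sided test)
beta = 0.20         # acceptable false-negative rate (i.e., 80% power)
effect_size = 0.5   # assumed standardized difference between treatments

z = NormalDist().inv_cdf
n_per_arm = 2 * ((z(1 - alpha / 2) + z(1 - beta)) / effect_size) ** 2

print(f"False-negative cap is {beta / alpha:.0f}x the false-positive cap")
print(f"Participants needed per arm: {n_per_arm:.0f}")  # ~63 under these assumptions
```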

This ability to converge on a shared reality, even if occasionally flawed, is fundamental for science and medicine. It’s also an essential component of democracy and justice. Rather than embracing or shunning everything we see, we must find ways to balance the risk that comes with trusting something to be true.

3. Life is full of “weak evidence” problems.

Science is dedicated to generating results that we can have high confidence in. But often in life, we must make choices without the luxury of extremely strong evidence. We can’t, as some early statisticians did, simply remain on the fence if we’re not confident either way. Whether we’re sitting on a jury or in a boardroom, we face situations where a decision must be made regardless.

This is known as the “weak evidence” problem. For example, it might be very unlikely that a death is just a coincidence. But it also might be very unlikely that a certain person is a murderer. Legal cases are often decided on the basis that weak evidence in favor of the prosecution is more convincing than weak evidence for the defendant.

Unfortunately, it can be easy to misinterpret weak evidence. A prominent example is the prosecutor’s fallacy. This is a situation where people assume that if it’s very unlikely a particular set of events occurred purely by coincidence, that must mean the defendant is very unlikely to be innocent. But to work out the probability of innocence, we can’t just focus on the chances of a coincidence. What really matters is whether a guilty explanation is more likely than an innocent one. To navigate law—and life—we must often choose between unlikely explanations, rather than waiting for certainty.
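One way to see why the coincidence probability alone is misleading is to run the numbers with Bayes’ theorem. The figures below are invented purely for illustration: even if an innocent match is a one-in-ten-thousand coincidence, a defendant drawn from a very large pool of possible suspects may still be more likely innocent than guilty.

```python
# Illustrative Bayes calculation for the prosecutor's fallacy (invented numbers).
p_evidence_given_innocent = 1 / 10_000    # chance of the evidence arising by coincidence
p_evidence_given_guilty = 1.0             # the evidence is certain if the defendant is guilty
p_guilty_prior = 1 / 100_000              # prior chance of guilt (suspect from a large pool)

# Compare the guilty and innocent explanations, not just the coincidence.
p_evidence = (p_evidence_given_guilty * p_guilty_prior
              + p_evidence_given_innocent * (1 - p_guilty_prior))
p_guilty_given_evidence = p_evidence_given_guilty * p_guilty_prior / p_evidence

print(f"P(evidence | innocent) = {p_evidence_given_innocent:.4%}")  # a 0.01% coincidence...
print(f"P(guilty | evidence)   = {p_guilty_given_evidence:.1%}")    # ...yet guilt is only ~9% likely
```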

4. Predictions are easier than taking action.

If we spot a pattern in data, it can help us make predictions. If ice cream sales increase next month, it’s reasonable to predict that heatstroke cases will too. These kinds of patterns can be useful if we want to make predictions, but they’re less useful if we want to intervene in some way. The correlation in the data doesn’t mean that ice cream causes heatstroke, and crucially, it doesn’t tell us how to prevent further illness.
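A quick simulation makes the point. In the sketch below (all numbers invented), hot weather drives both ice cream sales and heatstroke cases, so the two track each other closely even though neither causes the other, and banning ice cream would do nothing about heatstroke.

```python
# Illustrative simulation: a shared cause (temperature) produces a strong
# correlation between two variables with no causal link between them.
import numpy as np

rng = np.random.default_rng(0)
temperature = rng.normal(25, 5, size=1_000)                              # daily temperature (°C)
ice_cream_sales = 50 + 10 * temperature + rng.normal(0, 20, size=1_000)  # driven by temperature
heatstroke_cases = 2 + 0.5 * temperature + rng.normal(0, 2, size=1_000)  # also driven by temperature

r = np.corrcoef(ice_cream_sales, heatstroke_cases)[0, 1]
print(f"Correlation between sales and heatstroke: {r:.2f}")  # ~0.7, despite no causal link
# Good enough for prediction, useless for intervention: reducing ice cream
# sales would leave heatstroke unchanged, because temperature is the real cause.
```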

“Often in life, prediction isn’t what we really care about.”

In science, many problems are framed as prediction tasks because, fundamentally, it’s easier than untangling cause-and-effect. In the field of social psychology, researchers use data to try to predict relationship outcomes. In the world of justice, courts use algorithms to predict whether someone will reoffend. But often in life, prediction isn’t what we really care about. Whether we’re talking about relationships or crimes, we don’t just want to know what is likely to happen—we want to know why it happened and what we can do about it. In short, we need to get at the causes of what we’re seeing, rather than settling for predictions.

5. Technology is changing our concept of proof.

In 1976, two mathematicians announced the first-ever computer-aided proof. Their discovery meant that, for the first time in history, the mathematical community had to accept a major theorem that they could not verify by hand.

However, not everyone initially believed the proof. Maybe the computer had made an error somewhere? Suddenly, mathematicians no longer had total intellectual control; they had to trust a machine. But then something curious happened. While older researchers had been skeptical, younger mathematicians took the opposite view. Why would they trust hundreds of pages of handwritten and hand-checked calculations? Surely a computer would be more accurate, right?

Technology is challenging how we view science and proof. In 2024, we saw the AI algorithm AlphaFold make a Nobel Prize-winning discovery in biology. AlphaFold can predict protein structures and their interactions in a way that humans would never have been able to. But these predictions don’t necessarily come with traditional biological understanding.

Among many scientists, I’ve noticed a sense of loss when it comes to AI. For people trained in theory and explanation, crunching possibilities with a machine doesn’t feel like familiar science. It may even feel like cheating, or like a placeholder for a better, neater solution that we’ve yet to find. And yet there is also an acceptance that this is a valuable new route to knowledge, and an appreciation of the fresh ideas and discoveries it can bring.

Enjoy our full library of Book Bites—read by the authors!—in the Next Big Idea App.
