Podium: Science cannot predict the future

From a lecture by the professor of geography at University College, London, to the Annual Festival of Science

John Adams
Wednesday 25 November 1998 00:02 GMT

RISKS PERCEIVED through science are framed in terms of probabilities. But frequently these probabilities are nothing but confident expressions of uncertainty. We do not respond blankly to uncertainty; we impose meanings upon it, and act upon those meanings.

Whenever scientists disagree, or confess their ignorance, the lay public is confronted by uncertainty. Virtual risks - risks about which scientists do not know or cannot agree, such as BSE or carcinogens - may or may not be imaginary, but they have real consequences: people act upon the meanings that they impose upon uncertainty. The 1995 contraceptive pill scare in Britain is an example of a scientific risk assessment spilling over into the virtual category. On the basis of preliminary evidence suggesting that the new third-generation pill was twice as likely to cause blood clots as the second-generation pill, a public warning was issued to this effect.

The result was a panic in which large numbers of women stopped taking the new pill, leading to an estimated 8,000 extra abortions and an unknown number of unplanned pregnancies. The highly publicised twofold increase in risk amounted to a doubling of a very small number, which might have caused, according to the original estimates, an extra two fatalities a year; even when doubled, the mortality risk was far below that associated with abortions and pregnancies.

Such minuscule risks are statistical speculations and cannot be measured directly. Subsequent research cast doubt on the plausibility of any additional risk associated with the new pill. The lesson that the Chief Medical Officer drew from this panic was that there is an important distinction to be made between relative risk and absolute risk.
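As a purely illustrative example of that distinction (the figures are hypothetical, not the Committee's own): if the chance of a blood clot on the older pill is p, a relative risk of 2 means the chance on the newer pill is 2p, so the absolute increase is 2p - p = p. If p were, say, 1 in 100,000 women a year, the "doubled" risk would still be only 2 in 100,000 - an extra 1 in 100,000, dwarfed by the risks attached to pregnancy and abortion.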

By combining uncertainty with potentially dire consequences, scientists can frighten people. The women who stopped taking the pill were imposing meaning upon the uncertainty of the medical establishment, an uncertainty projected through the media. The hastily convened press conference, the secretive procedures by which the Committee on the Safety of Medicines arrives at its conclusions, and a history of government cover-ups of dangers such as radiation and mad cow disease have resulted in a very low level of public trust in the Government's willingness to tell the truth about environmental threats.

Science has been very effective in reducing uncertainty, but much less effective in managing it. The scientific risk literature has little to say about virtual risks - and where the scientist has insufficient information even to quote odds, the optimising models of the economist are of little use. A scientist's "don't know" is the verbal equivalent of a Rorschach ink blot: some will hear a cheerful, reassuring message; others will listen to the same words and hear the threat of catastrophe.

Science has a very useful role in making visible dangers that were previously invisible, thereby shifting their management into the directly perceptible category. Where science has been successful, it has reduced uncertainty, and so shrunk the domain of risk perceived through science. But where the evidence is simply inconclusive and scientists cannot agree about its significance, we all, scientists included, are in the realm of virtual risk - scientists usually dignify the virtual risks in which they take an interest with the label "hypothesis".

In the presence of virtual risk, even the precautionary principle becomes an unreliable guide to action. Consider the ultimate virtual risk, discussed occasionally in the media. Nasa invokes the precautionary principle to argue for the commitment of vast resources to the development of more powerful H-bombs and delivery systems to enable the world to fend off asteroids - even if the odds of their ever being needed are only one in a million. But we are also told by Russia's Defence Minister that "Russia might soon reach the threshold beyond which its rockets and nuclear systems cannot be controlled". Which poses the greater danger to life on Earth - asteroids, or H-bombs and delivery systems out of control?

Debates about BSE, radiation and asteroid defences are debates about the future, which does not exist except in our imagination. They are debates to which scientists have much to contribute, but not ones that can be left to scientists alone. An understanding of the different ways in which people tend to respond to uncertainty cannot settle such arguments, but it does offer the prospect of a more coherent and civilised debate among those with a stake in such issues.
