Interview: The overlooked risks that threaten our world
Lord Martin Rees, Fellow of Trinity College and Emeritus Professor of Cosmology and Astrophysics at the University of Cambridge, is interviewed about catastrophic risks.
You recently set up the Cambridge Centre for the Study of Existential Risk, along with philosopher Huw Price and Skype co-founder Jaan Tallinn. What do you hope to achieve?
We believe governments and corporations are in denial about new risks that have only a small likelihood of occurring, but which would be truly catastrophic if they did occur. We’d like people to spend more time and energy thinking about these risks, rather than focusing mainly on risks that are more familiar but relatively limited, like train crashes.
What kinds of unfamiliar and potentially catastrophic risk are you talking about?
Climate change is one example – we fixate on the median projections of moderate temperature rises, when we should be even more concerned by the smallish probability of extreme warming. And there are new technologies with the potential for “error or terror”. Cyber-attacks or breakdowns in communications infrastructure will become more dangerous as we rely more on connectivity for basic services.
More speculatively, think of the kind of people who nowadays take pleasure in writing computer viruses, and imagine that in a few decades they become able to synthesize actual lethal viruses in a home laboratory. Or what could happen if we create artificial intelligence that is smarter than we are?
These sound like scenarios from science fiction.
They do, but imagine you could show an iPhone to someone 50 years ago, or even 20 years ago. Would that not also have looked like magic? It’s good to be reminded how unexpectedly quickly the horizons of the possible can expand.
At the same time, we do need to tease apart those risks that are actually worth thinking about from the ones we can consign more confidently to the realms of science fiction. That’s what the Centre for the Study of Existential Risk aims to achieve. We want to convene the best scientific minds to create a register of new risks we should consider and which might otherwise catch us unawares, as the financial crisis of 2008 did.
Putting risks on the radar is an important first step. To take an example from Nate Silver’s book, The Signal and the Noise, before 9/11 nobody really thought about people flying planes into buildings; so warning signs, such as a student behaving suspiciously in a flying school, went unrecognized. If we can envisage risks before they happen, we have a much greater chance of preventing them.
You then run up against the problem of cognitive bias: our leaders, like all of us, are hard-wired to find it difficult to focus on risks that haven't happened before, or that seem very unlikely.
It is a serious problem: persuading anyone to spend money on protecting themselves against things that even the greatest pessimists believe are more likely not to happen than to happen. It's the same reason you or I might be tempted not to pay the premiums on our house insurance.
Even when the science is well established, as with climate change, you can see how difficult it is to get politicians to prioritize concerns that are long-term and global rather than short-term and local. The difficulties are even greater when the risks are based on many unknown variables.
So how can we encourage our leaders to take these risks seriously?
This raises a broader question about the role of science in public affairs. I think the United Kingdom actually has a rather good system, in that government departments have embedded scientific advisers whose input is taken seriously, and scientists generally have good relationships with parliamentarians. When you look at how issues like stem cell research have been handled, our system in the UK works better than those in other countries, such as the US.
I think scientists have a duty to engage more with politics. And as politicians will ultimately pay attention to the issues their electorates demand they pay attention to, scientists also have a duty to communicate with the public – including about how much science still doesn’t know.
Both the general public and elected representatives need a basic feel for science, risk and uncertainty. Otherwise we won't be able to have a democratic debate on difficult policy questions that involve weighing science against ethics, economics and social policy.
An example of those difficult questions concerns privacy, as potentially destructive technologies become more affordable and accessible. How do you see this evolving?
There is obviously a balance to be struck between security and privacy, and this is a debate that people are already having. These are things that will increasingly need to be discussed, along with questions of national sovereignty. It may be that we decide some risks can be tackled only by nations agreeing to cede some sovereignty to new organizations with remits along the lines of the World Health Organization or the International Atomic Energy Agency.
If the capacity to create catastrophe becomes widely available, isn’t it only a matter of time before somebody uses it?
That’s an unduly fatalistic attitude. We may never be able to reduce risks to zero, but there is a lot we can do. Look at pandemics, for example. We can’t remove entirely the risk of a catastrophic pandemic spreading rapidly through our hyperconnected world. But there’s a lot we can do to reduce the risk – say, making sure that a Vietnamese farmer knows where to report a strange disease in his animals, and that the authorities know how to respond.
Moving away from your work on existential risk, you are chairing a committee to mark the 300th anniversary of the Longitude Prize next year by offering new prizes to tackle the publicly important challenges of today. What sort of challenges might they be?
We’ve identified themes and we’re currently shortlisting possible topics within those themes. Some examples are malnutrition, robotics to help the elderly, and antibiotic-resistant bacteria. Through the BBC we’ll conduct a public conversation to see which options resonate, and with private sponsorship we hope to offer multimillion-pound jackpots.
The original Longitude Prize, of course, set out to incentivize new thinking on a problem – how to determine longitude at sea – that seemed intractable, and where a solution would clearly benefit humanity. I hope the Longitude Prize 2014 will capture the public imagination and rekindle the belief that dramatic advances are possible if we set ourselves ambitious goals.
Presumably ambitious goals are also necessary to tackle catastrophic risks – a clean energy revolution to address climate change, for example?
I wish our leaders would commit themselves to powering the world with clean energy in the same way their predecessors once committed to the Manhattan Project or the Apollo moon landing. It’s hard to imagine anything more likely to achieve the much-needed goal of enthusing young people towards careers in science and engineering.
Decarbonizing Europe’s energy supply would require public-private investment on the same scale as the building of Europe’s railways. Can we not be as ambitious in the 21st century as our forebears were in the 19th? And we need the same kind of visionary leadership – with public and private sectors working together – if we are to reap the benefits of novel technologies while guarding against their potentially catastrophic downsides.
Author: Lord Martin Rees, Fellow of Trinity College and Emeritus Professor of Cosmology and Astrophysics at the University of Cambridge. He is participating in the World Economic Forum's Annual Meeting 2014 in Davos.