"I think we all have an obligation as individuals to try to assess the future consequences of our actions," says James Hughes, the executive director of the Institute for Ethics and Emerging Technologies. (Steve Miller for the Boston Globe)

A talk with James Hughes

Physicists, zombies, rogue nanobots, and other long-odds threats to life as we know it.

By Peter Bebergal
September 7, 2008

ON WEDNESDAY, HUNDREDS of feet below ground in Europe, a proverbial switch will be pulled on the Large Hadron Collider, a new multibillion-dollar machine designed to smash subatomic particles together at immense speeds. The device could help physicists rewrite the rules of the universe. It could also, just possibly, do something else: create a tiny black hole that would result in the end of all life as we know it.

Most scientists are confident that the danger is vanishingly small, and a number of research papers have concluded the experiment is safe. But are the potential gains to science really worth even a tiny risk of eradicating the earth? This question, writ large, is the province of a group of scholars who study potential global catastrophe. At the center of their work lies an almost unanswerable question: How should we deal with very unlikely threats that also carry the potential to extinguish human civilization?

This past July, specialists convened in Oxford, England, for the first Global Catastrophic Risks Conference. The group included philosophers, physicists, and sociologists; aside from the huge particle accelerator, they looked at the threat of massive asteroid collisions, gamma ray bursts from supernovas that could sterilize the planet, man-made nanobots that could replicate and consume the earth's surface, and out-of-control artificial intelligence.

James Hughes, a lecturer in public policy at Trinity College and the executive director of the Institute for Ethics and Emerging Technologies, spoke at the conference on how apocalyptic fears (and hopes) inhibit clear thinking about catastrophic risks. A sociologist by training, Hughes is optimistic that humanity will be sufficiently technologically savvy by the time it faces some of the more awful possible predicaments. But he also suggests that we do need to start focusing on some long-term threats.

Ideas spoke with Hughes by phone at his home in Willington, Conn.

IDEAS: What are some of the man-made risks we should be concerned about?

HUGHES: Well, there are the traditional 20th-century man-made risks that most people think about, the weapons of mass destruction risks: nuclear weapons, bioterrorism. Chemical weapons are not really part of the picture. But bioterrorism could theoretically create some kind of agents that could wipe out most of humanity.

IDEAS: Is there a way to keep new and dangerous technologies at bay?

HUGHES: I think one of the reasons why people are extremely pessimistic about this is that we don't yet have a global regime that would make it possible to ban technologies on a global level. Some people see the creation of such a regime as a risk in and of itself.

IDEAS: Who needs to be thinking about this kind of threat - the researchers at the conference, or the scientists creating the technology?

HUGHES: In this case we were saying, "Well, this is really philosophical about how we assess risk, and once we figure all that out, we'll advise the government." In fact, we had a one-day seminar for the British version of Homeland Security. . . . They think about a whole different category of risk - the things on a five-year time horizon, like floods. Asteroids and all these other things weren't ever on their table, so we were kind of talking past each other. But it was the right conversation to have.

IDEAS: With asteroid collisions, is there enough risk that we should be investing huge amounts of money into making sure we're safe?

HUGHES: Well, I think there's a general consensus among catastrophic-risk people that . . . it deserves at least as much money as we put into other science and military endeavors.

IDEAS: Do you feel a little heart-heavy when you realize that we're barely prepared to help a major American city survive a hurricane?

HUGHES: Yes, I'm heart-heavy. . . . At every single level you have these huge, seemingly insurmountable obstacles. The public and policy makers seem to jump from risk to risk without any serious consideration. The risk of an American dying in a terrorist attack is smaller than the risk of being hit by lightning. So why did we spend almost a trillion dollars over the last eight years on the war on terror, and not on people getting hit by lightning?

IDEAS: What about the Hadron Collider? How do you decide ultimately to turn the thing on?

HUGHES: I think there is some threshold of risk at which you have to say, "OK, even though it's a really small risk, if it's the end of everything, then you shouldn't do that." The argument that I found convincing was that the risk of the Hadron Collider creating a catastrophe was on the same level as driving down the road and having your car spontaneously turn into a horse through a simple quantum fluctuation.

IDEAS: Should scientists be held accountable, or should they be allowed to do their work unfettered?

HUGHES: I think we all have an obligation as individuals to try to assess the future consequences of our actions. . . . But, that said, scientists are prepared neither to assess the consequences of their actions nor the ethical nature of their actions. And like all people in all occupations in all walks of life, they have vested interests.

IDEAS: In terms of man-made technology, can we evolve faster than our ability to destroy ourselves?

HUGHES: I think Einstein may have even said it - we weren't wise enough for how fast our technology is evolving. With nuclear weapons and bioterrorism, we have a pretty good idea of how to assess these risks. We've known what to do for 50 years, and we haven't really done it - which is to create strong transnational institutions that monitor everybody who's doing nukes.

IDEAS: And yet we're continuing to work on technologies that could lead to threats in the future.

HUGHES: We're not going to give up Google because of the hypothetical possibility that everything that's connected to Google will suddenly wake up and take over the Net. Google is just too useful to us.

IDEAS: So ultimately should short-term progress trump longer-term risk?

HUGHES: If you saw . . . the Will Smith movie ["I Am Legend"] . . . what creates this global zombie apocalypse is the mutant version of a cancer drug. . . . I can't say there's no possibility that gene therapy for cancer, or stem cell research for cancer, might lead to a zombie apocalypse, but it's extremely unlikely. So at some point you have to say, "What is the real risk?"

Peter Bebergal is a frequent contributor to the Globe. He has a blog at
