
Global Catastrophic Risks Paperback – Oct 22 2011

Formats and editions:
  • Kindle Edition
  • Paperback: CDN$ 31.50 (new from CDN$ 23.25, used from CDN$ 31.24)





Product Details

  • Paperback: 576 pages
  • Publisher: Oxford University Press (Oct. 22 2011)
  • Language: English
  • ISBN-10: 0199606501
  • ISBN-13: 978-0199606504
  • Product Dimensions: 22.9 x 2.8 x 15.5 cm
  • Shipping Weight: 962 g
  • Amazon Bestsellers Rank: #131,593 in Books


Product Description


`Review from previous edition: This volume is remarkably entertaining and readable... It's risk assessment meets science fiction.' Natural Hazards Observer

`The book works well, providing a mine of peer-reviewed information on the great risks that threaten our own and future generations.' Nature

`We should welcome this fascinating and provocative book.' Martin J. Rees (from the foreword)


About the Author

Nick Bostrom, Ph.D., is Director of the Future of Humanity Institute, in the James Martin 21st Century School, at Oxford University. He previously taught at Yale University in the Department of Philosophy and in the Yale Institute for Social and Policy Studies. Bostrom has served as an expert consultant for the European Commission in Brussels and for the Central Intelligence Agency in Washington DC. He has advised the British Parliament, the European Parliament, and many other public bodies on issues relating to emerging technologies. Milan M. Cirkovic, Ph.D., is a senior research associate of the Astronomical Observatory of Belgrade (Serbia) and a professor of Cosmology in the Department of Physics, University of Novi Sad (Serbia). He received his PhD in Physics and his MSc in Earth and Space Sciences from the State University of New York at Stony Brook (USA), and his BSc in Theoretical Physics from the University of Belgrade.


Customer Reviews

Most Helpful Customer Reviews (13 reviews)
29 of 32 people found the following review helpful
Important (Sept. 25 2008)
By Peter McCluskey
Format: Hardcover
This is a relatively comprehensive collection of thoughtful essays about the risks of a major catastrophe (mainly those that would kill a billion or more people).
Probably the most important chapter is the one on risks associated with AI, since few people attempting to create an AI seem to understand the possibilities it describes. It makes some implausible claims about the speed with which an AI could take over the world, but the argument they are used to support only requires that a first-mover advantage be important, and that is only weakly dependent on assumptions about the speed with which AI will improve.
The risk of a large fraction of humanity being killed by a super-volcano is apparently higher than the risk from asteroids, but volcanoes have more of a limit on their maximum size, so they appear to pose less risk of human extinction.
The risks of asteroids and comets can't be handled as well as I thought by early detection, because some dark comets can't be detected with current technology until it's way too late. It seems we ought to start thinking about better detection systems, which would probably require large improvements in the cost-effectiveness of space-based telescopes or other sensors.
Many of the volcano and asteroid deaths would be due to crop failures from cold weather. Since mid-ocean temperatures are more stable than land temperatures, ocean-based aquaculture would help mitigate this risk.
The climate change chapter seems much more objective and credible than what I've previously read on the subject, but is technical enough that it won't be widely read, and it won't satisfy anyone who is looking for arguments to justify their favorite policy. The best part is a list of possible instabilities which appear unlikely but which aren't understood well enough to evaluate with any confidence.
The chapter on plagues mentions one surprising risk - better sanitation made polio more dangerous by altering the age at which it infected people. If I'd written the chapter, I'd have mentioned Ewald's analysis of how human behavior influences the evolution of strains which are more or less virulent.
There's good news about nuclear proliferation which has been under-reported - a fair number of countries have abandoned nuclear weapons programs, and a few have given up nuclear weapons. So if there's any trend, it's toward fewer countries trying to build them, and a stable number of countries possessing them. The bad news is we don't know whether nanotechnology will change that by drastically reducing the effort needed to build them.
The chapter on totalitarianism discusses some uncomfortable tradeoffs between the benefits of some sort of world government and the harm that such government might cause. One interesting claim:
totalitarian regimes are less likely to foresee disasters, but are in some ways better-equipped to deal with disasters that they take seriously.
11 of 12 people found the following review helpful
I'll Read this Book Again (Jan. 13 2009)
By Mike Byrne
Format: Hardcover Verified Purchase
GCR (Global Catastrophic Risks) is a real page-turner. I literally couldn't put it down. Sometimes I'd wake up in the middle of the night with the book open on my chest, and the lights on, and I'd begin reading again from where I'd dozed off hours earlier, and I'd keep reading till I just had to go to sleep.

I had read a review of GCR in the scientific journal "Nature" in which the reviewer complained that the authors had given the global warming issue short shrift. I considered this a plus.

If, like me, you get very annoyed by "typos," be forewarned. There are enough typos in GCR to start a collection. At first I was a bit annoyed by them, but some were quite amusing... almost as if they were done on purpose.

Most of the typos were straight typing errors, or errors of fact. For example, on page 292 the author says that the 1918 flu pandemic killed "only 23%" of those infected. Only 23%? That seems a rather high percentage to be preceded by the qualifier "only". Of course, although 50 million people died in the pandemic, this represented "only" 2% to 3% of those infected... not 23%. On p. 295 we read "the rats and their s in ships" and it might take us a moment to determine that it should have read, "the rats and their fleas in ships."

But many of the typos were either fun, or a bit more tricky to figure out: on p. 254 we find "canal so" which you can probably predict should have been "can also." Much trickier, on p. 255 we find, "A large meteoric impact was invoked (...) in order to explain their idium anomaly." Their idium anomaly?? Nah. Better would have been..."the iridium anomaly!" (That's one of my favorites.) Elsewhere, we find twice on the same page "an arrow" instead of "a narrow"... and so it goes..."mortality greater than $1 million." on p. 168 (why the $ sign?) etc. etc.

But the overall impact of the book is tremendous. We learn all sorts of arcane and troubling data, e.g. from p. 301: "A totally unforseen complication of the successful restoration of immunologic function by the treatment of AIDS with antiviral drugs has been the activation of dormant leprosy..." I can hear the phone call now... "Darling, I have some wonderful news, and some terrible news... hold on a second dearest, my nose just fell off..."

So even if you're usually turned off by typos, don't let that stop you from buying this book. I expected more from the Oxford University Press, but I guess they've sacked the proofreader and they're using Spell-Check these days. But then, how did "their idium anomaly" get past Spell-Check? I guess Spell-Check at Oxford includes Latin.
2 of 2 people found the following review helpful
A work of tremendous importance. (July 27 2012)
By TretiaK
Format: Hardcover Verified Purchase
This book is a compendium of what are referred to as GCRs (Global Catastrophic Risks) that represent future risks and cases, and merit much consideration if the human species is to survive into the future. Some of the cases given can seem "out there" so to speak, in that they can seem like very futuristic and remote possibilities, but nonetheless are still very pertinent for the continuity of humanity. Social collapse, astrophysical catastrophes, technological breakthroughs, apocalyptic ideas given social consideration and credibility, thermonuclear war, and biases in human reasoning are all among the broad categories discussed within the book's content. Some areas could have been expanded upon more, and other areas have an intermix of technical jargon here and there. This book would do very well for people looking to supplement their efforts in things like social work and for people in scientific disciplines, and to get a sense of general awareness and exposure from experts in the field as to how relevant these issues already are, or are going to become in the future.

Probably the most dangerous future risk is going to be the advent of real Artificial Intelligence within our lifetime or very near into the future. Eliezer Yudkowsky is the top figurehead and spokesman for factors involved in this risk and is the author of the chapter on this specific risk within the book. If our fears are to become a reality, then it doesn't matter much whatever else we get right. For many of the other risks to worry about, we already have a wealth of information on their occurrences, how they work, how likely they are to affect us, and how they will affect us when they come. The risks concerning the arrival of AI, however, are far more dangerous in that this isn't an experiment that we get to run so that reality can beat us over the head with the correct answer. If we are to achieve true FAI (Friendly Artificial Intelligence, as Yudkowsky calls it), then a massive amount of dedication, money and effort is needed for the research required to avoid a real disaster. If our aims are achieved and realized, however, many of the other risks and concerns we have can be offset to the handling of an intelligence much greater than ourselves, with a higher probability and likelihood of being overcome.

We are passing through a stage where we are beginning to create problems that are beyond our current capacity to provide solutions for. This book is probably the best general and somewhat technical primer for becoming acquainted with the serious problems we are currently facing and that we will inevitably arrive at in the future. If you are truly keen on getting involved with the kinds of problems we will have to confront, this book is indispensable.
4 of 5 people found the following review helpful
Eclectic and thought-provoking academic essays (Nov. 6 2008)
By David J. Aldous
Format: Hardcover Verified Purchase
21 chapters by different authors succeed in the declared goal of giving a big picture of the subject (GCR) as seen by academics in different disciplines. The content is appropriately non-technical -- like the serious end of the "popular science" genre -- though the writing styles are more reminiscent of an academic paper or lecture than the style of best-selling popular science books. The opening 8 "background" chapters (on very diverse topics from long-term astrophysics to public policy toward catastrophe) were the least satisfying to me, many (while interesting in themselves) seeming to be each author's favorite lecture recycled with a nod to GCR. Of these chapters, let me just single out Eliezer Yudkowsky's chapter on cognitive biases in an individual's risk assessments, as one of the best 20-page summaries of that topic I have read.

Amongst the core chapters discussing particular risks, the three that are most "hard science", on supervolcanoes, asteroid or comet impact, and extra-solar-system risks, are just great -- one learns for instance that (contrary to much science fiction) comets are more a risk than asteroids, and the major risk in the last category is not nearby supernovas but cosmic rays created by gamma ray bursts. These three chapters are perhaps the only contexts where it's reasonable to attempt to estimate actual probabilities of the catastrophes.

The balanced article on global warming is unlikely to please extremists, concluding that mainstream science predicts a linear increase in temperature that may be unpleasant but not catastrophic, while the various speculative non-linear possibilities leading to catastrophe have plausibilities impossible to assess. The article on pandemics is surprisingly upbeat ("are influenza pandemics likely? Possibly, except for the preposterous mortality rate that has been proposed"), as is the article on exotic physics ("Might our vacuum be only metastable? If so, we can envisage a terminal catastrophe, when the field configuration of empty space changes, and with it the effective laws of physics..."). The articles on nuclear war, on nuclear terrorism, and on risks from biotechnology and from nanotechnology are perfectly sensible and well-argued. These articles are somewhat technical, so it is a curious relief to arrive at "totalitarian government", which discusses in an easy-to-read way why 20th century totalitarian governments did not last forever, and circumstances under which a stable worldwide totalitarian government might emerge.

The article on AIs emphasizes that we wrongly imagine intelligent machines as like humans -- "how likely is it that AI will cross the vast gap from amoeba to village idiot, and then stop at the level of human genius?" -- and that we should attempt to envisage something quite different. But the subsequent discussion of Friendly or Unfriendly AIs rests on the assumptions that AIs may be created which have intelligence and motivation ("optimization targets", in the author's effort to avoid anthropomorphizing) to do things on their own initiative, and that their motivations will be comprehensible to humans. Well, I find it hard enough to imagine what "motivation/optimization targets" mean to an amoeba or a village idiot, let alone an AI.

The only article I found positively unsatisfactory was on social collapse. A catastrophe eliminating global food production for one year would likely cause "collapse of civilization" in fighting over the 2 months food supply in storage. But not elimination for just one month. A serious discussion of the sizes of different catastrophes needed to reach this tipping point would be fascinating, but the article merely assumes power law distributions for the size of an unspecified disaster -- this is the sort of thing that brings mathematical modeling into disrepute.

Overall, a valuable and eclectic selection of thought-provoking articles.
8 of 11 people found the following review helpful
A Catalyst for Ideas and Actions (Aug. 23 2008)
By R. B. Cathcart
Format: Hardcover
Individual and government policy instigators everywhere, starting with voting citizens, must face the common problem of bringing expert knowledge to bear on globalized public policy-making. This book is rather like a "think tank" in and of itself and could serve such a purpose successfully! Campaigning in 1912, the intellectual and soon-to-be-US President Woodrow Wilson commented that "What I fear is a government of experts". Yet, in the 21st Century, the world-public has to have the best possible advice on macro-problems that can, may or certainly will impact human society. The Canadian scientist Vaclav Smil, in GLOBAL CATASTROPHES AND TRENDS: THE NEXT FIFTY YEARS (2008), has also foreseen, along with the stellar topic-centered name writers in this excellent revelatory text, the necessity of focused individuals, investigative panels and advisory bodies helping the world-public. None express a desire or need to "rule the world", the stealing of choices from the world-public, or the foreclosure of world-public options for future life-styles! However, they do a masterful job of explicating the macro-problems developing, impending or forecastable. The well-edited prose, informative diagrams and necessary illustrations are simply awe-inspiring! This demonstrative text--by no means to be considered a textbook--is fascinating, alarming, inspiring and just plain delicious reading. I recommend it as a 10 on a scale of 1 to 10!!