Global Catastrophic Risks Hardcover – Aug 1 2008
"This volume is remarkably entertaining and readable... It's risk assessment meets science fiction." Natural Hazards Observer
"The book works well, providing a mine of peer-reviewed information on the great risks that threaten our own and future generations." Nature
"We should welcome this fascinating and provocative book." Martin J. Rees (from the foreword)
About the Author
Nick Bostrom, PhD, is Director of the Future of Humanity Institute in the James Martin 21st Century School at Oxford University. He previously taught at Yale University in the Department of Philosophy and in the Yale Institute for Social and Policy Studies. He is the author of more than 130 publications, including many in leading academic journals, and his writings have been translated into more than 16 languages. Bostrom pioneered the concept of existential risk, developed the first mathematically explicit theory of observation selection effects, and originated the simulation argument; he is also the author of a number of seminal studies on the implications of future technologies.

Milan M. Ćirković, PhD, is a senior research associate of the Astronomical Observatory of Belgrade (Serbia) and a professor of cosmology in the Department of Physics, University of Novi Sad (Serbia). He received his PhD in physics from the State University of New York at Stony Brook (USA). His primary research interests are in astrophysical cosmology (baryonic dark matter, star formation, the future of the universe), astrobiology (anthropic principles, SETI studies, catastrophic episodes in the history of life), and the philosophy of science (risk analysis, future studies, foundational issues in quantum mechanics and cosmology).
Most Helpful Customer Reviews on Amazon.com
Probably the most important chapter is the one on risks associated with AI, since few people attempting to create an AI seem to understand the possibilities it describes. It makes some implausible claims about the speed with which an AI could take over the world, but the argument those claims support only requires that a first-mover advantage be important, and that depends only weakly on assumptions about the speed with which AI will improve.
The risk of a large fraction of humanity being killed by a supervolcano is apparently higher than the risk from asteroids, but volcanoes have more of a limit on their maximum size, so they appear to pose less risk of human extinction.
The risks of asteroids and comets can't be handled as well as I thought by early detection, because some dark comets can't be detected with current technology until it's way too late. It seems we ought to start thinking about better detection systems, which would probably require large improvements in the cost-effectiveness of space-based telescopes or other sensors.
Many of the volcano and asteroid deaths would be due to crop failures from cold weather. Since mid-ocean temperatures are more stable than land temperatures, ocean-based aquaculture would help mitigate this risk.
The climate change chapter seems much more objective and credible than what I've previously read on the subject, but is technical enough that it won't be widely read, and it won't satisfy anyone who is looking for arguments to justify their favorite policy. The best part is a list of possible instabilities which appear unlikely but which aren't understood well enough to evaluate with any confidence.
The chapter on plagues mentions one surprising risk - better sanitation made polio more dangerous by altering the age at which it infected people. If I'd written the chapter, I'd have mentioned Ewald's analysis of how human behavior influences the evolution of strains which are more or less virulent.
There's good news about nuclear proliferation which has been under-reported - a fair number of countries have abandoned nuclear weapons programs, and a few have given up nuclear weapons. So if there's any trend, it's toward fewer countries trying to build them, and a stable number of countries possessing them. The bad news is we don't know whether nanotechnology will change that by drastically reducing the effort needed to build them.
The chapter on totalitarianism discusses some uncomfortable tradeoffs between the benefits of some sort of world government and the harm that such government might cause. One interesting claim:
totalitarian regimes are less likely to foresee disasters, but are in some ways better-equipped to deal with disasters that they take seriously.
I had read a review of GCR in the scientific journal "Nature" in which the reviewer complained that the authors had given the global warming issue short shrift. I considered this a plus.
If, like me, you get very annoyed by "typos," be forewarned. There are enough typos in GCR to start a collection. At first I was a bit annoyed by them, but some were quite amusing... almost as if they were done on purpose.
Most of the typos were straight typing errors, or errors of fact. For example, on page 292 the author says that the 1918 flu pandemic killed "only 23%" of those infected. Only 23%? That seems a rather high percentage to be preceded by the qualifier "only". Of course, although 50 million people died in the pandemic, this represented "only" 2% to 3% of those infected... not 23%. On p. 295 we read "the rats and their s in ships", and it might take us a moment to determine that it should have read, "the rats and their fleas in ships."
But many of the typos were either fun, or a bit more tricky to figure out: on p. 254 we find "canal so" which you can probably predict should have been "can also." Much trickier, on p. 255 we find, "A large meteoric impact was invoked (...) in order to explain their idium anomaly." Their idium anomaly?? Nah. Better would have been..."the iridium anomaly!" (That's one of my favorites.) Elsewhere, we find twice on the same page "an arrow" instead of "a narrow"... and so it goes..."mortality greater than $1 million." on p. 168 (why the $ sign?) etc. etc.
But the overall impact of the book is tremendous. We learn all sorts of arcane and troubling data, e.g. from p. 301: "A totally unforeseen complication of the successful restoration of immunologic function by the treatment of AIDS with antiviral drugs has been the activation of dormant leprosy..." I can hear the phone call now... "Darling, I have some wonderful news, and some terrible news... hold on a second dearest, my nose just fell off..."
So even if you're usually turned off by typos, don't let that stop you from buying this book. I expected more from the Oxford University Press, but I guess they've sacked the proofreader and they're using Spell-Check these days. But then, how did "their idium anomaly" get past Spell-Check? I guess Spell-Check at Oxford includes Latin.
Probably the most dangerous future risk is the advent of real Artificial Intelligence within our lifetime or the very near future. Eliezer Yudkowsky is the leading spokesman for the factors involved in this risk and the author of the chapter covering it in the book. If those fears become reality, it won't matter much what else we get right. For many of the other risks, we already have a wealth of information on their past occurrences, how they work, how likely they are to affect us, and how they will affect us when they come. The risks surrounding the arrival of AI are far more dangerous in that this isn't an experiment we get to run in practice, with reality beating us over the head with the correct answer. If we are to achieve true FAI (Friendly Artificial Intelligence, as Yudkowsky calls it), then a massive amount of dedication, money, and effort is needed for the research required to avoid a real disaster. If those aims are achieved, however, many of our other risks and concerns can be handed off to an intelligence much greater than ourselves, with a far higher probability of being overcome.
We are passing through a stage where we are beginning to create problems that are beyond our current capacity to solve. This book is probably the best general, and somewhat technical, primer for becoming acquainted with the serious problems we currently face and will inevitably arrive at in the future. If you are truly keen on getting involved with the kinds of problems we will have to confront, this book is indispensable.
Amongst the core chapters discussing particular risks, the three that are most "hard science" -- on supervolcanoes, asteroid or comet impact, and extra-solar-system risks -- are just great. One learns, for instance, that (contrary to much science fiction) comets are more of a risk than asteroids, and that the major risk in the last category is not nearby supernovas but cosmic rays created by gamma-ray bursts. These three chapters are perhaps the only contexts where it's reasonable to attempt to estimate actual probabilities of the catastrophes.
The balanced article on global warming is unlikely to please extremists, concluding that mainstream science predicts a linear increase in temperature that may be unpleasant but not catastrophic, while the various speculative non-linear possibilities leading to catastrophe have plausibilities impossible to assess. The article on pandemics is surprisingly upbeat ("are influenza pandemics likely? Possibly, except for the preposterous mortality rate that has been proposed"), as is the article on exotic physics ("Might our vacuum be only metastable? If so, we can envisage a terminal catastrophe, when the field configuration of empty space changes, and with it the effective laws of physics..."). The articles on nuclear war, on nuclear terrorism, and on risks from biotechnology and from nanotechnology are perfectly sensible and well-argued. These articles are somewhat technical, so it is a curious relief to arrive at "totalitarian government", which discusses in an easy-to-read way why 20th-century totalitarian governments did not last forever, and the circumstances under which a stable worldwide totalitarian government might emerge.
The article on AIs emphasizes that we wrongly imagine intelligent machines as like humans -- "how likely is it that AI will cross the vast gap from amoeba to village idiot, and then stop at the level of human genius?" -- and that we should attempt to envisage something quite different. But the subsequent discussion of Friendly or Unfriendly AIs rests on the assumptions that AIs may be created which have intelligence and motivation ("optimization targets", in the author's effort to avoid anthropomorphizing) to do things on their own initiative, and that their motivations will be comprehensible to humans. Well, I find it hard enough to imagine what "motivation/optimization targets" mean to an amoeba or a village idiot, let alone an AI.
The only article I found positively unsatisfactory was the one on social collapse. A catastrophe eliminating global food production for one year would likely cause "collapse of civilization" through fighting over the two months' food supply in storage -- but an elimination lasting just one month would not. A serious discussion of the sizes of the different catastrophes needed to reach this tipping point would be fascinating, but the article merely assumes power-law distributions for the size of an unspecified disaster -- this is the sort of thing that brings mathematical modeling into disrepute.
Overall, a valuable and eclectic selection of thought-provoking articles.
Look for similar items by category
- Books > Politics & Social Sciences > Current Events > Disaster Relief
- Books > Politics & Social Sciences > Current Events > Poverty > Social Services & Welfare
- Books > Politics & Social Sciences > Current Events > Terrorism
- Books > Politics & Social Sciences > Politics > Terrorism
- Books > Politics & Social Sciences > Social Sciences > Political Science > Government
- Books > Professional & Technical > Professional Science > Earth Sciences
- Books > Science & Math > Earth Sciences