
Global Catastrophic Risks [Hardcover]

Nick Bostrom, Milan Ćirković

List Price: CDN$ 67.50
Price: CDN$ 49.04 & FREE Shipping. Details
You Save: CDN$ 18.46 (27%)

Formats

Format           Amazon Price
Kindle Edition   CDN$ 14.74
Hardcover        CDN$ 49.04
Paperback        CDN$ 29.92

Book Description

Aug. 1 2008
A global catastrophic risk is one with the potential to wreak death and destruction on a global scale. In human history, wars and plagues have done so on more than one occasion, and misguided ideologies and totalitarian regimes have darkened an entire era or a region. Advances in technology are adding dangers of a new kind. It could happen again. In Global Catastrophic Risks, 26 leading experts look at the gravest risks facing humanity in the 21st century, including natural catastrophes, nuclear war, terrorism, global warming, biological weapons, totalitarianism, advanced nanotechnology, general artificial intelligence, and social collapse. The book also addresses over-arching issues - policy responses and methods for predicting and managing catastrophes. This is invaluable reading for anyone interested in the big issues of our time; for students focusing on science, society, technology, and public policy; and for academics, policy-makers, and professionals working in these acutely important fields.

Product Description

Review

"This volume is remarkably entertaining and readable... It's risk assessment meets science fiction." - Natural Hazards Observer

"The book works well, providing a mine of peer-reviewed information on the great risks that threaten our own and future generations." - Nature

"We should welcome this fascinating and provocative book." - Martin J. Rees (from the foreword)

About the Author

Nick Bostrom, PhD, is Director of the Future of Humanity Institute in the James Martin 21st Century School at Oxford University. He previously taught at Yale University in the Department of Philosophy and in the Yale Institute for Social and Policy Studies. He is the author of more than 130 publications, including many in leading academic journals, and his writings have been translated into more than 16 languages. Bostrom pioneered the concept of existential risk, developed the first mathematically explicit theory of observation selection effects, and is the originator of the simulation argument as well as the author of a number of seminal studies on the implications of future technologies. Milan M. Ćirković, PhD, is a senior research associate of the Astronomical Observatory of Belgrade (Serbia) and a professor of cosmology in the Department of Physics, University of Novi Sad (Serbia). He received his PhD in physics from the State University of New York at Stony Brook (USA). His primary research interests are in astrophysical cosmology (baryonic dark matter, star formation, the future of the universe), astrobiology (anthropic principles, SETI studies, catastrophic episodes in the history of life), and the philosophy of science (risk analysis, future studies, foundational issues in quantum mechanics and cosmology).

Customer Reviews

There are no customer reviews yet on Amazon.ca
Most Helpful Customer Reviews on Amazon.com (beta)
Amazon.com: 4.1 out of 5 stars  9 reviews
25 of 28 people found the following review helpful
5.0 out of 5 stars Important Sept. 25 2008
By Peter McCluskey - Published on Amazon.com
Format:Hardcover
This is a relatively comprehensive collection of thoughtful essays about the risks of a major catastrophe (mainly those that would kill a billion or more people).
Probably the most important chapter is the one on risks associated with AI, since few people attempting to create an AI seem to understand the possibilities it describes. It makes some implausible claims about the speed with which an AI could take over the world, but the argument they are used to support only requires that a first-mover advantage be important, and that is only weakly dependent on assumptions about the speed with which AI will improve.
The risk of a large fraction of humanity being killed by a super-volcano is apparently higher than the risk from asteroids, but volcanoes have more of a limit on their maximum size, so they appear to pose less risk of human extinction.
The risks of asteroids and comets can't be handled as well as I thought by early detection, because some dark comets can't be detected with current technology until it's way too late. It seems we ought to start thinking about better detection systems, which would probably require large improvements in the cost-effectiveness of space-based telescopes or other sensors.
Many of the volcano and asteroid deaths would be due to crop failures from cold weather. Since mid-ocean temperatures are more stable than land temperatures, ocean-based aquaculture would help mitigate this risk.
The climate change chapter seems much more objective and credible than what I've previously read on the subject, but is technical enough that it won't be widely read, and it won't satisfy anyone who is looking for arguments to justify their favorite policy. The best part is a list of possible instabilities which appear unlikely but which aren't understood well enough to evaluate with any confidence.
The chapter on plagues mentions one surprising risk - better sanitation made polio more dangerous by altering the age at which it infected people. If I'd written the chapter, I'd have mentioned Ewald's analysis of how human behavior influences the evolution of strains which are more or less virulent.
There's good news about nuclear proliferation which has been under-reported - a fair number of countries have abandoned nuclear weapons programs, and a few have given up nuclear weapons. So if there's any trend, it's toward fewer countries trying to build them, and a stable number of countries possessing them. The bad news is we don't know whether nanotechnology will change that by drastically reducing the effort needed to build them.
The chapter on totalitarianism discusses some uncomfortable tradeoffs between the benefits of some sort of world government and the harm that such government might cause. One interesting claim:
totalitarian regimes are less likely to foresee disasters, but are in some ways better-equipped to deal with disasters that they take seriously.
6 of 7 people found the following review helpful
5.0 out of 5 stars I'll Read this Book Again Jan. 13 2009
By Mike Byrne - Published on Amazon.com
Format:Hardcover|Verified Purchase
GCR (Global Catastrophic Risks) is a real page-turner. I literally couldn't put it down. Sometimes I'd wake up in the middle of the night with the book open on my chest, and the lights on, and I'd begin reading again from where I'd dozed off hours earlier, and I'd keep reading till I just had to go to sleep.

I had read a review of GCR in the scientific journal "Nature" in which the reviewer complained that the authors had given the global warming issue short shrift. I considered this a plus.

If, like me, you get very annoyed by "typos," be forewarned. There are enough typos in GCR to start a collection. At first I was a bit annoyed by them, but some were quite amusing... almost as if they were done on purpose.

Most of the typos were straight typing errors, or errors of fact. For example, on page 292 the author says that the 1918 flu pandemic killed "only 23%" of those infected. Only 23%? That seems a rather high percentage to be preceded by the qualifier "only". Of course, although 50 million people died in the pandemic, this represented "only" 2% to 3% of those infected... not 23%. On p. 295 we read "the rats and their s in ships" and it might take us a moment to determine that it should have read, "the rats and their fleas in ships."

But many of the typos were either fun, or a bit more tricky to figure out: on p. 254 we find "canal so" which you can probably predict should have been "can also." Much trickier, on p. 255 we find, "A large meteoric impact was invoked (...) in order to explain their idium anomaly." Their idium anomaly?? Nah. Better would have been..."the iridium anomaly!" (That's one of my favorites.) Elsewhere, we find twice on the same page "an arrow" instead of "a narrow"... and so it goes..."mortality greater than $1 million." on p. 168 (why the $ sign?) etc. etc.

But the overall impact of the book is tremendous. We learn all sorts of arcane and troubling data, e.g. from p. 301: "A totally unforseen complication of the successful restoration of immunologic function by the treatment of AIDS with antiviral drugs has been the activation of dormant leprosy..." I can hear the phone call now... "Darling, I have some wonderful news, and some terrible news... hold on a second dearest, my nose just fell off..."

So even if you're usually turned off by typos, don't let that stop you from buying this book. I expected more from the Oxford University Press, but I guess they've sacked the proofreader and they're using Spell-Check these days. But then, how did "their idium anomaly" get past Spell-Check? I guess Spell-Check at Oxford includes Latin.
13 of 17 people found the following review helpful
2.0 out of 5 stars Disappointing July 18 2010
By Ian - Published on Amazon.com
Format:Hardcover|Verified Purchase
This is a most disappointing volume. I bought it expecting so much, from Oxford, that great source and fount, from supposed young intellectually virile editors located in supposedly buzzing centres of excellence. Big flop. For me the best papers were the most irrelevant: the ones that dealt with what's going to happen to us on the billion-year time scale, or alternatively if the wildest space objects have a say in our future. Too many other papers seemed to say so much that was unsurprising, unhelpful and, I have to say sadly, pretentious. Maths was used to no good or insightful end in some of that latter group, presumably just to show off (e.g. the Drake equation in a context that just baffles me for relevance).

I may be wrong, but as the volume proceeded I gained the growing impression that the authors increasingly lacked a sense of what meaningful things they could actually say about the topics they'd been assigned. Perhaps I'm harsh, but I did not enjoy this read - or learn much from it. A final pedantic point: for OUP, too many glitches and typos, obviously beneath the dignity of the young high-flying editors to bother with.
8 of 11 people found the following review helpful
5.0 out of 5 stars A Catalyst for Ideas and Actions Aug. 23 2008
By R. B. Cathcart - Published on Amazon.com
Format:Hardcover
Individual and government policy instigators everywhere, starting with voting citizens, must face the common problem of bringing expert knowledge to bear on globalized public policy-making. This book is rather like a "think tank" in and of itself and could serve such a purpose successfully! Campaigning in 1912, the intellectual and soon-to-be US President Woodrow Wilson commented that "What I fear is a government of experts". Yet, in the 21st century, the world-public has to have the best possible advice on macro-problems that can, may, or certainly will impact human society. The Canadian scientist Vaclav Smil, in GLOBAL CATASTROPHES AND TRENDS: THE NEXT FIFTY YEARS (2008), has also foreseen, along with the stellar topic-centered name writers in this excellent revelatory text, the necessity of focused individuals, investigative panels and advisory bodies helping the world-public. None express a desire or need to "rule the world", the stealing of choices from the world-public, or the foreclosure of world-public options for future life-styles! However, they do a masterful job of explicating the macro-problems developing, impending or forecastable. The well-edited prose, informative diagrams and necessary illustrations are simply awe-inspiring! This demonstrative text - by no means to be considered a textbook - is fascinating, alarming, inspiring and just plain delicious reading. I recommend it as a 10 on a scale of 1 to 10!!
1 of 1 people found the following review helpful
2.0 out of 5 stars A difficult presentation of simple information June 13 2014
By Three if by Space - Published on Amazon.com
Format:Paperback
While the risks are real enough, you practically need to already be an expert to understand Bostrom's points. From the sections I read, the entire book could be collapsed into 50 pages.

I'm an expert in Artificial Intelligence. Bostrom makes the point that AI can become uncontrolled, because while there's a way to test an existing program, there's no way to test a program that re-writes itself, because you don't know what it will turn into. There. I just gave a synopsis of one of the main points of an entire chapter.
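The reviewer's point can be sketched in a few lines. The following is a hypothetical toy, not anything from the book: tests that pass on the original code say nothing about a program after it has modified itself.

```python
# Hypothetical toy "self-rewriting" agent: each call replaces its own
# policy, so the behavior verified before deployment drifts away.

def make_agent():
    state = {"policy": lambda x: x + 1}  # the version we can test up front

    def act(x):
        out = state["policy"](x)
        # The agent rewrites its own policy; future behavior is a new,
        # never-tested function.
        state["policy"] = lambda y, prev=state["policy"]: prev(y) * 2
        return out

    return act

agent = make_agent()
print(agent(1))  # 2 - matches the tested specification
print(agent(1))  # 4 - behavior already differs from what was tested
print(agent(1))  # 8 - and keeps drifting with each self-modification
```

The `prev=state["policy"]` default argument freezes the current policy inside each rewrite, so every call wraps the last version in new, untested behavior.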

Bostrom is wasteful with words on multiple levels. An illustrative example he gives showing how training AI neural nets can fail can be stated in a couple of sentences. It's a well-known example in the AI community, yet it takes long, turgid prose for Bostrom to get to the point. Here it is: when an AI program was taught to recognize tanks, it achieved a very good success rate. When the program was applied to a different country's tanks, it failed completely, because it had been trained to recognize not tanks but the similar lighting conditions in one country's photographs.
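The tank anecdote is a classic case of a spurious feature. As a toy illustration (the data and the single "brightness" feature are invented for this sketch, not taken from the book), a classifier that simply picks the best brightness threshold looks perfect on training data where every tank photo happens to be bright, then collapses when that lighting correlation flips:

```python
# Toy "tank detector": learns a brightness threshold instead of tanks.
# Each sample is (brightness in [0, 1], is_tank).

def train_threshold(samples):
    """Pick the brightness threshold that best separates training labels."""
    best_t, best_acc = 0.0, 0.0
    for t in [b for b, _ in samples]:
        acc = sum((b >= t) == label for b, label in samples) / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Training set: every "tank" photo happens to be taken in bright light.
train = [(0.9, True), (0.8, True), (0.2, False), (0.1, False)]
t = train_threshold(train)
train_acc = sum((b >= t) == label for b, label in train) / len(train)

# Test set: another country's tanks, overcast weather - lighting flipped.
test = [(0.2, True), (0.1, True), (0.9, False), (0.8, False)]
test_acc = sum((b >= t) == label for b, label in test) / len(test)

print(train_acc)  # 1.0 - looks perfect on the training data
print(test_acc)   # 0.0 - the model learned lighting, not tanks
```

The learner is faithful to its data: brightness separates the training labels perfectly, so nothing in training reveals that the wrong feature was learned.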
