22 of 24 people found the following review helpful
- Published on Amazon.com
This is a relatively comprehensive collection of thoughtful essays about the risks of a major catastrophe (mainly those that would kill a billion or more people).
Probably the most important chapter is the one on risks associated with AI, since few people attempting to create an AI seem to understand the possibilities it describes. It makes some implausible claims about the speed with which an AI could take over the world, but the argument they are used to support only requires that a first-mover advantage be important, and that depends only weakly on assumptions about the speed with which AI will improve.
The risk of a large fraction of humanity being killed by a super-volcano is apparently higher than the risk from asteroids, but volcanoes have more of a limit on their maximum size, so they appear to pose less risk of human extinction.
The risks from asteroids and comets can't be handled by early detection as well as I'd thought, because some dark comets can't be detected with current technology until it's far too late. It seems we ought to start thinking about better detection systems, which would probably require large improvements in the cost-effectiveness of space-based telescopes or other sensors.
Many of the volcano and asteroid deaths would be due to crop failures from cold weather. Since mid-ocean temperatures are more stable than land temperatures, ocean-based aquaculture would help mitigate this risk.
The climate change chapter seems much more objective and credible than what I've previously read on the subject, but is technical enough that it won't be widely read, and it won't satisfy anyone who is looking for arguments to justify their favorite policy. The best part is a list of possible instabilities which appear unlikely but which aren't understood well enough to evaluate with any confidence.
The chapter on plagues mentions one surprising risk - better sanitation made polio more dangerous by altering the age at which it infected people. If I'd written the chapter, I'd have mentioned Ewald's analysis of how human behavior influences the evolution of strains which are more or less virulent.
There's good news about nuclear proliferation which has been under-reported - a fair number of countries have abandoned nuclear weapons programs, and a few have given up nuclear weapons. So if there's any trend, it's toward fewer countries trying to build them, and a stable number of countries possessing them. The bad news is we don't know whether nanotechnology will change that by drastically reducing the effort needed to build them.
The chapter on totalitarianism discusses some uncomfortable tradeoffs between the benefits of some sort of world government and the harm that such government might cause. One interesting claim:
totalitarian regimes are less likely to foresee disasters, but are in some ways better-equipped to deal with disasters that they take seriously.
13 of 17 people found the following review helpful
- Published on Amazon.com
This is a most disappointing volume. I bought it expecting so much: from Oxford, that great source and fount, from supposedly young, intellectually virile editors located in supposedly buzzing centres of excellence. Big flop. For me the best papers were the most irrelevant: the ones that dealt with what's going to happen to us on the billion-year time scale, or alternatively if the wildest space objects have a say in our future. Too many other papers seemed to say so much that was unsurprising, unhelpful and, I have to say sadly, pretentious. Maths used to no good or insightful end in some of that latter group, presumably just to show off (e.g. the Drake equation in a context that just baffles me for relevance).
I may be wrong, but as the volume proceeded I gained the growing impression that the authors increasingly lacked any sense of what meaningful things they could actually say about the topics they'd been assigned. Perhaps I'm harsh, but I did not enjoy this read, or learn much from it. A final pedantic point: for OUP, too many glitches and typos, obviously beneath the dignity of the young high-flying editors to bother with.
5 of 6 people found the following review helpful
- Published on Amazon.com
GCR (Global Catastrophic Risks) is a real page-turner. I literally couldn't put it down. Sometimes I'd wake up in the middle of the night with the book open on my chest, and the lights on, and I'd begin reading again from where I'd dozed off hours earlier, and I'd keep reading till I just had to go to sleep.
I had read a review of GCR in the scientific journal "Nature" in which the reviewer complained that the authors had given the global warming issue short shrift. I considered this a plus.
If, like me, you get very annoyed by "typos," be forewarned. There are enough typos in GCR to start a collection. At first I was a bit annoyed by them, but some were quite amusing... almost as if they were done on purpose.
Most of the typos were straightforward typing errors, or errors of fact. For example, on page 292 the author says that the 1918 flu pandemic killed "only 23%" of those infected. Only 23%? That seems a rather high percentage to be preceded by the qualifier "only". Of course, although 50 million people died in the pandemic, this represented "only" 2% to 3% of those infected... not 23%. On p. 295 we read "the rats and their s in ships", and it might take us a moment to determine that it should have read, "the rats and their fleas in ships."
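The arithmetic behind that correction is easy to check. Here is a minimal sketch in Python, assuming only the roughly 50 million deaths and the fatality percentages quoted above (nothing else about the 1918 pandemic is asserted):

    # Implied number of infections for the 1918 flu under each fatality rate,
    # using only the ~50 million death toll quoted in the review.
    deaths = 50_000_000
    for case_fatality in (0.23, 0.02, 0.03):
        implied_infected = deaths / case_fatality
        print(f"{case_fatality:.0%} fatality rate -> ~{implied_infected:,.0f} infected")
    # A 23% fatality rate would imply only about 217 million infections,
    # while the corrected 2-3% range implies on the order of 1.7-2.5 billion.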
But many of the typos were either fun, or a bit more tricky to figure out: on p. 254 we find "canal so" which you can probably predict should have been "can also." Much trickier, on p. 255 we find, "A large meteoric impact was invoked (...) in order to explain their idium anomaly." Their idium anomaly?? Nah. Better would have been..."the iridium anomaly!" (That's one of my favorites.) Elsewhere, we find twice on the same page "an arrow" instead of "a narrow"... and so it goes..."mortality greater than $1 million." on p. 168 (why the $ sign?) etc. etc.
But the overall impact of the book is tremendous. We learn all sorts of arcane and troubling data, e.g. from p. 301: "A totally unforeseen complication of the successful restoration of immunologic function by the treatment of AIDS with antiviral drugs has been the activation of dormant leprosy..." I can hear the phone call now... "Darling, I have some wonderful news, and some terrible news... hold on a second dearest, my nose just fell off..."
So even if you're usually turned off by typos, don't let that stop you from buying this book. I expected more from the Oxford University Press, but I guess they've sacked the proofreader and they're using Spell-Check these days. But then, how did "their idium anomaly" get past Spell-Check? I guess Spell-Check at Oxford includes Latin.
8 of 11 people found the following review helpful
R. B. Cathcart
- Published on Amazon.com
Individual and government policy instigators everywhere, starting with voting citizens, must face the common problem of bringing expert knowledge to bear on globalized public policy-making. This book is rather like a "think tank" in and of itself and could serve such a purpose successfully! Campaigning in 1912, the intellectual and soon-to-be US President Woodrow Wilson commented that "What I fear is a government of experts." Yet, in the 21st century, the world-public has to have the best possible advice on macro-problems that can, may or certainly will impact human society. The Canadian scientist Vaclav Smil, in GLOBAL CATASTROPHES AND TRENDS: THE NEXT FIFTY YEARS (2008), has also foreseen, along with the stellar topic-centered name writers in this excellent revelatory text, the necessity of focused individuals, investigative panels and advisory bodies helping the world-public. None express a desire or need to "rule the world", to steal choices from the world-public, or to foreclose the world-public's options for future life-styles! However, they do a masterful job of explicating the macro-problems developing, impending or forecastable. The well-edited prose, informative diagrams and necessary illustrations are simply awe-inspiring! This demonstrative text--by no means to be considered a textbook--is fascinating, alarming, inspiring and just plain delicious reading. I recommend it as a 10 on a scale of 1 to 10!!
1 of 1 people found the following review helpful
- Published on Amazon.com
This book is a compendium of what are referred to as GCRs (Global Catastrophic Risks): future risks and cases that merit serious consideration if the human species is to survive. Some of the cases given can seem "out there," so to speak, in that they read like very futuristic and remote possibilities, but they are nonetheless very pertinent to the continuity of humanity. Social collapse, astrophysical catastrophes, technological breakthroughs, apocalyptic ideas given social consideration and credibility, thermonuclear war, and biases in human reasoning are all among the broad categories discussed in the book's contents. Some areas could have been expanded upon more, and others mix in technical jargon here and there. This book would serve very well for people looking to supplement their efforts in things like social work, and for people in scientific disciplines, to gain a sense of general awareness and exposure, from experts in the field, to how relevant these issues already are or are going to become in the future.
Probably the most dangerous future risk is going to be the advent of real Artificial Intelligence within our lifetime or the very near future. Eliezer Yudkowsky is a leading figure and spokesman on this risk and wrote the chapter covering it in the book. If our fears become a reality, it won't matter much what else we get right. For many of the other risks to worry about, we already have a wealth of information on their occurrences, how they work, how likely they are to affect us, and how they will affect us when they come. The risks concerning the arrival of AI are far more dangerous in that this isn't an experiment we get to run in practice so that reality can beat us over the head with the correct answer. If we are to achieve true FAI (Friendly Artificial Intelligence, as Yudkowsky calls it), then a massive amount of dedication, money and effort is needed for the research required to avoid a real disaster. If our aims are realized, however, many of the other risks and concerns we have can be handed off to an intelligence much greater than ourselves, with a far higher likelihood of being overcome.
We are passing through a stage where we are beginning to create problems that are beyond our current capacity to solve. This book is probably the best general, and somewhat technical, primer for becoming acquainted with the serious problems we are currently facing and those we will inevitably arrive at in the future. If you are truly keen on getting involved with the kinds of problems we will have to confront, this book is indispensable.