The 21 chapters by different authors succeed in the declared goal of giving a big picture of the subject (GCR) as seen by academics in different disciplines. The content is appropriately non-technical -- like the serious end of the "popular science" genre -- though the writing styles are more reminiscent of an academic paper or lecture than of best-selling popular science books. The opening 8 "background" chapters (on very diverse topics, from long-term astrophysics to public policy toward catastrophe) were the least satisfying to me; many, while interesting in themselves, seemed to be each author's favorite lecture recycled with a nod to GCR. Of these, let me single out Eliezer Yudkowsky's chapter on cognitive biases in an individual's risk assessments as one of the best 20-page summaries of that topic I have read.
Amongst the core chapters discussing particular risks, the three that are most "hard science" -- on supervolcanoes, on asteroid or comet impacts, and on risks from outside the solar system -- are just great. One learns, for instance, that (contrary to much science fiction) comets are more of a risk than asteroids, and that the major risk in the last category is not nearby supernovas but cosmic rays created by gamma ray bursts. These three chapters are perhaps the only contexts in which it is reasonable to attempt to estimate actual probabilities of the catastrophes.
The balanced article on global warming is unlikely to please extremists, concluding that mainstream science predicts a linear increase in temperature that may be unpleasant but not catastrophic, while the various speculative non-linear possibilities leading to catastrophe have plausibilities that are impossible to assess. The article on pandemics is surprisingly upbeat ("are influenza pandemics likely? Possibly, except for the preposterous mortality rate that has been proposed"), as is the article on exotic physics ("Might our vacuum be only metastable? If so, we can envisage a terminal catastrophe, when the field configuration of empty space changes, and with it the effective laws of physics ..."). The articles on nuclear war, on nuclear terrorism, and on risks from biotechnology and from nanotechnology are perfectly sensible and well argued. These articles are somewhat technical, so it is a curious relief to arrive at "totalitarian government", which discusses in an easy-to-read way why 20th-century totalitarian governments did not last forever, and the circumstances under which a stable worldwide totalitarian government might emerge.
The article on AI emphasizes that we wrongly imagine intelligent machines as being like humans -- "how likely is it that AI will cross the vast gap from amoeba to village idiot, and then stop at the level of human genius?" -- and that we should attempt to envisage something quite different. But the subsequent discussion of Friendly or Unfriendly AIs rests on the assumptions that AIs may be created which have the intelligence and the motivation ("optimization targets", in the author's effort to avoid anthropomorphizing) to do things on their own initiative, and that their motivations will be comprehensible to humans. Well, I find it hard enough to imagine what "motivation/optimization targets" mean to an amoeba or a village idiot, let alone to an AI.
The only article I found positively unsatisfactory was the one on social collapse. A catastrophe eliminating global food production for one year would likely cause "collapse of civilization" amid fighting over the roughly two months' food supply in storage; elimination for just one month would not. A serious discussion of the sizes of different catastrophes needed to reach this tipping point would be fascinating, but the article merely assumes power-law distributions for the size of an unspecified disaster -- this is the sort of thing that brings mathematical modeling into disrepute.
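To make the criticism concrete: a minimal sketch (my own illustration with hypothetical numbers, not from the book) of why a bare power-law assumption settles little. Under a Pareto tail, P(size >= s) = (s_min / s)^alpha, so the probability of a disaster large enough to exhaust a two-month food reserve is dominated by the assumed exponent alpha, which is exactly the quantity left unjustified.

```python
def tail_prob(s, s_min=1.0, alpha=2.0):
    """P(disaster size >= s) under a Pareto (power-law) tail
    with minimum size s_min and tail exponent alpha."""
    return (s_min / s) ** alpha

# Hypothetical units: size = months of lost global food production;
# tipping point at 2 months (the stored supply mentioned in the article).
for alpha in (1.0, 2.0, 3.0):
    print(alpha, tail_prob(2.0, alpha=alpha))
# alpha = 1, 2, 3 gives tail probabilities 0.5, 0.25, 0.125:
# the answer varies fourfold depending on an unargued parameter.
```

The point is not that power laws are wrong, but that without a case for a particular exponent (and a particular meaning of "size"), the model outputs whatever the modeler puts in.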
Overall, a valuable and eclectic selection of thought-provoking articles.