The author put me on edge almost immediately with how much hype he crammed into his introduction. It was as if he wrote three different introductions and just included them all instead of editing them down. But I figured I would wait to see the actual substance before passing judgment. Unfortunately, that also failed to deliver. After 30 pages of buildup, his first example (and, as it turns out, his ONLY example) illustrating the title of the book and his main thesis makes me certain that he failed high school math. This is rather distressing, as his bio starts off by saying he is an applied mathematician.
Arbesman cites a study in which nearly 500 medical articles from the past 50 years were vetted by current experts, and he shows a graph with time since publication on the x-axis and the percentage of articles that stand up to scrutiny on the y-axis. The graph is ambiguous, a stair-step curve that hides where the actual data points are, but this is not the main problem. The graph clearly shows a curve that is accelerating downward; it is concave down. Arbesman infers from this that "they got a clear measurement for the half-life of facts in these fields by looking at where the curve crosses 50 percent on this chart: forty-five years." He goes on to say that this graph displays exponential decay. This is blatantly and hilariously wrong. An exponential decay curve has exactly the opposite shape: its rate of decay slows down, so it is concave up. If the half-life were 45 years, the curve would cross the 25 percent mark at 90 years. However, the graph shows it dropping to 25 percent at about 50 years. Even worse, this graph does not track a group of papers published at about the same time and follow their time to be disproven or rendered obsolete. It takes papers published at different times and evaluated at a single time, so there is no valid comparison between them. An exponential decay curve drops to half of its value in a fixed amount of time FROM EVERY POINT ON THE CURVE, meaning you would need to measure this 45 years at multiple starting points and confirm it is consistent. Arbesman tries to infer such a curve from a single data point, and because of how the papers were selected, it's not even a true data point.
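The arithmetic is easy to check. Here's a quick Python sanity check using the book's 45-year half-life (everything else below is just the definition of exponential decay, nothing from the book):

```python
# Exponential decay with a 45-year half-life: fraction surviving at time t.
def surviving_fraction(t, half_life=45.0):
    return 0.5 ** (t / half_life)

print(surviving_fraction(45))  # 0.5  -- half the facts survive one half-life
print(surviving_fraction(90))  # 0.25 -- a quarter survive two half-lives

# Concavity check: the second difference is positive, so the curve is
# concave UP -- the opposite of the concave-down graph in the book.
f = [surviving_fraction(t) for t in (0, 45, 90)]
print(f[0] - 2 * f[1] + f[2] > 0)  # True
```

If the data really drops to 25 percent by year 50, as the graph shows, it is not exponential decay with a 45-year half-life. Period.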
This tendency to draw grand conclusions from insufficient data pervades the text. On page 18 he states that Nobel laureates tend to give first authorship of their papers to their younger colleagues more frequently than other scientists do. His immediate conclusion is that nicer people are more likely to win Nobel prizes. This is a classic base-rate error, similar to how false positives in a drug test can outnumber true positives despite high testing accuracy. The population of Nobel laureates is much smaller than the population of nice people who also happen to be scientists. The conclusion does not follow.
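To see why the base rate matters, here's the standard drug-test arithmetic with made-up numbers (the prevalence and accuracy figures below are purely illustrative, not from the book):

```python
# Even a 99%-accurate test yields mostly false positives when the
# condition is rare; the base rate dominates. All numbers hypothetical.
prevalence = 0.001   # 1 in 1,000 people actually use the drug
sensitivity = 0.99   # true-positive rate
specificity = 0.99   # true-negative rate

pop = 1_000_000
users = pop * prevalence                   # 1,000 users
non_users = pop - users                    # 999,000 non-users

true_pos = users * sensitivity             # about 990
false_pos = non_users * (1 - specificity)  # about 9,990

# Probability that a positive result is a real user:
p = true_pos / (true_pos + false_pos)
print(round(p, 3))  # 0.09 -- roughly 10 false positives per true one
```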
Another example: On page 53, he states that the average American lifespan is increasing, and that this rate of increase is itself increasing. Without blinking, he says: "If this acceleration continues, something curious will happen at a certain point. When we begin adding more than one year to the expected life span ... per year, we can effectively live forever." This, of course, does not follow. It is entirely conceivable that lifespan increases will be heavily weighted toward younger people; i.e., medical technology may help a baby live to be 150, but not an 80-year-old man who didn't have that technology 80 years ago. Whether or not this is the case is irrelevant; Arbesman does not provide enough information to settle it.
There are much simpler examples as well. On page 46, he says that the movement abilities of robots "have gone through about thirteen doublings in twenty-six years. That means that we have had a doubling about every two years: right on schedule and similar to Moore's Law." No, it doesn't. He has not given any indication that these doublings have been equally spaced.
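An average tells you nothing about the spacing. A toy example (the numbers are invented, just to make the point):

```python
# Two histories that both show "thirteen doublings in twenty-six years":
# one doubles steadily every two years, the other is flat for 25 years
# and then jumps. Same endpoints, nothing like the same trend.
steady = [2 ** (t / 2) for t in range(27)]  # doubles every 2 years
burst = [1] * 26 + [2 ** 13]                # all the growth in year 26
print(steady[-1] == burst[-1] == 2 ** 13)   # True -- the average can't tell them apart
```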
His apparent lack of understanding of math, and his inability to explain it clearly, show up throughout the book. On page 42, when describing Moore's Law of exponential increase, he says, "Processing power grows every year at a constant *rate* rather than by a constant amount." These are the same thing: a constant rate means growing by a constant amount. Of course he means that the rate itself is growing, proportional to the processing power (not at a constant rate; that would give quadratic growth, not exponential), but his explanation obscures the point.
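For the record, here are the three growth laws he's conflating, sketched side by side (starting values are made up; only the shapes matter):

```python
# "Constant rate" = a fixed amount added per step = linear growth.
# Rate growing at a constant rate = quadratic growth.
# Rate proportional to the current value = exponential growth (Moore's Law).
linear = [1 + t for t in range(11)]
quadratic = [1 + t * t for t in range(11)]
exponential = [2 ** t for t in range(11)]
print(linear[-1], quadratic[-1], exponential[-1])  # 11 101 1024
```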
A few pages later, he is describing how multiple logistic curves (curves that start out looking exponential but eventually approach a maximum, as in population growth) can be overlaid to produce an exponential-looking curve. The point is to illustrate how successive technologies can sustain continued exponential growth in processing power or whatever. But the graph he includes shows three of these "S-curves" combining to form a linear trend, a straight line. He certainly does not state that the scale is logarithmic; in fact, the axes are not even labelled. I really doubt that he understands the difference between constant, polynomial, and exponential growth.
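For what it's worth, staggered S-curves really can stack into exponential growth, and the straight line only appears on a logarithmic axis. A rough sketch (the midpoints and 10x ceilings are invented for illustration):

```python
import math

# Three staggered logistic curves, each successive "technology"
# with a 10x higher ceiling than the last.
def logistic(t, midpoint, ceiling):
    return ceiling / (1 + math.exp(-(t - midpoint)))

def total(t):
    return sum(logistic(t, m, 10 ** (i + 1)) for i, m in enumerate((0, 10, 20)))

for t in (5, 15, 25):
    print(t, round(total(t)), round(math.log10(total(t)), 2))
# 5 11 1.03
# 15 116 2.06
# 25 1103 3.04
# The LOG of the total climbs by ~1 per ten years: a straight line
# on a log plot, an exponential on a linear one. Without labelled
# axes you can't tell which graph you're looking at.
```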
On page 59, we have this gem: "A population's growth rate will increase in size proportionally to the current number of people. To be clear: This is much faster than exponential growth, the fastest growth rate we've considered so far. Exponential growth is a constant rate, and here the rate is growing, and growing along the speed at which the population increases." Now, this completely confused me until I figured out that he was actually talking about technological growth, not the actual growth of the population. But even so, the statement is absurd. Exponential growth is NOT a constant rate. (It's a constant proportion, which makes the rate grow, well, exponentially.) At this point I'm pretty sure that he's never taken an intro-level calculus class.
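The distinction he keeps mangling fits in two lines: exponential growth has a constant proportion per step, not a constant rate.

```python
# For an exponential sequence, the ratio between steps is constant,
# but the absolute rate (the per-step increase) keeps growing.
seq = [2 ** t for t in range(6)]                 # 1, 2, 4, 8, 16, 32
ratios = [b / a for a, b in zip(seq, seq[1:])]   # [2.0, 2.0, 2.0, 2.0, 2.0]
rates = [b - a for a, b in zip(seq, seq[1:])]    # [1, 2, 4, 8, 16]
print(ratios, rates)
```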
I'm not just cherry-picking the worst examples. This stuff is on virtually every page. I started skimming heavily at around page 40, and still managed to find all these examples. The funniest one is on page 149: "By fitting the curve of Pluto's diminishing size to a bizarre mathematical function using the irrational number pi, they argued that Pluto would vanish in 1984." Bahaha, that must be a bizarre mathematical function indeed, to be using irrational numbers like pi. That kind of thing NEVER happens in math.
The book is also sloppy in many non-technical ways. Arbesman constantly touches on topics without ever finishing his thoughts, or refers back to something that he didn't actually talk about before. Around page 80, he is telling a story about a nasty rivalry between paleontologists Edward Cope and Othniel Marsh. He talks about Marsh discovering the brontosaurus and the apatosaurus, and then a bit later says, "Despite their vitriol and animosity, they actually didn't fight any more about the brontosaurus." But I don't know what he's talking about here, because he never mentioned them fighting about the brontosaurus to begin with.
On page 161: "Regarding a kerfuffle about the possibility of bacteria that can incorporate arsenic into their DNA backbone ... Carl Zimmer explains: 'But none of those critics had actually tried to replicate the initial results.'" He goes on to make points about replication and publishing negative results, etc. But what about the damn arsenic bacteria? What ever happened with them? I remember reading an article when that was discovered, and now I'm curious.
The most maddening case of this is on page 134: "Some famous problems go decades before being solved, and some, those that exist far out in the tail of the distribution, remain outstanding for hundreds of years. There was even a famous conjecture in the data set that took more than fifteen hundred years before it was eventually proven." WHAT WAS IT?!! The longest ones I can find, proven or not, are Fermat's Last Theorem, the Goldbach Conjecture, and Kepler's Conjecture. Seriously, if anybody knows the answer to this, comment with it please.
Probably the best parts of the book are when Arbesman is quoting or summarizing somebody else. There is actually quite a lot of this, and it's why it just barely gets 2 stars. But it's a shame that the good material is buried in crap, and I'll never see most of it. This book was a birthday gift, along with Nate Silver's The Signal and the Noise: Why So Many Predictions Fail -- but Some Don't. I let a friend borrow that while I was reading this one, and he said it was pretty good, so maybe go read that instead.