Expert Political Judgment: How Good Is It? How Can We Know? Hardcover – July 25 2005
The intelligence failures surrounding the invasion of Iraq dramatically illustrate the necessity of developing standards for evaluating expert opinion. This book fills that need. Here, Philip E. Tetlock explores what constitutes good judgment in predicting future events, and looks at why experts are often wrong in their forecasts.
Tetlock first discusses arguments about whether the world is too complex for people to find the tools to understand political phenomena, let alone predict the future. He evaluates predictions from experts in different fields, comparing them to predictions by well-informed laity or those based on simple extrapolation from current trends. He goes on to analyze which styles of thinking are more successful in forecasting. Classifying thinking styles using Isaiah Berlin's prototypes of the fox and the hedgehog, Tetlock contends that the fox--the thinker who knows many little things, draws from an eclectic array of traditions, and is better able to improvise in response to changing events--is more successful in predicting the future than the hedgehog, who knows one big thing, toils devotedly within one tradition, and imposes formulaic solutions on ill-defined problems. He notes a perversely inverse relationship between the best scientific indicators of good judgment and the qualities that the media most prizes in pundits--the single-minded determination required to prevail in ideological combat.
Clearly written and impeccably researched, the book fills a huge void in the literature on evaluating expert opinion. It will appeal across many academic disciplines as well as to corporations seeking to develop standards for judging expert decision-making.
Review
"Woodrow Wilson Foundation Award, American Political Science Association"
"Winner of the 2006 Grawemeyer Award for Ideas Improving World Order"
"Winner of the 2006 Robert E. Lane Award, Political Psychology Section of the American Political Science Association"
"It is the somewhat gratifying lesson of Philip Tetlock's new book . . . that people who make prediction their business--people who appear as experts on television, get quoted in newspaper articles, advise governments and businesses, and participate in punditry roundtables--are no better than the rest of us. When they're wrong, they're rarely held accountable, and they rarely admit it, either. . . . It would be nice if there were fewer partisans on television disguised as "analysts" and "experts". . . . But the best lesson of Tetlock's book may be the one that he seems most reluctant to draw: Think for yourself."---Louis Menand, The New Yorker
"The definitive work on this question. . . . Tetlock systematically collected a vast number of individual forecasts about political and economic events, made by recognised experts over a period of more than 20 years. He showed that these forecasts were not very much better than making predictions by chance, and also that experts performed only slightly better than the average person who was casually informed about the subject in hand."---Gavyn Davies, Financial Times
"Before anyone turns an ear to the panels of pundits, they might do well to obtain a copy of Phillip Tetlock's new book Expert Political Judgment: How Good Is It? How Can We Know? The Berkeley psychiatrist has apparently made a 20-year study of predictions by the sorts who appear as experts on TV and get quoted in newspapers and found that they are no better than the rest of us at prognostication."---Jim Coyle, Toronto Star
"Tetlock uses science and policy to brilliantly explore what constitutes good judgment in predicting future events and to examine why experts are often wrong in their forecasts." ― Choice
"[This] book . . . Marshals powerful evidence to make [its] case. Expert Political Judgment . . . Summarizes the results of a truly amazing research project. . . . The question that screams out from the data is why the world keeps believing that "experts" exist at all."---Geoffrey Colvin, Fortune
"Philip Tetlock has just produced a study which suggests we should view expertise in political forecasting--by academics or intelligence analysts, independent pundits, journalists or institutional specialists--with the same skepticism that the well-informed now apply to stockmarket forecasting. . . . It is the scientific spirit with which he tackled his project that is the most notable thing about his book, but the findings of his inquiry are important and, for both reasons, everyone seriously concerned with forecasting, political risk, strategic analysis and public policy debate would do well to read the book."---Paul Monk, Australian Financial Review
"Phillip E. Tetlock does a remarkable job . . . applying the high-end statistical and methodological tools of social science to the alchemistic world of the political prognosticator. The result is a fascinating blend of science and storytelling, in the the best sense of both words."---William D. Crano, PsysCRITIQUES
"Mr. Tetlock's analysis is about political judgment but equally relevant to economic and commercial assessments."---John Kay, Financial Times
"Why do most political experts prove to be wrong most of time? For an answer, you might want to browse through a very fascinating study by Philip Tetlock . . . who in Expert Political Judgment contends that there is no direct correlation between the intelligence and knowledge of the political expert and the quality of his or her forecasts. If you want to know whether this or that pundit is making a correct prediction, don't ask yourself what he or she is thinking--but how he or she is thinking."---Leon Hadar, Business Times
Review
"This book is a major contribution to our thinking about political judgment. Philip Tetlock formulates coding rules by which to categorize the observations of individuals, and arrives at several interesting hypotheses. He lays out the many strategies that experts use to avoid learning from surprising real-world events."―Deborah W. Larson, University of California, Los Angeles
"This is a marvelous book―fascinating and important. It provides a stimulating and often profound discussion, not only of what sort of people tend to be better predictors than others, but of what we mean by good judgment and the nature of objectivity. It examines the tensions between holding to beliefs that have served us well and responding rapidly to new information. Unusual in its breadth and reach, the subtlety and sophistication of its analysis, and the fair-mindedness of the alternative perspectives it provides, it is a must-read for all those interested in how political judgments are formed."―Robert Jervis, Columbia University
"This book is just what one would expect from America's most influential political psychologist: Intelligent, important, and closely argued. Both science and policy are brilliantly illuminated by Tetlock's fascinating arguments."―Daniel Gilbert, Harvard University
Product details
- Publisher : Princeton University Press (July 25 2005)
- Language : English
- Hardcover : 352 pages
- ISBN-10 : 0691123020
- ISBN-13 : 978-0691123028
- Item weight : 658 g
- Dimensions : 15.88 x 2.54 x 23.5 cm
- Best Sellers Rank: #1,919,515 in Books
- #5,297 in Social Psychology & Interactions (Books)
- #31,438 in Political Science (Books)
- #54,679 in Politics (Books)
About the author

Philip E. Tetlock (born 1954) is a Canadian-American political scientist and author. He is currently the Annenberg University Professor at the University of Pennsylvania, where he is cross-appointed at the Wharton School and the School of Arts and Sciences.
He has written several non-fiction books at the intersection of psychology, political science and organizational behavior, including Superforecasting: The Art and Science of Prediction; Expert Political Judgment: How Good Is It? How Can We Know?; Unmaking the West: What-if Scenarios that Rewrite World History; and Counterfactual Thought Experiments in World Politics. Tetlock is also co-principal investigator of The Good Judgment Project, a multi-year study of the feasibility of improving the accuracy of probability judgments of high-stakes, real-world events.
For more see here: https://en.wikipedia.org/wiki/Philip_E._Tetlock
For CV: https://www.dropbox.com/s/uorzufg1v0nhcii/Tetlock%20CV%20%20march%2018%2C%202016.docx?dl=0
Twitter: https://twitter.com/PTetlock
LinkedIn: https://www.linkedin.com/in/philip-tetlock-64aa108a?trk=hp-identity-name
For an interview: https://www.edge.org/conversation/philip_tetlock-how-to-win-at-forecasting
Customer reviews
Top reviews from other countries
Most academics with an interesting new theory like this write two books: a magnum opus stuffed with references and experimental detail for the academic community, and another, shorter, more easily digestible work for the lay reader. This is what Bruce Bueno de Mesquita did for "The Logic of Political Survival" when he wrote "The Dictator's Handbook". Sadly, Tetlock has not come out with the second type of book, so we are left to wade through a fairly dry academic tome which, while it has its moments, is a bit too footnote-heavy for me.
I urge you to skim through this, maybe using a library copy, but I would counsel against buying it unless you are of an academic cast of mind.
"The fox knows many things, but the hedgehog knows one big thing." (Archilochus, Greek lyric poet, c. 680-645 BC)
Philip Tetlock is a psychologist by training and a professor of leadership at the University of California. He studies the factors responsible for human foresight and blindness. From 1985 to 2003 he surveyed 284 selected American experts about the course of world events. The forecast horizon was usually two to five years, in some cases up to a decade. Participants had to say, for example, whether the current government of a given country would still be in power after the next or the next-but-one election (for authoritarian regimes, whether it would be toppled by a coup). Other questions asked whether the province of Quebec would secede from Canada, or whether war would break out between India and Pakistan; whether growth in gross national product, the national debt, or the central bank's interest rate would come in higher, lower, or unchanged; whether the prices of key commodities would rise, fall, or stay flat; and whether the Internet stock-market bubble would burst within the forecast horizon. The experts had to state not only the direction but also how probable they considered each scenario--for example, Quebec's secession--to be.
A common thread running through Tetlock's experiments: the experts, whatever their field, are far too sure of themselves. When they rate something as "practically certain," it happens at most 70% of the time. Nor are miracles all that rare: events occur that the experts had considered unthinkable or outright impossible. In general, the no-change forecast--the German folk rhyme "if the rooster crows on the dung heap, the weather stays as it is"--is pretty good; it would have clearly beaten the experts. The experts grossly overestimate the probability of turns for the better or the worse. This is especially true of the hedgehogs.
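To see what such a calibration check looks like in practice, here is a minimal Python sketch (the data and function name are hypothetical illustrations, not Tetlock's actual scoring code). It bins forecasts by stated probability and compares each bin's average stated confidence with how often the events actually occurred; the overconfidence described above shows up as "practically certain" forecasts coming true well under 90-100% of the time.

```python
from collections import defaultdict

def calibration_table(forecasts):
    """Bin (stated_probability, outcome) pairs into deciles and compare
    average stated confidence with the observed event frequency."""
    bins = defaultdict(list)
    for prob, occurred in forecasts:
        bins[min(int(prob * 10), 9)].append((prob, occurred))
    for b in sorted(bins):
        pairs = bins[b]
        stated = sum(p for p, _ in pairs) / len(pairs)
        observed = sum(o for _, o in pairs) / len(pairs)
        print(f"stated ~{stated:.2f}  observed {observed:.2f}  n={len(pairs)}")

# Hypothetical forecast records: (stated probability, event occurred?)
history = [(0.95, 1), (0.95, 0), (0.90, 1), (0.90, 0), (0.90, 1),
           (0.10, 0), (0.10, 1), (0.05, 0)]
calibration_table(history)  # a well-calibrated forecaster matches the columns
```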
The distinction between hedgehog and fox goes back to an essay by the philosopher Sir Isaiah Berlin (1909-1996) that is widely known in the English-speaking world. In Berlin's view, hedgehogs try to construct a single all-encompassing system of human action, history, and moral values. Foxes, by contrast, tend to see variety everywhere. A fox pursues many goals, often without inner coherence, sometimes even contradictory ones. For Berlin, typical hedgehogs are Plato, Pascal, Hegel, Dostoevsky, Nietzsche, and Proust; foxes include Shakespeare, Aristotle, Erasmus of Rotterdam, Goethe, Pushkin, and Joyce. There are of course mixed types: Tolstoy, whom Berlin revered, was in his view a fox who would have liked to be a hedgehog. In the world of experts, the hedgehog type could be described, somewhat simplistically, as the narrow specialist, while the fox is the restless generalist who can do everything and nothing.
The relationship sketched above is the central message of the book, and Tetlock backs it up convincingly. There are also a number of other findings; for example, star experts who are quoted especially often in the media deliver very poor forecasts, which is connected with the media's preference for hedgehogs.
For its content, the book deserves six stars. However, it is written in a rather dry academic style. Tetlock evidently set out to write an academic classic, and he succeeded. For a general audience, though, his presentation is tiring and, if you don't know statistics, partly incomprehensible. I do know statistics, and I still had to fight my way through some passages. Much of it could be said more simply and briskly without losing substance--but then it would not have become an academic classic.
Anyone interested in a readable summary is probably better served by the article "Trau keinem Igel" ("Trust no hedgehog") in the December 2014 issue of Chrilly's Monatlicher Goldreport (google "Chrilly's Monatlicher Goldreport").
P.S.: The word "academy" refers to the meeting place of Plato's circle of scholars. Yet Plato, of all people, placed great value on the readability of his works and presented many of his ideas in dialogue form.
Were the experts better at anything? Well, they were pretty good at making excuses. Here are a few: 1. I made the right mistake. 2. I'm not right yet, but you'll see. 3. I was almost right. 4. Your scoring system is flawed. 5. Your questions aren't real world. 6. I never said that. 7. Things happen. Of course, experts applied their excuses only when they got it wrong... er... I mean almost right... that is, about to be right, or right if you looked at it in the right way, or what would have been right if the question were asked properly, or right if you applied the right scoring system, or... well... that was a dumb question anyway, or....
Not only did experts get it wrong, but they were so wedded to their opinions that they failed to update their forecasts even in the face of building evidence to the contrary. And then a curious thing happened -- after they got it wrong and exhausted all their excuses, they forgot they were wrong in the first place. When Tetlock asked follow-up questions at later dates, experts routinely misremembered their predictions. When the experts' models failed, they merely updated their models post hoc, giving them the comforting illusion that their expert judgment and simplified model of social behavior remained intact. Compare this with another very complex system -- predicting the weather. In this latter case, there is a very big difference in the predictive abilities of experts and lay persons. Meteorologists do not use over-simplified models like "red sky in morning, sailors take warning." They use complex modeling, statistical forecasting, computer simulations, etc. When they are wrong, weathermen do not say, well, it almost rained; or, it just hasn't rained yet; or, it didn't rain, but predicting rain was the right mistake to make; or, there's something wrong with the rain gauge; or, I didn't say it was going to rain; or, what kind of a question is that?
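The reviews keep saying experts were "no better than the rest of us," or than simple extrapolation; the standard way such comparisons are scored is the Brier score, the mean squared error between the stated probability and the 0/1 outcome (lower is better). The sketch below uses made-up numbers, not data from the book, to show how an overconfident forecaster can lose to a naive baseline that always predicts "no change" with the same fixed probability.

```python
def brier_score(forecasts):
    """Mean squared error between stated probabilities and 0/1 outcomes.
    0.0 is perfect; always answering 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# Hypothetical outcomes for five questions (1 = the event happened).
outcomes = [1, 0, 0, 1, 0]

# An overconfident "hedgehog" vs. a naive baseline that always assigns
# a 0.2 probability to change (i.e., predicts "no change" at 80%).
expert   = list(zip([0.9, 0.8, 0.7, 0.2, 0.9], outcomes))
baseline = list(zip([0.2] * 5, outcomes))

print(f"expert   Brier: {brier_score(expert):.3f}")   # 0.518
print(f"baseline Brier: {brier_score(baseline):.3f}") # 0.280
```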
Political experts, unlike weathermen, live in an infinite variety of counterfactual worlds; or as Tetlock writes, "Counterfactual history becomes a convenient graveyard for burying embarrassing conditional forecasts." That is: sure, given x, y, and z, the former Soviet Union collapsed; but if z had not occurred, the Soviet Union would have remained intact. Really? Considering the experts got it wrong in the first place, how could they possibly know the outcome in a hypothetical counterfactual world? At best, this is intellectual dishonesty. At worst, it is fraud.
But some experts did better than others. In particular, those who were less dogmatic and frequently updated their predictions in response to countervailing evidence (Tetlock's "foxes") did much better than the opposing camp (termed "hedgehogs"). The problem is that hedgehogs climb the ladder faster and have positions of greater prominence. My Machiavellian take? You might as well make dogmatic pronouncements because all the hedgehogs you work for aren't any better at predicting the future than you are -- they're just more sure of themselves. So, work on your self-confidence. It is apparently the only thing anyone pays any attention to.