As was the case with their previous book, "Born Digital: Understanding the First Generation of Digital Natives," Palfrey and Gasser's "Interop" offers a supremely balanced treatment of a complicated and sometimes quite contentious set of information policy issues. The authors have a gift for penning engaging and extremely well-written books that enlighten, educate, and entertain.
In "Interop," Palfrey and Gasser propose an ambitious task: developing "a normative theory identifying what we want out of all this interconnectivity" that the information age has brought us. They correctly note "there is no single, agreed-upon definition of interoperability" and that "there are even many views about what interop is and how it should be achieved." Generally speaking, they argue increased interoperability -- especially among information networks and systems -- is a good thing because it "provides consumers greater choice and autonomy," "is generally good for competition and innovation," and "can lead to systemic efficiencies."
But they wisely acknowledge that there are trade-offs, too, noting that "this growing level of interconnectedness comes at an increasingly high price." Whether we are talking about privacy, security, consumer choice, the state of competition, or anything else, Palfrey and Gasser argue that "the problems of too much interconnectivity present enormous challenges both for organizations and for society at large." Their chapter on privacy and security offers many examples, but one need only look around at one's own digital existence to realize the truth of this paradox. The more interconnected our information systems become, and the more intertwined our social and economic lives become with those systems, the greater the possibility of spam, viruses, data breaches, and various types of privacy or reputational problems. Interoperability giveth and it taketh away.
Ultimately, however, the authors fail to devise a clear standard for when interoperability is good and when governments should take steps to facilitate or mandate it. They argue that "there is no single form or optimal amount of interoperability that will suit every circumstance" and that "most of the specifics of how to bring interop about [must] be determined on a case-by-case basis." Yet Palfrey and Gasser also make it clear that they want government(s) to play an active role in ensuring optimal interoperability. They say they favor "blended approaches that draw upon the comparative advantages of the private and public sector," but they argue that government should feel free to tip or nudge interoperability determinations in superior directions to satisfy "the public interest." "If deployed with skill," they argue, "the law can play a central role in ensuring that we get as close as possible to optimal levels of interoperability in complex systems."
The fundamental problem with this "public interest" approach to interoperability regulation is that it is no better than the "I-know-it-when-I-see-it" standard we sometimes see at work in the realm of speech regulation. It's an empty vessel, and if it is the lodestar by which policymakers make determinations about the optimal level of interoperability, then it leaves markets, innovators, and consumers subject to the arbitrary whims of whatever a handful of politicians or regulators think constitutes "optimal interoperability," "appropriate standards," and "best available technology."
In a longer review of their book over at the Technology Liberation Front blog, I offer an alternative framework that suggests patience, humility, and openness to ongoing marketplace experimentation as the primary public policy virtues that lawmakers should instead embrace. Ongoing marketplace experimentation with technical standards, modes of information production and dissemination, and interoperable information systems is almost always preferable to the artificial foreclosure of this dynamic process through state action. The former allows better learning and coping mechanisms to develop while also incentivizing the spontaneous, natural evolution of the market and market responses. The latter (regulatory foreclosure of experimentation) limits that potential.
Defining "optimal interoperability" is not just difficult, as Palfrey and Gasser suggest; I would argue that it is a pipe dream. Sometimes consumers demand a certain amount of interoperability, and they usually get it. But it seems equally obvious that consumers don't always demand perfect interoperability. Just look at your iPhone or Xbox for proof. Quite often, a lack of interoperability helps firms finance important new products and services while simultaneously ensuring users a tailored and potentially more secure and satisfying experience. Importantly, however, non-interoperability also spurs new forms of innovation from rivals looking to leap-frog the old front-runners. Progress flows from this never-ending cycle of technological change and industrial churn.
In sum, we cannot define or determine "optimal interoperability" in an a priori fashion; only ongoing experimentation can help us determine what truly lies in "the public interest."
Despite my different approach and conclusions, I am thankful that John Palfrey and Urs Gasser have provided us with a book that so perfectly frames what should be a very interesting ongoing debate over these issues. I highly recommend "Interop."