Moral Machines: Teaching Robots Right from Wrong Hardcover – Sep 3 2009
"An invaluable guide to avoiding the stuff of science-fiction nightmares."--John Gilby, Times Higher Education
"Moral Machines is a fine introduction to the emerging field of robot ethics. There is much here that will interest ethicists, philosophers, cognitive scientists, and roboticists."--Peter Danielson, Notre Dame Philosophical Reviews
"Written with an abundance of examples and lessons learned, scenarios of incidents that may happen, and elaborate discussions on existing artificial agents on the cutting edge of research/practice, Moral Machines goes beyond what is known as computer ethics into what will soon be called the discipline of machine morality. Highly recommended."--G. Trajkovski, CHOICE
"The book does succeed in making the essential point that the phrase 'moral machine' is not an oxymoron. It also provides a window onto an area of research with which psychologists are unlikely to be familiar and one from which, at some point, we may be able to learn quite a lot."--PsycCRITIQUES
"In a single, thought-provoking volume, the authors not only introduce machine ethics, but also an inquiry that penetrates to the deepest foundations of ethics. The conscientious reader will, no doubt, find many challenging ideas here that will require a reassessment of her own beliefs, making this text a "must read" among recent books in philosophy and, more specifically, applied ethics."--Tony Beavers, Ethics and Information Technology
"... Moral Machines raises a host of interesting and stimulating philosophical questions and engineering problems, and highlights likely important future debates -- which is a great success for a book that comes on the brink of a field that is likely to surge in popularity in the upcoming decade. Wallach and Allen do so with a clarity and structure that makes their book simultaneously informative and enjoyable to read. Overall, this book is highly recommended reading for all those who already have an interest in the field of machine morality or for those who desire to develop an interest in the field." -- Philosophical Psychology
About the Author
Colin Allen is a Professor of History and Philosophy of Science and of Cognitive Science at Indiana University. Wendell Wallach is a consultant and writer and is affiliated with Yale University's Interdisciplinary Center for Bioethics.
Most Helpful Customer Reviews
This book, which I nonetheless recommend, suffers from the timid, diffident, and tentative tones that afflict most academic writing. The authors seem to be part of an academic community and seek to retain membership by being minimally offensive. Who can fault them? However, this leads to excessively conventional thinking, a disappointing near-term focus, and no real discussion of the morality of hyper-intelligent robots.
If you want a good survey of current thinking on this topic, mundane as this thinking is, this book is a fine choice. If, instead, you prefer attempts to find solutions to the problems addressed in this book, I would recommend Artificial Morality: Virtuous Robots for Virtual Games by Peter Danielson, only because it is more concrete. I would also recommend a bold little book called Robot Nation -- Surviving the Greatest Socio-Economic Upheaval of All Time by Stan Neilson, which, despite its title, turns out to be largely about robot morality.
Wallach and Allen examine the strengths and limitations of traditional approaches to ethics, such as deontology and utilitarianism, and the issues that arise in attempting a top-down programming of such rules into a robot. But the history of ethics is replete with controversy over the adequacy of any proposed set of rules - for instance, it might seem logical to switch the track of a runaway trolley that would kill five workers, even if it would thereby kill one person on the other track - switching maximizes utility. But should a doctor then harvest organs from a patient in for a checkup to save five people in the next room needing transplants?
So what should a robot do? An alternative is to attempt a 'bottom up' approach, and teach ethics to robots by trial and error, as we do children. The authors argue that this approach has both technical and rational limitations as well; principles are especially useful in resolving the difficult moral situations we call moral dilemmas. So they argue that a hybrid approach is probably best, and discuss in thought-provoking ways whether robots would need emotions, and how human-like we should desire these robotic agents to be.
Wallach and Allen convincingly argue that even if full moral agency for machines is a long way off, it is already necessary to start instilling into robots a type of functional morality, as robots are already engaged in high-risk situations and are already equipped with lethal weapons (e.g., the Predator drones now flying in Pakistan).
The text is anchored in near-term considerations and hence is light on some of the more far-reaching aspects of robot ethics - for instance, if full human-type ('Kantian') autonomy for robots is possible, should it be allowed? Or should robots be forever relegated to a 'slave morality', so they could never ultimately choose their own life's goals - lest they be harmful to humans? But the failure to engage in these more long-term debates simply underlines the near-term strengths of this text. For those wondering (or worried) about moral questions involving robots over the next decade, this is a must-read.
P.S. They also have a nice blog with updates: [...]
This book was not intended as an introduction to ethics, but it is the book I would be inclined to assign as an ethics textbook. It covers an introduction to ethics, of course, but also covers material in related disciplines (psychology, economics, etc.), and gets technical about where our society assumes ethical faculties. It forces the reader to think about how ethics work, rather than just express opinions about contemporary moral issues, and is probably the very best book in existence for giving readers an appreciation for the ways the field of ethics will have to grow in the near future.
Additionally, I made copious notes and breezed through the book in less than a week. So, as non-fiction goes, yes, it's readable. It's also more intelligent than the average philosophy book in terms of the brilliance of interpretation and the potential to find "juicy details". Although it is not brilliant everywhere (and few books are, outside of Confucius, the Buddha, Shakespeare, Nietzsche, and perhaps Erasmus), there are reflections of brilliant thoughts on nearly every page.
Students of philosophy with an interest in entities, interfaces, and social science conundrums will love this book. I agree with the other reviewers that the significant bibliographic material is a major enhancement of the experience.
The book mostly provides an accurate survey of what commentators in the field agree and disagree about. But there's enough disagreement that we need some insights into which views are correct (especially about theories of ethics) in order to produce useful advice to AI designers, and the authors don't have those kinds of insights.
The book focuses more on the near-term risks of software that is much less intelligent than humans, and is complacent about the risks of superhuman AI.
The implications of superhuman AIs for theories of ethics ought to illuminate flaws in those theories that aren't obvious when considering purely human-level intelligence. For example, the authors mention an argument that any AI would value humans for their diversity of ideas, which would help AIs to search the space of possible ideas. This argument seems to have serious problems, such as the question of what stops an AI from fiddling with human minds to increase their diversity. Yet the authors are too focused on human-like minds to imagine an intelligence that would do that.
Their discussion of the advocates of friendly AI seems a bit confused. The authors wonder if those advocates are trying to quell apprehension about AI risks, whereas I've observed pretty consistent efforts by those advocates to create apprehension among AI researchers.
Look for similar items by category
- Books > Computers & Technology > Computer Science > Artificial Intelligence > Computer Mathematics
- Books > Computers & Technology > Computer Science > Artificial Intelligence > Robotics
- Books > Computers & Technology > History & Culture > Culture
- Books > Politics & Social Sciences > Philosophy > Ethics & Morality
- Books > Politics & Social Sciences > Social Sciences > Sociology > Culture
- Books > Professional & Technical > Engineering > Computer Technology > Robotics & Automation
- Books > Professional & Technical > Engineering > Mechanical > Robotics
- Books > Science & Math > History & Philosophy
- Books > Science & Math > Technology > Philosophy of Technology
- Books > Science & Math > Technology > Social Aspects