Moral Machines: Teaching Robots Right from Wrong [Hardcover]

Colin Allen, Wendell Wallach

List Price: CDN$ 40.95
Price: CDN$ 38.12 & FREE Shipping. Details
You Save: CDN$ 2.83 (7%)
Usually ships within 1 to 2 months.
Ships from and sold by Amazon.ca. Gift-wrap available.

Formats

Format           Amazon Price
Kindle Edition   CDN$ 8.49
Hardcover        CDN$ 38.12
Paperback        CDN$ 17.95
There is a newer edition of this item:
Moral Machines: Teaching Robots Right from Wrong
CDN$ 17.95
Usually ships in 1 to 2 months


Book Description

Nov. 28, 2008
Computers are already approving financial transactions, controlling electrical supplies, and driving trains. Soon, service robots will be taking care of the elderly in their homes, and military robots will have their own targeting and firing protocols. Colin Allen and Wendell Wallach argue that as robots take on more and more responsibility, they must be programmed with moral decision-making abilities, for our own safety. Taking a fast-paced tour through the latest thinking about philosophical ethics and artificial intelligence, the authors argue that even if full moral agency for machines is a long way off, it is already necessary to start building a kind of functional morality, in which artificial moral agents have some basic ethical sensitivity. But the standard ethical theories don't seem adequate, and more socially engaged and engaging robots will be needed. As the authors show, the quest to build machines that are capable of telling right from wrong has begun. Moral Machines is the first book to examine the challenge of building artificial moral agents, probing deeply into the nature of human decision making and ethics.



Product Description

Review


"An invaluable guide to avoiding the stuff of science-fiction nightmares."--John Gilby, Times Higher Education


"Moral Machines is a fine introduction to the emerging field of robot ethics. There is much here that will interest ethicists, philosophers, cognitive scientists, and roboticists."--Peter Danielson, Notre Dame Philosophical Reviews


"Written with an abundance of examples and lessons learned, scenarios of incidents that may happen, and elaborate discussions on existing artificial agents on the cutting edge of research/practice, Moral Machines goes beyond what is known as computer ethics into what will soon be called the discipline of machine morality. Highly recommended."--G. Trajkovski, CHOICE


"The book does succeed in making the essential point that the phrase 'moral machine' is not an oxymoron. It also provides a window onto an area of research with which psychologists are unlikely to be familiar and one from which, at some point, we may be able to learn quite a lot."--PsycCRITIQUES


"In a single, thought-provoking volume, the authors not only introduce machine ethics, but also an inquiry that penetrates to the deepest foundations of ethics. The conscientious reader will, no doubt, find many challenging ideas here that will require a reassessment of her own beliefs, making this text a "must read" among recent books in philosophy and, more specifically, applied ethics."--Tony Beavers, Ethics and Information Technology


"... Moral Machines raises a host of interesting and stimulating philosophical questions and engineering problems, and highlights likely important future debates-- which is a great success for a book that comes on the brink of a field that is likely to surge in popularity in the upcoming decade. Wallach and Allen do so with a clarity and structure that makes their book simultaneously informative and enjoyable to read. Overall, this book is highly recommended reading for all those who already have an interest in the field of machine morality or for those who desire to develop an interest in the field." -- Philosophical Psychology


About the Author

Colin Allen is a Professor of History and Philosophy of Science and of Cognitive Science at Indiana University. Wendell Wallach is a consultant and writer and is affiliated with Yale University's Interdisciplinary Center for Bioethics.

Customer Reviews

There are no customer reviews yet on Amazon.ca
Most Helpful Customer Reviews on Amazon.com (beta)
Amazon.com: 4.1 out of 5 stars (14 reviews)
4 of 4 people found the following review helpful
5.0 out of 5 stars The best robot ethics text yet Dec 19 2008
By Keith A. Abney - Published on Amazon.com
Format: Hardcover
Allen and Wallach's Moral Machines is the best text yet in the rapidly expanding field of robot ethics - and their work offers insight into the morals of not only robots, but ourselves as well.

Wallach and Allen examine the strengths and limitations of traditional approaches to ethics, such as deontology and utilitarianism, and the issues that arise in attempting top-down programming of such rules into a robot. But the history of ethics is replete with controversy over the adequacy of any proposed set of rules - for instance, it might seem logical to divert a runaway trolley away from a track where it would kill five workers, even if it would thereby kill one person on the other track, since switching maximizes utility. But should a doctor then harvest organs from a patient who is in for a checkup in order to save five people in the next room who need transplants?

So what should a robot do? An alternative is to attempt a 'bottom-up' approach and teach ethics to robots by trial and error, as we do with children. The authors argue that this approach has technical and rational limitations of its own; principles are especially useful in resolving the difficult moral situations we call moral dilemmas. So they argue that a hybrid approach is probably best, and they discuss in thought-provoking ways whether robots would need emotions and how human-like we should want these robotic agents to be.

Wallach and Allen convincingly argue that even if full moral agency for machines is a long way off, it is already necessary to start instilling into robots a type of functional morality, as robots are already engaged in high-risk situations and are already equipped with lethal weapons (e.g., the Predator drones now flying in Pakistan).

The text is anchored in near-term considerations and hence is light on some of the more far-reaching aspects of robot ethics - for instance, if full human-type ('Kantian') autonomy for robots is possible, should it be allowed? Or should robots be forever relegated to a 'slave morality', so they could never ultimately choose their own life's goals - lest they be harmful to humans? But the failure to engage in these more long-term debates simply underlines the near-term strengths of this text. For those wondering (or worried) about moral questions involving robots over the next decade, this is a must-read.

P.S. They also have a nice blog with updates: [...]
3 of 3 people found the following review helpful
4.0 out of 5 stars Raises problems but offers no solutions Aug. 24 2013
By Courtney - Published on Amazon.com
Format: Paperback
This book seems to have been infected with the same disease that has ravaged the field of bioethics - the failure to grasp that specialized ethics can only proceed from a general theory of ethics. Without a clear specification of the latter, any attempt to devise ethics for robots, or for physicians, is doomed to incoherence, ambiguity, and confusion. Hence, the main problem with Moral Machines is that it lacks an attempt to reach clarity on human ethics. The book does excel in pointing out the problems with conventional thinking about robot morality, but it fails to describe solutions. The authors' suggestion of having robots acquire morality in the same way that humans do does not solve the problem. It only guarantees that robots will be as morally confused as we are (e.g. 40% of people would save their dog's life over that of a stranger, according to a recent study at Georgia Regents University). Moreover, this approach fails to select a particular moral tradition in which to raise our robots: Lutheranism? Mormonism? Leftism? Just as we don't want robots to share common confusions about, say, surgical techniques, we don't want them similarly confused about ethics.

This book, which I nonetheless recommend, suffers from the timid, diffident, and tentative tones that afflict most academic writing. The authors seem to be part of an academic community and seek to retain membership by being minimally offensive. Who can fault them? However, this leads to excessively conventional thinking, a disappointing near-term focus, and no real discussion of the morality of hyper-intelligent robots.

If you want a good survey of current thinking on this topic, mundane as this thinking is, this book is a fine choice. If, instead, you prefer attempts to find solutions to the problems addressed in this book, I would recommend Artificial Morality: Virtuous Robots for Virtual Games by Peter Danielson, only because it is more concrete. I would also recommend a bold little book called Robot Nation -- Surviving the Greatest Socio-Economic Upheaval of All Time by Stan Neilson, which, despite its title, turns out to be largely about robot morality.
1 of 1 people found the following review helpful
5.0 out of 5 stars Eloquent and Thought-Inspiring Sept. 21 2012
By N. Coppedge - Published on Amazon.com
Format: Paperback
From a philosophical writer's point of view, this is one of the best-written books I've ever read. And that deserves emphasis. The writers' ingenuity in connecting frameworks of thought, moving from one network of major concepts to another, from one minor concept to the next, or back to a previous example, is really profound and unusual. I'm tempted to say that this book passes as poetry.

Additionally, I made copious notes and breezed through the book in less than a week. So, as non-fiction goes, yes, it's readable. It's also more intelligent than the average philosophy book in terms of the brilliance of interpretation and the potential to find "juicy details". Although it is not brilliant everywhere (and few books are, outside of Confucius, the Buddha, Shakespeare, Nietzsche, and perhaps Erasmus), there are reflections of brilliant thoughts on nearly every page.

Students of philosophy with an interest in entities, interfaces, and social science conundrums will love this book. I agree with the other reviewers that the significant bibliographic material is a major enhancement of the experience.
1 of 1 people found the following review helpful
5.0 out of 5 stars The best book for teaching July 12 2011
By Chris Santos-Lang - Published on Amazon.com
Format: Hardcover | Verified Purchase
Although this book is accessible to a popular audience, it has obvious academic merit. The authors thoroughly search out all perspectives in this new field (i.e., it has a huge bibliography) and treat each perspective with skillful fairness. It both establishes itself as the authoritative reference, framing the issues for the new field of machine ethics, and establishes the credibility of the field as an academic pursuit. Good libraries ought to have this book.

This book was not intended as an introduction to ethics, but it is the book I would be inclined to assign as an ethics textbook. It covers an introduction to ethics, of course, but also covers material in related disciplines (psychology, economics, etc.), and gets technical about where our society assumes ethical faculties. It forces the reader to think about how ethics work, rather than just express opinions about contemporary moral issues, and is probably the very best book in existence for giving readers an appreciation for the ways the field of ethics will have to grow in the near future.
7 of 10 people found the following review helpful
3.0 out of 5 stars Limited imaginations Dec 27 2009
By Peter McCluskey - Published on Amazon.com
Format: Hardcover
This book combines the ideas of leading commentators on ethics, methods of implementing AI, and the risks of AI, into a set of ideas on how machines ought to achieve ethical behavior.

The book mostly provides an accurate survey of what those commentators agree and disagree about. But there's enough disagreement that we need some insights into which views are correct (especially about theories of ethics) in order to produce useful advice to AI designers, and the authors don't have those kinds of insights.

The book focuses more on the near-term risks of software that is much less intelligent than humans, and is complacent about the risks of superhuman AI.

The implications of superhuman AIs for theories of ethics ought to illuminate flaws in them that aren't obvious when considering purely human-level intelligence. For example, they mention an argument that any AI would value humans for their diversity of ideas, which would help AIs to search the space of possible ideas. This argument seems to have serious problems; what, for instance, stops an AI from fiddling with human minds to increase their diversity? Yet the authors are too focused on human-like minds to imagine an intelligence that would do that.

Their discussion of the advocates of friendly AI seems a bit confused. The authors wonder if those advocates are trying to quell apprehension about AI risks, when I've observed pretty consistent efforts by those advocates to create apprehension among AI researchers.
