Department of Information Technology

Riccardo Bevilacqua

Moral Machines: Teaching Robots Right from Wrong
Wendell Wallach and Colin Allen

A Hollywood-inspired book driven by a futuristic view of the world, Moral Machines investigates the ethical implications of a (ro)bot-dominated environment. In an exposition rich in catastrophic examples, Wallach and Allen try to sell their research field to the reader with less than convincing arguments. Frustrated by credit card refusals, the authors confuse reality with fiction. In a materialistic description of reality, closer to the filmography of Stanley Kubrick than to the dialectic of Kant, the authors draw a hypothetical world where the border between living beings and inanimate things grows ever thinner.
The main question the authors would need to answer, to justify the full content of this book, is whether a (ro)bot could be a moral agent at all. In the authors' view, as is clear from the beginning, the answer is positive. Two properties are required of a moral agent: free will and conscious understanding. This could have been a very interesting discussion, perhaps worthy of a book of its own, yet the authors dismiss it in a few paragraphs. Unable to demonstrate that a (ro)bot could ever have free will, Wallach and Allen argue that in a deterministic universe free will does not exist at all. But if so, no discussion of ethics would be necessary, since no real choice would be possible. The consciousness of (ro)bots is treated less shallowly, but the authors still dismiss with ease arguments contrary to their a priori opinion. When they are unable to carry a demonstration further, the authors strategically add a "yet" to their assertions: (ro)bots are not conscious yet; (ro)bots do not have emotions yet. Repeatedly throughout the book, the authors resort to expressions such as "no one knows" when their arguments are too weak.
Having failed to demonstrate that there is a case for their discussion at all, in the remainder of the book Wallach and Allen extensively provide neat and self-consistent right answers to the wrong questions. Section titles are well written (for example, "From sensory systems to emotions" or "Robots who disobey"), but the content does not keep the promise of any consistent explanation. A book of assumptions, a bible for cyberpunk lovers, Moral Machines fails to address any relevant ethical issue with a scientific approach, though it could be great inspiration for a science-fiction movie.
A book that is not worth reading.

Noor Azlinda Ahmad

MORAL MACHINES

The question of whether robots can really be moral is thoroughly discussed in chapter 4 of this book. The objective is to examine the possibility of developing machines or robots that embed genuine understanding and consciousness similar to the human mind, so that they could be considered moral agents. To this end, artificial intelligence is presented as the best 'tool' for putting the human mind on a secure scientific footing by using computers. Artificial intelligence (AI) is defined as the ability of intelligent agents to take actions, based on information provided by humans, that maximize their chances of success. Since intelligent agents (machines or robots) only execute a program without genuinely understanding the program itself, the question arises: can they really be moral agents? The philosophy professor John Searle, with his famous 'Chinese room' thought experiment, showed that even without genuine understanding or intelligence, it is possible for a computer (which is a machine) to pass a Turing test (a test of a machine's ability to demonstrate intelligence) just by following the rules and instructions in a program. His argument was later accepted by some philosophers, who believe that programming a computer is a hopeless approach to the development of genuinely intelligent systems. Humans' 'special property' of being free to act, to think, and to make decisions (free will) distinguishes them from robots. Unlike humans, robots are much more deterministic, in that any event is to a large extent determined by a prior state (in this case, the computer program). However, there is the question of how ethics might arise for a deterministic system. It is suggested that whenever humans and robots enter into reciprocal relationships, and one set of goals involves the possibility of harm to others, an ethical issue will arise.

According to the authors, understanding and consciousness are two important factors for robots to become real moral agents (does this seem to oppose the arguments made by Searle?). Any individual who lacks understanding or consciousness is seldom regarded as a moral agent. But what understanding can robots have? And since robots are programmable, are they really able to plan, to be attentive, and to experience things (to be conscious)? If we really think these factors are important, then one should agree that machines would need an understanding of how the human mind makes decisions. As discussed above, genuine understanding is a matter of making choices and decisions, and to do so an agent must be aware of the consequences of its actions. This is what is missing in robots. However, as the authors put it, 'if the gap between genuine human-like understanding and machine understanding becomes less significant, there is no reason why computers can't have the two reactions'. Even though machine consciousness is a long way from human consciousness, some scientists believe that robots that are both functionally and phenomenally conscious will eventually be developed.

Julia Paraskova

Moral machines: a possibility

Building a moral machine is a challenge that we haven't fully appreciated. It doesn't come as a surprise that emotions and sociability play a major role in human decision making. We do not need to present cases with machines or artificial intelligence to make a point here. In the past years there have been incidents with young adults bringing guns to school and shooting their peers and teachers. In all of the cases the situation has involved isolated individuals who have lacked the sociability of an average teenager. Emotions or rather, the inability to deal with emotions, have led those individuals to end their life, much like in a computer game, through the use of deadly force. The lack of moral judgment can only be related to their lack of social skills. Could the same thing happen to robots? If we give robots emotions would they feel inferior to human beings? Would the failure to deal with emotions lead them onto a path of destruction? Would their isolation or inability to become a part of the social life of humans, as their equals, lead to a robot bringing an automatic weapon to its place of work and shooting every human in sight?

The idea of artificial intelligence, that is to say a real robotic being that possesses feelings and is capable of moral consideration, is wonderful. Personally, I love science fiction. It presents us with an idealistic world in which people live in a sustainable environment, with no hunger or poverty, and their ultimate goal is the enlightenment of humankind. In that world, created by Isaac Asimov and other science fiction authors, there are other kinds of problems, such as robots taking over the world. And it makes perfect sense. If a robot is created by humans to do all the menial, boring and dangerous jobs, a machine should remain just that - a machine. From the moment we give a robot a sense of existence, morality, the robot will become a sentient being. If that being is to develop with a sense of morality it needs to be given the same freedom as a human being, and we're back to square one. It would not be morally right to use robotic beings for the benefit of human beings if both have feelings.

The idea of creating a moral machine, however noble, is, for lack of a better word, completely idiotic; at least at the level of technology we possess at the present time. Remember the arguments presented by philosophers through the ages and their inability to define or even describe morality? Since we humans are unable to define morality for ourselves, how could we possibly program a computer to make moral decisions? Behind every robot there will be a human engineer: a, by definition, flawed being. Therefore an ethical machine is impossible, unless of course it is designed and programmed by something or someone from a higher level of existence. I suppose the closest approximation to a moral machine would be a computer programmed by God.

Moyen Mustaquim

I read the book with keen interest, as it talks about robots, engineers, designers, and ethics. The interest grew as I started reading the first chapter, where the authors give a scenario of what autonomous systems could do in the coming future. Why do humans need artificial moral agents? And is there really any such thing as an AMA? A framework for understanding the trajectories of increasingly sophisticated AMAs is discussed, emphasizing two dimensions: autonomy and sensitivity to morally relevant facts. Systems at the low end of these dimensions have only what the authors call "operational morality", that is, their moral significance is entirely in the hands of designers and users. Whether people want computers making moral decisions, and whether AMAs will lead humans to abrogate responsibility to machines, seem especially pressing questions; the prospect of humans becoming literally enslaved to machines, by contrast, seems highly speculative. How close could artificial agents come to being considered moral agents if they lack human qualities, for example recognition and emotions?

Chapter 5 outlines what philosophers and engineers have to offer each other, and describes a basic framework for top-down and bottom-up (developmental) approaches to the design of AMAs. Chapters 6 and 7, respectively, describe the top-down and bottom-up approaches in detail. In chapter 6, the authors discuss the computability and viability of rule- and duty-based conceptions of ethics, as well as the feasibility of computing the net effect of an action as required by consequentialist approaches to ethics. What emerges from the discussion in chapters 6 and 7 is that the original distinction between top-down and bottom-up approaches is too simplistic to cover all the challenges that the designers of AMAs will face. Engineers will need to combine top-down and bottom-up methods to build workable systems. The difficulties of applying general moral theories in a top-down fashion also motivate a discussion of a very different conception of morality that can be traced to Aristotle, namely virtue ethics. Virtues are a hybrid of the top-down and bottom-up approaches, in that the virtues themselves can be explicitly described, but their acquisition as character traits seems essentially to be a bottom-up process.

As the authors mention, their goal in writing this book was not just to raise a lot of questions but to provide a resource for further development of these themes; hence chapter 9 surveys software tools that are being explored for the development of computer moral decision making. Some basic moral decisions may be quite easy to implement in computers, while skill at tackling harder moral dilemmas lies well beyond present technology. Regardless of how quickly or how far humans progress in developing AMAs, in the process of addressing this challenge humans will make noteworthy strides in understanding what truly remarkable creatures they are. The exercise of thinking through the way moral decisions are made, with the granularity necessary to begin implementing similar faculties in (ro)bots, is therefore an exercise in self-understanding.
As the authors say, they cannot hope to do full justice to these issues, or indeed to all the issues raised throughout the book; rather, it was their hope that by raising them in descriptive form they would motivate others to pick up where they left off, and take the next steps toward moving this project from theory to practice, from philosophy to engineering, and on to a deeper understanding of the field of ethics itself. The sections describing the examples of a hospital management system and an automatic subway-driving system raised questions in my mind: who is liable in such cases, the designer, the operator, or the technology itself? Is there any relationship between morality and responsibility at all? If there is, then who is responsible for the acts of AMAs? Finally, a discussion with a fellow colleague led me to wonder whether the use of the word 'autonomous' is correct at all. Is there any truly autonomous system in this world except the human being?

Zhiying Liu

Moral machines
In many science-fiction movies, like "The Terminator" and "Bicentennial Man", robots developed with the most advanced technology in human society behave like moral or ill-mannered human beings, and the relationships between humans and robots are complicated. With these impressions of robots, I read the book "Moral Machines: Teaching Robots Right from Wrong" by Wendell Wallach and Colin Allen. The book is really well written and offers a helpful introductory discussion of the ethical, philosophical, and engineering issues. After reading it, I gained a lot of fresh understanding of the morality of robots.
As we know, robots have been a part of our living environment for the past decades, and they are moving into a much wider range of professional activities, such as the healthcare industry, white-collar office work, search and rescue operations, and the service industries. As these machines increase in capability and ubiquity, it is inevitable that they will impact our lives ethically as well as physically and emotionally. These impacts will be both positive and negative. This situation raises issues about the moral status of robots and how that status should affect our lives.
As Capurro and Nagenborg said, "Ethics and robotics are two academic disciplines, one dealing with the moral norms and values underlying implicitly or explicitly human behaviour and the other aiming at the production of artificial agents, mostly as physical devices, with some degree of autonomy based on rules and programmes set up by their creators." "Robots are and will remain in the foreseeable future dependent on human ethical scrutiny as well as on the moral and legal responsibility of humans." "Human-robot interaction raises serious ethical questions right now that are theoretically less ambitious but practically more important than the possibility of the creation of moral machines that would be more than machines with an ethical code."
Unlike stories showing the terrible outcome of robots controlling the human world, in this book the authors discuss the inevitability of artificial moral agents (AMAs), what it means for a robot to be moral, the best approaches to creating an AMA, and the legal and ethical implications of such creations. The discussion occasionally strays into philosophical territory. In addition, the book raises a host of questions that go beyond the ones put forth earlier. For instance, are we comfortable allowing robots to make any decisions of significance in our lives? At what point do robots become "human"? Is it when they understand the consequences of their actions, or perhaps gain self-awareness, or perhaps feel pain?
In my opinion, when there is a reasonable level of abstraction under which we must grant that the machine has autonomous intentions and responsibilities, robots are moral agents. Thus it seems certain that if we pursue this technology, future highly complex interactive robots will be moral agents with corresponding rights and responsibilities; but even the modest robots of today can be seen as moral agents of a sort under certain, though not all, levels of abstraction, and are deserving of moral consideration.

Wei Li

The book "Moral machines: Teaching robots right from wrong" which is written by Wallach Wendell discussed the new issue about artificial moral agent. As mentioned in the book the computer that can read human emotions are designed and roboticists are developing service robots to care for the elderly and disable and the government of South Korea has announced its goal to put a robot in every home by the year 2020. All of these show us that our life will be affected by the robots more and more with the development of computer science and robotics. Like the author said "human activity is being facilitated, monitored, and analyzed by computer chips in every conceivable device, from automobiles to garbage cans, and by software "bots" in every conceivable virtual environment, from web surfing to online shopping". An important problem of the need for ethical rules to guide the robots behaviour should be thought seriously. The three laws of robotics is the first time people thought about the machine morality issues in the science friction more than fifty years ago. Nowadays people confront this challenge how to ensure that the artificial systems are beneficial to humanity and don´t cause harm to people.

The authors introduce a series of concepts around machine morality in the book.
The book has twelve chapters; I read the fourth chapter and skimmed the others. The fourth chapter, "Can robots really be moral?", discusses a very interesting and radical problem for artificial moral agents: are machines capable of true consciousness, genuine understanding, and emotions? Many people believe not; however, this chapter argues convincingly that the immediate practical aims of artificial morality need not be compromised by the controversy about the limits of software-based intelligence. Some people think that intelligence cannot simply be equated with a computer program. As conceived by Descartes, the human being combines a mechanical body and an immaterial mind into a perfectly coordinated whole; material machines alone could never have intellectual attributes. Ray Kurzweil, by contrast, is a believer in strong artificial intelligence, the view that an appropriately programmed computer is a mind, and he predicts that the equivalent capacity of one human brain will be available on desktop computers by 2020. However, one "special property" that some believe is not to be found in any computational technology yet developed is free will. Conscious understanding is another. Floridi and Sanders have identified three key features important to the concept of artificial agents: interactivity, autonomy, and adaptability.

It is a really well-written book, and we should take time to think it through and understand it!

Si Chen

Moral machine
Today the development of technology is going so fast that people are surrounded by different kinds of autonomous systems. People's lives are becoming more and more dependent on these engineered systems and can be significantly affected by their decisions. One extreme case is the deployment of autonomous weapons on the battlefield. The decisions made by these killing machines are so consequential that ethical constraints should be built into them.
The technological pathway for moral machine development has two independent dimensions: autonomy and ethical sensitivity. Simple tools, like a hammer, have only operational morality; their moral significance is entirely in the hands of designers and users. As machines become more autonomous and ethically sensitive, and gain the capacity for assessing and responding to moral challenges, they enter the zone of functional morality.
Although it seems inevitable that machines will gain artificial intelligence in the future, many people are still concerned about the consequences of these advanced technologies. Some worry that the technologies may get out of control and bring catastrophic consequences for humans. At the same time, the machines lack certain human qualities: they do not have consciousness or emotions. Furthermore, human culture will also be affected. For example, decision-support systems are common today in hospitals, helping doctors make decisions, and people are concerned that responsibility for decision making will be delegated to the computer in the future. In addition, people, and even the whole of human society, may rely too much on these machines. Although the development of technology significantly improves our quality of life, it also makes us vulnerable to problems caused by reliability issues.
Whether people like it or not, artificial intelligence is coming, and we know that machines with artificial intelligence can affect our lives. Engineers and philosophers should work closely together to make sure these machines behave morally. There are two approaches to designing artificial moral machines, as the sketch below illustrates. The top-down approach takes an ethical theory, analyzes the informational and procedural requirements necessary to implement that theory in a computer system, and applies the analysis to the design of subsystems and the way they relate to each other. The top-down approach is rule-based, and the ethical principles are defined in advance. The bottom-up approach creates an environment where an agent explores courses of action, learns, and is rewarded for behavior that is morally praiseworthy, much like childhood development. It is evolution- or learning-based, and the ethical principles in a bottom-up approach must be discovered or constructed.
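As a purely illustrative, minimal sketch of the top-down idea, a rule-based moral filter might look like the Python below. The action attributes, rule names, and the `permitted` helper are invented assumptions for illustration; none of this comes from the book, and everything hard (predicting harm, modeling the world) is hidden in the assumed attributes.

```python
# Hypothetical sketch of a top-down moral filter: explicit, designer-given
# rules are checked before an action is executed. All names are illustrative.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_humans: bool    # predicted consequence, assumed to be computable
    disobeys_order: bool

# Designer-supplied rules, loosely inspired by Asimov's laws; a real system
# would need a far richer world model to evaluate them.
RULES = [
    ("do not harm humans", lambda a: not a.harms_humans),
    ("obey human orders", lambda a: not a.disobeys_order),
]

def permitted(action: Action) -> bool:
    """Return True only if the action violates none of the explicit rules."""
    return all(check(action) for _, check in RULES)

print(permitted(Action("fetch medicine", harms_humans=False, disobeys_order=False)))  # True
print(permitted(Action("push patient", harms_humans=True, disobeys_order=False)))     # False
```

The point of the sketch is only that top-down principles are explicit and checkable before acting; a bottom-up system would instead leave such rules implicit and let the agent acquire its dispositions through feedback.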
Some scientists also think that human-robot interaction is important for moral decision making. For example, the interaction between a service robot and a patient in a hospital could be helpful for the patient's treatment. Robots should therefore be designed to have some capacities beyond pure reason, like emotions and social skills.
Technology is here to stay. It is impossible to get rid of it now, and we have to deal with it. We should make the best of it and minimize the harm it brings us.

Susanne Bornelöv

In the book Moral machines: Teaching robots right from wrong, Wendell Wallach and Colin Allen discuss the need for ethical considerations in the construction of robots.

Today, computer systems make decisions independently of humans. The authors predict that a catastrophic incident, caused by such decisions, will happen in the near future. This will raise the question of ethics in (ro)botics. We must start thinking about how to make sure that these decisions are ethical, to minimize the future risks.

One example is the automatic systems checking credit card transactions. Suspicious transactions are today automatically cancelled, without any human making the explicit decision in each case. Theoretically, there are situations in which it would be extremely impractical not to be able to use your credit card, or in which it could even constitute the difference between life and death. Still, these systems are widely used and accepted.

Another example is the driverless trains that exist today. If the technology improves and the trains become able to identify people on the track, which priorities should a train have in the unlikely situation where all options (say, turning left or turning right) would harm some people?

Artificial moral agents (AMAs) is introduced as a term for robots that take ethics into account. The authors identify two dimensions that are important for the analysis of AMAs: autonomy, which is basically whether they are allowed to make their own decisions, and ethical sensitivity, which means whether they take ethical considerations into account or not. Today there are computer systems that are definitely autonomous, like the software checking credit card transactions, and there are systems that are ethical, like MedEthEx, a decision-support tool used in health care. Few systems, however, manage to combine these two dimensions, and many problems must be solved in order to have working AMAs.

There is another, more philosophical question, namely whether AMAs can exist at all. An ethics equivalent of the Turing test has been proposed, but even if an AMA were to pass the test, would that assure that the AMA is really ethical? Some people believe that to be an ethical agent, you must have consciousness and free will; without the ability to make choices, there is no ethics. The authors, however, argue that this doesn't matter for practical purposes: as long as the system acts ethically, we don't need to care about what is really going on inside it.

Different approaches may be used in the construction of AMAs: a top-down approach, aiming at formulating all the rules that the robots should follow, or a bottom-up approach, in which we could start with simple robots and let them evolve and learn morals, similar to the idea of genetic algorithms (see the sketch below). The authors argue both for and against each of these two approaches, and conclude that a combination is needed.
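A minimal, purely illustrative sketch of that bottom-up, genetic-algorithm idea follows. The fitness function, its target value of 0.8, and all parameters are invented assumptions, not anything from the book; the "moral teacher" is just a stand-in for whatever feedback signal rewards praiseworthy behavior.

```python
# Hypothetical sketch of a bottom-up approach: agents start with random
# "dispositions" and a genetic-algorithm loop rewards behavior that a
# stand-in teacher judges as morally praiseworthy. Purely illustrative.

import random

def fitness(disposition: float) -> float:
    """Stand-in moral teacher: rewards dispositions near a praiseworthy
    value of 0.8 (e.g. 'mostly cooperate'), unknown to the agents."""
    return -abs(disposition - 0.8)

def evolve(pop_size: int = 20, generations: int = 50) -> float:
    population = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]            # selection
        children = [min(1.0, max(0.0, p + random.gauss(0, 0.05)))
                    for p in survivors]                     # mutation
        population = survivors + children
    return max(population, key=fitness)

print(round(evolve(), 2))  # converges near 0.8
```

Here no rule is ever written down; the "moral principle" exists only as a regularity the population converges toward, which is exactly why bottom-up principles must be discovered rather than specified.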
Finally, the authors say that the human interest in AI is like the human interest in animals. It is natural for us to project human feelings onto animals, because they are the most similar to humans among all non-human things. As the field of robotics develops, we may expect ourselves to develop a similar relation to robots.

Hamid Sarve

Wendell Wallach & Colin Allen discuss machine morality in Moral
Machines, Teaching Robots Right from Wrong. The authors raise the
question of the morality of artificial intelligence, AI. This is a
rather interesting question for me, as I myself have used AI in my
research.

In the early days of AI in the 60s, many scientists predicted that once we had the computing capacity to build a digital system with the same number of synapses as the human brain, digital computers with AI would overtake human brains. Although this complexity has been reached, AI systems are still far behind the human brain when it comes to comprehending problems. However, as the authors point out, an intelligent system could carry on a conversation in Chinese without actually comprehending the language (the Chinese room argument). From the outside world it would still be seen as a Chinese-understanding entity.

So could a robot be a moral agent? In my view, a deterministic system cannot be moral. Even if it fulfills the three features important for an artificial agent (the authors mention interactivity, autonomy, and adaptability), freedom of will is not a feature of an AI system. However, I miss a discussion of to what degree the designers of a system can be held responsible for the actions of an AI system. The authors mention: "We've suggested that the question of whether deterministic systems can be considered real moral agents is as unanswerable as the question of whether human beings really have free will." Suppose a scenario where we humans have a creator (or programmer, if you will). Are we then still moral agents? Suppose further that our will is controlled by certain rules dictated by the creator. Then we too are deterministic and not really moral agents. So in that sense I would claim the authors have a valid point.

Mona Riza Mohd Esa

Reading this book, I cannot really imagine what it would be like if scientists and technology were really able to design a machine, robot, or so-called intelligent agent exactly like a human, one that can have emotions, understanding, and ethical behavior. Would the world be the same as the one we live in now? Remembering those sci-fi movies that illustrate this situation, for example "Artificial Intelligence", "Bicentennial Man", or "RoboCop" (just to name a few; there are a lot of movies of this type out there), even with all the advantages of high-technology capabilities (things humans alone would not be able to do), at the end of the story some kind of disaster or discrepancy occurs, which leads one to ask whether robots really are human. I would not like this to happen in our children's future, or in the next generation's either. Although humans are not a perfect creation of God either, at least we are not overruled by the ones we created.
As for the argument about whether a robot can really be a moral agent, the authors could well deny that robots can be moral agents, owing to complexity and limitations. Humans are given by God many abilities that no one can really or completely understand, let alone manipulate in order to create a human-like robot. Three main things completely distinguish humans from robots and machines: consciousness, genuine understanding, and free will. A robot can only exist in a deterministic system that humans created, receiving information and instructions. Furthermore, if robots or machines could have those three elements in their deterministic system, would they be able to be real moral agents? Would they be able to behave ethically in the same situations as humans do? At least, in my opinion, if they were able to fulfill the three key elements of the intelligent-agent concept, namely autonomy, interactivity, and adaptability, perhaps they might move a bit further away from a deterministic system.
Morally, a human can always be human, and a robot can always be a robot that helps humans make life easier and more convenient rather than making them suffer. The scariest thing is when a mistake is made by a robot: who is there to be blamed? If the mistake is not that serious, it is excusable; but if it is a really huge mistake, who will be there to carry all the unforgivable consequences? Well, of course, it is humans themselves.
This book is well written, not really easy to digest, and overflowing with examples; it is really worth spending time reading it.

Eva Lindblom

The book Moral Machines by Wallach and Allen deals with questions around artificial moral agents (AMAs), in contrast to the moral agents we are as humans. In the beginning of the book the authors discuss different types of AMAs as well as how to progress in the development of AMAs. One possible path towards AMAs is to go from today's robots to operational morality, further to functional morality, and finally to full artificial moral agents. The different levels are defined by their degree of autonomy and ethical sensitivity. Whether this would be a way to get closer to fully moral artificial agents is not yet known, and maybe will not be until we are there. But a question along the way is of course: do we want an evolution of moral machines?

Of course, there are arguments both for and against developing machines that can be moral agents. One argument that often arises against the development is the horror scenario in which we manage to create a machine so independent that it then conquers humanity. Another thing I think of is the question of how to do it practically. How could we build a moral machine when we know so little about how our own morality works? What kind of commands should be implemented? To me, this is also where the most exciting part of the development of moral machines lies, because to be able to build an AMA we need to know more about how our ethical decisions are made and what is really at the core of ethics and morality. An argument supporting the evolution of AMAs is that as we develop more and more complex computerized systems, as we do today, the need for morality in them increases, because we can no longer control and oversee them. The greater the freedom of a machine, the more it will need moral standards, as Rosalind Picard at MIT has said.

Today there are three main fields of interaction between robots and humans: robots as soldiers, as companions, and as slaves. Today we do not have any AMAs, but what if we did? Could we still use them as soldiers? Or as slaves? Would that be morally okay? Could we construct a fully moral agent without it having the capacity to reason consciously, a free will, and the ability to take moral responsibility? And if not, if the AMA would have to have these capacities, would it not suffer from being used as a slave just like any human would?

Another area, not very far from building moral machines, is the subject of human enhancement. Human enhancement deals with the evolution of humans in terms of increasing our capacities beyond the species-typical level. It could, for example, mean implanting a chip in the brain or taking drugs that in some way boost our capacities beyond the normal range given to humans today. One question discussed in the report on the ethics of human enhancement is what the good of life is, and whether human enhancement increases happiness in life. The answer to that, I guess, is that we don't know. We cannot predict the effect of an enhancement like being able to see in the dark or reading someone's thoughts. I wonder what would happen if we could implant a chip in the brain and thereby have access to, e.g., the internet and all the information gathered there. Technological evolution is progressing very fast, but the physical evolution of humans is not that fast. Could we, at the same time as we implement these enhancements into our bodies, also implement a tool that can control the information flood? Or would it be like for an autistic person who cannot sort important from unimportant noise and therefore gets exhausted in situations with several sound sources? A human enhancement therefore needs to be accompanied by a control tool; otherwise it would lead to more suffering than happiness and would certainly not be an ethical thing to do.

Zhibing Yang

'Moral Machines' by Wallach and Allen is a book that tries to deal with the newly emerging area of machine ethics. Human beings are fascinated with tool-making, and this leads to more and more automatic systems. To this end, the authors argue that it becomes more and more necessary for ethical subroutines to be deployed in such systems, to evaluate possible actions before they are executed. Though machines are not currently capable of consciousness, understanding, free will, or emotions, and are not likely to be soon, the authors note in chapter four that 'human understanding and human consciousness emerged through biological evolution as solutions to specific challenges. They are not necessarily the only methods for meeting those challenges.' As for implementing machine morality, the authors discuss different approaches, namely top-down, bottom-up, and combined approaches. A top-down approach takes an ethical theory, for example the categorical imperative of Kantian ethics, analyzes the necessary informational and procedural requirements, and applies that analysis to the design of subsystems and the way they relate to each other in order to implement the theory. In bottom-up approaches, the emphasis is placed on the development and evolution of moral capabilities; unlike in top-down approaches, any ethical principles must be discovered and constructed. In practice, a combined approach using both top-down and bottom-up elements will be needed to build complex systems (a toy illustration follows below). Although the realization of machine consciousness seems far off at present, there is a possibility that technologies become so advanced that they easily slip out of human control. That is when machine or robot autonomy will start to evolve. It is interesting to speculate on future issues of what machine morality might mean in terms of rights, responsibility, liability, duties, and so on. Who is to be held responsible when a robot makes mistakes and causes harm to human beings? The designer of the robot, or the robot itself (or botself)? How can we hold the robot accountable? Imagining the case of autonomous robots, can we simply destroy them when they threaten human existence?
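To make the combined approach concrete, here is a minimal toy sketch, under invented assumptions: a learned, bottom-up preference score is consulted only among options that pass explicit top-down constraints. The action dictionaries, the single no-harm rule, and the approval scores are all hypothetical illustrations, not the authors' design.

```python
# Hypothetical sketch of a combined top-down/bottom-up decision procedure.

def hard_constraints_ok(action):
    # Top-down: an explicit, designer-given duty (here, a single no-harm rule).
    return not action["expected_harm"]

def learned_score(action):
    # Bottom-up: stand-in for a preference acquired through training/experience.
    return action["predicted_approval"]

def choose(actions):
    """Pick the highest-scoring action among those passing the hard rules."""
    permissible = [a for a in actions if hard_constraints_ok(a)]
    return max(permissible, key=learned_score) if permissible else None

options = [
    {"name": "lie to patient", "expected_harm": True,  "predicted_approval": 0.9},
    {"name": "tell the truth", "expected_harm": False, "predicted_approval": 0.7},
]
print(choose(options)["name"])  # -> tell the truth
```

Note how the learned preference would have picked the harmful option; the explicit rule vetoes it first, which is the division of labor the combined approach envisages.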

Unlike moral machines, the issues of human enhancement are closer to reality. Commonly seen examples of human enhancement include cosmetic surgery, psychopharmacology, and so on. Other examples of human enhancement technology that are not yet available but can be envisaged include cybernetics, radical life extension, etc. There are risks and social problems associated with human enhancement technologies, and issues concerning human dignity and human rights are also raised. An emerging question is 'how far should we go with human enhancement?'. Before we answer this question, we need a comprehensive impact assessment of human enhancement technologies, taking into account political, ethical, legal, societal, cultural, safety, security, and health aspects.

Jens Engström

The book Moral machines: Teaching robots right from wrong by Wallach and Allen deals with the question of whether we humans are able to create machines that are autonomous and conscious and that in most senses can act and think as we do; and, in that case, whether they will be moral and obey the ethical rules we, through many years of development, have set up to live by. The book looks at the subject from the philosopher's side and at how these issues can, and sometimes should, be translated into the engineering of artificial intelligence. The authors are quite subjective and take an optimistic approach to the subject. Several important questions are discussed: why humans need moral machines, whether we want moral machines, and whether a machine can be moral. These are the questions I have chosen to focus on.

The first question raises the very interesting issue: will human-like robots make us physically and mentally degenerate? Like in the movie Wall-E, where the humans travel around in armchairs and let the robots do their work and make their decisions. Or can we take it to a more moderate level and let the robots carry out the heavy and boring work we do not want to do, letting us focus on more "important" things?

It is very easy to slip into sci-fi references when discussing this topic, and with the second question in particular. There are numerous movies that deal with the judgment day when autonomous robots roam the human world and threaten to extinguish us. Maybe it is in human nature to be afraid of the unknown, and during the last period of our existence we have become used to being the superior species on this planet; thus, we are afraid to lose control of our situation. And this is a justified concern: even though we may not be able to create a fully autonomous robot, a highly complex computer that controls, e.g., a missile or a financial system can have terrible consequences due to the human factor. But still, this lies in our hands, since we build the machine, and it is up to us what features we create in it. This leads into the last question.

The last question raises the issue: what is morality? In order to create a "moral machine" we need to understand what such magical terms as consciousness and free will mean. As the authors point out, these are questions that philosophers still discuss. So can we build into a machine something we do not understand? Or can we only build a machine that has operational morality, a machine that obeys the ethical rules we set up? Then the machine acts in a morally correct way but cannot make decisions of its own. Is it then a "moral" machine?

Patrik Stensson

The book addresses a very interesting topic that is also quite close to the overarching reasoning of my own research. It has, however, as I interpret it, a rather different view of what technology can actually accomplish and of what future properties mankind should strive to endow technology with, compared to my own view. But in order to discuss that, I must first briefly discuss the concept of being a (moral) agent and what I think agency is.
My interpretation is that 'agent' and 'agency' are often used rather loosely and in very different contexts, and therefore with rather different meanings, for example in engineering and psychology. It will not be possible to clarify this completely here, so the short version is that an agent in general is something that has influence on something, and agency is to deliberately act as an agent in order to have influence. As such, technology may very well be an agent even in moral terms, because it surely affects how people think and act. It also follows that even the most passive, 'stupid' and simple technology is unavoidably an agent in whatever context it is used, and therefore the designer must consider the possible moral implications of its design. To continue, this means that a certain technology could be viewed as morally 'competent' even without the slightest resemblance to an advanced robot-like gadget. For instance, a passive tool that is designed to allow use by people with disabilities is obviously the result of other considerations than a tool that requires the user to be completely fit. Whether or not the considerations behind the first tool are more morally sound than the ones behind the second is a completely different question. On the other hand, agency requires consciousness and intentionality, which technology simply does not have (at least until today, and hopefully never will). That is, as far as I am concerned, the agency stays in principle with the human beings, who in this case are the designers.
What about more 'active' technology then, technology with capabilities that mimic (or are intended to mimic) human reasoning and intelligence? Well, in principle there is no difference. The design and functionality of the technology is the result of the ideas, thoughts, beliefs, knowledge, and considerations of the scientists, developers, and manufacturers (collectively referred to as the designers) creating the technology in question. The moral 'competence' of the gadgets (whether a simple tool or an advanced robot doesn't matter) reflects the designers' moral competence. With so-called artificial intelligence there is still no difference. The 'things' can never learn anything that they have not been designed to learn, unless we end up designing things where we completely disregard having any kind of control, which is where my opinion seems to differ from that of the authors. I strongly believe that technology always and necessarily should be under the complete authority of human beings, who thereby have the full responsibility for its effects. Consequently, technology should never be made autonomous (in a wide sense), because technology should always be like a totally curbed slave to mankind, which by definition is the complete opposite of being autonomous. Furthermore, if these premises are accepted, the question of whether technology is required to have moral agency is a non-question: the agency remains, and should always remain, with the human beings. But that does not mean that robots can skip the moral dilemmas. Technology must, together with increasing technological complexity and capability, necessarily be designed with increasingly thorough moral considerations that will probably lead to increasingly complex implementations of 'moral functionality', because technology is unavoidably an increasingly salient agent in modern society. This might in fact very well be what Wallach and Allen are talking about, but at least I must read the book with quite 'well-colored glasses' in order to interpret it that way.

Liang Tian

The need for 'moral machines' arises with the rise of autonomous robots and computer programs that can operate without human interference. In Moral Machines, Wendell Wallach and Colin Allen assume that such a need is inevitable. An artificial moral agent (AMA) is proposed (conceptualized also in other works) in the hope of either supporting human decision making or making moral decisions for its own operational (programmable) purposes. The book focuses on the possibility of a single machine that can operate as a moral agent. The chapters are organized so that, first, a claim is made for the necessity of AMAs (chapters 1, 2, and 3), and then the adequacy of AMAs is addressed (chapters 4 and 5). A rather detailed methodological framework is described (chapters 6, 7, and 8), alongside implementation examples (chapters 9 and 10) as well as reflections (chapters 11 and 12). However, the lack of a general theory of morality poses the same threat to AMAs, because they will eventually face ethical dilemmas similar to the ones we humans face. The exploration of AMAs will nevertheless help people to understand their own sophistication in moral decisions, and thus contribute to the development of moral philosophy (the epilogue).

The book is a quick response to the rapid development of autonomous technology and the relevant morality concerns it raises. The authors make a bold claim that the idea of 'moral machines' is not futuristic but essentially an emerging need. Digging into this need, a two-dimensional pathway is used to formulate the development of AMAs (chapter 2). The discussion is then broadened into what we may call formulating a general moral agent (chapters 3 and 4). The possibility of translating humanity into programmable procedures is presented using sophisticated frameworks of either top-down or bottom-up approaches (chapters 6-8). Some very detailed technological and software examples are given in the follow-up section of the book (chapter 9, even though the authors concede in the epilogue that such an attempt was rather premature). The authors also put effort into the emotional aspects that arise together with substantiating consciousness, which may contribute to morality functionality (chapter 10). By focusing on AMAs, the book is indeed an exploration of the nature of humanity itself, and it is suggested that machine-based ethical sophistication can be developed alongside the development of human morality recognition (chapter 11). Even though the authors do not provide any real solution for AMAs, let alone for generic moral philosophy, the book can serve as an overview of the current stage of technological development, or as a guideline for related research such as artificial intelligence (AI).
