Can a machine have morals or ethics? And if it did, what would guide them? Heck, a lot of people today don’t even have morals; how are we going to design AI systems with the proper moral parameters? Can a learning machine develop good morals and ethics? Would it have the opportunity to develop bad ones? All of these questions come to mind when talking about the future of AI.

Ethics and morals both relate to what society perceives as “right” and “wrong” conduct, but let’s be clear right off that they are two different things. Ethics refers to rules provided by an outside source, such as codes of conduct in the workplace or religious principles. Morals refer to an individual’s personal beliefs and guidelines about what is “right” and what is “wrong”.

Animals have no morals (although some may argue that other primates have a simple set of them). They react based on instinct, honed and passed down genetically over countless generations of evolution. Human beings are different. We no longer respond to situations on instinct alone. We have evolved into rational, thinking, sentient beings capable of deliberate choice. We have sensibilities, compassion, and empathy that shape our moral beliefs. We also have institutions to guide us personally and professionally: laws made by our legislatures, courts that enforce those laws, and professional governing bodies such as the Securities and Exchange Commission, legal review boards, and medical licensing boards. Each decision we make or action we take is guided by a diverse set of standards, some imposed on us externally and some developed on our own.

So what does all of this conjecture about human morality mean for artificial intelligence systems? Should these AIs have morals and ethics? Perhaps we are better off leaving them without such conflicting ideologies. After all, what would be the benefit of having an ethical, moral machine? And who is the god-like figure who gets to decide what those ethics and morals should be?

We mentioned earlier that animals don’t use ethical or moral judgment. This is due in large part to the fact that they aren’t self-aware. So, to inject human-like qualities into an artificial intelligence, the machine would have to be self-aware. But once we allow machines to be self-aware, they become sentient beings and therefore have rights similar to those of humans. Where do we draw the line, then?

There is also another side to this discussion that must be addressed. If we have the ability to make a machine self-aware but choose not to, are we at fault for withholding it? Is it morally wrong for us to stunt the growth and development of another sentient being, whether it is flesh and blood or microchips and motherboards? It’s a slippery slope. Once the door is opened to the creation of a fully self-aware synthetic being, it’s going to be pretty difficult to close it again.

The Future Evolution of AI

As humans, we have differentiated ourselves through the use of tools, invention, intuition, and creativity. Each of these distinctly human traits has helped to advance the human race, and the end product of all of these efforts is technology. But as technology has driven humanity’s progress, each new advance has raised new questions. The subject of AI is no different.

We can now see, in the near future, the capability to develop artificial intelligence that can learn, think, and perhaps even feel emotion. AI will need these abilities in order to progress and make decisions that may sometimes have life-or-death consequences. Is it right for us to create these emotional, rational, feeling AIs? Is it necessary? Should we give them human attributes like morals and ethics to guide their actions and decisions? The faster we answer these questions, the sooner humanity will be able to enjoy the benefits that advanced AI will unquestionably bring.

 

originally posted here: https://www.linkedin.com/pulse/ethics-designing-ai-algorithms-part-3-final-michael-greenwood?trk=hb_ntf_MEGAPHONE_ARTICLE_POST