As our civilization becomes more and more reliant upon computers and other intelligent devices, specific moral issues arise that designers and programmers will inevitably be forced to address. Among these concerns is trust. Can we trust that the AI we create will do what it was designed to do, without bias? There is also the issue of incorruptibility. Can the AI be fooled into doing something unethical? Can it be programmed to commit illegal or immoral acts? Transparency comes to mind as well. Will the motives of the programmer, or of the AI, be clear? Or will there be ambiguity in the interactions between humans and AI? The list of questions could go on and on.

Imagine that the government uses a machine-learning algorithm to recommend whether student loan applications should be approved. A rejected student or parent could file a lawsuit alleging that the algorithm was designed with racial bias against some applicants. The defense could be that this is impossible, since the system was deliberately designed to have no knowledge of an applicant's race. That may well be the reason for building such a system in the first place: to ensure that ethnicity cannot be a factor, as it could be with a human reviewing the applications. But suppose some form of racial profiling were proven in this case. Instant payday loan approvals already exist and may look like AI from the outside, but for the most part they simply approve anyone who has the collateral.

If directed evolution produced the AI algorithm, it may be impossible to understand why, or even how, it reaches its decisions. Perhaps the algorithm uses applicants' physical addresses as one of its decision criteria. Perhaps many applicants were born in, or at some point lived in, poverty-stricken regions, and a majority of the applicants who fit that criterion happened to be minorities. We would not be able to discover any of this unless we had some way to audit the systems we are designing. It will become critical for us to design AI algorithms that are not just robust and scalable, but also easily open to inspection.
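To make the idea of such an audit concrete, here is a minimal sketch in Python. It assumes the system's approval decisions can be joined, after the fact and purely for inspection, with demographic data that the model itself never saw. It computes per-group approval rates and a simple disparate impact ratio (the lowest group's rate divided by the highest's, a rough version of the "four-fifths rule"). All names and data below are hypothetical, not taken from any real system.

```python
from collections import defaultdict

def approval_rates_by_group(records):
    """Compute the approval rate for each demographic group.

    records: iterable of dicts like {"group": "A", "approved": True},
    where "group" comes from audit data joined to the model's decisions.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for r in records:
        counts[r["group"]][0] += int(r["approved"])
        counts[r["group"]][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

def disparate_impact_ratio(rates):
    """Lowest group approval rate divided by the highest.

    A common rule of thumb flags ratios below 0.8 as potential
    evidence of adverse impact worth investigating further.
    """
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Hypothetical audit sample: decisions made without knowledge of race,
    # joined with demographic data afterwards for inspection only.
    sample = [
        {"group": "A", "approved": True},
        {"group": "A", "approved": True},
        {"group": "A", "approved": False},
        {"group": "B", "approved": True},
        {"group": "B", "approved": False},
        {"group": "B", "approved": False},
    ]
    rates = approval_rates_by_group(sample)
    print("Approval rates:", rates)
    print("Disparate impact ratio:", round(disparate_impact_ratio(rates), 2))
```

An audit like this cannot explain why a model behaves as it does, but it can surface the kind of proxy effect described above, such as a postal code standing in for race, early enough to investigate it.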

Being open to inspection isn't the only characteristic we would hope to instill in an AI. It is also important that it behaves predictably in the functions we have designed it to oversee. To a designer, this "preference for precedent" (a desire to be able to predict the outcome reliably, time after time) may seem incomprehensible. Why restrict our future creation by limiting it with the past, when technology is always supposed to be progressing?

It will also become more critical, as AI grows more complicated, that its algorithms are designed to resist manipulation, meaning they will need to withstand outside influences intended to corrupt their function for someone else's benefit. If an AI system fails at its task, who will take the blame? The company that made it? The end users?

Bureaucracies frequently take shelter by creating systems that diffuse responsibility so widely that no single person can take the blame. An AI system could end up being an even better shelter. Just blame it all on the machine. It's easy, and it can't defend itself.

Thankfully, most AIs currently in use create very few ethical issues that aren't already present in existing products and services. The expected difficulties will come as AI algorithms move toward more human-like understanding and we deploy them more and more widely in our society.

The problem arises as AI platforms become more intelligent and are required to behave more like humans. When this becomes the norm, general AI algorithms may no longer operate in predictable contexts, requiring new kinds of safety assurances and raising a whole new slew of ethical considerations for AI systems.

The prospect of AIs with advanced intelligence and advanced abilities presents us with the extraordinary challenge of creating algorithms whose outputs reflect equally advanced ethical behavior. These challenges may seem like science fiction fantasy to some, but we will indeed encounter them in the future.

So how do we deal with these moral dilemmas? How do we face them and find ways to overcome them? By following one simple guide: we as Experience Designers should strive to improve the world, not merely duplicate it. We must exercise extreme caution. As long as our existing world is imperfect, and the data shaping the new world comes from us, we will not develop future models that improve upon our past unless we make a clear point of doing so.

