In the right hands, can AI be used as a tool to engender empathy?  Can algorithms learn what makes us feel more empathic toward people who are different from us? And if so, can they help convince us to act or donate money to help?

These are the questions that researchers at the MIT Media Lab’s Scalable Cooperation Lab and UNICEF’s Innovation Lab claim to address in a project called Deep Empathy.

Image of Deep Empathy’s website

First, the lab trained an AI to transform images of North American and European cities into what they might look like if those continents were as war-torn as Syria is today. It does this with so-called "neural style transfer," a technique that keeps the content of one image while adopting the style of another, originally used to repaint photographs as if Picasso, Van Gogh, or Munch had made them. The algorithm thus spits out images of Boston, San Francisco, London, and Paris as bombed-out shells with dilapidated buildings and ashen skies.
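The core of neural style transfer is a pair of loss functions: a content loss that compares CNN feature maps directly (preserving the scene's layout), and a style loss that compares Gram matrices of those features (capturing texture while discarding layout). A minimal numpy sketch of just those losses, with illustrative shapes and names (this is not Deep Empathy's actual code, and a real implementation would pull features from a pretrained network such as VGG):

```python
import numpy as np

def gram_matrix(features):
    # features: (channels, height, width) feature map from one CNN layer.
    # Channel-wise correlations capture texture ("style") but not layout.
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(gen_feats, style_feats):
    # Mean squared difference between Gram matrices of the generated
    # image's features and the style image's features.
    return float(np.mean((gram_matrix(gen_feats) - gram_matrix(style_feats)) ** 2))

def content_loss(gen_feats, content_feats):
    # Direct feature matching preserves the content image's scene.
    return float(np.mean((gen_feats - content_feats) ** 2))
```

In full style transfer, a generated image is optimized by gradient descent to minimize a weighted sum of these two losses, which is how a photo of Boston keeps its buildings but takes on the rubble-and-ash texture of a war zone.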

Image of Boston from MIT Media Lab

Now the researchers are also hoping to train another AI to predict whether one image will engender more empathy than another. The project's website hosts a survey that asks you to choose between pairs of images, building a training set for an algorithm that could "help nonprofits determine which photos to use in their marketing so people are more likely to donate." So far, people from 90 countries have labeled 10,000 images for how much empathy they induce.
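Pairwise "which image moves you more?" labels like these are typically fit with a Bradley-Terry-style model: each image gets a latent score, and the probability that image A beats image B is a logistic function of the score difference. A toy numpy sketch under that assumption (the data and function names are hypothetical; the source does not say which model Deep Empathy uses):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fit_pairwise_scores(n_images, comparisons, lr=0.1, steps=200):
    """Bradley-Terry-style fit: each image gets a latent 'empathy score'.
    Each comparison is a (winner, loser) pair meaning the winner was
    chosen as more empathy-inducing. Scores are learned by logistic
    gradient ascent on the likelihood of the observed choices."""
    scores = np.zeros(n_images)
    for _ in range(steps):
        grad = np.zeros(n_images)
        for winner, loser in comparisons:
            p = sigmoid(scores[winner] - scores[loser])  # P(winner beats loser)
            grad[winner] += 1.0 - p
            grad[loser] -= 1.0 - p
        scores += lr * grad
    return scores - scores.mean()  # center scores for identifiability

# Hypothetical survey data over three images:
# image 0 beat image 1, image 1 beat image 2, image 0 beat image 2.
comparisons = [(0, 1), (1, 2), (0, 2)]
scores = fit_pairwise_scores(3, comparisons)
```

A real deployment would replace the per-image scores with a neural network that maps image pixels to a score, so the model can rank photos it has never seen; the pairwise loss stays the same.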

The claim is that "in a research and nonprofit context, helping people connect more to disasters that can feel very far away – and helping nonprofits harness that energy to help people in need – feels like a worthy way of using machine learning." And that is *precisely* what we should be worried about. If Facebook did this for its marketing, everyone would rightfully go berserk. So why do researchers and nonprofits get a free pass (especially in these days of political agendas and outright fraud)?

originally posted here: