Google and other big tech companies are watching you.
That’s the premise of the current debates and regulatory steps around privacy. After all, data is the new gold for companies like Google and Facebook, which thrive on collecting their users’ data to target audiences in ads. Initiatives like the EU’s General Data Protection Regulation aim to give users rights over their data by requiring companies to ask for explicit consent. The assumption is that by knowing how Facebook collects your information, for example, you will have more control.
But with the advent of artificial intelligence, this has become more complex. Machine learning, the engine behind AI, can only improve by devouring data. Because these systems depend on big data, companies skirt privacy regulations by claiming the data is anonymized. And since machine learning does not follow explicit, human-written rules, algorithmic transparency isn’t always possible.
Right now, machine learning is used in phones, homes, websites, and even banks to detect fraud, predict behavior, and personalize experiences. This widespread data collection and personalization may seem harmless, even useful, until it isn’t.
Consumer privacy and biases
Recently, the US Department of Housing and Urban Development sued Facebook for discriminatory practices in targeting housing-related ads on its platform. The lawsuit highlights how Facebook’s AI-powered ad platform unlawfully discriminates based on color, religion, national origin, sexuality, and disability.
As AI platforms learn from the data they consume, they can also perpetuate societal biases and amplify them by presenting biased results as “objective” and “rational” outcomes.
AI in the workplace
While consumer privacy regulations are catching up, employees are in a more difficult position, since privacy rights can be waived in the workplace. Companies increasingly deploy AI-enabled RFID badges, trackers, and sensors to measure productivity, gauge engagement, and monitor employee interactions.
In talent acquisition, companies rely on AI to target potential employees who “fit” their requirements and work culture. But research suggests these algorithms exhibit biases of their own, effectively creating glass ceilings for minorities and women.
Legal system and predictive policing
Biases in AI-enabled platforms can have even more harrowing outcomes. Some law enforcement authorities are using facial recognition and predictive policing to introduce data-based operations.
For example, the Los Angeles Police Department uses the predictive policing platform PredPol, whose algorithm is fundamentally flawed. Because it crunches data from current arrests, incarceration, probation, and parole statistics, its risk assessments are biased against minorities, raising concern that the cycle of justice inequity will persist.

In the legal industry, Special Counsel points out how personal data from social media, cell phones, and computers is becoming a necessity in litigation. Firms are using AI-enabled eDiscovery platforms that can quickly comb through this data to save time and resources. But the fact that lawyers now rely on AI’s judgment calls also raises concerns: machines, while efficient, can make mistakes that lead to bad decisions.
AI and health data
Another potentially controversial area where AI encroaches on privacy is health data. Fitness trackers and phones mine data from our physical activities at our behest. Facebook has launched a program that predicts suicidal tendencies from your social media posts, and Google has filed similar patents for Google Home that describe identifying users’ undiagnosed conditions from emergent health data.
One study showed that Facebook is also using its data to predict health conditions such as diabetes, hypertension, and even gastrointestinal disorders. While this might be useful, it raises concerns about how these companies could leverage users’ medical conditions and susceptibilities when serving behavioral ads.
As long as AI remains a black box of data processing, no one knows how extensive and invasive its data collection and aggregation capabilities really are. As it becomes part of every aspect of our lives, the issues of privacy and regulation will only grow more complex.
Written by: Andrew Blank