Artificial Intelligence’s Social Impact and Data Privacy Issues

Many people believe that the fourth industrial revolution, which will transform the world in the years to come, has already begun with the current era of AI and Big Data. Artificial intelligence plays a significant role in our daily lives, as evidenced by Google searches, map navigation, voice assistants like Alexa, and personalized recommendations on platforms like Facebook, Netflix, Amazon, and YouTube, to name just a few examples. Most of us hardly notice it. In fact, one survey predicts that the market for artificial intelligence will soar to a staggering $169.41 billion by 2025.

However, AI also has a drawback that poses serious privacy and social risks. The issue stems from the way certain businesses gather and use large quantities of user data in their AI-based systems without the users' awareness or consent, which can have worrying social repercussions.

Your Data: Is it Private?

Every time you use the internet to search, browse websites, or use mobile apps, you disclose personal data, knowingly or unknowingly. And because you checked the "I agree" box on the terms and conditions of the service, you have often given these companies permission to collect and process your data legally.

You authorize them to gather information about your browsing habits, clicks, likes, and dislikes, in addition to the data you voluntarily submit to these websites, such as your name, age, email address, contacts, videos, and photo uploads. Reputable organizations like Google and Facebook do not sell this data to anyone and use it to improve their services. However, there have been cases where unauthorized third parties stole sensitive user data through security flaws or hacks. Worse, many businesses entice customers to use their internet services with the express purpose of collecting user data, which they then sell to third parties for large sums of money.

The issue has worsened with the rise of rogue mobile apps whose main goal is to gather data from the phone for which they never asked permission, typically while posing as games or entertainment. Modern smartphones carry extremely sensitive information, including private photos, videos, GPS locations, call logs, and text messages, and we often aren't even aware that these apps are stealing our data. Such malicious apps are occasionally taken down from the App Store and Play Store, but often not before they have been downloaded millions of times.

Why are data privacy and artificial intelligence such major concerns?

People are becoming more and more conscious of how vulnerable their data is online. However, most people are still unaware of how serious it is when AI-based systems use their social data in unethical ways.

Let's go over a few well-known incidents to better appreciate the kind of danger we're referring to.

The Facebook-Cambridge Analytica Scandal

In 2018, it was revealed that the data analytics company Cambridge Analytica had used Facebook likes to study users' psychological and social behavior in order to target them with advertising campaigns during the 2016 US Presidential election.

The problem was that Facebook does not sell its users' data for such purposes. It was discovered that a developer had produced a Facebook quiz app that exploited a flaw in a Facebook API to collect information not only about users but about their friends as well. He later sold this data to Cambridge Analytica, which was accused of using unethical means to access and mine Facebook user data in order to play a significant role in the outcome of the 2016 US Presidential election. The worst part is that users of the quiz app unquestioningly granted it full permissions without realizing they were disclosing their friends' data as well.

The Clearview Face Recognition Scandal

The artificial intelligence company Clearview developed a face recognition system to help police identify criminals. The company asserts that its software has aided in the capture of several pedophiles, terrorists, and sex traffickers.

However, in January 2020, The New York Times published a lengthy article about how Clearview had made a mockery of data protection by scraping over three billion user photographs from sites including Facebook, YouTube, Twitter, Instagram, and Venmo in order to build its AI system. Mr. Ton-That, the company's CEO, asserted that the technology scraped only public photographs from these networks. Yet during an interview with CNN Business, the program even pulled up photographs from the show producer's own Instagram account.

Google, Facebook, YouTube, and Twitter sent cease-and-desist letters to stop Clearview from scraping images from their platforms. But without your knowledge, the pictures you have posted online might already be included in that AI program. Numerous innocent people could become targets of police investigations if this software falls into the hands of a dishonest police officer, or if the system generates false positives.

Deep Fakes

Deepfakes are images or videos produced using deep learning that show real people doing or saying things they never did or said. Deepfakes can be amusing when used for entertainment, but worse uses, such as fabricated news and disinformation, are already being produced.

In 2019, a tool called DeepNude was released that allowed users to input any image of a woman and transform it into a realistic-looking nude image. It is deeply unsettling that anyone could use such fabricated nude photographs to exploit images of women that are available online. The tool was shut down after intense public backlash. Even so, it is only a matter of time before someone uses deepfake technology to misuse your publicly accessible videos or photos.

Chinese Mass Surveillance

China has come under heavy fire recently for extensive surveillance of its citizens without their knowledge. The state constantly monitors its people using facial recognition technology and over 200 million surveillance cameras, and analyzes the video recordings of their activities.

To make matters worse, China introduced a social credit system that assesses the trustworthiness of its inhabitants and assigns them ratings based on this monitoring. People with good credit are granted additional perks, while those with bad credit are denied them. The worst part is that AI-based surveillance determines all of this without the knowledge or consent of the subjects.

How to Prevent the Misuse of Data by AI

The case studies above should have demonstrated that using artificial intelligence to handle private data unethically can have extremely negative social repercussions. Let's look at what we can do to stop the misuse of private data by artificial intelligence.

Government Obligation

To increase transparency between internet companies and consumers, many nations have now developed their own data regulation policies. Most of these rules are designed to give consumers more control over the information they share and to notify them of how the platform will use it. A well-known example is the GDPR, which came into force in the EU a few years ago. It gives residents of the EU more control over their personal data and how businesses use it.

Company Obligation

Most social data on users is held by big firms like Google, Facebook, Amazon, Twitter, YouTube, Instagram, and LinkedIn. Given their reputation as industry titans, these companies should take extra precautions to prevent any data leaks, whether deliberate or accidental.

AI Community Engagement

The AI community, and especially its thought leaders, should speak out against the unethical use of AI on consumers' personal data without their permission. They should also spread the word that this behavior can have terrible social repercussions. Encouragingly, many institutions now offer courses in AI ethics and teach the subject.

Responsibility of the User

Last but not least, we must keep in mind that government regulations are merely policies, and it remains our own responsibility to protect ourselves. We must be cautious about the information we post on social media sites and mobile applications, and we should always check the permissions we grant them to view and use our data. We should not just "accept" whatever is stated in the "terms and conditions" presented to us on these online platforms.

Conclusion

The ethics of AI is a major concern in the field because of the prejudices and social biases the technology can foster. Using AI to process personal data without people's consent, and then misusing that data, intensifies these ethical questions further. The deployment of AI in the real world is still in its infancy, so it is up to all of us to take proactive measures to ensure that building AI on the improper use of personal data does not become commonplace in the future.