The Social Impact Of AI And Concerns About Data Privacy

Category: Workflow Automation


Many already believe that the current era of AI and Big Data marks the beginning of a fourth industrial revolution that will reshape the world in the years to come. Artificial intelligence is already woven into our daily lives, as seen in Google searches, map navigation, voice assistants like Alexa, and tailored recommendations on platforms such as Facebook, Netflix, Amazon, and YouTube. These are only a handful of examples, and many of us may not even notice them. In fact, according to one market forecast, the artificial intelligence market is expected to reach an astounding $169.41 billion by 2025.

But AI also has a flaw that seriously jeopardizes society and privacy: some companies collect and use large amounts of user data in their AI-based systems without the users' consent or knowledge, which can have unsettling social consequences.

Is The Data You Own Private?

Every time you search the web, browse websites, or use mobile apps, you may be sending personal data, deliberately or inadvertently, without even being aware of it. Additionally, by checking the box on a service's terms and conditions, you frequently grant these businesses permission to lawfully collect and handle your data.

Along with the information you freely provide to these websites, including your name, age, emails, contacts, videos, and photo uploads, you also permit them to collect information about your browsing patterns, clicks, likes, and dislikes. Reputable companies like Facebook and Google use this data to improve their services rather than selling it to third parties. Nevertheless, there have been instances where hackers or security holes have allowed unauthorized parties to steal private user information. Many companies lure customers to their online services specifically to gather user data, which they then sell to other companies for astronomical prices.

The problem has gotten worse with the rise of rogue mobile apps whose primary objective is to collect data from the phone without requesting permission. These apps usually gather information while posing as games or entertainment. Modern smartphones hold incredibly sensitive information, such as private images, videos, GPS locations, call records, and texts, and these apps can take it without our awareness. Apple's App Store and Google's Play Store sometimes remove these rogue programs, but often not before millions of downloads have occurred.

Why Are Artificial Intelligence And Data Privacy Such Big Issues?

People are growing increasingly aware of how vulnerable their data is on the internet. But most people still don't realize how dangerous it is when AI-powered programs misuse their social media accounts.

To gain a better understanding of the type of hazard we are discussing, let us review a few well-known incidents.

The Cambridge Analytica-Facebook Scandal

As was made public in 2018, the data analytics firm Cambridge Analytica used Facebook likes to analyze users' psychological and social activity in order to target them with advertisements during the 2016 US presidential election.

The issue was that Facebook did not sell user data for these kinds of uses. It emerged that a developer had created a Facebook quiz app that exploited a security hole in a Facebook API to obtain data on users and their friends. He later sold that data to Cambridge Analytica, a firm that was accused of accessing and mining Facebook user data in an unethical manner to significantly influence the results of the 2016 US presidential election. Worst of all, those who used the quiz app had blindly given it complete access to their personal information without thinking twice.

Clearview’s Face Recognition Scandal

The artificial intelligence startup Clearview created a facial recognition technology to help police identify offenders. The company claims that multiple pedophiles, terrorists, and sex traffickers have been apprehended with the help of its software.

However, to build the AI system, Clearview scraped over three billion user photos from websites including Facebook, YouTube, Twitter, Instagram, and Venmo, making a mockery of data privacy. This was detailed in a lengthy article The New York Times published in January 2020. The company's CEO, Mr. Ton-That, said the technology collected only public photos from these networks. Yet in a CNN Business interview, the program surfaced photos it had taken from the producer's own Instagram account.

Google, Facebook, YouTube, and Twitter sent cease-and-desist letters to Clearview demanding that it stop scraping photographs from their platforms. Even so, the images you have uploaded to the internet could already be included in that AI software without your knowledge. If this software ends up in the hands of a dishonest police officer, or if the system produces false positives, many innocent people could find themselves the focus of police investigations.

Deep Fakes

Deepfakes are videos or pictures created using deep learning that show real individuals doing or saying things they never did. When used for entertainment, deepfakes can be harmless fun, but far worse kinds, such as fake news and disinformation, are being created.

In 2019, a tool called DeepNude was released that let users upload any picture of a woman and have it converted into a realistic-looking nude image. The thought that someone could exploit photos of women that are freely available online by feeding them to DeepNude is deeply unnerving. The tool was shut down after intense controversy, but it may not be long before someone misuses your publicly viewable videos or images with deepfake technology.

Chinese Mass Monitoring

China has recently come under intense criticism for secretly monitoring a large number of its citizens. With the aid of more than 200 million surveillance cameras and facial recognition technologies, the country continuously monitors its people and analyzes the captured footage.

To make matters worse, China implemented a social credit system that rates citizens on an assessment of their trustworthiness based on this surveillance. People with excellent scores are granted additional benefits, while those with poor scores are denied them. Worst of all, this is being determined by AI-based surveillance without the subjects' consent or awareness.

How To Prevent Data Misuse With AI

The case studies above should make clear that employing artificial intelligence on unethically handled private data can have very detrimental social effects. Let's examine the steps we can take to prevent artificial intelligence from misusing personal information.

The Government's Responsibility

Many countries have now created their own data protection laws in an effort to promote transparency between these internet corporations and the general public. Most of these regulations aim to give users greater control over the data they disclose and to inform them of how the platform will use it. A well-known example is the GDPR, which went into effect across EU member states in 2018 and gives EU citizens greater control over how enterprises use their data.

Business Responsibility

Large companies like Google, Facebook, Amazon, Twitter, YouTube, Instagram, and LinkedIn hold the majority of users' social data. As titans of the industry, these companies should take extra care to prevent any data leaks, whether intentional or accidental.

The AI Community's Responsibility

The AI community, especially its thought leaders, should speak out against the unethical use of AI on consumers' personal data without their consent and warn others about the awful social consequences this conduct can have. Numerous institutions already teach and offer courses on AI ethics.

User’s Responsibilities

Last but not least, we need to remember that even with laws in place, they are only policies; the responsibility to act carefully still falls on us. We must exercise caution when sharing information on social media platforms and mobile apps, and we should always verify the permissions we grant them to access and use our data. The "terms and conditions" these internet platforms present to us should not be blindly accepted.

In Summary

The field of artificial intelligence is already deeply concerned about AI's ethical implications because of its potential to propagate prejudice and social bias. Using AI to handle personal data without consent, and then abusing that data, raises even more moral concerns about the technology. Since AI is still in its early stages of development, it is our collective responsibility to take preventive action now so that the illicit use of personal data to build AI does not become the norm in the future.