Beware of Privacy Violations in Applications of Artificial Intelligence

Category: Data Privacy and Security

A statement to this effect has been made: “Privacy matters to the electorate, and smart business looks at how to use data to find out information while remaining in compliance with regulatory rules.” Since “smart business” also employs members of “the electorate,” one pressing concern is whether privacy or ethical violations in technologies such as artificial intelligence (AI) will matter enough to workers who may be more focused on putting food on the table than on raising issues or filing whistleblower reports, with the negative job consequences those could bring. And what happens if the nation, region, or industry is too young to have meaningful regulatory standards to follow? Does that mean anything goes from that point on? After all, the “smart business” in this situation won’t violate any laws.

Is merely adhering to regulations, some of which may be seriously out of date and out of step with technological advances, truly adequate due diligence for “smart business,” especially given that the law is silent on many privacy-related issues? Furthermore, local privacy laws may lack the enforcement power to stop foreign-based companies from violating local residents’ privacy rights using AI.

If privacy regulation and compliance were truly sufficient, why do 98% of Americans still believe they should have more control over the sharing of their data? Why are 79% of Indians still uncomfortable with the sale of their data to third parties? Why do 74% of people around the world still express concern about their data? Contrary to what some may believe, regulatory laws are not as effective as they should be, especially not for “smart business.” To put it another way, merely adhering to privacy laws is a necessary but wholly insufficient condition for truly smart (and ethical) business.

Privacy Protection Challenges in Artificial Intelligence (AI)

It may be claimed that smart organizations look for technological solutions to help them achieve their strategic goals, and even though artificial intelligence is still in its infancy, some use cases may benefit from its development. Yet organizations have had few incentives to “… include privacy protections into their systems.” Significant privacy violations in recent years have generated sensational headlines, but the firms at fault have ultimately suffered minimal repercussions.

Seen through a privacy-by-design lens, artificial intelligence has been no different from other technologies: privacy has not been prioritized in the creation of AI technology. In contrast to the risk created by data breaches, the processing of personal data by AI carries a substantial risk to individuals’ rights and freedoms while producing very little “fallout” for the firms involved. AI poses several privacy challenges, including:

  • Data persistence: thanks to affordable storage, data outlives the people who created it.
  • Data repurposing: data is used for purposes other than those originally intended.
  • Data spillover (sometimes called data leakage): information is gathered about individuals who were never the subjects of the data collection.

Data acquired via AI also raises privacy concerns around freely given informed consent, the ability to opt out, limits on data collection, clear statements of the purpose of AI processing, and even the ability to have data deleted on request. But how would individuals whose data was gathered, perhaps through a spillover effect, even know their information had been taken, so that they could contact companies about their own data or ask for it to be deleted?
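
To make these obligations concrete, here is a minimal Python sketch of a data store that refuses collection without consent and honors deletion requests. Everything in it (the `Record` shape, `DataStore`, `erase_subject`) is illustrative, not a real compliance API; an actual system would also have to cover opt-out, purpose limitation, and audit trails.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Record:
    subject_id: str
    purpose: str           # purpose declared at collection time
    consent_given: bool    # freely given, informed consent
    collected_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class DataStore:
    def __init__(self):
        self._records = []

    def collect(self, record):
        # Refuse to store anything gathered without explicit consent.
        if not record.consent_given:
            return False
        self._records.append(record)
        return True

    def erase_subject(self, subject_id):
        # Right-to-erasure handler: drop every record tied to the subject
        # and report how many records were removed.
        before = len(self._records)
        self._records = [r for r in self._records if r.subject_id != subject_id]
        return before - len(self._records)

store = DataStore()
store.collect(Record("alice", purpose="model training", consent_given=True))
print(store.erase_subject("alice"))  # -> 1
```

Note what the sketch cannot do: a person captured only as spillover never appears with a usable `subject_id`, so no deletion request can ever reach them. That is exactly the gap the paragraph above describes.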

With few exceptions, Article 22 of the General Data Protection Regulation grants EU data subjects the right “not to be subject to a decision based solely on automated processing, including profiling.” What does this mean for automated AI algorithms applied to EU subjects, given that any human intervention in the AI process (perhaps introduced to get around the “solely automated” condition) cannot be token and must demonstrably influence the outcome?
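
As a rough illustration of the “not solely automated” requirement, the sketch below routes every legally significant decision through a human reviewer who can overrule the model. The function names, data, and threshold are assumptions made for illustration; whether such a step counts as meaningful intervention under Article 22 is a legal question, not a coding one.

```python
def decide(applicant: dict, model_score, human_review) -> bool:
    """Return the final decision on an applicant.

    model_score(applicant) -> float risk score from the AI system
    human_review(applicant, score) -> bool final call made by a person
    """
    score = model_score(applicant)
    # The human reviewer sees the model's score but owns the outcome.
    # A reviewer who mechanically returns the model's answer would be a
    # rubber stamp, and the decision would remain "solely automated".
    return human_review(applicant, score)

# Usage with stand-in callables (a real reviewer would exercise judgment):
approved = decide(
    {"income": 42_000},
    model_score=lambda a: 0.73,
    human_review=lambda a, s: s < 0.8,  # placeholder reviewer logic
)
print(approved)  # -> True
```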

Threats From AI to Democracy and Privacy

According to The Economist, half of the countries in the world scored worse on democracy in 2017 than in the previous year, primarily because of declining trust in government and public institutions. Writing in the Director Journal in 2017, the 28th Governor General of Canada described the growing and “disturbing” global pattern of distrust in institutions; that same year, for the first time, fewer than half of Canadians reported trusting their government, business, media, non-governmental organizations, and their leaders.

The Cambridge Analytica scandal in the 2016 US presidential election, and its implications for privacy in artificial intelligence, played a significant part in this decline in confidence, and AI’s manipulation of democracy’s levers continues to fuel threats to democracy. Another illustration is the US company Clearview AI’s violation of Canadian privacy laws: it collected images of Canadian adults and even children for mass surveillance, facial recognition, and commercial sale, all without consent. Incidents like these only erode public confidence in the ability of entire nations to handle privacy and AI responsibly. Independent, concurrent investigations into Clearview AI are under way in Australia and the UK.

An Investigation Report From the Office of the Privacy Commissioner

According to the Canadian Office of the Privacy Commissioner (OPC), information scraping appears to violate the terms of service of websites including Facebook, YouTube, Instagram, Twitter, and Venmo (section 6.ii). Despite Clearview AI’s claim that the information is freely available on the internet and that no consent is necessary, the OPC finds that express consent is required when dealing with particularly sensitive biometric information and/or when the collection, use, or disclosure goes “outside the reasonable expectations of the individual” (sections 38 and 40). Facial information is without a doubt among the most sensitive types of personal data. Would a reasonable person really consent to a for-profit organization using their biometric data when that data could be used for anything, and when the purpose could change into anything else indefinitely (because data is forever in the digital world)? No, definitely not.

Given that “[data] is the lifeblood of AI,” personally identifiable information (PII) and protected health information (PHI) are among the most sensitive data of all. We must therefore investigate how heavily AI relies on PII, PHI, and biometrics, and whether appropriate caution has been taken to ensure, for example, that the levers of democracy are not twisted.

Proactively Identifying Potential Privacy Violations in AI Developments

How, then, can we begin to evaluate whether privacy is preserved in AI deployments, given that one of the main issues with AI is the auditability of its algorithms? How can we truly know that an AI system is computing what we think it is computing, and that it respects privacy both legally and ethically, when its algorithms may be constantly adapting to new information?

A major worry is artificial intelligence’s ability to reproduce, reinforce, or magnify undesirable biases. Depending on how data is collected, these biases can multiply, which can also lead to problems such as the spillover effects mentioned earlier.
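
One way to start quantifying such bias is a simple group-rate comparison. The sketch below computes a demographic parity gap, one of many possible fairness metrics and by no means sufficient on its own; the data and group labels are invented for illustration.

```python
def demographic_parity_gap(outcomes, groups):
    """Absolute difference in positive-outcome rates between two groups.

    outcomes: parallel list of 0/1 model decisions
    groups:   parallel list of group labels ("a" or "b")
    A gap near 0 suggests parity; a large gap is one warning sign that a
    system may be reproducing or amplifying a bias in its training data.
    """
    rates = {}
    for label in ("a", "b"):
        decisions = [o for o, g in zip(outcomes, groups) if g == label]
        rates[label] = sum(decisions) / len(decisions) if decisions else 0.0
    return abs(rates["a"] - rates["b"])

# Usage: group "a" is approved 75% of the time, group "b" only 25%.
print(demographic_parity_gap(
    [1, 1, 1, 0, 1, 0, 0, 0],
    ["a", "a", "a", "a", "b", "b", "b", "b"],
))  # -> 0.5
```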

The issue with current audit techniques for finding privacy and other flaws is that risks are not discovered until after the system has been implemented and a harmful effect has already been felt. And whereas internal audits often function as an adequate second line of defense elsewhere, in the case of artificial intelligence they act as necessary but insufficient controls; more checks and balances are needed to support AI audits. With SMACTR as a potential AI audit framework, the audit professional may also take COBIT into account as a starting point, considering:

  • “Managed Innovation” (APO04), of which AI is an example, and more precisely the management practice “Monitor the Implementation and Use of Innovation” (APO04.06)
  • “Managed Business Process Controls” (DSS06), in particular the management practice “Ensure Traceability and Accountability for Information Events” (DSS06.05), which addresses the nature of data and information gathering and their flows

Note that DSS06.05 incorporates a privacy construct, albeit a limited one, concerning the determination and fulfillment of data and data outcome retention obligations (accountability itself being a governance construct). The reason is that retaining personally identifiable data after the original purpose for collecting and processing it has expired raises privacy concerns of its own.
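
To suggest what DSS06.05-style traceability and retention enforcement might look like in code, here is a minimal sketch. The retention window, record shape, and log format are assumptions; an actual COBIT-aligned control would be far richer (tamper-evident logs, legal-hold exceptions, and so on).

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=365)  # illustrative window, not a mandated value
audit_log = []  # append-only trail: which data was touched, how, and when

def log_event(action: str, subject_id: str) -> None:
    audit_log.append({
        "ts": datetime.now(timezone.utc),
        "action": action,
        "subject": subject_id,
    })

def purge_expired(records):
    """Drop records whose original collection purpose has lapsed.

    Each record is a dict with "subject_id" and "collected_at"; anything
    older than the retention window is removed, and the removal is logged
    so an auditor can later verify that retention obligations were met.
    """
    now = datetime.now(timezone.utc)
    kept = []
    for rec in records:
        if now - rec["collected_at"] > RETENTION:
            log_event("purged_expired_record", rec["subject_id"])
        else:
            kept.append(rec)
    return kept
```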

Privacy in the Context of AI

AI privacy considerations differ from ordinary data privacy considerations. One of the difficulties in preserving privacy in artificial intelligence is developing appropriate policies that protect privacy without limiting advances in AI technology. The data contexts at issue include not only the nature of the data itself and how it is used to develop AI capabilities, but also the scanning mechanisms that allow AI tools to learn about their environments. The traditional consent requirement for organizations looking to use personal data is weak when applied to the spillover effects discussed in this blog post, because no consent is obtained when spillover data is collected and the people it describes do not even realize they are part of it.

Even though consent must be informed and freely given, it is not, contrary to popular belief, a particularly potent tool. The Clearview AI case shows that far less consent was obtained than the OPC requires. Similarly, Microsoft removed its database of 10 million facial images because most of the individuals whose faces appeared in the dataset were unaware their images had been included. The database had been used by companies such as IBM, Panasonic, and Alibaba, as well as military researchers and Chinese surveillance firms.

Where democracy is concerned, two issues need more regulatory and/or legislative attention: the sources of the data gathered or accessed by artificial intelligence systems, and how that data is used to influence political outcomes. Indeed, engineering partisan outcomes runs counter to at least one of Privacy by Design’s seven foundational principles, which calls for positive-sum outcomes; having one political party triumph over another produces a zero-sum result instead.

While most firms may be concerned with privacy compliance and ethics when using sensitive personal data, the problems in artificial intelligence can differ greatly in both scope and substance. It is the responsibility of both the IT governance professional and the privacy expert to ensure that these AI-based privacy concerns receive adequate oversight.