
Be Wary Of Privacy Violating Artificial Intelligence Applications


It has been said that "smart business looks at how to use data to find out information while remaining in compliance with regulatory rules." Privacy matters to the electorate, and the electorate includes employees. One urgent concern is whether privacy or ethical transgressions in technologies such as artificial intelligence (AI) will matter enough to workers who may be more focused on putting food on the table than on raising issues or filing whistleblower reports, given the potential job consequences of doing so. What happens, too, if a country, region, or sector is too young to have substantial regulatory norms to adhere to? Does that mean anything goes? The "smart business" would not be breaking any laws, after all.

Given that the law is silent on many privacy-related concerns, is merely following regulations, some of which may be substantially outdated and out of step with technological advances, truly adequate due diligence for a "smart business"? Furthermore, local privacy regulations may be unable to stop international corporations from employing AI to violate the privacy rights of local residents.

Why do 98% of Americans still think they should be able to choose how their data is shared? Why do 79% of Indians still find the sale of their data to outside parties unsettling? Why do 74% of respondents worldwide still worry about their data? If privacy regulation and compliance were truly adequate, would these attitudes persist? Evidently, regulatory rules are not always as effective as popular belief would suggest, particularly when it comes to "smart business." Stated differently, mere observance of privacy rules is a necessary but far from sufficient condition for truly astute (and ethical) business.

Introducing Artificial Intelligence (AI) Privacy Protection Challenges

One may argue that astute firms search for technical means of accomplishing their strategic objectives, and although artificial intelligence is still in its early stages of development, certain use cases already profit from its advancement. Yet organizations have few compelling incentives to "… integrate privacy safeguards into their systems." Recent years have seen significant privacy abuses that made for dramatic headlines, but the guilty businesses ultimately faced very few consequences.

From the perspective of privacy by design, artificial intelligence has been no different from previous technologies: privacy has not been given priority during development. The processing of personal data by AI poses a risk to people's rights and freedoms comparable to the risk posed by data breaches, yet it likewise carries very little "fallout" for the corporations involved. AI presents a number of privacy concerns, including:

Data persistence: because data storage keeps becoming more economical, data survives longer than the humans who generated it.

Data repurposing: data is used for objectives apart from its initially intended use.

Data spillovers: information is obtained about people who were never the subject of the data collection.

Data obtained using AI also raises concerns about obtaining freely given, informed consent, limiting data collection, specifying the purpose of processing, and honoring requests to have data erased. But how would the people whose data was collected, possibly as a consequence of a spillover effect, even know it had been collected, so that they could contact the companies and request that their own data be removed?
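To make these obligations concrete, here is a minimal sketch of a personal-data registry that refuses collection without informed consent, blocks repurposing beyond the declared goal, and honors erasure requests. All class and method names are hypothetical illustrations of the principles above, not a compliance tool.

```python
# A minimal sketch (not a compliance tool) of a personal-data registry that
# enforces informed consent, purpose limitation, and erasure on request.
# All names here are hypothetical illustrations.
from dataclasses import dataclass


@dataclass
class Record:
    subject_id: str
    data: dict
    purpose: str          # the single purpose declared at collection time
    consent_given: bool   # freely given, informed consent


class PersonalDataRegistry:
    def __init__(self):
        self._records = {}

    def collect(self, subject_id, data, purpose, consent_given):
        # Refuse collection without freely given, informed consent.
        if not consent_given:
            raise PermissionError("No freely given, informed consent recorded.")
        self._records[subject_id] = Record(subject_id, data, purpose, consent_given)

    def use(self, subject_id, purpose):
        record = self._records[subject_id]
        # Purpose limitation: block "data repurposing" beyond the declared goal.
        if purpose != record.purpose:
            raise PermissionError(
                f"Use for '{purpose}' exceeds declared purpose '{record.purpose}'."
            )
        return record.data

    def erase(self, subject_id):
        # Right to erasure: remove the subject's data on request.
        self._records.pop(subject_id, None)


registry = PersonalDataRegistry()
registry.collect("alice", {"email": "a@example.com"}, "account_setup", consent_given=True)
registry.use("alice", "account_setup")   # allowed
# registry.use("alice", "ad_targeting")  # would raise PermissionError
registry.erase("alice")
```

Of course, such controls only help the people an organization knows about; spillover subjects never reach the registry in the first place.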

Article 22 of the General Data Protection Regulation gives all EU residents the right "not to be subject to a decision based solely on automated processing, including profiling," with very limited exceptions. Since any human involvement in the AI process (perhaps introduced to escape the "solely automated" condition) cannot be token and must genuinely affect the outcome, what does this entail for automated artificial intelligence algorithms applied to EU subjects?
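As a thought experiment, the sketch below shows one way a decision pipeline might route EU data subjects to a human reviewer who has genuine authority to override the model. The scoring function and every name in it are hypothetical; this illustrates the "meaningful human involvement" point rather than any established compliance pattern.

```python
# A hypothetical sketch of Article 22-aware decision routing. The reviewer
# must be able to override the model; returning its suggestion unchanged
# would be exactly the token involvement the regulation guards against.

def model_score(application):
    # Stand-in for an automated scoring model (hypothetical logic).
    return 0.5 if application.get("income", 0) > 30000 else 0.2

def decide(application, is_eu_subject, human_review):
    score = model_score(application)
    automated_decision = score >= 0.5
    if is_eu_subject:
        # Article 22: no decision based solely on automated processing,
        # so a human with real authority reviews the case.
        return human_review(application, score, automated_decision)
    return automated_decision

def reviewer(application, score, suggestion):
    # A genuine reviewer weighs evidence beyond the model's output.
    return suggestion and bool(application.get("documents_verified"))

print(decide({"income": 40000, "documents_verified": True},
             is_eu_subject=True, human_review=reviewer))  # True
```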

AI’s Threats To Privacy And Democracy

The Economist reports that half of the world's nations scored lower on democracy in 2017 than they did in 2016, mostly as a result of a drop in popular confidence in government and public institutions. According to the Director Journal, fewer than half of Canadians trust their government, business, media, non-governmental organizations, and their leaders; in 2017, the 28th Governor General of Canada addressed this "disturbing" global pattern of growing mistrust in institutions for the first time.

Much of this loss of trust can be traced to the Cambridge Analytica scandal surrounding the 2016 US presidential election and to the privacy implications of AI. Threats to democracy continue to arise from AI tampering with democratic mechanisms. Another example is the US corporation Clearview AI's breach of privacy regulations in Canada: it gathered images of adults and even children in the country, without the individuals' permission, for mass surveillance, facial recognition, and commercial sale. This does nothing but further damage public trust in national governments' capacity to manage privacy and AI-related concerns. Independent, parallel investigations of Clearview AI are underway in Australia and the UK.

A Report On An Investigation From The Office Of The Privacy Commissioner

The Canadian Office of the Privacy Commissioner (OPC) notes that information scraping appears to be prohibited by the terms of service of websites such as Facebook, YouTube, Instagram, Twitter, and Venmo (section 6.ii). Despite Clearview AI's assertion that the information is publicly available on the internet and that consent is therefore not required, the OPC determined that express consent is required when handling particularly sensitive biometric information and/or when the collection, use, or disclosure goes "outside the reasonable expectations of the individual" (sections 38 and 40). Facial information is without a doubt among the most sensitive types of personal data. Given that data is forever in the digital world, would a reasonable person really consent to a for-profit organization using their biometric data for any purpose it chooses, where it could eventually morph into something else entirely? Almost certainly not.
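As a small illustration of the scraping issue, a crawler can at least honor a site's machine-readable crawl rules before collecting anything, using Python's standard urllib.robotparser. Two caveats: robots.txt is far narrower than a site's terms of service, which must still be read, and passing this check implies nothing about the consent of the people whose data appears on the page. The URL below is a placeholder.

```python
# A minimal sketch: check a site's robots.txt before scraping, using the
# Python standard library. Passing this check does NOT satisfy terms of
# service or data-subject consent; it is only a machine-readable floor.
from urllib.robotparser import RobotFileParser

def scraping_allowed(url, user_agent="example-research-bot"):
    parser = RobotFileParser()
    # Placeholder; a real crawler would derive this from 'url'.
    parser.set_url("https://www.example.com/robots.txt")
    parser.read()  # fetches and parses the robots.txt file
    return parser.can_fetch(user_agent, url)

print(scraping_allowed("https://www.example.com/profiles/12345"))
```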

Protected health information (PHI) and personally identifiable information (PII) are among the most sensitive data types, and, as the saying goes, "[data] is the lifeblood of AI." We therefore need to examine the extent to which AI employs biometrics, PHI, and PII, and whether the necessary precautions have been taken to ensure that, for instance, the levers of democracy are not being twisted.

The Time Has Come To Proactively Identify Possible Privacy Violations In AI Developments

Given that one of the primary concerns with AI is the auditability of its algorithms, how do we begin assessing whether privacy is maintained in AI deployments? As AI algorithms continually adapt to new data, how can we be certain that the system is computing what we believe it is computing, and that it is protecting privacy from both a legal and an ethical perspective?
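One modest starting point, sketched below with illustrative field names, is an append-only decision log that records the model version, a hash of the inputs, and the output of every decision, so an auditor can later reconstruct what the system actually computed even after the model has been retrained.

```python
# A minimal sketch of an append-only decision log for later AI audits.
# Field names are illustrative, not a standard schema.
import hashlib
import json
import time

def log_decision(log_path, model_version, features, output):
    entry = {
        "timestamp": time.time(),
        "model_version": model_version,  # ties the decision to one model build
        "input_hash": hashlib.sha256(    # hash, not raw data, to keep PII out of logs
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_decision("decisions.jsonl", "credit-model-v1", {"income": 40000}, "approved")
```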

Another serious concern is the potential for artificial intelligence to exaggerate, reinforce, or replicate unwanted biases. Depending on the kind of data collection employed, these biases can proliferate, potentially resulting in problems similar to the spillover effects described earlier.
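One common first screen for such bias is the "four-fifths rule," which flags any group whose positive-outcome rate falls below 80% of the most favored group's rate. The sketch below applies it to made-up data; the 0.8 threshold is a convention from US employment-selection guidance, not a universal legal standard.

```python
# A minimal sketch of the four-fifths (80%) rule as a disparate-impact
# screen. The sample data and the threshold are illustrative only.
from collections import defaultdict

def selection_rates(outcomes):
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in outcomes:        # outcome: 1 = favorable, 0 = not
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_flags(outcomes, threshold=0.8):
    rates = selection_rates(outcomes)
    best = max(rates.values())
    # Flag a group if its rate is under 80% of the most favored group's.
    return {g: (r / best) < threshold for g, r in rates.items()}

sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
print(disparate_impact_flags(sample))  # {'A': False, 'B': True}
```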

The problem with the way privacy and other vulnerabilities are audited today is that they are not found until after the system has been put into place and the negative effects have already been felt. When it comes to artificial intelligence, audits are necessary but insufficient, unlike internal audits in other domains, which frequently serve as a sufficient second line of defense. More checks and balances are required to support AI audits. The audit professional may use COBIT as a starting point and SMACTR as a possible AI audit methodology, considering, for example:

"Managed Innovation" (APO04), of which artificial intelligence is one instance, and more specifically the management practice of "monitoring the implementation and use of innovation" (APO04.06).

"Managed Business Process Controls" (DSS06), and specifically the management practice "Ensure Traceability and Accountability for Information Events" (DSS06.05), which addresses the nature of data and information collection and their flows.

Notably, beyond accountability as a governance construct, DSS06.05 also integrates a privacy construct, albeit a narrow one, concerning the determination and fulfillment of responsibilities for retaining data and data outcomes. This matters because retaining personally identifiable data creates privacy issues once the original justification for gathering and using it has run out.
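In that spirit, a minimal sketch of retention enforcement might look like the following: each record carries the purpose declared at collection, and records are purged once the retention period tied to that purpose lapses. The record layout and the retention periods are illustrative only.

```python
# A hypothetical sketch of retention enforcement in the spirit of DSS06.05:
# personal data is dropped once its declared justification expires.
import time

RETENTION_SECONDS = {                      # illustrative retention periods
    "account_setup": 90 * 86400,           # 90 days
    "fraud_review": 365 * 86400,           # 1 year
}

def purge_expired(records, now=None):
    now = time.time() if now is None else now
    kept = []
    for rec in records:
        deadline = rec["collected_at"] + RETENTION_SECONDS[rec["purpose"]]
        if now < deadline:
            kept.append(rec)   # still within its declared retention period
        # expired records are simply not kept, i.e., erased
    return kept

records = [
    {"subject": "alice", "purpose": "account_setup",
     "collected_at": time.time() - 100 * 86400},   # past 90-day retention
    {"subject": "bob", "purpose": "fraud_review",
     "collected_at": time.time() - 10 * 86400},
]
print([r["subject"] for r in purge_expired(records)])  # ['bob']
```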

Privacy In The AI Environment

Considerations for AI privacy are distinct from those for traditional data privacy. One of the challenges in maintaining privacy in artificial intelligence is figuring out how to create suitable laws that safeguard privacy without impeding technological development. The data contexts in question include not only the type of data and how it is used to build AI capabilities, but also the scanning processes that let AI tools learn about their surroundings. For the spillover effects covered in this blog post, the typical consent requirement placed on organizations wishing to use personal data is inadequate: no consent is sought when spillover data is collected, and the individuals captured in the data do not even know they are part of it.

Contrary to common assumption, consent is not a particularly powerful instrument, even though it must be freely given and informed. As the Clearview AI case illustrates, far less consent was obtained than the OPC requires. In a similar vein, Microsoft deleted its database of 10 million faces because the majority of the people whose faces appeared in it had no idea that their photographs had been included. Chinese surveillance corporations, IBM, Panasonic, Alibaba, and military researchers had all used the database.

There are two democratic issues that require additional legislative or regulatory action. They concern the sources from which artificial intelligence systems collect or retrieve data, and the methods by which the data is applied to sway political outcomes. Partisan results do, in fact, contradict at least one of Privacy by Design's seven foundational principles, which emphasizes positive-sum outcomes: engineering one political party's victory over another is a zero-sum conclusion, not the intended positive-sum result.

Most businesses employing sensitive personal data may already be concerned about privacy compliance and ethics, yet the challenges associated with artificial intelligence can vary significantly in both extent and content. Both the IT governance specialist and the privacy expert must ensure that these AI-based privacy issues are the focus of sufficient scrutiny.