
CIA AI Director Lakshmi Raman: ‘Thoughtful’ Approach To AI


TechCrunch interviewed Lakshmi Raman, the director of AI at the CIA, as part of its ongoing Women in AI series, which aims to give women academics and other women working in AI their well-deserved and long-overdue moment in the spotlight. The conversation covered her path to the director role, the CIA's use of AI, and the balance the agency must strike between embracing new technology and deploying it responsibly.

Raman has a long history in intelligence. After receiving her bachelor's degree from the University of Illinois Urbana-Champaign and her master's degree in computer science from the University of Chicago, she joined the CIA in 2002 as a software developer. A few years later she moved into management at the agency, eventually rising to lead the CIA's enterprise data science efforts as a whole.

Raman notes that, given the traditionally male-dominated ranks of the intelligence community, she was fortunate to have women role models and predecessors at the CIA to draw on.

“There are still individuals I can seek guidance from, who I can consult for advice, and who I can approach to learn about what the next phase of leadership entails,” she said. “I believe that every woman has to deal with certain issues when navigating her career.”

AI As A Tool For Intelligence

As the CIA's director of AI, Raman plans, coordinates, and directs all of the agency's AI initiatives. “We believe AI is here to support our mission,” she said. “Humans and machines working together are at the forefront of our use of artificial intelligence.”

The CIA is not new to AI. According to Raman, the agency has been exploring data science and AI since around 2000, focusing mainly on natural language processing (analyzing text), computer vision (analyzing images), and video analytics. She also mentioned that the CIA maintains a roadmap, informed by both industry and academia, to stay abreast of emerging trends like generative AI.

“Content triage is an area where generative AI can make a difference, especially when we consider the enormous amounts of data that we have to consume within the agency,” Raman stated. “We’re looking at things like ideation assistance, search and discovery assistance, and assistance in generating counterarguments to help counter any potential analytic bias.”

The U.S. intelligence community is under pressure to deploy any tool that could help it contend with rising geopolitical tensions, from disinformation campaigns by foreign actors (such as China and Russia) to terror threats spurred by the war in Gaza. Last year, the Special Competitive Studies Project, a highly influential advisory group focused on AI in national security, set a two-year timeline for domestic intelligence services to move past experimentation and small-scale pilot projects toward widespread adoption of generative AI.

Osiris, a generative AI tool created by the CIA, is similar to OpenAI's ChatGPT but tailored for intelligence use cases. It summarizes data (for now, only declassified and publicly or commercially available data) and lets analysts dig deeper by asking follow-up questions in plain language.

Thousands of analysts now use Osiris, not just within the CIA but across the 18 agencies that make up the U.S. intelligence community. Raman said the CIA has agreements in place with reputable suppliers, though she would not say whether the tool was built in-house or with the help of outside firms.

“We do use commercial services,” Raman said, adding that the CIA also employs AI for tasks such as translation and alerting analysts after hours to potentially significant developments. The agency, she said, must be able to work closely with the private sector “to be able to help us not only provide the larger services and solutions that you’ve heard of, but even more niche services from non-traditional vendors that you might not already think of.”

An Unstable Technology

There are plenty of reasons to be skeptical of, and concerned about, the CIA's use of AI.

Despite being generally barred from investigating Americans and American businesses, the CIA has a secret, undisclosed data repository that includes information collected on U.S. citizens, according to a public letter released in February 2022 by Senators Ron Wyden (D-OR) and Martin Heinrich (D-NM). Furthermore, a report released last year by the Office of the Director of National Intelligence found that the CIA and other American intelligence agencies buy data on Americans from data brokers such as LexisNexis and Sayari Analytics with little oversight.

There is no doubt that many Americans would object if the CIA ever used AI to sift through this material. Doing so would be a blatant violation of civil liberties and, given AI's limitations, could lead to deeply unjust outcomes.

Numerous studies have shown that crime-prediction algorithms from companies such as Geolitica are easily skewed by arrest rates and disproportionately flag communities of color. Other studies have found that facial recognition misidentifies people of color at higher rates than white people.

Beyond bias, even today's best AI hallucinates, fabricating information in response to queries. Consider Microsoft's meeting summarization software, which sometimes attributes quotes to people who do not exist. It is easy to see how this could cause problems in intelligence work, where accuracy and verifiability are critical.

Raman insisted that the CIA not only complies with all applicable U.S. law but “follows all ethical guidelines” and uses AI “in a way that mitigates bias.”

She described it as “a thoughtful approach to AI”: the agency wants its users to understand as much as they can about the AI systems they are using, and it treats responsible AI development as requiring the participation of all relevant parties, including AI developers and its office for privacy and civil liberties.

To Raman's point, it is critical that a system's designers understand where it can fall short, whatever the system is meant to do. In a recent study, researchers at North Carolina State University found that police were using AI tools, such as facial recognition and gunshot detection algorithms, without a proper understanding of the technologies' drawbacks.

In one particularly egregious case of law enforcement AI misuse, possibly born of ignorance, the NYPD reportedly used sketches, distorted photographs, and celebrity photos to generate facial recognition matches on suspects when surveillance stills yielded no results.

“Any AI-generated output should be easily understood by the users, which naturally entails labeling AI-generated content and offering concise explanations of how AI systems operate,” Raman said, adding that everything the agency does adheres to the applicable laws, regulations, and guidelines governing its AI systems, and that it makes sure its users, partners, and stakeholders are aware of those rules.

That’s what this reporter really hopes is true.