
CSA With Google Cloud: C-Suite-Driven Cybersecurity


If the cybersecurity “department of no” stereotype held true, security teams and CISOs would be refusing to touch generative AI solutions.

Yes, AI has risks, but many security professionals have already experimented with it and don’t think it’s coming for their jobs. They know AI can be useful.

A new State of AI and Security Survey Report from the Cloud Security Alliance (CSA) and Google Cloud predicts that more than half of enterprises will employ generative AI security technologies by year’s end.

“When we hear about AI, it’s the assumption that everyone is scared,” said CSA AI security alliance chair Caleb Sima. “Every CISO is saying no to AI, it’s a huge security risk, problem.”

However, “AI is transforming cybersecurity, offering both exciting opportunities and complex challenges.”

Growing Implementation—And Disconnect

The survey found that 67% of security professionals have tested AI for security tasks, and 55% of enterprises plan to use AI security tools this year for rule formulation, attack simulation, compliance violation detection, network detection, false positive reduction, and anomaly classification. That effort is being driven from the top: 82% of respondents said the push comes from the C-suite.

Credit: Google Cloud/CSA

Contrary to the stereotype, only 12% of security professionals anticipated that AI would replace them. About 30% thought the technology would enhance their skills, 28% that it would support their work generally, and 24% that it would replace large portions of their job. Most (63%) thought it could improve security.

“For certain jobs, there’s a lot of happiness that a machine is taking it,” said Anton Chuvakin, security advisor in Google Cloud’s Office of the CISO.

Sima said, “most people are more inclined to think that it’s augmenting their jobs.”

There is a disconnect, however: 52% of C-suite respondents reported familiarity with AI, compared with just 11% of staff, and 51% of executives could identify clear use cases, versus 14% of workers.

“Let’s be honest, most staff don’t have time,” remarked Sima. Their executives, by contrast, are immersed daily in AI talk from other leaders, podcasts, news sites, journals, and more.

He added that the gap between the C-suite and staff in AI awareness and adoption underscores the need for a deliberate, cohesive approach to integrating the technology.

AI In Cybersecurity, In The Wild

Sima said AI’s top cybersecurity use today is reporting. Usually, a security team member manually gathers tool outputs, taking “not a small chunk of time.” But “AI can do that much faster, much better,” he said. AI can also review and automate policies and playbooks.

More proactively, it can detect threats, run endpoint detection and response, find and fix code vulnerabilities, and suggest remediation steps.

Sima remarked that “How do I triage these things?” is a hot topic: “There’s lots of info and alerts. In the security industry, we are good at detecting terrible things, but not at prioritizing them.”

The noise makes it hard to tell “what’s real, what’s not, what’s prioritized,” he said.

AI, however, can swiftly identify phishing emails. In seconds, a model can pull in data on the email’s sender and recipient and the reputation of any linked sites, while reasoning about the threat, the delivery chain, and the communication history. Sima estimated a human analyst would need 5 to 10 minutes to do the same validation.

“They can now confidently say ‘This is phishing,’ or ‘This is not phishing,’” he added. “It’s great. Today, it works.”
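The kind of signal aggregation Sima describes can be sketched as a toy score over a few email features. Everything below is illustrative: the signals, weights, and thresholds are invented for this sketch, and a real system would use an LLM or trained model over much richer context rather than hand-tuned rules.

```python
# Toy sketch of phishing triage by aggregating a few signals.
# Signals, weights, and thresholds are hypothetical stand-ins for
# what a model would weigh (sender/link reputation, history, etc.).
from dataclasses import dataclass

@dataclass
class EmailSignals:
    sender_known: bool        # prior correspondence with this sender's domain?
    flagged_links: int        # linked domains on a (hypothetical) blocklist feed
    urgency_keywords: int     # count of phrases like "verify now", "account suspended"

def triage(s: EmailSignals) -> str:
    """Return a coarse verdict from a weighted score (thresholds are made up)."""
    score = 0
    if not s.sender_known:
        score += 2                    # unknown sender raises suspicion
    score += 3 * s.flagged_links      # bad link reputation weighs heavily
    score += s.urgency_keywords       # pressure tactics add a little each
    if score >= 4:
        return "phishing"
    if score >= 2:
        return "suspicious"
    return "benign"

# Unknown sender + one flagged link + urgent wording scores 7 -> "phishing";
# a known correspondent with clean links scores 0 -> "benign".
print(triage(EmailSignals(False, 1, 2)))
print(triage(EmailSignals(True, 0, 0)))
```

The point of the sketch is the shape of the workflow, collecting reputational signals and emitting a confident verdict in milliseconds, which is the part Sima says currently costs an analyst several minutes per email.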

Executives Pushing, But A Slump Ahead

Chuvakin referred to an “infection among leaders” in AI cybersecurity. They want to use AI to fill skills and knowledge gaps, detect threats faster, boost productivity, eliminate errors and misconfigurations, and respond to incidents faster.

He added, “We will hit the trough of disillusionment in this.” We are “close to the peak” of the Hype Cycle, he said, since AI has attracted a lot of attention and money while its use cases remain unclear.

The goal is to find and apply realistic use cases that will prove themselves, and feel “magical,” by year’s end.

Once those genuine examples exist, Chuvakin said, “security thoughts are going to change drastically around AI.”

AI Pushing Low-Hanging Fruit Downward

But alongside the enthusiasm, risk remains: 31% of respondents to the Google Cloud-CSA poll saw AI as equally useful to defenders and attackers, and a further 25% believed AI could prove more beneficial to malicious actors.

“Attackers are always at an advantage because they can use technologies significantly faster,” said Sima.

Many have compared AI to the earlier shift to the cloud: “What did the cloud do? Cloud allows scalable attacks.”

Instead of targeting one victim, threat actors can target everyone, and AI will help them be both more focused and more sophisticated.

Sima noted that a model could trawl LinkedIn to gather the details for a convincing phishing email.

He stated, “It allows me to be personalized at scale.” “It lowers that low-hanging fruit.”