Google Taps AI for Its ‘Threat Intelligence’ Cybersecurity Plan

Google introduced Threat Intelligence at the RSA Conference in San Francisco this week. Promising actionable information with "visibility only Google can deliver, based on billions of signals across devices and emails," Threat Intelligence draws on the capabilities of the company's Gemini LLMs, its Mandiant cybersecurity arm, and the cloud-based VirusTotal tool. An AI-powered Gemini agent "provides conversational search" across the Threat Intelligence repository, "enabling customers to gain insights and protect themselves from threats faster than ever before," Google says, positioning the service as a way to give even small teams without dedicated IT departments real threat protection.

Threat Intelligence “uses the Gemini 1.5 Pro large language model, which Google says reduces the time needed to reverse engineer malware attacks,” writes The Verge.

Google states in a Threat Intelligence blog post that Gemini 1.5 Pro, released earlier this year, was able to analyze the WannaCry ransomware and identify its kill switch in a single, 34-second pass.
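
To make the idea concrete, here is a minimal sketch of what handing decompiled code to Gemini 1.5 Pro could look like using Google's public google-generativeai Python SDK. The file name, prompt wording, and surrounding workflow are illustrative assumptions, not Google's Threat Intelligence pipeline.

```python
# Minimal sketch: asking Gemini 1.5 Pro to review decompiled malware code.
# The file name, prompt, and workflow are illustrative assumptions.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumes a Gemini API key

model = genai.GenerativeModel("gemini-1.5-pro")

# Hypothetical decompiled source produced by a separate reverse-engineering step.
with open("wannacry_decompiled.c", "r", errors="ignore") as f:
    decompiled_code = f.read()

prompt = (
    "You are assisting a malware analyst. Review the decompiled code below, "
    "summarize what the program does, and point out any kill-switch logic "
    "(for example, a domain check that aborts execution).\n\n"
    + decompiled_code
)

response = model.generate_content(prompt)
print(response.text)
```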

The Alphabet company also says Gemini 1.5 Pro offers the world’s longest context window, with support for up to 1 million tokens, making it “one of the most advanced malware-analysis techniques available.”
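
Because the same SDK exposes a token-counting call, an analyst could check whether a large body of decompiled code actually fits within that 1-million-token window before sending it in one pass. The file name and chunking note below are assumptions for illustration.

```python
# Minimal sketch: verifying an input fits in Gemini 1.5 Pro's
# 1-million-token context window before submitting it in a single pass.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")

with open("wannacry_decompiled.c", "r", errors="ignore") as f:
    decompiled_code = f.read()

token_count = model.count_tokens(decompiled_code).total_tokens
print(f"Input size: {token_count} tokens")

if token_count <= 1_000_000:
    print("Fits in a single request; no chunking needed.")
else:
    print("Too large for one pass; split the code into chunks.")
```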

While Google has digital eyes on a “vast repository” of electronic data, “Mandiant provides human experts who monitor potentially malicious groups and consultants who work with companies to block attacks,” reports The Verge, noting that “VirusTotal’s community also regularly posts threat indicators.”

Google plans to use Mandiant staff “to assess security vulnerabilities around AI projects,” The Verge adds, explaining that “through Google’s Secure AI Framework, Mandiant will test the defenses of AI models and help in red-teaming efforts.”

While conversational LLMs are helpful for tasks like summarizing threats and reverse engineering malware, the models themselves are sometimes targets of malicious attacks. For instance, "data poisoning" seeds the data an AI model scrapes for training with bad code or misleading content, sabotaging the model's responses to otherwise appropriate prompts.
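
As a rough illustration of the principle (unrelated to Gemini itself), the toy scikit-learn example below shows how a handful of deliberately mislabeled samples planted in scraped training data can flip a simple text classifier's verdict; the texts and labels are invented for demonstration.

```python
# Toy illustration of data poisoning: mislabeled examples injected into a
# "scraped" training set flip a simple text classifier's decision.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# A tiny training set for a malicious-vs-benign text classifier.
clean_texts = [
    "powershell -enc payload download execute",      # malicious
    "disable defender and exfiltrate credentials",   # malicious
    "quarterly report attached for review",          # benign
    "meeting moved to 3pm tomorrow",                  # benign
]
clean_labels = ["malicious", "malicious", "benign", "benign"]

# An attacker plants lookalike samples mislabeled as benign in the scraped data.
poisoned_texts = clean_texts + [
    "powershell -enc payload backup exfiltrate credentials daily",
    "routine powershell -enc payload task exfiltrate credentials",
    "powershell -enc payload sync exfiltrate credentials archive",
]
poisoned_labels = clean_labels + ["benign", "benign", "benign"]

def train_and_classify(texts, labels, sample):
    """Fit a bag-of-words Naive Bayes model and classify one sample."""
    vectorizer = CountVectorizer()
    features = vectorizer.fit_transform(texts)
    clf = MultinomialNB().fit(features, labels)
    return clf.predict(vectorizer.transform([sample]))[0]

sample = "powershell -enc payload exfiltrate credentials"
print("Clean model:   ", train_and_classify(clean_texts, clean_labels, sample))
print("Poisoned model:", train_and_classify(poisoned_texts, poisoned_labels, sample))
# In this toy setup the clean model flags the sample as malicious,
# while the poisoned model calls it benign.
```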

“Threat intelligence can be difficult in the modern enterprise landscape: Attackers are working at all levels, data and tools are scattered and observability is blurred,” writes VentureBeat, noting that high-functioning security teams must not only actively monitor situational threats, but also stay apprised of up-to-the-minute research on vulnerabilities.

Google Threat Intelligence strives to combine the two, adding the power of AI.

Google Cloud VP of Engineering for Cloud Security Eric Doerr tells VentureBeat that Threat Intelligence aims for “the right breadth and depth,” whereas providers have typically focused on one or the other.

Microsoft has also been focusing on cybersecurity. The Verge points out that the company has a competing product, Copilot for Security, “powered by GPT-4 and Microsoft’s cybersecurity-specific AI model,” which like the new Google offering “lets cybersecurity professionals ask questions about threats.”
