As people explore new applications for generative AI beyond generating fake images, Google is directing its AI efforts towards cybersecurity to simplify threat reports.
In a recent blog post, Google announced its new cybersecurity product, Google Threat Intelligence, which integrates the expertise of its Mandiant cybersecurity division and VirusTotal threat intelligence with the Gemini AI model.
The product leverages the Gemini 1.5 Pro large language model to speed up the analysis of malware attacks. Google claims that Gemini 1.5 Pro, launched in February, analyzed the code of WannaCry, the malware behind the 2017 global ransomware attack, and identified a kill switch in just 34 seconds. That capability is not surprising, given how proficient large language models (LLMs) are at reading code.
Another potential application of Gemini in cybersecurity is summarizing threat reports in natural language within Threat Intelligence. The feature aims to help companies gauge how serious a potential attack actually is, so they neither overreact nor underreact.
Google highlights that Threat Intelligence provides a comprehensive network of information to monitor potential threats proactively. Mandiant offers human experts who monitor suspicious groups, along with consultants who assist companies in thwarting attacks. Additionally, VirusTotal’s community regularly shares threat indicators.
Google acquired Mandiant, known for uncovering the 2020 SolarWinds cyber attack, in 2022. Furthermore, the company plans to utilize Mandiant’s expertise to assess security vulnerabilities in AI projects. Through Google’s Secure AI Framework, Mandiant will evaluate AI model defenses and contribute to red-teaming efforts.
While AI models can help summarize threats and analyze malware, the models themselves can become targets for malicious actors. Google aims to address concerns like "data poisoning," in which attackers plant corrupted samples in the data scraped for training, degrading a model's performance or rendering it ineffective.
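To make the idea concrete, here is a minimal, purely illustrative sketch of data poisoning: a toy 1-D threshold classifier is trained on clean data, then an attacker injects a handful of mislabeled points that drag the learned decision boundary far off, wrecking accuracy on clean examples. All data, names, and numbers below are synthetic assumptions for illustration, not any real attack or any part of Google's products.

```python
# Toy data-poisoning demo: mislabeled injected points shift a simple
# 1-D threshold classifier's decision boundary. Synthetic example only.

def train_threshold(samples):
    """Fit a 1-D classifier: threshold halfway between the class means."""
    xs0 = [x for x, y in samples if y == 0]
    xs1 = [x for x, y in samples if y == 1]
    mean0 = sum(xs0) / len(xs0)
    mean1 = sum(xs1) / len(xs1)
    return (mean0 + mean1) / 2

def accuracy(threshold, samples):
    """Fraction of samples where (x > threshold) matches the true label."""
    correct = sum(1 for x, y in samples if (x > threshold) == (y == 1))
    return correct / len(samples)

# Clean training data: class 0 clusters near 0, class 1 near 10.
clean = [(i / 10, 0) for i in range(10)] + [(10 + i / 10, 1) for i in range(10)]
test = clean  # evaluate on the clean distribution

# Poisoning: the attacker injects far-out points with flipped labels,
# dragging the class-1 mean (and thus the threshold) way down.
poison = [(-50.0, 1)] * 5
poisoned = clean + poison

clean_thr = train_threshold(clean)
bad_thr = train_threshold(poisoned)
print(f"clean threshold:    {clean_thr:6.2f}  accuracy {accuracy(clean_thr, test):.2f}")
print(f"poisoned threshold: {bad_thr:6.2f}  accuracy {accuracy(bad_thr, test):.2f}")
```

Even this crude model shows the mechanism: the poisoned threshold ends up below every clean class-0 point, so half the clean test set is misclassified. Real attacks on LLM training corpora are far subtler, which is why Google flags poisoning as a concern for scraped data.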
Google isn’t alone in integrating AI with cybersecurity. Microsoft introduced Copilot for Security, powered by GPT-4 and a Microsoft cybersecurity-specific AI model, which lets security professionals ask questions about threats. Whether these are genuinely good uses of generative AI remains to be seen, but it’s encouraging to see the technology applied to something beyond image generation.