Cloud CISO Perspectives: New AI threats report: Distillation, experimentation, and integration

Welcome to the first Cloud CISO Perspectives for February 2026. Today, John Hultquist, chief analyst, Google Threat Intelligence Group, explains the research detailed in our newest AI Threat Tracker report.

As with all Cloud CISO Perspectives, the contents of this newsletter are posted to the Google Cloud blog. If you’re reading this on the website and you’d like to receive the email version, you can subscribe here.

New report on AI threats: Distillation, experimentation, and continued integration

By John Hultquist, chief analyst, Google Threat Intelligence Group

Monitoring the adoption and abuse of AI has become a major focus of Google Threat Intelligence Group. Over the past few years, we have watched threat actors experiment with AI and gradually incorporate it into their operations across the intrusion lifecycle, in ways that clearly pose a serious challenge to enterprise defenders.

Among the most concerning developments we have seen is experimentation with agentic capabilities, which are being used by actors like China-nexus group APT31 to automate reconnaissance and scale their operations. Other threat actors from North Korea and Iran have evolved from simply plugging AI into existing social engineering processes to using it as a dynamic tool that can develop social engineering itself and support complex interactions.

Model extraction attacks, attempts to distill a model’s underlying logic, are also on the rise, a reminder that AI is a new attack surface with its own inherent risks. While these attacks are concentrated on the frontier labs today, we expect them to appear elsewhere as more organizations expose their models to customers and the public.
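
Distillation itself is a standard, well-documented training technique; what makes it an attack is pointing it at someone else's model through an API. As a minimal, illustrative PyTorch sketch (not taken from the report, with toy models and synthetic data standing in for real components), Hinton-style distillation looks like this:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student outputs."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradients keep comparable magnitude across temperatures.
    return F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature**2

# Toy stand-ins for an expensive "teacher" and a cheap "student".
teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Linear(32, 10)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

for _ in range(100):
    queries = torch.randn(64, 32)  # attacker-chosen inputs
    with torch.no_grad():
        # In an extraction attack, these outputs would come from a victim
        # model's public API rather than a locally hosted teacher.
        teacher_out = teacher(queries)
    loss = distillation_loss(student(queries), teacher_out)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The asymmetry is the point: the teacher's capability was expensive to train, but matching its outputs on enough queries is comparatively cheap, which is why high-volume, systematic API querying is the signal defenders should watch for.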

We’ve documented our observations of the use and abuse of AI, as well as the actions we’ve taken in response, in our new GTIG AI Threat Tracker report. We issue these reports regularly to help improve our collective understanding of the adversarial misuse of AI, and how to safeguard against it.

The new report specifically examines five categories of adversarial misuse of AI:

* Model extraction attacks: These occur when an adversary uses knowledge distillation, a common machine-learning training technique, to extract a model's learned behavior and transfer it to a model they control. This lets an attacker accelerate AI model development at significantly lower cost. The IP theft involved is a clear business risk to model developers and enterprises, so organizations that provide AI models as a service should monitor API access for extraction and distillation patterns (a rough sketch of such monitoring follows this list).
* AI-augmented operations: In the report, we document real-world case studies of how threat groups are streamlining reconnaissance and rapport-building phishing. One consistent finding is that government-backed attackers have been increasingly misusing Gemini for coding and scripting tasks, gathering information about potential targets, researching publicly known vulnerabilities, and enabling post-compromise activities.
* Agentic AI: Threat actors have begun to develop agentic AI capabilities to support malware and tooling development. Examples of this behavior include prompting Gemini with an expert cybersecurity persona and attempting to create an AI-integrated code-auditing capability.
* AI-integrated malware: New malware families, such as HONESTCUE, are experimenting with using Gemini's API to generate code that enables download and execution of second-stage malware.
* Underground jailbreak ecosystem: Malicious services like Xanthorox are emerging in illicit marketplaces, claiming to be independent models while actually relying on jailbroken commercial APIs and open-source Model Context Protocol (MCP) servers.
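
To make the monitoring recommendation above concrete, here is a hypothetical Python sketch of a coarse heuristic for flagging extraction-style API usage. The log schema, thresholds, and window are invented for illustration; a production detector would draw on much richer signals (token counts, prompt-space coverage, account reputation) than prompt uniqueness alone.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ApiCall:
    account_id: str   # hypothetical log fields, not a real schema
    prompt: str
    timestamp: float  # Unix epoch seconds

def flag_extraction_suspects(calls, volume_threshold=10_000,
                             diversity_threshold=0.9, window_seconds=86_400):
    """Flag accounts whose recent traffic looks like distillation harvesting:
    very high query volume made up of almost entirely unique prompts."""
    by_account = defaultdict(list)
    for call in calls:
        by_account[call.account_id].append(call)

    suspects = []
    for account, account_calls in by_account.items():
        latest = max(c.timestamp for c in account_calls)
        window = [c for c in account_calls
                  if c.timestamp >= latest - window_seconds]
        if len(window) < volume_threshold:
            continue
        # Interactive users repeat and refine prompts; harvesters rarely do.
        unique_ratio = len({c.prompt for c in window}) / len(window)
        if unique_ratio >= diversity_threshold:
            suspects.append(account)
    return suspects
```

A real deployment would pair a heuristic like this with rate limits and billing-anomaly alerts rather than treating any single signal as proof of abuse.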

Building AI safely and responsibly

At Google, we are committed to developing AI boldly and responsibly. We are taking proactive steps to disrupt malicious activity by disabling the projects and accounts associated with threat actors, while continuously improving our models to make them less susceptible to misuse. That includes using threat intelligence to disrupt adversary operations.

We also proactively share industry best practices to arm defenders and enable stronger protections across the ecosystem. We recently introduced CodeMender, an experimental AI-powered agent utilizing the advanced reasoning capabilities of our Gemini models to automatically fix critical code vulnerabilities. Last year we also began identifying vulnerabilities using Big Sleep, an AI agent developed by Google DeepMind and Google Project Zero.

We believe the industry needs security standards for building and deploying AI responsibly. That's why we introduced the Secure AI Framework (SAIF), a conceptual framework to secure AI systems, and why we’re helping to ensure AI is built responsibly.

For more on these threat actor behaviors, and the steps we’ve taken to thwart their efforts, you can read the full GTIG AI Threat Tracker: Distillation, Experimentation, and (Continued) Integration of AI for Adversarial Use report here.

In case you missed it

Here are the latest updates, products, services, and resources from our security teams so far this month:

* How AI can boost defenders, from defense in depth to the cyber kill chain (Q&A): Cybersecurity expert Bruce Schneier shares his thoughts on how AI is impacting attackers and defenders, political power, and civil society. Read more.
* Delivering a secure, open, and sovereign digital world: At Google Cloud, we believe that digital services should be built on a foundation of trust. To support that goal, today we’re expanding our Sovereign Cloud portfolio. Read more.
* Introducing Single-tenant Cloud HSM for more data encryption control: Single-tenant Cloud HSM is a new service that helps you retain full control over your cryptographic keys. Read more.
* How we’re helping democracies stay ahead of digital threats: At the recent Munich Security Conference, we released a new whitepaper outlining the current threat landscape and sharing our recommendations for a unified, full-stack approach to security that can help democracies. Read more.
* The quantum era is coming. Here’s how we’re getting ready to secure it: We’re issuing a call to action to secure the quantum computing era, and outlining our own commitments on post-quantum cryptography. Read more.
* New Android theft protection updates: Phone theft is more than just losing a device; it's a form of financial fraud that can leave you suddenly vulnerable. That’s why we're committed to providing multi-layered defenses that help protect you before, during, and after a theft attempt. Read more.

Please visit the Google Cloud blog for more security stories published this month.

Threat Intelligence news

* Threats to the defense industrial base: The modern defense sector faces a relentless barrage of cyber operations conducted by state-sponsored actors and criminal groups. In recent years, Google Threat Intelligence Group (GTIG) has observed several distinct areas of focus in adversarial targeting of the defense industrial base. Read more.
* UNC1069 targets the cryptocurrency sector with new tooling and AI-enabled social engineering: North Korean threat actors continue to evolve their tradecraft to target the cryptocurrency and decentralized finance (DeFi) sectors. Mandiant recently investigated an intrusion targeting a FinTech organization in this sector, attributed to UNC1069, a financially motivated threat actor active since at least 2018. Read more.
* Vishing for access: Tracking the expansion of ShinyHunters-branded SaaS data theft: Mandiant has identified an expansion in threat activity that uses tactics, techniques, and procedures (TTPs) consistent with prior ShinyHunters-branded extortion operations. These operations primarily use sophisticated vishing and victim-branded credential harvesting sites to gain initial access to corporate environments by obtaining single sign-on (SSO) credentials and multi-factor authentication (MFA) codes. Read more.
* Proactive defense against ShinyHunters-branded data theft targeting SaaS: Here are actionable hardening, logging, and detection recommendations to help organizations protect against ShinyHunters-branded SaaS data theft. Read more.

Please visit the Google Cloud blog for more threat intelligence stories published this month.

Now hear this: Podcasts from Google Cloud

* Freedom, responsibility, and federated guardrails: Centralized security doesn’t work anymore for modern organizations, says Alex Shulman-Peleg, global CISO, Kraken. He discusses with hosts Anton Chuvakin and Tim Peacock how key changes — driven by cloud, SaaS, and AI — have made the traditional model unsustainable. Listen here.
* Scaling a modern SOC with real AI agents: Dennis Chow, director, Detection Engineering, UKG, joins Anton and Tim to explain his team’s hybrid agent workflow, their production use cases for AI and AI agents in the SOC, and how they measure success. Listen here.
* Behind the Binary: Jailbreaking, prompt injection, and the agentic flaw in MCP: Host Josh Stroschein is joined by Kevin Harris, who says that skilled adversaries have a 100% success rate against all of the defenses that we know about. Listen here.

To have our Cloud CISO Perspectives post delivered twice a month to your inbox, sign up for our newsletter. We’ll be back in a few weeks with more security-related updates from Google Cloud.