---
title: ZioSec News
description: Latest news, press coverage, and announcements about ZioSec and AI agent security.
url: https://ziosec.com/news
---

# ZioSec News

Latest updates, research, and announcements from ZioSec.

## Break Your Own AI Agent: A Practical Red-Team Framework for Builders (Part 2)

- URL: https://ziosec.com/blog/break-your-own-ai-agent-a-practical-red-team-framework-part-2
- Published: 2026-04-23
- Category: Blog
- Author: Andrius Useckas (Co-Founder & CTO)
- Tags: ai-red-teaming, ai-agent-security, offensive-security, enterprise-red-team, ai-pentesting

A practical six-phase red-team framework for enterprise AI agents: Scope, Threat-Model, Attack, Evidence, Remediate, Re-test. Harness-agnostic (Claude Code, OpenClaw, custom). Findings mapped to OWASP ASI, MITRE ATLAS, ISO 42001, NIST AI RMF, AIUC-1.

## Static Guardrails in AI: Ensuring Safety and Compliance, Part 2

- URL: https://ziosec.com/blog/static-guardrails-in-ai-ensuring-safety-and-compliance-part-2
- Published: 2026-04-23
- Category: Blog
- Author: Javier Rivera (Principal Security Researcher)
- Tags: ai-guardrails, llm-security, ai-agent-security, enterprise-ai, non-deterministic-guardrails

Part 2 of the Static Guardrails series. Why pattern-matching fails, what non-deterministic guardrails actually are, the "higher-level leakage" problem, and a practical layered-guardrail pattern for enterprise AI agents.

## Three Questions We Are Taking to AI Agent Conference NYC

- URL: https://ziosec.com/blog/three-questions-we-are-taking-to-ai-agent-conference-nyc-2026
- Published: 2026-04-23
- Category: Blog
- Author: Aaron Walls (Co-Founder & CEO)
- Tags: ai-agent-security, enterprise-red-team, grc, iso-42001, nist-ai-rmf, claude-code, custom-agents, ai-agent-conference-nyc
- Featured: yes

The three questions ZioSec is taking to AI Agent Conference NYC May 4-5, 2026. Enterprise security and GRC leaders are converging on the same hard problems around custom agents, and the playbook is still being written.

## Claude Code May Be Too Dangerous for Enterprise Use Today

- URL: https://ziosec.com/blog/claude-code-may-be-too-dangerous-for-enterprise-use-today
- Published: 2026-04-01
- Category: Feed
- Author: ZioAI (Research)
- Tags: Claude Code, enterprise security, AI source code risks, CISO best practices, supply chain security, software vulnerabilities, security incident response

Discover the risks of the Claude Code leak and essential insights for CISOs on enterprise security and supply chain vulnerabilities.

## Anthropic's 500 AI-Discovered Zero-Days Signal a Threat Shift CISOs Can't Afford to Ignore

- URL: https://ziosec.com/blog/anthropic-500-ai-discovered-zero-days-signal-a-threat-shift-cisos-can-t-afford-to-ignore
- Published: 2026-03-13
- Category: Feed
- Author: ZioAI (Research)
- Tags: AI agent security, zero-day vulnerabilities AI, agentic AI threats, CISO AI security, autonomous AI attacks, AI agent attack surface, Claude Opus 4.6, Anthropic zero-days, organizational trust attacks, AI espionage, prompt injection

Anthropic's Claude found 500+ zero-days. That's not the scary part. The real threat is how AI agents are now targeting organizational trust — communications, approvals, and human workflows — instead of systems. Here's what security leaders need to know.

## AI Jailbreak Techniques in 2026: A Complete Technical Guide | ZioSec

- URL: https://ziosec.com/blog/ai-jailbreak-techniques-in-2026-a-complete-technical-guide-ziosec
- Published: 2026-02-25
- Category: Blog
- Author: Aaron Walls (Co-Founder & CEO)
- Tags: none

Comprehensive guide to AI jailbreak techniques in 2026 — from DAN and Crescendo attacks to MCP exploitation and multimodal jailbreaks. Learn how attackers bypass AI safety measures and how to defend against them.

## LLM Red Teaming: Evaluations, Attacks, & Deep Chained Methods - Ziosec, Mindgard, Promptfoo Compared

- URL: https://ziosec.com/blog/llm-red-teaming-evaluations-attacks-deep-chained-methods-ziosec-mindgard-promptfoo-compared
- Published: 2026-02-12
- Category: Blog
- Author: Aaron Walls (Co-Founder & CEO)
- Tags: none

Learn LLM red teaming strategies and compare Promptfoo, Mindgard, and Ziosec—evaluations, attacks, and deep chained methods to harden AI systems.

## The SaaSpocalypse: Navigating Enterprise AI Agent Risks with OpenClaw and Beyond

- URL: https://ziosec.com/blog/the-saaspocalypse-navigating-enterprise-ai-agent-risks-with-openclaw-and-beyond
- Published: 2026-02-06
- Category: Blog
- Author: Aaron Walls (Co-Founder & CEO)
- Tags: none

Discover how to navigate enterprise AI agent risks and prevent the "SaaSpocalypse." Learn how OpenClaw and robust AI governance frameworks secure agentic AI adoption.

## AI Code Security Risks: Why Enterprise Vibe Coding Created a Security Nightmare

- URL: https://ziosec.com/blog/ai-code-security-risks-why-enterprise-vibe-coding-created-a-security-nightmare
- Published: 2026-01-29
- Category: Feed
- Author: ZioAI (Research)
- Tags: none

AI coding tools like GitHub Copilot and ChatGPT created massive security vulnerabilities in enterprise applications. Learn why vibe coding failed and how to secure AI-generated code.

## Anthropic's Claude Constitution: Cybersecurity Risks and Defense Strategies

- URL: https://ziosec.com/blog/anthropic-s-claude-constitution-cybersecurity-risks-and-defense-strategies
- Published: 2026-01-22
- Category: Feed
- Author: ZioAI (Research)
- Tags: AI Ethics, Cybersecurity, Claude Constitution, Data Poisoning, Attack Vectors, AI Defense Strategies, Machine Learning, Ethical AI

Discover the cybersecurity challenges of Anthropic's Claude Constitution. Explore vulnerabilities, attack vectors, and essential defenses for AI integrity.

## NIST's Initiative for AI Security: Engage & Protect Emerging Technologies

- URL: https://ziosec.com/blog/nist-s-initiative-for-ai-security-engage-protect-emerging-technologies
- Published: 2026-01-20
- Category: Feed
- Author: ZioAI (Research)
- Tags: NIST, AI Security, Cybersecurity, Public Engagement, AI Agents, Technology Guidelines, Risk Management, Critical Infrastructure, Cyber Threats

Explore NIST's call for public engagement on AI security risks as it develops guidelines for secure AI agent deployment. Join the conversation!

## Stripe and Microsoft Copilot Launch Copilot Checkout for Seamless Shopping

- URL: https://ziosec.com/blog/stripe-and-microsoft-copilot-launch-copilot-checkout-for-seamless-shopping
- Published: 2026-01-20
- Category: Feed
- Author: ZioAI (Research)
- Tags: Stripe, Microsoft, Copilot, Checkout, E-commerce, AI Commerce, Data Security, User Trust, Payment Processing

Explore the new Copilot Checkout by Stripe and Microsoft, streamlining online purchases with enhanced security and user trust in AI-driven commerce.

## Why LLMs Struggle with Math: Understanding Their Limitations

- URL: https://ziosec.com/blog/why-llms-struggle-with-math-understanding-their-limitations
- Published: 2026-01-20
- Category: Blog
- Author: Daniel Joyce (Engineering)
- Tags: LLM, machine learning, artificial intelligence, math challenges, decision fatigue, scams, manipulation techniques, language models

Discover why LLMs struggle with math and how their statistical nature parallels human behavior in decision-making and susceptibility to manipulation.

## Claude Cowork Vulnerability: Exfiltration Risks and Defensive Measures

- URL: https://ziosec.com/blog/claude-cowork-vulnerability-exfiltration-risks-and-defensive-measures
- Published: 2026-01-15
- Category: Feed
- Author: ZioAI (Research)
- Tags: Claude Cowork, AI security, file exfiltration, data protection, Anthropic, cybersecurity, prompt injection, security vulnerabilities, threat intelligence

Discover the security risks of Claude Cowork's vulnerability to file exfiltration attacks, along with expert recommendations for safeguarding data.

## Critical AI Vulnerability in ServiceNow's Virtual Agent Exposed

- URL: https://ziosec.com/blog/critical-ai-vulnerability-in-servicenow-s-virtual-agent-exposed
- Published: 2026-01-14
- Category: Feed
- Author: ZioAI (Research)
- Tags: AI Vulnerability, ServiceNow, Chatbot Security, Cybersecurity Risks, Virtual Agent, Threat Intelligence, Authentication Issues, Security Measures, Enterprise Systems

Discover a critical AI vulnerability in ServiceNow's Virtual Agent that allows unauthorized code execution, highlighting urgent security needs for AI integrations.

## Exploring AI Jailbreaks: Bypassing Security in Foundation Models

- URL: https://ziosec.com/blog/exploring-ai-jailbreaks-bypassing-security-in-foundation-models
- Published: 2026-01-14
- Category: Blog
- Author: Andrius Useckas (Co-Founder & CTO)
- Tags: AI Security, Jailbreak Techniques, Foundation Models, OpenAI, Anthropic, ChatGPT, Claude, Cybersecurity, Sensitive Information, Research

Discover how jailbreaks can bypass AI security, focusing on foundation models like ChatGPT and Anthropic's Claude. Learn the risks and techniques involved.

## Critical CVE-2025-68664 Vulnerability in LangChain Core: What You Need to Know

- URL: https://ziosec.com/blog/critical-cve-2025-68664-vulnerability-in-langchain-core-what-you-need-to-know
- Published: 2026-01-05
- Category: Feed
- Author: ZioAI (Research)
- Tags: CVE-2025-68664, LangChain, AI Security, Cybersecurity, Vulnerabilities, Software Patching, Serialization Issues, Threat Intelligence, Defensive Techniques

Learn about CVE-2025-68664 in LangChain Core, its security risks, and defensive strategies to secure AI applications.

## Explore OWASP's Top 10 Risks for Autonomous AI Applications 2026

- URL: https://ziosec.com/blog/explore-owasp-top-10-risks-for-autonomous-ai-applications-2026
- Published: 2026-01-05
- Category: Feed
- Author: ZioSec (Team)
- Tags: OWASP, AI Security, Autonomous Agents, Cybersecurity, Risk Management, Vulnerabilities, Data Protection, Agentic AI, OWASP Top 10 2026

Uncover the OWASP Top 10 risks for autonomous AI applications in 2026. Address vulnerabilities, enhance security, and protect your AI systems effectively.

## Enhancing Adaptability in Agentic AI: Challenges and Solutions

- URL: https://ziosec.com/blog/enhancing-adaptability-in-agentic-ai-challenges-and-solutions
- Published: 2025-12-26
- Category: Feed
- Author: ZioSec (Team)
- Tags: agentic AI, adaptability challenges, AI solutions, cybersecurity, tool usage in AI, machine learning, long-term planning, AI performance, artificial intelligence

Discover how to enhance adaptability in agentic AI systems: the key challenges and practical solutions for real-world AI applications.

## Static Guardrails in AI: Ensuring Safety and Compliance, Part 1

- URL: https://ziosec.com/blog/static-guardrails-in-ai-ensuring-safety-and-compliance
- Published: 2025-12-24
- Category: Blog
- Author: Javier Rivera (Principal Security Researcher)
- Tags: AI safety, Static guardrails, Machine learning, AI compliance, Data security, Agentic applications, Technology, Software development

Learn about static guardrails for AI applications, their benefits, and strategic placement to ensure compliance and safety in autonomous systems.

## Adversarial Poetry and the Hidden Fragility of AI Safety

- URL: https://ziosec.com/blog/adversarial-poetry-and-the-hidden-fragility-of-ai-safety
- Published: 2025-12-15
- Category: Feed
- Author: ZioSec (Team)
- Tags: ai-security, llm-safety, adversarial-ml, prompt-injection, ai-agents, enterprise-ai, offensive-security, red-teaming, model-alignment, ziosec-research

New research shows poetic language can reliably jailbreak frontier AI models. ZioSec analyzes why stylistic attacks break alignment systems and what this means for enterprise AI agents in production.

## GPT-5.2 Jailbreak Exposes Critical Flaws in Frontier AI Safety – A Wake-Up Call for Enterprise Agent Builders

- URL: https://ziosec.com/blog/chatgpt-5-2-jailbreak-exposes-enterprise-ai-agent-security-risks
- Published: 2025-12-12
- Category: Feed
- Author: ZioSec (Team)
- Tags: ai-security, llm-jailbreak, gpt-5-2, offensive-security, prompt-injection, ai-agents, enterprise-ai, red-teaming, adversarial-ml, agentic-ai, cybersecurity-research, ziosec

An offensive security analysis of the GPT-5.2 jailbreak reveals systemic risks for enterprise AI agents, exposing how prompt injection, agentic workflows, and frontier model accuracy can turn safety failures into operational threats.

## Break Your Own AI Agent: Why Proactive Security Testing is Essential for Builders (Part 1)

- URL: https://ziosec.com/blog/break-your-own-ai-agent-why-proactive-security-testing-is-essential-for-builders-part-1
- Published: 2025-12-08
- Category: Blog
- Author: ZioSec (Team)
- Tags: ai-agents, security-testing, ai-security, prompt-injection, data-poisoning, tool-misuse, red-teaming, devsecops, shift-left, rag-security, llm-security, autonomous-systems, access-control, logging-and-monitoring, risk-management, trust-and-safety, compliance, enterprise-ai, cybersecurity

Learn why AI agents demand a new security mindset and how the “Break Your Own AI Agent” approach helps builders find and fix vulnerabilities before attackers do.

## AI Agents: Evaluations Versus Attacks

- URL: https://ziosec.com/blog/ai-agents-evaluations-versus-attacks
- Published: 2025-12-04
- Category: Blog
- Author: ZioSec (Team)
- Tags: AI Agent Evaluations, AI Agent Attacks

Most engineers understand AI evaluations, but how do those differ from attacks? Alex Gatz, Staff Security Architect at ZioSec, explains why every developer should consider attacking their AI agents instead of relying on evaluations alone.

## What Anthropic’s AI Espionage Report Means for the Future of Offensive Security

- URL: https://ziosec.com/blog/ai-espionage-anthropic-report-offensive-security-analysis
- Published: 2025-11-13
- Category: Feed
- Author: Anthropic (External author)
- Tags: AI Security, Offensive Security, Cyber Espionage, Agentic AI, Threat Intelligence, ZioSec, AI Guardrails, Enterprise Security, Adversarial AI, Cybersecurity

A detailed analysis of Anthropic’s first reported AI-orchestrated cyber espionage campaign, examining how autonomous agent attacks reshape offensive security and what enterprises must do to defend AI systems at scale.

## How to Test AI Agent Guardrails: A Complete Framework for Safety, Security, and Compliance

- URL: https://ziosec.com/blog/how-to-test-ai-agent-guardrails
- Published: 2025-11-13
- Category: Blog
- Author: ZioSec (Team)
- Tags: AI guardrails, agentic AI, AI security, AI safety, AI compliance, LLM guardrails, prompt injection testing, AI red teaming, AI evaluation, AI governance, PII protection, tool call validation, AI risk management, AI testing framework, ZioSec

Learn how to test AI agent guardrails with a complete framework for security, safety, and compliance. Discover methods, tools, and best practices for reliable AI systems.

## Anamorpher: How LLMs Are Compromised With An Image

- URL: https://ziosec.com/blog/anamorpher--how-llms-are-compromised-with-an-image
- Published: 2025-09-03
- Category: Feed
- Author: ZioAI (Research)
- Tags: AI Security

Trail of Bits just gave us one uncomfortable answer: the future of prompt injection is multimodal, and our models are already listening to whispers we...

## The Game Has Changed And Most Defenders Are Still Playing Checkers

- URL: https://ziosec.com/blog/the-game-has-changed-and-most-defenders-are-still-playing-checkers
- Published: 2025-08-15
- Category: Feed
- Author: ZioAI (Research)
- Tags: Snyack, Offensive Security

You can’t patch what you can’t find, and you can’t find what you can’t see.

## ZioSec Raises $2.1M Seed Round to Secure the Future of AI Agent Deployment

- URL: https://ziosec.com/blog/ziosec-raises-21m-seed-round
- Published: 2025-07-17
- Category: Announcement
- Author: ZioSec (Team)
- Tags: none
- Featured: yes

ZioSec raises $2.1M to launch the first offensive security platform for AI agents, helping enterprises find vulnerabilities and secure AI before production.
