---
title: Three Questions We Are Taking to AI Agent Conference NYC
description: The three questions ZioSec is taking to AI Agent Conference NYC May 4-5, 2026. Enterprise security and GRC leaders are converging on the same hard problems around custom agents, and the playbook is still being written.
url: https://ziosec.com/blog/three-questions-we-are-taking-to-ai-agent-conference-nyc-2026
category: Blog
publishedAt: 2026-04-23
author: Aaron Walls
authorRole: Co-Founder & CEO
tags: ai-agent-security, enterprise-red-team, grc, iso-42001, nist-ai-rmf, claude-code, custom-agents, ai-agent-conference-nyc
---

# Three Questions We Are Taking to AI Agent Conference NYC

## TL;DR

* We are attending AI Agent Conference NYC on May 4 and 5 with a booth and a speaking session.
* Our last three months of enterprise security conversations keep landing on the same three questions, none of which has a clean industry answer yet.
* Our goal at the conference: hear how a hundred security and GRC leaders are actually answering them right now, in production.
* If you are a CISO, Head of GRC, or Head of AI Platform, come find us. Calendar link and details at the bottom.

---

## The short version of where we are

The AI agent security landscape looks different every time I open my laptop. The harnesses teams are building on (Claude Code, custom stacks, a long tail of whatever fits their workflow) change faster than a security team can write a policy. The attack patterns, though, do not change: prompt injection, tool misuse, retrieval poisoning, memory corruption, agent-to-agent trust abuse. Those classes are stable. The defensive vocabulary is not.

I am going to AI Agent Conference NYC on May 4 and 5 because I want to hear directly from enterprise security and GRC leaders on three specific questions. These are the questions that keep coming up in sales calls, partner conversations, and pentest scoping sessions, and the honest answer across the industry right now is "we are still figuring it out."

## Question 1: How are you deciding what "secure enough" means for an agent with tool access?

Traditional security has a reasonable playbook here. You threat model the application, you classify the data, you map controls to a framework, you pentest, you ship. That playbook assumes the application has a bounded set of actions.

An AI agent does not. Tool access plus autonomy plus probabilistic reasoning produces a system where the set of possible actions on any given invocation is effectively the product of three variables: what tools the agent has, what prompts it receives (directly or indirectly), and what the model decides to do with both.

The teams I am talking to are ending up in one of three places on this question:
1. Scope the agent's tools aggressively, accept that some workflows will require human approval, and treat the agent like a junior employee with delegated authority. Audit the approval rate.
2. Build a runtime policy layer that sees every tool call and approves or blocks it based on a policy model, then log everything. Treat the policy layer itself as part of the security perimeter.
3. Sandbox the agent at the infrastructure level (isolated container, least-privilege credentials, blast-radius containment) and assume the agent will do something unexpected eventually. Engineer for graceful failure.

Most teams are doing a mix of all three. The ones with the cleanest answers are the ones who decided which of these is their primary defensive model, not the ones trying to do all three perfectly.
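
The second approach is the most concrete of the three, so here is a minimal sketch of what a runtime policy layer can look like, assuming a harness that routes every tool call through a single dispatch point. All of the names here (`ToolCall`, `evaluate_tool_call`, the tools in the policy table) are illustrative, not any specific product's API:

```python
# Minimal sketch of a runtime policy layer: every tool call is
# evaluated against a deny-by-default policy table, and every
# decision is logged either way.
import json
import logging
from dataclasses import dataclass
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent.policy")

@dataclass
class ToolCall:
    agent_id: str
    tool: str
    arguments: dict

# Deny-by-default: tool name -> predicate over its arguments.
# Anything not listed here is blocked.
ALLOWED_TOOLS = {
    "search_docs": lambda args: True,
    "send_email": lambda args: args.get("recipient", "").endswith("@example.com"),
}

def evaluate_tool_call(call: ToolCall) -> bool:
    """Approve or block a tool call, logging the decision either way."""
    predicate = ALLOWED_TOOLS.get(call.tool)
    approved = bool(predicate and predicate(call.arguments))
    log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent_id": call.agent_id,
        "tool": call.tool,
        "arguments": call.arguments,
        "decision": "approve" if approved else "block",
    }))
    return approved

# The harness calls this before executing any tool.
call = ToolCall("billing-agent", "send_email",
                {"recipient": "attacker@evil.test", "body": "..."})
if not evaluate_tool_call(call):
    raise PermissionError(f"policy blocked tool call: {call.tool}")
```

The design choice that matters is the deny-by-default table: a tool the policy has never heard of gets blocked, not silently allowed. And because the policy layer is now part of the security perimeter, it needs the same review and adversarial testing as everything else inside that perimeter.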

**What I want to hear at NYC:** which of these three is working at scale in regulated industries. Who has a clean audit story.

## Question 2: Who owns AI agent risk in your organization today?

This one feels simple and is not. In my last ten enterprise conversations:
- In four, security owns it, and GRC is starting to ask for evidence.
- In three, GRC owns it, and security is backfilling the technical understanding.
- In two, AI platform or ML engineering owns it, and neither security nor GRC has caught up yet.
- In one, the legal team owns it, because an EU AI Act or equivalent compliance trigger landed on their desk first.

None of these are wrong. But the downstream consequences of the choice are significant:
- If security owns it, the conversation is usually about pentesting, runtime monitoring, and incident response.
- If GRC owns it, the conversation is about frameworks (ISO 42001, NIST AI RMF, AIUC-1), audit evidence, and policy.
- If AI platform owns it, the conversation is about evaluation harnesses, red-team tooling, and ML-native safety.
- If legal owns it, the conversation is often about contract language and risk transfer.

The most effective organizations I have seen treat it as joint ownership between security and GRC, with the AI platform team embedded. Pure single-team ownership tends to produce blind spots.

**What I want to hear at NYC:** who has cracked joint ownership in a way that does not slow teams down. What does that collaboration actually look like day to day.

## Question 3: What would make you comfortable running an agent in production with sensitive data access?

This is the question I have been asking every prospect for the last six weeks. The answers are consistent, and they are less about specific threats than about evidence and control.

The teams comfortable running agents with real data access have four things:
1. A documented threat model for that specific agent, not a generic AI risk assessment.
2. An evidence package that maps to their compliance framework (most often ISO 42001 Annex A controls, NIST AI RMF functions, or AIUC-1 domains). Not a PDF. Structured evidence that can be regenerated when something changes (a minimal sketch follows this list).
3. A runtime policy engine that enforces the threat model at inference time. Not just static guardrails (see our earlier post on [static guardrails](/blog/static-guardrails-in-ai-ensuring-safety-and-compliance) and [why non-deterministic guardrails matter](/blog/static-guardrails-in-ai-ensuring-safety-and-compliance-part-2)).
4. Adversarial testing with evidence of re-test after remediation, not just testing once.
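
To make "not a PDF" in item 2 concrete, here is a minimal sketch of a structured evidence record. The schema, field names, and control IDs are illustrative assumptions, not ZioSec's format or any framework's official schema:

```python
# Minimal sketch of a structured, regenerable evidence record.
# Control IDs are examples of the kind of mapping teams use;
# the schema itself is illustrative.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class EvidenceRecord:
    agent_id: str
    finding: str                     # e.g. an adversarial test result
    severity: str
    framework_mappings: dict = field(default_factory=dict)
    remediated: bool = False
    retested_at: str | None = None   # item 4: evidence of re-test
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = EvidenceRecord(
    agent_id="support-agent-v3",
    finding="indirect prompt injection via retrieved ticket text",
    severity="high",
    framework_mappings={
        "iso_42001": ["A.6.2.4"],        # example Annex A control reference
        "nist_ai_rmf": ["MEASURE 2.7"],  # example function reference
    },
)

# Evidence stored as data can be regenerated and diffed when the
# agent, its tools, or its prompts change.
print(json.dumps(asdict(record), indent=2))
```

The point is not this particular schema. It is that evidence stored as data can be regenerated and diffed whenever the agent, its tools, or its prompts change, which a static PDF cannot.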

The teams who do not have all four are either not running the agent in production, running it in a scoped sandbox, or running it and hoping (this category is larger than most people admit).

**What I want to hear at NYC:** how hard is it to get to all four. What is the honest timeline. Who is cutting corners and regretting it.

## What we are actually there to do

ZioSec is offensive security for AI agents. We run adversarial testing across model, protocol, and tool layers on whatever harness your team is shipping: Claude Code, a custom stack, or whatever comes next. We hand security and GRC teams evidence mapped to OWASP ASI, MITRE ATLAS, ISO 42001, NIST AI RMF, and AIUC-1. The starting engagement is $10,000, and 100% of the engagement fee applies as credit toward an annual subscription if a team moves to continuous adversarial testing.

If any of the three questions above resonate with a conversation you are currently having at your own company, I want to hear from you.

## Find us at the conference

Booth: 908

Speaking session: "Break Your AI Agent Before They Do" on Tuesday, May 5 at 11:00 AM, presented by Aaron Walls

Book a 1:1 at the conference: [schedule a 15-minute meeting](https://calendly.com/aaron-ziosec/ai-agent-conference-nyc-1-1)

Not attending the conference but want to talk to our team? [Book a remote call](/demo)

---

*AI Agent Conference NYC is May 4 and 5, 2026, at the New York Hilton Midtown. Full conference details at [agentconference.com](https://www.agentconference.com/). A post-conference field report will be published on the ZioSec blog within two weeks of the event.*