Governance framework
A practical governance framework for CIOs, CISOs, and AI steering committees. The questions come from real enterprise conversations; the answers reflect what works in practice.
The framework
These eight questions get answered deliberately, or they get answered by default. Inventory enables the registry. The registry informs identity. Identity powers authorization. Authorization generates the audit trail. The audit trail feeds incident response.
What is running in your environment?
You cannot govern what you cannot see. MCP servers live in local config files, do not generate network signatures, and never reach an IT inventory without deliberate discovery.
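A first pass at inventory can be a scan of developer machines for MCP client config files. A minimal sketch, assuming the `mcpServers` key used by clients such as Claude Desktop; the file paths are illustrative and vary by client and OS:

```python
import json
from pathlib import Path

# Candidate config locations -- illustrative; real paths vary by client and OS.
CANDIDATE_CONFIGS = [
    Path.home() / "Library/Application Support/Claude/claude_desktop_config.json",
    Path.home() / ".cursor/mcp.json",
]

def extract_mcp_servers(config_text: str) -> dict:
    """Return the mcpServers mapping from a client config, if present."""
    try:
        data = json.loads(config_text)
    except json.JSONDecodeError:
        return {}
    return data.get("mcpServers", {})

def inventory() -> list[dict]:
    """Scan candidate config files and list every declared MCP server."""
    found = []
    for path in CANDIDATE_CONFIGS:
        if not path.is_file():
            continue
        for name, spec in extract_mcp_servers(path.read_text()).items():
            found.append({
                "server": name,
                "command": spec.get("command"),
                "args": spec.get("args", []),
                "config_file": str(path),
            })
    return found
```

A fleet-management or EDR tool would run this collection centrally; the point is that the data only exists on endpoints until someone goes looking for it.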
What is sanctioned versus shadow?
Tier every connection by data sensitivity. If the governed path takes three weeks and the shadow path takes five minutes, shadow wins.
How are agents authenticated?
Short-lived, scoped, on-behalf-of credentials, not static keys in config files. Autonomous agents get non-human identities, authorized more restrictively than human-delegated ones.
Who can access what, with which tools, under what conditions?
Authorization operates at the tool-call level. This specific operation, these parameters, this user, right now. Not blanket MCP-server access.
Can you explain every action an agent took?
Capture timestamp, user, client, agent, server, tool, parameters, result, policy evaluation, and duration. Allowed, blocked, and attempted actions all count.
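The fields above map directly onto a structured log record, one per tool call. A sketch, with illustrative field values:

```python
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass
class ToolCallAuditRecord:
    """One record per tool call -- allowed, blocked, or merely attempted."""
    timestamp: str
    user: str
    client: str           # e.g. the IDE or chat client
    agent: str            # the agent identity acting on the user's behalf
    server: str           # the MCP server that exposed the tool
    tool: str
    parameters: dict
    result: str           # "allowed", "blocked", or "attempted"
    policy_evaluation: str
    duration_ms: float

def record_tool_call(**fields) -> dict:
    """Stamp and serialize an audit record; ship the dict to your log pipeline."""
    fields.setdefault("timestamp", datetime.now(timezone.utc).isoformat())
    return asdict(ToolCallAuditRecord(**fields))
```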
What data can flow through the AI layer?
Once data enters a prompt, it flows through the LLM context window, may get logged or cached, and can influence other responses. Access controls alone are not enough.
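One complementary control is redacting sensitive values before text reaches the context window. A minimal sketch using regex patterns; the patterns are illustrative and no substitute for a real DLP layer:

```python
import re

# Illustrative patterns; a production deployment would use a proper DLP engine.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Mask sensitive values before the text enters the LLM context window."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```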
Who owns each agent, and when is it retired?
Every agent needs a named owner, not a team. Orphaned agents are the leading source of governance gaps. Review their permissions the same way you review human access.
What happens when something goes wrong?
Bulk queries at 3am that are technically allowed. Prompt injection. Autonomous actions while the user is offline. Your existing IR playbook does not cover these patterns.
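Patterns like these can be caught with simple detections over the audit trail. A sketch that flags technically-allowed bulk queries outside business hours; the thresholds and record fields are illustrative:

```python
from datetime import datetime

BULK_ROW_THRESHOLD = 10_000    # illustrative
BUSINESS_HOURS = range(8, 18)  # illustrative

def flag_anomalies(audit_records: list[dict]) -> list[str]:
    """Flag allowed-but-suspicious tool calls found in the audit trail."""
    alerts = []
    for rec in audit_records:
        ts = datetime.fromisoformat(rec["timestamp"])
        # Allowed by policy, suspicious in context: bulk volume at an odd hour.
        if rec["rows_returned"] >= BULK_ROW_THRESHOLD and ts.hour not in BUSINESS_HOURS:
            alerts.append(f"bulk query by {rec['agent']} at {ts.isoformat()}")
    return alerts
```

The detection only exists if the audit trail from the previous question does; this is why the framework's decisions chain together.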
Abstract
Your engineers are already connecting AI agents to enterprise systems. Claude Code queries production databases. ChatGPT pulls CRM data. Cursor touches cloud infrastructure. Copilot Studio builds agents that reach ServiceNow, SAP, and Workday. The productivity is real. So is the risk.
Agents inherit user access, act without real-time oversight, and live in config files that never reach an IT inventory. One misconfigured connection becomes a data incident. One unvetted server becomes a compliance failure. One autonomous action becomes an audit finding you cannot explain. This framework lays out the eight decisions every AI steering committee must make before adoption reaches critical mass.
Book a demo. No pitch. We will walk through this framework against your environment and share what we have seen across financial services, healthcare, manufacturing, and technology.