Natoma Blog

What NVIDIA NemoClaw Doesn't Cover, and Why It Matters for Enterprise Agents

TL;DR

Autonomous agents need two security boundaries, not one. NVIDIA NemoClaw gives agents the best compute sandbox available: OS-level isolation, network egress control, credential separation from the runtime. But compute isolation and tool governance are fundamentally different problems. A sandboxed agent that can't reach enterprise tools in a governed way is a secure agent that can't do work. NemoClaw owns the compute boundary. Natoma owns the tool boundary: managed credentials, tool-call-level authorization, and a full audit trail across 100+ enterprise applications. One endpoint in NemoClaw's egress policy. Zero credentials anywhere in the agent's environment. Together, the stack is airtight.

NVIDIA just shipped the agent sandbox enterprises actually needed

At GTC 2026, Jensen Huang called OpenClaw "the operating system for personal AI." NVIDIA NemoClaw is the reference stack that makes it real: always-on autonomous agents running on local NVIDIA hardware (DGX Station, DGX Spark, GeForce RTX PCs) inside isolated OpenShell containers. NemoClaw launched as alpha at GTC (Apache 2.0, open for community experimentation).

This is not a wrapper around Docker. NVIDIA NemoClaw enforces three immutable protection layers the moment a sandbox is created:

  • Network isolation. Landlock + seccomp + network namespace. The agent can only reach endpoints explicitly allowlisted in a YAML policy. Everything else is blocked.

  • Filesystem confinement. Reads and writes restricted to /sandbox and /tmp. The agent cannot touch the host filesystem.

  • Process hardening. Syscall filtering blocks privilege escalation. The agent cannot break out of its container.

On top of that, a mutable inference routing layer intercepts all model calls. The agent calls inference.local. OpenShell routes to whatever backend the operator configured: NVIDIA Nemotron, Anthropic, OpenAI, Gemini, or a local model running on Ollama or vLLM. Inference API keys stay on the host in ~/.nemoclaw/credentials.json. The agent never sees them.
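From inside the sandbox, that routing looks like ordinary HTTP to one fixed local name. A minimal sketch of the agent's side of the boundary, assuming a hypothetical OpenAI-style completions path (the exact path and payload shape are illustrative, not NemoClaw's documented API); the point is that no provider key appears anywhere in the agent's code or environment:

```python
import json

def build_inference_request(prompt: str) -> tuple[str, bytes]:
    # The only endpoint the sandboxed agent knows. Provider selection and
    # API keys live on the host in ~/.nemoclaw/credentials.json; nothing
    # sensitive exists on this side of the boundary.
    # (The /v1/completions path and payload shape are assumptions for
    # illustration.)
    url = "http://inference.local/v1/completions"
    body = json.dumps({"prompt": prompt}).encode("utf-8")
    return url, body
```

Swapping the backend from Nemotron to Anthropic to a local vLLM instance is a host-side configuration change; the code above never changes and never learns which provider answered.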

This is real security infrastructure. Not a prompt guard. Not a content filter. OS-level compute isolation with declarative policies.

The agent is sandboxed. Its tools are ungoverned.

Now tell that sandboxed agent to do enterprise work.

"Summarize my open PRs in GitHub and post a standup update to #engineering in Slack."

The agent needs credentials for GitHub's API and Slack's API. Where do those live?

NemoClaw manages inference credentials beautifully. Your Anthropic API key, your NVIDIA endpoint key. Those stay on the host, invisible to the sandbox. Solved.

But enterprise tool credentials are a different problem entirely. Your GitHub personal access token. Your Slack bot token. Your Jira API key. Your Salesforce OAuth credentials. NemoClaw has no mechanism for these. It wasn't designed to.

NemoClaw ships with preset egress policies for Slack, Jira, Docker Hub, and PyPI. Those policies open the network path: an entry in openclaw-sandbox.yaml that says destination: "api.slack.com:443", action: allow. But an open network path is not governed access. The agent can reach Slack. Nothing controls what it does once it gets there. No scoped permissions. No tool-call authorization. No audit trail.

You could put tool credentials inside the sandbox. Now you've broken the credential isolation that makes NemoClaw valuable. The entire security model is that credentials stay on the host and the agent never sees them. Putting a Salesforce OAuth token inside the container defeats the point.

You could hardcode credentials on the host and pass them through environment variables. Same problem, different location. Every tool credential in plaintext on a developer's workstation. No rotation, no scoping, no record of what the agent did with them.

And even if you solve credentials, you still don't have governance. The agent can call any Slack API endpoint it wants. Post to any channel. Read any DM. Delete messages. You have no policy layer saying "this agent can post to #engineering but cannot access #executive." You have a network hole and a hope.

NemoClaw solves the compute boundary with real depth. The tool boundary is wide open.

Two boundaries, two architectures

Compute isolation and tool governance are fundamentally different disciplines. They require different architectures built by different teams. NemoClaw is built by the people who built CUDA and DGX. They go deep on OS-level sandboxing. Enterprise tool governance (credential lifecycle management, fine-grained authorization across 100+ SaaS applications, compliance audit trails) is a different surface entirely. You wouldn't expect your container runtime to also be your identity provider.

The compute boundary controls the agent's execution environment. Where it runs. What system resources it can touch. Which network endpoints it can reach. How processes are constrained. If the agent's code is compromised, the blast radius is limited to the container. NemoClaw handles this.

The tool boundary controls what the agent does once it reaches an enterprise application. Which tools it can invoke. Which actions it's authorized to take. Whose permissions it inherits. What gets logged. This isn't a feature gap in NemoClaw. It's a different problem that needs a different system.

Here's why: agents use tools through MCP, the Model Context Protocol. MCP is the standard interface for AI to connect to enterprise applications. It's adopted by Anthropic, OpenAI, Google, and every major model provider. The agent doesn't call a hardcoded REST endpoint. It decides dynamically which tools to call based on the task. That dynamic, non-deterministic behavior is exactly what makes agents powerful, and exactly what makes them ungovernable without a purpose-built authorization layer. Network egress rules can't govern tool selection. Only a system that understands MCP tool calls at the protocol level can.
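To see why egress rules are blind here, compare two MCP `tools/call` messages. At the network layer both are indistinguishable TLS traffic to the same host and port; only a system that parses the JSON-RPC payload can tell a routine post from a destructive delete. (The tool names and arguments below are illustrative, not any specific server's catalog.)

```python
import json

# Two MCP tool calls, identical from a firewall's point of view.
post = {
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "slack_post_message",
               "arguments": {"channel": "#engineering", "text": "standup"}},
}
delete = {
    "jsonrpc": "2.0", "id": 2, "method": "tools/call",
    "params": {"name": "slack_delete_message",
               "arguments": {"channel": "#executive", "ts": "1700000000.0001"}},
}

def tool_of(msg: dict) -> str:
    # What a protocol-aware governance layer sees and a network rule cannot:
    # the specific tool the agent chose to invoke.
    return msg["params"]["name"]
```

An egress policy sees two requests to the same endpoint. A protocol-level layer sees `slack_post_message` versus `slack_delete_message` and can apply different policy to each.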

An enterprise-grade autonomous agent needs both boundaries. Simultaneously.

What the tool boundary looks like in practice

One endpoint goes into NemoClaw's network egress allowlist:

# nemoclaw-blueprint/policies/openclaw-sandbox.yaml
network:
  egress:
    - destination: "agent.yourcompany.natoma.app:443"
      action: allow

The agent inside the sandbox calls that one URL. Here's what happens next.

The agent asks to summarize your open GitHub PRs. The request hits Natoma. Natoma authenticates the agent (it's a registered entity with its own non-human identity and CLIENT_CREDENTIALS auth, not an anonymous API token). Natoma checks the agent's profile: it's authorized for GitHub read operations. Natoma resolves the GitHub credential from its vault, makes the API call on the agent's behalf, and returns the results. The agent never saw the GitHub token. The tool call is logged: agent identity, tool invoked, arguments, response, policy evaluation.

Now the agent wants to post a standup summary to Slack. Same flow. Natoma checks the Cedar policy: this agent can post to #engineering. It cannot post to #executive. It cannot read DMs. It cannot delete messages. The Slack credential is resolved server-side. The message posts. Logged.

Then the agent tries something unexpected. It attempts to query your Snowflake data warehouse. The agent's profile doesn't include Snowflake. Natoma blocks the call before it reaches any external service. The attempt is logged: what the agent tried to do, which policy blocked it, what happened instead.
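The three decisions above can be sketched as one authorization function at the governed endpoint. This is a simplified stand-in for Natoma's Cedar evaluation, not the real engine; the profile shape, tool names, and argument keys are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class AgentProfile:
    # Simplified stand-in for a Natoma agent profile (illustrative fields).
    agent_id: str
    allowed_tools: set[str]            # e.g. {"github.read", "slack.post"}
    slack_channels: set[str]           # channels this agent may post to
    audit_log: list[dict] = field(default_factory=list)

def authorize(profile: AgentProfile, tool: str, args: dict) -> bool:
    allowed = tool in profile.allowed_tools
    if allowed and tool == "slack.post":
        # Tool-call-level check: not just "can use Slack" but "which channel".
        allowed = args.get("channel") in profile.slack_channels
    # Every decision is logged, allowed or denied.
    profile.audit_log.append({"tool": tool, "args": args, "allowed": allowed})
    return allowed

standup_bot = AgentProfile(
    agent_id="standup-bot",
    allowed_tools={"github.read", "slack.post"},
    slack_channels={"#engineering"},
)

assert authorize(standup_bot, "github.read", {"repo": "acme/app"})
assert authorize(standup_bot, "slack.post", {"channel": "#engineering", "text": "standup"})
assert not authorize(standup_bot, "slack.post", {"channel": "#executive", "text": "hi"})
assert not authorize(standup_bot, "snowflake.query", {"sql": "select 1"})  # blocked before egress
```

Note where the Snowflake denial happens: inside the governance layer, before any credential is resolved or any external request is made.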

Your security team reviews the week's agent activity. One dashboard. Every tool call across GitHub, Slack, Jira, Salesforce, Snowflake. They can see the 47 successful actions, the 3 blocked attempts, and exactly which policies made each decision. Not "the agent reached api.slack.com 200 times." The actual operations: posted 12 standup summaries, commented on 8 PRs, updated 3 Jira tickets to "in review," and attempted to access Snowflake but was denied.
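From a unified trail, that rollup is a straightforward aggregation. A sketch, assuming a minimal record shape of tool name plus allow/deny outcome (real audit records would carry agent identity, arguments, and policy IDs as well):

```python
from collections import Counter

# Illustrative audit records; the two-field shape is an assumption.
records = [
    {"tool": "slack.post", "allowed": True},
    {"tool": "slack.post", "allowed": True},
    {"tool": "github.comment", "allowed": True},
    {"tool": "snowflake.query", "allowed": False},
]

def summarize(records: list[dict]) -> dict[str, Counter]:
    # Per-tool counts of allowed vs. blocked calls: the view a security
    # team reads, rather than raw egress hit counts.
    out: dict[str, Counter] = {}
    for r in records:
        out.setdefault(r["tool"], Counter())["allowed" if r["allowed"] else "blocked"] += 1
    return out
```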

That's the difference between a network egress log and a tool governance layer.

The credential-zero architecture

Here's the complete stack:

| Layer | What's protected | How |
| --- | --- | --- |
| Inference credentials | API keys for Anthropic, OpenAI, NVIDIA, Gemini | NemoClaw: stored on host, agent sees only inference.local |
| Compute environment | Agent runtime, filesystem, processes, network | NemoClaw: Landlock + seccomp + network namespace |
| Tool credentials | GitHub, Slack, Jira, Salesforce, Snowflake tokens | Natoma: stored server-side, encrypted, rotatable, never in agent code |
| Tool authorization | What the agent can do inside each application | Natoma: Cedar policies at tool-call level |
| Audit and compliance | What happened, who authorized it, what was blocked | Natoma: full tool-call audit trail with content validation |

The agent holds zero credentials. Inference keys are on the NemoClaw host. Enterprise tool credentials are in Natoma. The agent's environment contains nothing sensitive.

What that means operationally:

If the sandbox is compromised, there is nothing to exfiltrate. No credential rotation across 10 services. No incident response scramble. Revoke the agent's Natoma identity. Done.

When you rotate credentials (and you should, regularly), you rotate them in Natoma. The agent is unaffected. It never held them. No redeployment, no config change, no sandbox rebuild.

When the auditor asks "what did this agent do last quarter, and who authorized it," you pull one report. Not pieced together from Slack logs, GitHub audit events, and Jira access records. One trail, covering every tool the agent touched, with policy evaluations attached.

Two independent systems, each handling their domain, with no credential material shared between them. That's not defense-in-depth as a slide deck talking point. That's the architecture.

What this means for your autonomous agent strategy

If you're evaluating autonomous agents for production, ask two questions:

1. How is compute isolated? Can the agent escape its runtime? Can it reach endpoints you didn't authorize? Can it read files outside its workspace? NemoClaw answers these. If you're running agents on NVIDIA hardware, the compute boundary is handled.

2. How are tools governed? When the agent calls Salesforce, who authorized that specific action? What permissions does it inherit? Can it write to production data? What gets logged? If your answer is "we put the API key in an environment variable," you don't have a tool boundary. You have a credential sitting in plaintext.

Most agent infrastructure answers one of these questions. Sandboxes without tool governance give you a secure agent that can't do enterprise work. Tool governance without compute isolation gives you a governed agent running in an uncontrolled environment.

NemoClaw + Natoma is both. The compute boundary and the tool boundary. No credentials in the agent's environment. Full authorization on every tool call. A complete audit trail that your security team can actually use.

An agent you can deploy in an enterprise. Not a demo. Not a prototype. Production.

Get started: natoma.app

FAQ

Can NemoClaw agents use MCP servers directly without Natoma? You could allowlist individual MCP server endpoints in NemoClaw's egress policy, but you'd manage credentials, access controls, and audit logging yourself for each one. Every new tool means a new egress rule, a new credential to manage, a new gap in your audit trail. Natoma consolidates all of that behind a single governed endpoint.

Does this work with any LLM, or only NVIDIA models? Both systems are model-neutral. NemoClaw supports Nemotron, Anthropic, OpenAI, Gemini, Ollama, and vLLM. Natoma works with any MCP-compatible client. The integration is at the infrastructure layer, not the model layer.

What if I'm running agents in the cloud, not on local NVIDIA hardware? Natoma works in any environment: cloud, on-prem, VPC, desktop, endpoint. The tool boundary applies regardless of where the agent runs. NemoClaw targets local NVIDIA hardware for always-on compute. If you're running agents in the cloud, Natoma still governs tool access. The compute isolation comes from your cloud provider's container infrastructure instead of NemoClaw.

How many enterprise tools can a NemoClaw agent access through Natoma? 100+ MCP server connections: GitHub, Jira, Slack, Salesforce, Snowflake, AWS, ServiceNow, Workday, NetSuite, Confluence, Notion, and more. An admin creates a profile that scopes the agent to only the tools it needs. The agent sees a curated set, not the full catalog. This matters on resource-constrained hardware like DGX Spark: fewer tools in context means better agent reasoning and faster responses.

Why MCP instead of direct API calls from the sandbox? MCP is the standard protocol for AI-native tool use, adopted by Anthropic, OpenAI, Google, and every major model provider. Unlike hardcoded API scripts, MCP lets the agent decide dynamically which tools to call based on the task. That dynamic behavior is what makes agents useful and what makes them impossible to govern with static network rules. You need a system that understands tool calls at the protocol level.

Stop building gateways.

Start building world-class AI experiences.

Natoma is enterprise-ready, battle-tested, and ready to help you skip the heavy lifting when implementing AI into your organization.

SOC2 certified

GDPR compliant

CCPA

US Data Privacy

Natoma enables you to safely and easily connect AI systems to your enterprise data.

SOC2 certified

GDPR compliant

CCPA

US Data Privacy

Copyright 2026 Natoma Labs, Inc.
