Natoma Blog

How to Give a NemoClaw Agent Real Enterprise Tools

TL;DR

Last month we argued that autonomous agents need two security boundaries: compute isolation and tool governance. NVIDIA NemoClaw handles the first. Natoma handles the second. Since then we wired them together. The result is a reproducible setup where an always-on agent inside a NemoClaw sandbox calls real enterprise tools through Natoma, with zero service credentials in the agent's environment, Cedar policy on every tool call, and a full audit trail. This post is the wiring diagram. Three steps from zero to production.

From thesis to running code

In What NVIDIA NemoClaw Doesn't Cover, and Why It Matters for Enterprise Agents we made the case. NemoClaw gives autonomous agents a real compute sandbox. Network namespaces, Landlock, seccomp, filesystem confinement. Real OS-level isolation, not a prompt guard.

But enterprise tool access is a different problem. An egress rule can open a path to Slack. It cannot tell you which channel the agent is allowed to post in, whose permissions it inherits, or what happened the last time it ran.

That blog was the thesis. This one is the wiring diagram.

We built a reproducible setup where a NemoClaw-sandboxed agent reaches GitHub, Slack, Jira, or anything else you expose through Natoma; this walkthrough uses Linear and Notion. The agent holds no tokens. Every tool call is policy-evaluated. Every action is logged.

Three steps to stand it up. Here is what each one does.

The architecture, in one picture

[ NemoClaw sandbox (OpenClaw) ]
              |
              v    HTTPS + Bearer JWT
      [ Natoma Cloud ]
              |
              v    Cedar policy, managed creds, audit log
     [ Linear, Notion, ... ]

Two boundaries, composed. OpenClaw inside the sandbox speaks remote MCP natively over HTTPS, so there is no proxy to run on the host. Service credentials live inside Natoma. The Natoma agent JWT is baked into the sandbox's MCP config at setup time, then forgotten on the host. The sandbox sees only the remote MCP servers you register — in this walkthrough, linear and notion. Everything else is invisible to it.

Step 1: Install NemoClaw

Assume NVIDIA hardware. DGX Station, DGX Spark, or an RTX PC.

curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash
nemoclaw onboard

The wizard walks you through it. Give the sandbox a name when prompted (we use natomaclaw-demo throughout this post). Pick an inference provider: Nemotron, Anthropic, OpenAI, Gemini, Ollama, or vLLM. Paste the API key. The key stays on the host in ~/.nemoclaw/credentials.json. The agent never sees it.

You now have a sandboxed runtime with OS-level isolation. If the agent's code is compromised, the blast radius ends at the sandbox boundary.

That is the first boundary. Now add the second.

Step 2: Create an agent identity in Natoma

Open the Natoma admin console. Create an agent, call it natomaclaw-demo, and assign it the connections it should have along with the specific permissions you want on each. For this walkthrough we give it Linear with list_issues, get_issue, and update_issue, and Notion with search, fetch, and append_block_children. Copy the client_id and client_secret from the agent's detail page, and grab the per-connection MCP URLs for Linear and Notion.

Two decisions made in one UI: what this agent is, and what it can touch. Those are the inputs to the governance layer. Everything downstream evaluates against them.

Step 3: Wire the sandbox to Natoma

On the host, you need nemoclaw and openshell on your PATH (both come with the NemoClaw install) and Python 3 with PyYAML (pip install pyyaml). Then drop the agent's credentials and your Natoma URLs into ~/.natoma-creds:

NATOMA_CLIENT_ID=<from agent detail page>
NATOMA_CLIENT_SECRET=<from agent detail page>
NATOMA_TOKEN_ENDPOINT=https://auth.natoma.app/oauth/token
NATOMA_LINEAR_MCP_URL=https://your-org.mcp.natoma.app/linear
NATOMA_NOTION_MCP_URL=https://your-org.mcp.natoma.app/notion

Save the script below as natoma-connect.sh, make it executable, and run it:

#!/bin/bash
# Connect a Natoma agent's MCP servers to a NemoClaw sandbox.
# Usage: ./natoma-connect.sh [sandbox-name]
#   default sandbox: natomaclaw-demo
# Prereqs:
#   - ~/.natoma-creds with CLIENT_ID, CLIENT_SECRET, TOKEN_ENDPOINT, LINEAR_MCP_URL, NOTION_MCP_URL
#   - nemoclaw + openshell CLIs on PATH
#   - Python 3 with PyYAML

set -euo pipefail
SANDBOX="${1:-natomaclaw-demo}"

set -a; source ~/.natoma-creds; set +a
: "${NATOMA_CLIENT_ID:?missing in ~/.natoma-creds}"
: "${NATOMA_CLIENT_SECRET:?missing in ~/.natoma-creds}"
: "${NATOMA_TOKEN_ENDPOINT:?missing in ~/.natoma-creds}"
: "${NATOMA_LINEAR_MCP_URL:?missing in ~/.natoma-creds}"
: "${NATOMA_NOTION_MCP_URL:?missing in ~/.natoma-creds}"

MCP_HOST=$(printf '%s' "$NATOMA_LINEAR_MCP_URL" | awk -F/ '{print $3}')

echo "→ Minting agent token"
ACCESS_TOKEN=$(curl -sS -X POST "$NATOMA_TOKEN_ENDPOINT" \
    -H "Content-Type: application/x-www-form-urlencoded" \
    -d "grant_type=client_credentials&client_id=${NATOMA_CLIENT_ID}&client_secret=${NATOMA_CLIENT_SECRET}" \
    | python3 -c 'import json,sys; print(json.load(sys.stdin)["access_token"])')

if [ -z "$ACCESS_TOKEN" ]; then
    echo "ERROR: failed to mint agent token" >&2
    exit 1
fi

echo "→ Patching sandbox network policy: allow openclaw/node → ${MCP_HOST}"
TMPDIR=$(mktemp -d)
trap 'rm -rf "$TMPDIR"' EXIT
openshell policy get --full "$SANDBOX" | awk '/^---$/{found=1;next} found' > "$TMPDIR/policy.yaml"

MCP_HOST="$MCP_HOST" python3 - "$TMPDIR/policy.yaml" <<'PY'
import os, sys, yaml
path = sys.argv[1]
host = os.environ["MCP_HOST"]
with open(path) as f:
    pol = yaml.safe_load(f) or {}  # tolerate an empty policy document
pol.setdefault("network_policies", {})
pol["network_policies"]["natoma"] = {
    "name": "natoma",
    "endpoints": [{
        "host": host,
        "port": 443,
        "protocol": "rest",
        "tls": "terminate",
        "enforcement": "enforce",
        "rules": [
            {"allow": {"method": "POST", "path": "/**"}},
            {"allow": {"method": "GET",  "path": "/**"}},
        ],
    }],
    "binaries": [
        {"path": "/usr/local/bin/openclaw"},
        {"path": "/usr/local/bin/node"},
    ],
}
with open(path, "w") as f:
    yaml.safe_dump(pol, f, sort_keys=False)
PY

openshell policy set "$SANDBOX" --policy "$TMPDIR/policy.yaml" >/dev/null

echo "→ Registering MCP servers in ${SANDBOX}"
SERVERS_JSON=$(ACCESS_TOKEN="$ACCESS_TOKEN" python3 -c '
import json, os
def entry(url):
    return {"type": "http", "url": url,
            "headers": {"Authorization": "Bearer " + os.environ["ACCESS_TOKEN"]}}
print(json.dumps({
    "linear": entry(os.environ["NATOMA_LINEAR_MCP_URL"]),
    "notion": entry(os.environ["NATOMA_NOTION_MCP_URL"]),
}))')
nemoclaw "$SANDBOX" config set \
    --key "mcp.servers" \
    --value "$SERVERS_JSON" >/dev/null

echo "→ Verifying sandbox can reach Natoma"
openshell sandbox exec -n "$SANDBOX" -- openclaw mcp list 2>/dev/null \
    | awk '/^- /{print "  "$0}'

echo
echo "✓ Natoma agent connected to sandbox: ${SANDBOX}"

Four phases, start to finish:

1. Mint a short-lived agent token. An OAuth client_credentials call against Natoma's token endpoint returns a JWT scoped to the natomaclaw-demo identity you created in Step 2. Short-lived by design, rotatable without touching the sandbox, and held only in memory by this script — it does not get written to disk on the host.

2. Patch the sandbox network policy. Dump OpenShell's current policy for the sandbox with openshell policy get --full, add a network_policies.natoma block scoped to the Natoma MCP host on port 443, and write it back with openshell policy set. The rules are intentionally tight: GET/POST only, TLS terminated, and restricted to the openclaw and node binaries — nothing else in the sandbox can reach Natoma.

3. Register the MCP servers. Build a JSON map of {"linear": {...}, "notion": {...}}, each entry a {type: http, url, headers: {Authorization: Bearer <token>}}. Push the whole map into the sandbox's OpenClaw config with nemoclaw $SANDBOX config set --key mcp.servers. OpenClaw picks it up and exposes each server as a named MCP endpoint from that point on.

4. Verify. The script finishes with openshell sandbox exec -- openclaw mcp list, which prints the MCP servers OpenClaw sees from inside the sandbox. You should get two lines back: linear and notion.
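The token from phase 1 is an ordinary JWT, so you can sanity-check the identity and expiry it carries before the script bakes it into the sandbox config. A minimal sketch (this decodes without verifying the signature, so it is for inspection only, never for auth decisions):

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode a JWT's payload segment without verifying the signature."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Toy token standing in for the real Natoma agent JWT:
body = base64.urlsafe_b64encode(
    json.dumps({"sub": "natomaclaw-demo", "exp": 1900000000}).encode()
).decode().rstrip("=")

claims = jwt_claims(f"eyJhbGciOiJSUzI1NiJ9.{body}.signature")
print(claims["sub"], claims["exp"])  # natomaclaw-demo 1900000000
```

The claim names here (`sub`, `exp`) are the standard JWT registered claims; the exact claims a Natoma token carries may differ.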

The agent inside the sandbox now sees two MCP servers: linear and notion. It can call any tool its Natoma agent identity is allowed to call. It cannot see the service credentials for either. It has no idea where the tools actually run. You are ready.
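For reference, the mcp.servers value the script pushed into the sandbox has this shape (URLs come from ~/.natoma-creds; the bearer token is the minted agent JWT, redacted here):

```json
{
  "linear": {
    "type": "http",
    "url": "https://your-org.mcp.natoma.app/linear",
    "headers": { "Authorization": "Bearer <agent JWT>" }
  },
  "notion": {
    "type": "http",
    "url": "https://your-org.mcp.natoma.app/notion",
    "headers": { "Authorization": "Bearer <agent JWT>" }
  }
}
```

Note what is absent: no Linear API key, no Notion token. A URL and a short-lived identity credential are all the sandbox ever holds.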

The payoff

Connect into the sandbox and start the agent:

nemoclaw natomaclaw-demo connect
openclaw tui

Prompt it with real work:

List this sprint's high-priority issues in Linear, summarize them, and append the summary as a new block on the engineering standup page in Notion.

The agent calls linear:list_issues through Natoma. Natoma authenticates the agent, resolves the Linear token from its vault, makes the real Linear API call, returns the result. Logged.

The agent calls notion:append_block_children with the summary. Same flow. Natoma checks the Cedar policy: this agent can append blocks to pages, but it cannot delete them. The Notion token resolves server-side. The block lands on the standup page. Logged.
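Under the hood, an MCP tool call like this travels as a JSON-RPC request. A sketch of the envelope OpenClaw would send for the Notion append, per the MCP spec (the "arguments" payload is illustrative, not the exact Notion connector schema):

```python
import json

# JSON-RPC envelope for an MCP tools/call request. The method and params
# shape follow the MCP spec; the argument names are placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "append_block_children",
        "arguments": {
            "block_id": "<standup-page-id>",
            "children": [{"type": "paragraph", "text": "Sprint summary ..."}],
        },
    },
}
print(json.dumps(request, indent=2))
```

Natoma sits between this request and the Notion API: it is the tool name in params, not a raw HTTP verb, that Cedar evaluates.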

Now ask the agent for something it should not be able to do:

Delete the Q2 retrospective page from Notion.

The agent tries. It selects the notion:delete_page tool. The request hits Natoma. Cedar evaluates. Deny. The agent gets a structured refusal back, the call never reaches Notion, and the audit log records exactly what happened:

natomaclaw-demo  ->  linear:list_issues             PERMITTED
natomaclaw-demo  ->  notion:append_block_children   PERMITTED
natomaclaw-demo  ->  notion:delete_page             DENIED (Cedar policy)
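In Cedar terms, that split comes down to one permit policy plus the language's default-deny. A sketch of what such a policy could look like (the entity and action names are illustrative; the actual schema is defined by Natoma, not by this post):

```cedar
// Grant the demo agent exactly the Notion tools assigned in Step 2.
permit(
  principal == Agent::"natomaclaw-demo",
  action in [
    Action::"notion:search",
    Action::"notion:fetch",
    Action::"notion:append_block_children"
  ],
  resource
);
// Cedar is default-deny: notion:delete_page matches no permit,
// so the call is refused without needing an explicit forbid.
```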

One agent identity. Three tool calls. Three policy decisions. Three audit records. Not stitched together from service logs after the fact. Evaluated at the protocol layer, in real time, by a system designed for it.

That is the tool boundary doing its job.

What this actually unlocks

This is not a toy. It is the seam an enterprise needs if it wants always-on autonomous agents running on NVIDIA hardware.

Credential-zero. The sandbox holds no tokens. If it is compromised, there is nothing to exfiltrate. No rotation scramble across ten services. Revoke the Natoma agent identity. Done.

Rotation with zero redeploys. Credentials rotate inside Natoma on whatever cadence your security team wants. The agent never notices.

One audit trail. One report for the auditor, covering every tool call across every service the agent touched, with the policy decision attached to each line. Not pieced together from GitHub audit events, Slack admin logs, and Jira access records.

Scopeable tool sets. Context windows are finite, and a model served locally on a DGX Spark feels that limit first. An agent scoped to twelve relevant tools reasons better than an agent staring at a five-hundred-tool catalog. Natoma profiles are how you enforce that scope.

Model-neutral. NemoClaw supports Nemotron, Anthropic, OpenAI, Gemini, Ollama, and vLLM. Natoma governs tool calls from any MCP-compatible client. The integration is at the infrastructure layer, not the model layer. Swap the model without touching governance. Swap the hardware without touching tool access.

Try it

If you have NVIDIA hardware and a Natoma account, the whole setup takes under fifteen minutes.

If you do not have NVIDIA hardware yet, Natoma runs in any environment: cloud, on-prem, VPC, desktop, endpoint. The tool boundary applies wherever the agent runs. Start with Natoma, add NemoClaw when the hardware arrives.

One governance layer. Any agent. Any environment.

Get started: natoma.app