Egress Filtering for AI Agents
Overview
When running AI coding assistants inside DevContainers, you may want to restrict their network access to only the domains they need. The Egress Filter feature provides domain-based allowlisting using a Squid proxy and iptables.
Why Filter AI Agent Traffic?
AI coding assistants can make network requests — fetching documentation, downloading packages, or calling external APIs. In regulated or security-sensitive environments, you may need to:
- Limit API access: Ensure AI agents only communicate with their provider’s API endpoints
- Prevent data exfiltration: Block unauthorized outbound connections
- Audit network activity: Log all outbound requests through the proxy
- Comply with policies: Meet organizational security requirements for AI tool usage
Configuration for Claude Code
To allow Claude Code to function while filtering all other traffic:
```json
{
  "features": {
    "ghcr.io/infrashift/trusted-devcontainer-features/bun:latest": {},
    "ghcr.io/infrashift/trusted-devcontainer-features/claude-code:latest": {},
    "ghcr.io/infrashift/trusted-devcontainer-features/egress-filter:latest": {
      "allowed_domains": "api.anthropic.com,.anthropic.com,.github.com,.githubusercontent.com"
    }
  },
  "containerEnv": {
    "ANTHROPIC_API_KEY": "${localEnv:ANTHROPIC_API_KEY}"
  }
}
```

Configuration for OpenAI Codex
```json
{
  "features": {
    "ghcr.io/infrashift/trusted-devcontainer-features/bun:latest": {},
    "ghcr.io/infrashift/trusted-devcontainer-features/openai-codex:latest": {},
    "ghcr.io/infrashift/trusted-devcontainer-features/egress-filter:latest": {
      "allowed_domains": "api.openai.com,.openai.com,.github.com,.githubusercontent.com"
    }
  },
  "containerEnv": {
    "OPENAI_API_KEY": "${localEnv:OPENAI_API_KEY}"
  }
}
```

Configuration for Both
```json
{
  "features": {
    "ghcr.io/infrashift/trusted-devcontainer-features/bun:latest": {},
    "ghcr.io/infrashift/trusted-devcontainer-features/claude-code:latest": {},
    "ghcr.io/infrashift/trusted-devcontainer-features/openai-codex:latest": {},
    "ghcr.io/infrashift/trusted-devcontainer-features/egress-filter:latest": {
      "allowed_domains": "api.anthropic.com,.anthropic.com,api.openai.com,.openai.com,.github.com,.githubusercontent.com"
    }
  }
}
```

How It Works
- A Squid proxy is installed and configured with an allowlist of domains
- iptables rules redirect all outbound HTTP (port 80) and HTTPS (port 443) traffic through the local Squid proxy
- Environment variables (`http_proxy`, `https_proxy`) are set so that tools automatically route through the proxy
- A startup script (`egress-filter-start.sh`) runs as `postStartCommand` to apply the iptables rules each time the container starts
- Requests to non-allowed domains are blocked by Squid and return a deny page
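Conceptually, the allowlist maps onto Squid `dstdomain` ACLs. The fragment below is an illustrative sketch, not the file the feature actually generates — the listen port (3128) and ACL name are assumptions:

```
# Illustrative squid.conf fragment (port and ACL name are assumptions;
# the feature generates its own configuration)
acl allowed_sites dstdomain api.anthropic.com .anthropic.com .github.com .githubusercontent.com
http_access allow allowed_sites
http_access deny all
http_port 3128
```

Entries with a leading dot match the domain and all of its subdomains; entries without a dot match exactly one host.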
Adding Package Registry Access
If you need to install packages during development, add the relevant registries:
```json
{
  "allowed_domains": "api.anthropic.com,.anthropic.com,.npmjs.org,.npmjs.com,registry.npmjs.org,.github.com,.githubusercontent.com,pypi.org,files.pythonhosted.org"
}
```

Capabilities Required
The egress filter requires the Linux capabilities `NET_ADMIN` and `NET_RAW` for iptables manipulation. These are automatically declared in the feature’s `devcontainer-feature.json` via `capAdd`.
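If your container runtime does not honor the feature’s `capAdd` declaration, one workaround is to grant the capabilities explicitly in your `devcontainer.json` via `runArgs` (a sketch, assuming a Docker-compatible runtime):

```json
{
  "runArgs": ["--cap-add=NET_ADMIN", "--cap-add=NET_RAW"]
}
```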
Troubleshooting
Proxy not intercepting traffic
Ensure the container was started with the `NET_ADMIN` and `NET_RAW` capabilities. Check that `egress-filter-start.sh` ran successfully:
```sh
# Check iptables rules
sudo iptables -t nat -L -n

# Check Squid status
sudo systemctl status squid || sudo squid -k check
```

Requests being blocked unexpectedly
Check the Squid access log to see what domains are being requested:
```sh
sudo tail -f /var/log/squid/access.log
```

Add any missing domains to the `allowed_domains` option with a `.` prefix for subdomain matching.
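To sanity-check an allowlist offline, you can mimic the leading-dot matching semantics in plain shell. `matches_allowlist` below is a hypothetical helper for illustration, not part of the feature — note how `.github.com` does *not* cover `githubusercontent.com`, which is why both appear in the examples above:

```shell
# Hypothetical helper mirroring leading-dot allowlist semantics:
# ".example.com" matches example.com and any subdomain of it;
# an entry without a leading dot must match the host exactly.
matches_allowlist() {
  host="$1"; shift
  for entry in "$@"; do
    case "$entry" in
      .*)
        bare="${entry#.}"
        # leading-dot entry: match the bare domain or any subdomain
        if [ "$host" = "$bare" ]; then return 0; fi
        case "$host" in
          *."$bare") return 0 ;;
        esac
        ;;
      *)
        # exact-match entry
        if [ "$host" = "$entry" ]; then return 0; fi
        ;;
    esac
  done
  return 1
}

matches_allowlist api.anthropic.com api.anthropic.com .github.com && echo "api.anthropic.com: allowed"
matches_allowlist raw.githubusercontent.com api.anthropic.com .github.com || echo "raw.githubusercontent.com: blocked"
```

Because the second host ends in `githubusercontent.com`, not `.github.com`, it is reported as blocked — to allow it you would add `.githubusercontent.com` as its own entry.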