Add BPF/seccomp integration for Linux unix socket blocking and comprehensive testing infrastructure

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
David Dworken
2025-10-24 13:55:33 -07:00
parent 3fc21053a2
commit 23e9e22622
23 changed files with 3026 additions and 410 deletions

.dockerignore (new file, 2 lines changed)

@@ -0,0 +1,2 @@
# Exclude local Claude settings to avoid conflicts in tests
.claude/settings.local.json

.github/workflows/integration-tests.yml (vendored, new file, 97 lines changed)

@@ -0,0 +1,97 @@
name: Integration Tests
on:
push:
branches: [ "**" ]
pull_request:
branches: [ "**" ]
jobs:
integration-tests:
name: Integration Tests (${{ matrix.os }} / ${{ matrix.arch }})
runs-on: ${{ matrix.runner }}
strategy:
fail-fast: false
matrix:
include:
- arch: x86-64
runner: ubuntu-latest
os: linux
# ARM64 Linux runners (ubuntu-24.04-arm) only work in public repositories
# This is a private repository, so commenting out:
# - arch: arm64
# runner: ubuntu-24.04-arm
# os: linux
- arch: x86-64
runner: macos-13
os: macos
- arch: arm64
runner: macos-14
os: macos
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '18'
- name: Setup Bun
uses: oven-sh/setup-bun@v2
with:
bun-version: latest
- name: Install system dependencies (Linux)
if: matrix.os == 'linux'
run: |
sudo apt-get update
sudo apt-get install -y bubblewrap libseccomp-dev gcc socat ripgrep apparmor-profiles
- name: Enable unprivileged user namespaces (Linux)
if: matrix.os == 'linux'
run: |
# Ubuntu 24.04+ restricts unprivileged user namespaces by default
# Set setuid bit on bwrap to allow namespace creation
echo "Setting setuid bit on bwrap..."
sudo chmod u+s $(which bwrap)
# Verify bwrap can create namespaces
echo "Testing bwrap namespace creation..."
bwrap --ro-bind / / --unshare-net true && echo "✓ bwrap namespace creation works" || echo "✗ bwrap namespace creation still fails"
- name: Install system dependencies (macOS)
if: matrix.os == 'macos'
run: |
brew install ripgrep
- name: Configure npm
env:
ARTIFACTORY_TOKEN: ${{ secrets.ARTIFACTORY_SECRET }}
run: |
cat > .npmrc << EOF
engine-strict=true
registry=https://artifactory.infra.ant.dev/artifactory/api/npm/npm-all/
//artifactory.infra.ant.dev/artifactory/api/npm/npm-all/:_authToken=${ARTIFACTORY_TOKEN}
EOF
- name: Install Node dependencies
run: npm install
- name: Build project
run: npm run build
- name: Run integration tests
run: npm run test:integration
- name: Upload test results
if: always()
uses: actions/upload-artifact@v4
with:
name: test-results-${{ matrix.os }}-${{ matrix.arch }}
path: |
test-results/
*.log
if-no-files-found: ignore

Dockerfile.test (new file, 28 lines changed)

@@ -0,0 +1,28 @@
# Dockerfile for running integration tests in a Linux container
FROM oven/bun:1.1-debian AS base
# Install required system dependencies
RUN apt-get update && apt-get install -y python3 \
curl \
netcat-openbsd \
gcc \
libseccomp-dev \
libseccomp2 \
bubblewrap \
socat \
ripgrep \
ca-certificates \
&& rm -rf /var/lib/apt/lists/*
# Set working directory
WORKDIR /app
# Copy everything (including node_modules)
COPY . .
# Build the project
RUN bun run build
# Run integration tests
CMD ["bun", "run", "test:integration"]


@@ -387,11 +387,24 @@ Watchman accesses files outside the sandbox boundaries, which will trigger permi
- Ubuntu/Debian: `apt-get install socat`
- Fedora: `dnf install socat`
- Arch: `pacman -S socat`
- `python3` - Required for applying seccomp filters (typically pre-installed on Linux)
- Ubuntu/Debian: `apt-get install python3`
- Fedora: `dnf install python3`
- Arch: `pacman -S python`
- `ripgrep` - Fast search tool for deny path detection
- Ubuntu/Debian: `apt-get install ripgrep`
- Fedora: `dnf install ripgrep`
- Arch: `pacman -S ripgrep`
**Optional Linux dependencies (for seccomp fallback):**
The package includes pre-generated seccomp BPF filters for the x86-64 and ARM64 architectures. These dependencies are only needed on other architectures, where no pre-generated filter is available:
- `gcc` or `clang` - C compiler
- `libseccomp-dev` - Seccomp library development files
- Ubuntu/Debian: `apt-get install gcc libseccomp-dev`
- Fedora: `dnf install gcc libseccomp-devel`
- Arch: `pacman -S gcc libseccomp`
**macOS requires:**
- `ripgrep` - Fast search tool for deny path detection
- Install via Homebrew: `brew install ripgrep`
@@ -406,6 +419,15 @@ npm install
# Build the project
npm run build
# Build seccomp binaries (requires Docker)
npm run build:seccomp
# Run tests
npm test
# Run integration tests
npm run test:integration
# Type checking
npm run typecheck
@@ -416,6 +438,20 @@ npm run lint
npm run format
```
### Building Seccomp Binaries
The pre-generated BPF filters are included in the repository, but you can rebuild them if needed:
```bash
npm run build:seccomp
```
This script uses Docker to cross-compile seccomp binaries for multiple architectures:
- x64 (x86-64)
- arm64 (aarch64)
The script builds static generator binaries, generates the BPF filters (~104 bytes each), and stores them in `vendor/seccomp/x64/` and `vendor/seccomp/arm64/`. The generator binaries are removed to keep the package size small.
## Implementation Details
### Network Isolation Architecture
@@ -450,6 +486,31 @@ Filesystem restrictions are enforced at the OS level:
This model lets you start with broad read access but tightly controlled write access, then refine both as needed.
### Unix Socket Restrictions (Linux)
On Linux, the sandbox uses **seccomp BPF (Berkeley Packet Filter)** to block Unix domain socket creation at the syscall level. This provides an additional layer of security to prevent processes from creating new Unix domain sockets for local IPC (unless explicitly allowed).
**How it works:**
1. **Pre-generated BPF filters**: The package includes pre-compiled BPF filters for different architectures (x64, ARM64). These are ~104 bytes each and stored in `vendor/seccomp/`. The filters are architecture-specific but libc-independent, so they work with both glibc and musl.
2. **Runtime detection**: The sandbox automatically detects your system's architecture and loads the appropriate pre-generated BPF filter.
3. **Syscall filtering**: The BPF filter intercepts the `socket()` syscall and blocks creation of `AF_UNIX` sockets by returning `EPERM`. This prevents sandboxed code from creating new Unix domain sockets.
4. **Two-stage application using Python helper script**:
- Outer bwrap creates the sandbox with filesystem, network, and PID namespace restrictions
- Network bridging processes (socat) start inside the sandbox (need Unix sockets)
- Python helper script (apply-seccomp-and-exec.py) applies the seccomp filter via `prctl()`
- Python script execs the user command with seccomp active
- User command runs with all sandbox restrictions plus Unix socket creation blocking
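The filter described in step 3 is a short classic-BPF program. As a rough illustration only (this is not the shipped filter, which is generated by libseccomp from `seccomp-unix-block.c` and is larger at ~104 bytes because it covers additional cases), the following Python sketch hand-assembles an equivalent x86-64-only program from packed `sock_filter` entries:

```python
import struct

# BPF opcodes and seccomp constants (values from <linux/bpf_common.h>,
# <linux/seccomp.h>, and <linux/audit.h>)
BPF_LD_W_ABS = 0x20           # load 32-bit word at absolute offset
BPF_JMP_JEQ_K = 0x15          # jump if accumulator == constant
BPF_RET_K = 0x06              # return constant
SECCOMP_RET_ALLOW = 0x7FFF0000
SECCOMP_RET_ERRNO = 0x00050000
AUDIT_ARCH_X86_64 = 0xC000003E
NR_SOCKET_X86_64 = 41         # __NR_socket on x86-64
AF_UNIX = 1
EPERM = 1

def insn(code, jt, jf, k):
    # struct sock_filter { __u16 code; __u8 jt; __u8 jf; __u32 k; }
    return struct.pack("<HBBI", code, jt, jf, k)

# Offsets into struct seccomp_data: nr=0, arch=4, args[0]=16 (low word)
prog = b"".join([
    insn(BPF_LD_W_ABS, 0, 0, 4),                       # A = arch
    insn(BPF_JMP_JEQ_K, 0, 5, AUDIT_ARCH_X86_64),      # wrong arch -> ALLOW
    insn(BPF_LD_W_ABS, 0, 0, 0),                       # A = syscall number
    insn(BPF_JMP_JEQ_K, 0, 3, NR_SOCKET_X86_64),       # not socket() -> ALLOW
    insn(BPF_LD_W_ABS, 0, 0, 16),                      # A = args[0] (domain)
    insn(BPF_JMP_JEQ_K, 0, 1, AF_UNIX),                # not AF_UNIX -> ALLOW
    insn(BPF_RET_K, 0, 0, SECCOMP_RET_ERRNO | EPERM),  # block with EPERM
    insn(BPF_RET_K, 0, 0, SECCOMP_RET_ALLOW),          # allow everything else
])
print(len(prog))  # 8 instructions x 8 bytes = 64 bytes
```

Each `sock_filter` entry is 8 bytes, which is why the vendored filters are so small; the `SECCOMP_RET_ERRNO | EPERM` return is what makes a blocked `socket(AF_UNIX, ...)` call fail with `EPERM` instead of killing the process.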
**Security limitations**: The filter only blocks `socket(AF_UNIX, ...)` syscalls. It does not prevent operations on Unix socket file descriptors inherited from parent processes or passed via `SCM_RIGHTS`. For most sandboxing scenarios, blocking socket creation is sufficient to prevent unauthorized IPC.
**Minimal runtime dependencies**: Unlike traditional seccomp implementations that require `gcc` or `clang` and `libseccomp-dev` at runtime, this approach bundles pre-generated BPF filters and applies them via `prctl()` using a Python helper script built on the standard library's `ctypes`, eliminating compilation dependencies for end users. Only Python 3 is required, and it is typically already installed on Linux systems.
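The helper's core mechanism, installing a precompiled filter via `prctl()` and then exec'ing the target command, can be sketched as follows (a simplified illustration with hypothetical names, not the actual contents of `apply-seccomp-and-exec.py`):

```python
import ctypes
import os

PR_SET_NO_NEW_PRIVS = 38   # required before an unprivileged seccomp load
PR_SET_SECCOMP = 22
SECCOMP_MODE_FILTER = 2

class SockFprog(ctypes.Structure):
    # struct sock_fprog { unsigned short len; struct sock_filter *filter; }
    _fields_ = [("len", ctypes.c_ushort), ("filter", ctypes.c_void_p)]

def apply_filter_and_exec(bpf_path: str, argv: list) -> None:
    """Load a precompiled BPF program, install it via prctl(), then exec argv."""
    raw = open(bpf_path, "rb").read()
    assert len(raw) % 8 == 0, "each sock_filter entry is 8 bytes"
    libc = ctypes.CDLL(None, use_errno=True)
    if libc.prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) != 0:
        raise OSError(ctypes.get_errno(), "PR_SET_NO_NEW_PRIVS failed")
    # buf must stay alive until prctl() has copied the program
    buf = ctypes.create_string_buffer(raw, len(raw))
    prog = SockFprog(len(raw) // 8, ctypes.cast(buf, ctypes.c_void_p))
    if libc.prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER,
                  ctypes.byref(prog), 0, 0) != 0:
        raise OSError(ctypes.get_errno(), "PR_SET_SECCOMP failed")
    os.execvp(argv[0], argv)  # the seccomp filter survives exec
```

A wrapper like this would be invoked in the shape the two-stage flow above describes, e.g. `python3 apply-seccomp-and-exec.py /path/to/unix-block.bpf -- <user command>`, so the filter is only active for the user command, not for the socat bridges.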
**Fallback mechanism**: If a pre-generated filter isn't available for your platform, the sandbox can fall back to runtime compilation (requires `gcc/clang` and `libseccomp-dev`).
### Violation Detection and Monitoring
When a sandboxed process attempts to access a restricted resource:


@@ -13,9 +13,12 @@
},
"scripts": {
"build": "tsc",
"postbuild": "[ -d vendor ] && cp -r vendor dist/ || true",
"build:seccomp": "scripts/build-seccomp-binaries.sh",
"clean": "rm -rf dist",
"typecheck": "tsc --noEmit",
"test": "bun test",
"test:integration": "bun test test/sandbox/integration.test.ts",
"typecheck": "tsc --noEmit",
"lint": "eslint 'src/**/*.ts' --fix --cache --cache-location=node_modules/.cache/.eslintcache",
"lint:check": "eslint 'src/**/*.ts' --cache --cache-location=node_modules/.cache/.eslintcache",
"format": "prettier --write 'src/**/*.ts' --cache --log-level warn",
@@ -46,6 +49,7 @@
},
"files": [
"dist",
"vendor",
"README.md",
"LICENSE"
],

scripts/build-seccomp-binaries.sh (new executable file, 176 lines changed)

@@ -0,0 +1,176 @@
#!/bin/bash
set -euo pipefail
# Build static seccomp binaries for Linux using Docker
# This creates self-contained binaries that don't require gcc/clang/libseccomp-dev at runtime
#
# Usage: ./scripts/build-seccomp-binaries.sh
#
# Output: Creates BPF filters in vendor/seccomp/{x64,arm64}/
#
# Note: BPF bytecode is architecture-specific but libc-independent,
# so we only need one BPF file per architecture (not separate glibc/musl versions)
echo "Building static seccomp binaries using Docker..."
# Get the script directory
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
ROOT_DIR="$( cd "$SCRIPT_DIR/.." && pwd )"
# Check if Docker is available
if ! command -v docker &> /dev/null; then
echo "Error: Docker is required but not installed"
exit 1
fi
# Source directory with C files
SOURCE_DIR="$ROOT_DIR/vendor/seccomp-src"
if [ ! -d "$SOURCE_DIR" ]; then
echo "Error: Source directory not found: $SOURCE_DIR"
echo "Make sure vendor/seccomp-src/ exists with the C source files"
exit 1
fi
# Define platforms to build
# Format: docker_platform:vendor_dir:base_image:image_version
# Note: We use Ubuntu (glibc) for building, but the resulting BPF bytecode
# is libc-independent and works with both glibc and musl
PLATFORMS=(
"linux/amd64:x64:ubuntu:22.04"
"linux/arm64:arm64:ubuntu:22.04"
)
# Function to build for a specific platform
build_platform() {
local docker_platform="$1"
local vendor_dir="$2"
local base_image="$3"
local image_version="$4"
local output_dir="$ROOT_DIR/vendor/seccomp/$vendor_dir"
local bpf_file="$output_dir/unix-block.bpf"
echo ""
echo "=========================================="
echo "Building for: $vendor_dir ($docker_platform)"
echo "=========================================="
# Check if BPF file already exists
if [ -f "$bpf_file" ]; then
echo "⊙ BPF file already exists, skipping build: $bpf_file ($(ls -lh "$bpf_file" | awk '{print $5}'))"
return 0
fi
# Create output directory
mkdir -p "$output_dir"
# Build using Ubuntu (glibc)
# Note: The resulting BPF bytecode is libc-independent
docker run --rm --platform "$docker_platform" \
-v "$SOURCE_DIR:/src:ro" \
-v "$output_dir:/output" \
"$base_image:$image_version" sh -c "
set -e
echo 'Installing build dependencies...'
apt-get update -qq
apt-get install -y -qq gcc libseccomp-dev file > /dev/null
echo 'Building seccomp-unix-block (requires libseccomp)...'
gcc -o /output/seccomp-unix-block /src/seccomp-unix-block.c \
-static -lseccomp \
-O2 -Wall -Wextra
echo 'Stripping debug symbols...'
strip /output/seccomp-unix-block
echo 'Setting permissions...'
chmod +x /output/seccomp-unix-block
echo 'Verifying binary...'
file /output/seccomp-unix-block
echo 'Testing static linkage...'
ldd /output/seccomp-unix-block 2>&1 || echo '(static binary - no dynamic dependencies)'
echo 'Binary size:'
ls -lh /output/seccomp-unix-block
" || {
echo "Error: Build failed for $vendor_dir"
return 1
}
# Verify binary exists
if [ ! -f "$output_dir/seccomp-unix-block" ]; then
echo "✗ Error: Binary not found in $output_dir"
return 1
fi
# Generate BPF filter using the seccomp-unix-block binary
echo "Generating BPF filter..."
# Run the generator to create the BPF file
if ! "$output_dir/seccomp-unix-block" "$bpf_file" 2>&1; then
echo "✗ Error: Failed to generate BPF filter"
return 1
fi
# Verify BPF file was created
if [ ! -f "$bpf_file" ]; then
echo "✗ Error: BPF file not created"
return 1
fi
echo "✓ BPF filter generated: $(ls -lh "$bpf_file" | awk '{print $5}')"
# Remove the generator binary (we only need the BPF file)
echo "Removing generator binary to save space..."
rm -f "$output_dir/seccomp-unix-block"
# Verify final state
if [ -f "$bpf_file" ]; then
echo "✓ Success: BPF filter ready for $vendor_dir"
return 0
else
echo "✗ Error: BPF file not found in $output_dir"
return 1
fi
}
# Build for all platforms
echo "Starting multi-platform seccomp binary builds..."
echo ""
FAILED_PLATFORMS=()
for platform_spec in "${PLATFORMS[@]}"; do
IFS=':' read -r docker_platform vendor_dir base_image image_version <<< "$platform_spec"
if ! build_platform "$docker_platform" "$vendor_dir" "$base_image" "$image_version"; then
FAILED_PLATFORMS+=("$vendor_dir")
fi
done
# Summary
echo ""
echo "=========================================="
echo "Build Summary"
echo "=========================================="
if [ ${#FAILED_PLATFORMS[@]} -eq 0 ]; then
echo "✓ All platforms built successfully!"
echo ""
echo "Generated BPF filters:"
find "$ROOT_DIR/vendor/seccomp" -name "*.bpf" | sort
echo ""
echo "Total size:"
du -sh "$ROOT_DIR/vendor/seccomp"
echo ""
echo "BPF filter sizes:"
find "$ROOT_DIR/vendor/seccomp" -name "*.bpf" -exec ls -lh {} \; | awk '{print $9 ": " $5}'
exit 0
else
echo "✗ Build failed for: ${FAILED_PLATFORMS[*]}"
exit 1
fi


@@ -65,6 +65,7 @@ function getDefaultConfig(): SandboxRuntimeConfig {
deniedDomains: [],
},
filesystem: {
allowRead: [],
denyRead: [],
allowWrite: [],
denyWrite: [],


@@ -7,7 +7,7 @@ export type {
SandboxRuntimeConfig,
NetworkConfig,
FilesystemConfig,
IgnoreViolationsConfig,
IgnoreViolationsConfig as ViolationIgnoreConfig,
} from './sandbox/sandbox-config.js'
export {
@@ -17,10 +17,21 @@ export {
IgnoreViolationsConfigSchema,
} from './sandbox/sandbox-config.js'
// Schema types (for backward compatibility and internal use)
// Schema types and utilities
export type {
SandboxAskCallback,
FsReadRestrictionConfig,
FsWriteRestrictionConfig,
NetworkRestrictionConfig,
NetworkHostPattern,
SandboxConfig,
IgnoreViolationsConfig,
} from './sandbox/sandbox-schemas.js'
export { SandboxConfigSchema } from './sandbox/sandbox-schemas.js'
// Platform-specific utilities
export { hasLinuxSandboxDependenciesSync } from './sandbox/linux-sandbox-utils.js'
export type { SandboxViolationEvent } from './sandbox/macos-sandbox-utils.js'
// Utility functions
export { getDefaultWritePaths } from './sandbox/sandbox-utils.js'


@@ -0,0 +1,541 @@
import { createHash } from 'node:crypto'
import { tmpdir } from 'node:os'
import { join, dirname } from 'node:path'
import { fileURLToPath } from 'node:url'
import * as fs from 'node:fs'
import { logForDebugging } from '../utils/debug.js'
import { spawnSync } from 'node:child_process'
import { memoize } from 'lodash-es'
/**
* Map Node.js process.arch to our vendor directory architecture names
* Returns null for unsupported architectures
*/
function getVendorArchitecture(): string | null {
const arch = process.arch as string
switch (arch) {
case 'x64':
case 'x86_64':
return 'x64'
case 'arm64':
case 'aarch64':
return 'arm64'
case 'ia32':
case 'x86':
// TODO: Add support for 32-bit x86 (ia32)
// Currently blocked because the seccomp filter does not block the socketcall() syscall,
// which is used on 32-bit x86 for all socket operations (socket, socketpair, bind, connect, etc.).
// On 32-bit x86, the direct socket() syscall doesn't exist - instead, all socket operations
// are multiplexed through socketcall(SYS_SOCKET, ...), socketcall(SYS_SOCKETPAIR, ...), etc.
//
// To properly support 32-bit x86, we need to:
// 1. Build a separate i386 BPF filter (BPF bytecode is architecture-specific)
// 2. Modify vendor/seccomp-src/seccomp-unix-block.c to conditionally add rules that block:
// - socketcall(SYS_SOCKET, [AF_UNIX, ...])
// - socketcall(SYS_SOCKETPAIR, [AF_UNIX, ...])
// 3. This requires complex BPF logic to inspect socketcall's sub-function argument
//
// Until then, 32-bit x86 is not supported to avoid a security bypass.
logForDebugging(
`[SeccompFilter] 32-bit x86 (ia32) is not currently supported due to missing socketcall() syscall blocking. ` +
`The current seccomp filter only blocks socket(AF_UNIX, ...), but on 32-bit x86, socketcall() can be used to bypass this.`,
{ level: 'error' },
)
return null
default:
logForDebugging(
`[SeccompFilter] Unsupported architecture: ${arch}. Only x64 and arm64 are supported.`,
)
return null
}
}
/**
* Check if Python 3 is available (synchronous)
* Python 3 is required for applying seccomp filters via the helper script
* Memoized to avoid repeated system calls
*/
export const hasPython3Sync = memoize((): boolean => {
try {
const result = spawnSync('python3', ['--version'], {
stdio: 'ignore',
timeout: 1000,
})
return result.status === 0
} catch {
return false
}
})
/**
* Check if seccomp dependencies are available (synchronous)
* Returns true if (gcc OR clang) AND libseccomp-dev are installed
* Memoized to avoid repeated system calls
*/
export const hasSeccompDependenciesSync = memoize((): boolean => {
try {
// Check for gcc or clang
const gccResult = spawnSync('which', ['gcc'], {
stdio: 'ignore',
timeout: 1000,
})
const clangResult = spawnSync('which', ['clang'], {
stdio: 'ignore',
timeout: 1000,
})
const hasCompiler = gccResult.status === 0 || clangResult.status === 0
if (!hasCompiler) {
return false
}
// Check for libseccomp by trying to compile the actual seccomp-unix-block.c file
// This is more reliable than checking for specific files since package
// installation paths vary across distributions
const sourceHash = getFilterGeneratorSourceHash()
// Write source to temp file
const sourcePath = writeSourceToTempFile('seccomp-unix-block', sourceHash)
if (!sourcePath) {
return false
}
const testBinary = join(
tmpdir(),
`seccomp-test-${process.pid}-${createHash('sha256').update(Math.random().toString()).digest('hex').substring(0, 8)}`,
)
try {
// Try to compile the real program
const compiler = gccResult.status === 0 ? 'gcc' : 'clang'
const compileResult = spawnSync(
compiler,
['-o', testBinary, sourcePath, '-lseccomp'],
{
stdio: 'ignore',
timeout: 5000,
},
)
// Clean up test binary
try {
fs.rmSync(testBinary, { force: true })
} catch {
// Ignore cleanup errors
}
return compileResult.status === 0
} catch {
// Clean up on error
try {
fs.rmSync(testBinary, { force: true })
} catch {
// Ignore cleanup errors
}
return false
}
} catch {
return false
}
})
/**
* Get the path to a pre-generated BPF filter file from the vendor directory
* Returns the path if it exists, null otherwise
*
* Pre-generated BPF files are organized by architecture:
* - vendor/seccomp/{x64,arm64}/unix-block.bpf
*/
function getPreGeneratedBpfPath(): string | null {
// Determine architecture
const arch = getVendorArchitecture()
if (!arch) {
logForDebugging(
`[SeccompFilter] Cannot find pre-generated BPF filter: unsupported architecture ${process.arch}`,
)
return null
}
logForDebugging(`[SeccompFilter] Detected architecture: ${arch}`)
// Try to locate the BPF file
// Path is relative to the compiled code location
const bpfPath = join(
dirname(fileURLToPath(import.meta.url)),
'..',
'..',
'vendor',
'seccomp',
arch,
'unix-block.bpf',
)
if (fs.existsSync(bpfPath)) {
logForDebugging(
`[SeccompFilter] Found pre-generated BPF filter: ${bpfPath} (${arch})`,
)
return bpfPath
}
logForDebugging(
`[SeccompFilter] Pre-generated BPF filter not found at ${bpfPath} (${arch})`,
)
return null
}
// Cache directory for compiled binaries
const CACHE_DIR = join(tmpdir(), 'claude', 'seccomp-cache')
/**
* Get the path to a source file in the vendor/seccomp-src directory
* Handles both development and production paths
*/
function getVendorSourcePath(filename: string): string {
// Path is relative to the compiled code location
// Development: dist/sandbox/generate-seccomp-filter.js
// Production: node_modules/@anthropic-ai/sandbox-runtime/dist/sandbox/generate-seccomp-filter.js
// Source files: vendor/seccomp-src/...
return join(
dirname(fileURLToPath(import.meta.url)),
'..',
'..',
'vendor',
'seccomp-src',
filename,
)
}
/**
* Read a source file from vendor/seccomp-src directory
* Returns null if the file doesn't exist
*/
function readVendorSource(filename: string): string | null {
const sourcePath = getVendorSourcePath(filename)
try {
if (!fs.existsSync(sourcePath)) {
logForDebugging(
`[SeccompFilter] Source file not found: ${sourcePath}`,
{ level: 'warn' },
)
return null
}
return fs.readFileSync(sourcePath, 'utf8')
} catch (err) {
logForDebugging(
`[SeccompFilter] Failed to read source file ${sourcePath}: ${err}`,
{ level: 'error' },
)
return null
}
}
/**
* Get the hash of the filter generator C source
*/
function getFilterGeneratorSourceHash(): string {
const source = readVendorSource('seccomp-unix-block.c')
if (!source) {
// Fallback hash if source file is missing
return 'missing'
}
return createHash('sha256')
.update(source)
.digest('hex')
.substring(0, 16)
}
/**
* Write C source code to a temporary file
* Returns the path to the temporary source file, or null on failure
*/
function writeSourceToTempFile(
name: string,
hash: string,
): string | null {
const sourcePath = join(CACHE_DIR, `${name}-${hash}.c`)
// Check if source file already exists (cached)
if (fs.existsSync(sourcePath)) {
return sourcePath
}
// Read source from vendor directory
const source = readVendorSource(`${name}.c`)
if (!source) {
logForDebugging(
`[SeccompFilter] Cannot write source file: source not found in vendor directory`,
{ level: 'error' },
)
return null
}
try {
// Create cache directory if it doesn't exist (recursive to create parent dirs)
fs.mkdirSync(CACHE_DIR, { recursive: true })
// Write the C source to the temp file
fs.writeFileSync(sourcePath, source, { encoding: 'utf8' })
logForDebugging(`[SeccompFilter] Wrote C source to ${sourcePath}`)
return sourcePath
} catch (err) {
logForDebugging(`[SeccompFilter] Failed to write source file: ${err}`, {
level: 'error',
})
return null
}
}
/**
* Compile the seccomp filter generator program
* Returns the path to the compiled binary or null on failure
*/
function compileSeccompGenerator(): string | null {
const sourceHash = getFilterGeneratorSourceHash()
const binaryPath = join(CACHE_DIR, `seccomp-unix-block-${sourceHash}`)
// Check if cached binary exists
if (fs.existsSync(binaryPath)) {
logForDebugging('[SeccompFilter] Using cached filter generator binary')
return binaryPath
}
logForDebugging('[SeccompFilter] Compiling seccomp filter generator...')
// Write source to temp file
const sourcePath = writeSourceToTempFile('seccomp-unix-block', sourceHash)
if (!sourcePath) {
return null
}
// Try gcc first, then clang
const compilers = ['gcc', 'clang']
for (const compiler of compilers) {
const result = spawnSync(
compiler,
['-o', binaryPath, sourcePath, '-lseccomp'],
{
stdio: 'pipe',
timeout: 30000, // 30 second timeout
},
)
if (result.status === 0) {
logForDebugging(
`[SeccompFilter] Successfully compiled filter generator with ${compiler}`,
)
return binaryPath
}
logForDebugging(
`[SeccompFilter] Filter generator compilation with ${compiler} failed: ${result.stderr?.toString() || 'unknown error'}`,
{ level: 'error' },
)
}
logForDebugging(
'[SeccompFilter] Failed to compile filter generator with any available compiler. ' +
'Ensure gcc or clang and libseccomp-dev are installed.',
{ level: 'error' },
)
return null
}
/**
* Get the path to the seccomp-unix-block generator binary
* Compiles the binary at runtime
*/
function getSeccompGeneratorPath(): string | null {
return compileSeccompGenerator()
}
/**
* Generate a seccomp BPF filter that blocks Unix domain socket creation
* Returns the path to the BPF filter file, or null if generation failed
*
* The filter blocks socket(AF_UNIX, ...) syscalls while allowing all other syscalls.
* This prevents creation of new Unix domain socket file descriptors.
*
* Security scope:
* - Blocks: socket(AF_UNIX, ...) syscall (creating new Unix socket FDs)
* - Does NOT block: Operations on inherited Unix socket FDs (bind, connect, sendto, etc.)
* - Does NOT block: Unix socket FDs passed via SCM_RIGHTS
* - For most sandboxing scenarios, blocking socket creation is sufficient
*
* Note: This blocks ALL Unix socket creation, regardless of path. The allowUnixSockets
* configuration is not supported on Linux due to seccomp-bpf limitations (it cannot
* read user-space memory to inspect socket paths).
*
* Requirements:
* - Pre-generated BPF filters included for x64 and ARM64
* - For other architectures: gcc or clang + libseccomp-dev for runtime compilation
*
* @returns Path to the BPF filter file, or null on failure
*/
export function generateSeccompFilter(): string | null {
// Check for Python 3 first - required for applying seccomp filters
if (!hasPython3Sync()) {
logForDebugging(
'[SeccompFilter] Python 3 is not available. Python 3 is required for applying seccomp filters via the helper script.',
{ level: 'error' },
)
return null
}
// Try pre-generated BPF filter first (fast path - no compilation needed)
const preGeneratedBpf = getPreGeneratedBpfPath()
if (preGeneratedBpf) {
logForDebugging('[SeccompFilter] Using pre-generated BPF filter')
return preGeneratedBpf
}
// Fall back to runtime generation (requires gcc/clang + libseccomp-dev)
logForDebugging(
'[SeccompFilter] Pre-generated BPF not available, falling back to runtime compilation',
)
// Get the generator binary (pre-built or compile it)
const binaryPath = getSeccompGeneratorPath()
if (!binaryPath) {
logForDebugging(
'[SeccompFilter] Cannot generate BPF filter: no pre-generated file and compilation failed. ' +
'Ensure gcc/clang and libseccomp-dev are installed for runtime compilation.',
{ level: 'error' },
)
return null
}
// Generate a unique filename for this filter
const filterPath = join(
tmpdir(),
`claude-seccomp-${process.pid}-${createHash('sha256').update(Math.random().toString()).digest('hex').substring(0, 8)}.bpf`,
)
logForDebugging(`[SeccompFilter] Generating BPF filter to ${filterPath}`)
// Run the compiled binary to generate the filter
const result = spawnSync(binaryPath, [filterPath], {
stdio: 'pipe',
timeout: 5000, // 5 second timeout
})
if (result.status !== 0) {
logForDebugging(
`[SeccompFilter] Failed to generate filter: ${result.stderr?.toString() || 'unknown error'}`,
{ level: 'error' },
)
return null
}
// Verify the filter file was created
if (!fs.existsSync(filterPath)) {
logForDebugging('[SeccompFilter] Filter file was not created', {
level: 'error',
})
return null
}
logForDebugging('[SeccompFilter] Successfully generated BPF filter via runtime compilation')
return filterPath
}
/**
* Clean up a seccomp filter file
* Note: Pre-generated BPF files from vendor/ are never deleted
*/
export function cleanupSeccompFilter(filterPath: string): void {
// Don't delete pre-generated BPF files from vendor/
if (filterPath.includes('/vendor/seccomp/')) {
logForDebugging('[SeccompFilter] Skipping cleanup of pre-generated BPF file')
return
}
// Only clean up runtime-generated files (in /tmp/)
try {
if (fs.existsSync(filterPath)) {
fs.rmSync(filterPath, { force: true })
logForDebugging(`[SeccompFilter] Cleaned up filter file: ${filterPath}`)
}
} catch (err) {
logForDebugging(`[SeccompFilter] Failed to clean up filter file: ${err}`, {
level: 'error',
})
}
}
/**
* Get the hash of the apply-seccomp Python script source
*/
function getApplySeccompScriptHash(): string {
const source = readVendorSource('apply-seccomp-and-exec.py')
if (!source) {
// Fallback hash if source file is missing
return 'missing'
}
return createHash('sha256')
.update(source)
.digest('hex')
.substring(0, 16)
}
/**
* Write the apply-seccomp Python script to the cache directory
* Returns the path to the script, or null on failure
*/
function writeApplySeccompScript(): string | null {
const scriptHash = getApplySeccompScriptHash()
const scriptPath = join(CACHE_DIR, `apply-seccomp-and-exec-${scriptHash}.py`)
// Check if script already exists (cached)
if (fs.existsSync(scriptPath)) {
logForDebugging('[SeccompFilter] Using cached apply-seccomp Python script')
return scriptPath
}
// Read source from vendor directory
const source = readVendorSource('apply-seccomp-and-exec.py')
if (!source) {
logForDebugging(
'[SeccompFilter] Cannot write Python script: source not found in vendor directory',
{ level: 'error' },
)
return null
}
try {
// Create cache directory if it doesn't exist
fs.mkdirSync(CACHE_DIR, { recursive: true })
// Write the Python script
fs.writeFileSync(scriptPath, source, {
encoding: 'utf8',
mode: 0o755, // Make executable
})
logForDebugging(`[SeccompFilter] Wrote apply-seccomp Python script to ${scriptPath}`)
return scriptPath
} catch (err) {
logForDebugging(
`[SeccompFilter] Failed to write apply-seccomp Python script: ${err}`,
{ level: 'error' },
)
return null
}
}
/**
* Get the path to the apply-seccomp-and-exec Python script
* This script applies a seccomp filter and execs a command, replacing the need
* for nested bwrap with --seccomp flag.
*
* The script is cached in the temp directory to avoid repeated writes.
*
* @returns Path to the Python script, or null on failure
*/
export function getApplySeccompExecPath(): string | null {
return writeApplySeccompScript()
}


@@ -15,6 +15,13 @@ import type {
FsReadRestrictionConfig,
FsWriteRestrictionConfig,
} from './sandbox-schemas.js'
import {
generateSeccompFilter,
cleanupSeccompFilter,
hasSeccompDependenciesSync,
hasPython3Sync,
getApplySeccompExecPath,
} from './generate-seccomp-filter.js'
export interface LinuxNetworkBridgeContext {
httpSocketPath: string
@@ -36,21 +43,41 @@ export interface LinuxSandboxParams {
readConfig?: FsReadRestrictionConfig
writeConfig?: FsWriteRestrictionConfig
enableWeakerNestedSandbox?: boolean
allowAllUnixSockets?: boolean
}
// Cache for Linux sandbox dependencies check
let linuxDepsCache: boolean | undefined
// Track generated seccomp filters for cleanup on process exit
const generatedSeccompFilters: Set<string> = new Set()
let exitHandlerRegistered = false
/**
* Register cleanup handler for generated seccomp filters
*/
function registerSeccompCleanupHandler(): void {
if (exitHandlerRegistered) {
return
}
process.on('exit', () => {
for (const filterPath of generatedSeccompFilters) {
try {
cleanupSeccompFilter(filterPath)
} catch {
// Ignore cleanup errors during exit
}
}
})
exitHandlerRegistered = true
}
/**
* Check if Linux sandbox dependencies are available (synchronous)
* Returns true if bwrap, socat, and rg are installed, false otherwise
* Cached to avoid repeated system calls
Returns true if bwrap and socat are installed.
Unless allowAllUnixSockets is enabled, also requires Python 3 and seccomp
dependencies (gcc/clang and libseccomp-dev on non-x64/arm64 architectures).
*/
export function hasLinuxSandboxDependenciesSync(): boolean {
if (linuxDepsCache !== undefined) {
return linuxDepsCache
}
export function hasLinuxSandboxDependenciesSync(allowAllUnixSockets = false): boolean {
try {
const bwrapResult = spawnSync('which', ['bwrap'], {
stdio: 'ignore',
@@ -60,18 +87,22 @@ export function hasLinuxSandboxDependenciesSync(): boolean {
stdio: 'ignore',
timeout: 1000,
})
const rgResult = spawnSync('which', ['rg'], {
stdio: 'ignore',
timeout: 1000,
})
linuxDepsCache =
bwrapResult.status === 0 &&
socatResult.status === 0 &&
rgResult.status === 0
return linuxDepsCache
const hasBasicDeps = bwrapResult.status === 0 && socatResult.status === 0
// Python 3 is required for applying seccomp filters (unless Unix socket blocking is disabled)
if (!allowAllUnixSockets && !hasPython3Sync()) {
return false
}
// Also require seccomp dependencies unless allowAllUnixSockets is enabled
// Note: On x64/arm64, pre-generated BPF filters are available, so gcc/clang are not required
if (!allowAllUnixSockets) {
return hasBasicDeps && hasSeccompDependenciesSync()
}
return hasBasicDeps
} catch {
linuxDepsCache = false
return false
}
}
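The dependency probe above shells out to `which` for each binary. As an illustration only, the same check can be approximated without spawning a process; `isOnPath` is a hypothetical helper, not part of this codebase:

```typescript
import { existsSync } from 'node:fs'
import { delimiter, join } from 'node:path'

// Hypothetical helper (illustration, not the real implementation): resolve a
// binary on PATH without spawning `which`, approximating the probing done by
// hasLinuxSandboxDependenciesSync.
function isOnPath(binary: string): boolean {
  const dirs = (process.env.PATH ?? '').split(delimiter)
  return dirs.some((dir) => dir !== '' && existsSync(join(dir, binary)))
}
```

Spawning `which` (as the real code does) additionally respects executable bits and shell lookup rules, which this sketch deliberately skips.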
@@ -211,22 +242,68 @@ export async function initializeLinuxNetworkBridge(
/**
* Build the command that runs inside the sandbox.
* Sets up HTTP proxy on port 3128 and SOCKS proxy on port 1080
*
* If seccomp parameters are provided, uses two-stage filtering:
* 1. Start socat processes (without seccomp filter - they need Unix sockets)
* 2. Apply seccomp filter using Python script (apply-seccomp-and-exec.py)
* 3. Exec user command (with seccomp filter active)
*/
function buildSandboxCommand(
httpSocketPath: string,
socksSocketPath: string,
userCommand: string,
seccompFilterPath?: string,
): string {
// Use a single trap that kills all jobs on EXIT
// This avoids issues with $! variable expansion through shellquote
  const socatCommands = [
    `socat TCP-LISTEN:3128,fork,reuseaddr UNIX-CONNECT:${httpSocketPath} >/dev/null 2>&1 &`,
    `socat TCP-LISTEN:1080,fork,reuseaddr UNIX-CONNECT:${socksSocketPath} >/dev/null 2>&1 &`,
    'trap "kill %1 %2 2>/dev/null; exit" EXIT',
  ]
// If seccomp filter is provided, use Python script to apply it
if (seccompFilterPath) {
// Two-stage approach:
// 1. Outer bwrap starts socat processes (can use Unix sockets)
// 2. Python script applies seccomp filter via prctl and execs user command
// 3. User command runs with seccomp active (Unix sockets blocked)
//
// Get the path to the apply-seccomp Python script
const applySeccompScript = getApplySeccompExecPath()
if (!applySeccompScript) {
logForDebugging(
'[Sandbox Linux] Failed to get apply-seccomp script, running command without seccomp',
{ level: 'warn' },
)
// Fallback: run user command directly without seccomp
const innerScript = [
...socatCommands,
`eval ${shellquote.quote([userCommand])}`,
].join('\n')
return `bash -c ${shellquote.quote([innerScript])}`
}
// Build command: python3 apply-seccomp-and-exec.py <filterPath> -- <userCommand>
const applySeccompCmd = shellquote.quote([
'python3',
applySeccompScript,
seccompFilterPath,
'--',
'bash',
'-c',
userCommand,
])
const innerScript = [...socatCommands, applySeccompCmd].join('\n')
return `bash -c ${shellquote.quote([innerScript])}`
} else {
// No seccomp filter - run user command directly
const innerScript = [
...socatCommands,
`eval ${shellquote.quote([userCommand])}`,
].join('\n')
return `bash -c ${shellquote.quote([innerScript])}`
}
}
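To make the two-stage layering concrete, here is a minimal sketch of how the inner script composes (illustration only: shell quoting is elided and the helper name `sketchInnerScript` is hypothetical):

```typescript
// Sketch (quoting elided): socat bridging lines plus the optional seccomp
// wrapper stage, mirroring the structure built by buildSandboxCommand above.
function sketchInnerScript(
  httpSock: string,
  socksSock: string,
  userCmd: string,
  filterPath?: string,
): string {
  // Stage 1: socat listeners bridge sandbox-local TCP ports to Unix sockets
  const socat = [
    `socat TCP-LISTEN:3128,fork,reuseaddr UNIX-CONNECT:${httpSock} >/dev/null 2>&1 &`,
    `socat TCP-LISTEN:1080,fork,reuseaddr UNIX-CONNECT:${socksSock} >/dev/null 2>&1 &`,
    'trap "kill %1 %2 2>/dev/null; exit" EXIT',
  ]
  // Stage 2: the Python helper applies the BPF filter, then execs the command
  const run = filterPath
    ? `python3 apply-seccomp-and-exec.py ${filterPath} -- bash -c ${userCmd}`
    : userCmd
  return [...socat, run].join('\n')
}
```

The real implementation additionally quotes every component with `shellquote` before wrapping the script in `bash -c`.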
/**
@@ -348,6 +425,49 @@ async function generateFilesystemArgs(
/**
* Wrap a command with sandbox restrictions on Linux
*
* UNIX SOCKET BLOCKING (TWO-STAGE SECCOMP):
* This implementation uses a two-stage seccomp approach to block Unix domain socket
* creation for user commands while allowing network infrastructure to function:
*
* Stage 1: Network infrastructure setup (NO seccomp filter)
* - Bubblewrap starts with isolated network namespace (--unshare-net)
* - Bubblewrap applies PID namespace isolation (--unshare-pid and --proc)
* - Socat processes start and connect to Unix socket bridges
* - These bridges forward traffic to the host's proxy servers
*
* Stage 2: User command execution (WITH seccomp filter)
* - Python script (apply-seccomp-and-exec.py) applies the BPF filter using prctl
* - Python script execs the user command with seccomp filter active
* - User command inherits all sandbox restrictions from bwrap
* - User command cannot create Unix sockets
*
* This solves the conflict between:
* - Security: Blocking arbitrary Unix socket access
* - Functionality: Network sandboxing requires Unix sockets for the proxy bridge
*
* The seccomp-bpf filter blocks socket(AF_UNIX, ...) syscalls, preventing:
* - Creating new Unix domain socket file descriptors
*
* Security limitations:
* - Does NOT block operations (bind, connect, sendto, etc.) on inherited Unix socket FDs
* - Does NOT prevent passing Unix socket FDs via SCM_RIGHTS
* - For most sandboxing use cases, blocking socket creation is sufficient
*
* The filter allows:
* - All TCP/UDP sockets (AF_INET, AF_INET6) for normal network operations
* - All other syscalls
*
* PLATFORM NOTE:
 * Unlike on macOS, the allowUnixSockets configuration is not path-based on Linux,
 * because seccomp-bpf cannot inspect user-space memory (socket paths live in user space).
 * The two-stage approach still allows network functionality to work alongside
 * Unix socket creation blocking.
*
* Requirements for seccomp filtering:
* - Pre-generated BPF filters are included for x64 and ARM64
* - Python 3 with ctypes (standard library) for applying the filter
* - For other architectures: gcc or clang + libseccomp-dev for runtime BPF compilation
* Dependencies are checked by hasLinuxSandboxDependenciesSync() before enabling the sandbox.
*/
export async function wrapCommandWithSandboxLinux(
params: LinuxSandboxParams,
@@ -363,6 +483,7 @@ export async function wrapCommandWithSandboxLinux(
readConfig,
writeConfig,
enableWeakerNestedSandbox,
allowAllUnixSockets,
} = params
// Check if we need any sandboxing
@@ -371,96 +492,160 @@ export async function wrapCommandWithSandboxLinux(
}
const bwrapArgs: string[] = []
  let seccompFilterPath: string | undefined = undefined

  try {
    // ========== SECCOMP FILTER (Unix Socket Blocking) ==========
    // Two-stage seccomp approach for network sandboxing using a Python helper script:
    // 1. Generate the BPF filter that blocks Unix sockets
    // 2. Outer bwrap: starts socat processes (they need Unix sockets for bridging)
    // 3. Python script: applies the seccomp filter via prctl and execs the user command
    // 4. User command runs with seccomp active (Unix sockets blocked)
    //
    // This lets the network infrastructure use Unix sockets while blocking them for user commands.
    //
    // NOTE: Seccomp filtering is only enabled when allowAllUnixSockets is false
    // (when true, Unix sockets are allowed)
    if (!allowAllUnixSockets) {
      seccompFilterPath = generateSeccompFilter() ?? undefined
      if (!seccompFilterPath) {
        // Fail loudly - seccomp filtering is required for security
        throw new Error(
          'Failed to generate seccomp filter for Unix socket blocking. ' +
            'This may occur on unsupported architectures or when required dependencies are unavailable. ' +
            'Required: Python 3 with ctypes (standard library), and for non-x64/arm64 architectures: gcc/clang + libseccomp-dev. ' +
            'To disable Unix socket blocking, set allowAllUnixSockets: true in your configuration.',
        )
      }
      // Track the filter for cleanup and register the exit handler.
      // Only track runtime-generated filters (not pre-generated ones from vendor/)
      if (!seccompFilterPath.includes('/vendor/seccomp/')) {
        generatedSeccompFilters.add(seccompFilterPath)
        registerSeccompCleanupHandler()
      }
      logForDebugging(
        '[Sandbox Linux] Generated seccomp BPF filter for Unix socket blocking',
      )
    } else {
      logForDebugging(
        '[Sandbox Linux] Skipping seccomp filter - allowAllUnixSockets is enabled',
      )
    }

    // By default, always unshare the PID namespace and mount a fresh /proc.
    // Without --unshare-pid, it is possible to escape the sandbox.
    // Without --proc, it is possible to read the host /proc and leak information about code
    // running outside the sandbox. However, --proc is not available in unprivileged Docker
    // containers, so we support running without it if explicitly requested.
    bwrapArgs.push('--unshare-pid')
    if (!enableWeakerNestedSandbox) {
      // Mount fresh /proc if PID namespace is isolated (secure mode)
      bwrapArgs.push('--proc', '/proc')
    }

    // ========== NETWORK RESTRICTIONS ==========
    if (hasNetworkRestrictions) {
      // Only sandbox if we have network config and Linux bridges
      if (!httpSocketPath || !socksSocketPath) {
        throw new Error(
          'Linux network sandboxing was requested but bridge socket paths are not available',
        )
      }
      bwrapArgs.push('--unshare-net')

      // Bind both sockets into the sandbox
      bwrapArgs.push('--bind', httpSocketPath, httpSocketPath)
      bwrapArgs.push('--bind', socksSocketPath, socksSocketPath)

      // Add proxy environment variables.
      // HTTP_PROXY points to the socat listener inside the sandbox (port 3128),
      // which forwards to the Unix socket that bridges to the host's proxy server.
      const proxyEnv = generateProxyEnvVars(
        3128, // Internal HTTP listener port
        1080, // Internal SOCKS listener port
      )
      bwrapArgs.push(
        ...proxyEnv.flatMap((env: string) => {
          const firstEq = env.indexOf('=')
          const key = env.slice(0, firstEq)
          const value = env.slice(firstEq + 1)
          return ['--setenv', key, value]
        }),
      )

      // Add host proxy port environment variables for debugging/transparency.
      // These show which host ports the Unix socket bridges connect to.
      if (httpProxyPort !== undefined) {
        bwrapArgs.push(
          '--setenv',
          'CLAUDE_CODE_HOST_HTTP_PROXY_PORT',
          String(httpProxyPort),
        )
      }
      if (socksProxyPort !== undefined) {
        bwrapArgs.push(
          '--setenv',
          'CLAUDE_CODE_HOST_SOCKS_PROXY_PORT',
          String(socksProxyPort),
        )
      }
    }

    // ========== FILESYSTEM RESTRICTIONS ==========
    const fsArgs = await generateFilesystemArgs(readConfig, writeConfig)
    bwrapArgs.push(...fsArgs)

    // Always bind /dev
    bwrapArgs.push('--dev', '/dev')

    // ========== COMMAND ==========
    bwrapArgs.push('--', 'bash', '-c')

    // If we have network restrictions, use the network bridge setup with two-stage seccomp.
    // Otherwise, just run the command directly.
    if (hasNetworkRestrictions && httpSocketPath && socksSocketPath) {
      // Pass the seccomp filter to buildSandboxCommand for Python script application
      bwrapArgs.push(
        buildSandboxCommand(
          httpSocketPath,
          socksSocketPath,
          command,
          seccompFilterPath,
        ),
      )
    } else {
      bwrapArgs.push(command)
    }

    const wrappedCommand = shellquote.quote(['bwrap', ...bwrapArgs])

    const restrictions = []
    if (hasNetworkRestrictions) restrictions.push('network')
    if (hasFilesystemRestrictions) restrictions.push('filesystem')
    if (seccompFilterPath) restrictions.push('seccomp(unix-block)')

    logForDebugging(
      `[Sandbox Linux] Wrapped command with bwrap (${restrictions.join(', ')} restrictions)`,
    )
    return wrappedCommand
  } catch (error) {
    // Clean up the seccomp filter on error
    if (seccompFilterPath && !seccompFilterPath.includes('/vendor/seccomp/')) {
      generatedSeccompFilters.delete(seccompFilterPath)
      try {
        cleanupSeccompFilter(seccompFilterPath)
      } catch (cleanupError) {
        logForDebugging(
          `[Sandbox Linux] Failed to clean up seccomp filter on error: ${cleanupError}`,
          { level: 'error' },
        )
      }
    }
    // Re-throw the original error
    throw error
  }
}
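For readers unfamiliar with seccomp-BPF, here is a hedged sketch of what a program blocking `socket(AF_UNIX, ...)` looks like. This is an illustration, not the filter this commit generates: a production filter would first validate the audit architecture, and the syscall number shown is x86-64's.

```typescript
// Illustration only: a seccomp-BPF program that returns EPERM for
// socket(AF_UNIX, ...) and allows everything else. Constants are from
// linux/{bpf,seccomp}.h; the arch-validation prologue is omitted for brevity.
const BPF_LD = 0x00, BPF_JMP = 0x05, BPF_RET = 0x06
const BPF_W = 0x00, BPF_ABS = 0x20, BPF_JEQ = 0x10, BPF_K = 0x00
const SECCOMP_RET_ALLOW = 0x7fff0000
const SECCOMP_RET_ERRNO = 0x00050000
const EPERM = 1, AF_UNIX = 1, SYS_socket_x64 = 41

type SockFilter = { code: number; jt: number; jf: number; k: number }

const filter: SockFilter[] = [
  // load the syscall number (offset 0 of struct seccomp_data)
  { code: BPF_LD | BPF_W | BPF_ABS, jt: 0, jf: 0, k: 0 },
  // if nr != socket, skip ahead to ALLOW (index 5)
  { code: BPF_JMP | BPF_JEQ | BPF_K, jt: 0, jf: 3, k: SYS_socket_x64 },
  // load arg0 (the socket domain), low 32 bits at offset 16
  { code: BPF_LD | BPF_W | BPF_ABS, jt: 0, jf: 0, k: 16 },
  // if domain == AF_UNIX fall through to EPERM, else skip to ALLOW
  { code: BPF_JMP | BPF_JEQ | BPF_K, jt: 0, jf: 1, k: AF_UNIX },
  { code: BPF_RET | BPF_K, jt: 0, jf: 0, k: SECCOMP_RET_ERRNO | EPERM },
  { code: BPF_RET | BPF_K, jt: 0, jf: 0, k: SECCOMP_RET_ALLOW },
]
```

This matches the security model described in the doc comment above: only the creation of new AF_UNIX sockets is denied; operations on inherited Unix socket file descriptors are untouched.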


@@ -9,8 +9,8 @@ import {
decodeSandboxedCommand,
containsGlobChars,
} from './sandbox-utils.js'
import type {
IgnoreViolationsConfig,
FsReadRestrictionConfig,
FsWriteRestrictionConfig,
} from './sandbox-schemas.js'
@@ -48,6 +48,7 @@ export interface MacOSSandboxParams {
socksProxyPort?: number
needsNetworkRestriction: boolean
allowUnixSockets?: string[]
allowAllUnixSockets?: boolean
allowLocalBinding?: boolean
readConfig: FsReadRestrictionConfig | undefined
writeConfig: FsWriteRestrictionConfig | undefined
@@ -240,6 +241,7 @@ async function generateSandboxProfile({
socksProxyPort,
needsNetworkRestriction,
allowUnixSockets,
allowAllUnixSockets,
allowLocalBinding,
logTag,
}: {
@@ -249,6 +251,7 @@ async function generateSandboxProfile({
socksProxyPort?: number
needsNetworkRestriction: boolean
allowUnixSockets?: string[]
allowAllUnixSockets?: boolean
allowLocalBinding?: boolean
logTag: string
}): Promise<string> {
@@ -408,14 +411,17 @@ async function generateSandboxProfile({
profile.push('(allow network-outbound (local ip "localhost:*"))')
}
// Unix domain sockets for local IPC (SSH agent, Docker, etc.)
if (allowAllUnixSockets) {
// Allow all Unix socket paths
profile.push('(allow network* (subpath "/"))')
} else if (allowUnixSockets && allowUnixSockets.length > 0) {
// Allow specific Unix socket paths
for (const socketPath of allowUnixSockets) {
const normalizedPath = normalizePathForSandbox(socketPath)
profile.push(`(allow network* (subpath ${escapePath(normalizedPath)}))`)
}
}
// If both allowAllUnixSockets and allowUnixSockets are false/undefined/empty, Unix sockets are blocked by default
// Allow localhost TCP operations for the HTTP proxy
if (httpProxyPort !== undefined) {
@@ -501,6 +507,7 @@ export async function wrapCommandWithSandboxMacOS(
socksProxyPort,
needsNetworkRestriction,
allowUnixSockets,
allowAllUnixSockets,
allowLocalBinding,
readConfig,
writeConfig,
@@ -520,6 +527,7 @@ export async function wrapCommandWithSandboxMacOS(
socksProxyPort,
needsNetworkRestriction,
allowUnixSockets,
allowAllUnixSockets,
allowLocalBinding,
logTag,
})


@@ -7,6 +7,7 @@ import { z } from 'zod'
/**
* Schema for domain patterns (e.g., "example.com", "*.npmjs.org")
* Validates that domain patterns are safe and don't include overly broad wildcards
*/
const domainPatternSchema = z.string().refine(
(val) => {
@@ -22,7 +23,7 @@ const domainPatternSchema = z.string().refine(
if (val.startsWith('*.')) {
const domain = val.slice(2)
// After the *. there must be a valid domain with at least one more dot
// e.g., *.example.com is valid, *.com is not (too broad)
if (!domain.includes('.') || domain.startsWith('.') || domain.endsWith('.')) {
return false
}
@@ -31,7 +32,7 @@ const domainPatternSchema = z.string().refine(
return parts.length >= 2 && parts.every(p => p.length > 0)
}
// Reject any other use of wildcards (e.g., *, *., etc.)
if (val.includes('*')) {
return false
}
@@ -40,7 +41,7 @@ const domainPatternSchema = z.string().refine(
return val.includes('.') && !val.startsWith('.') && !val.endsWith('.')
},
{
message: 'Invalid domain pattern. Must be a valid domain (e.g., "example.com") or wildcard (e.g., "*.example.com"). Overly broad patterns like "*.com" or "*" are not allowed for security reasons.',
}
)
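The refinement above can be exercised with a standalone re-implementation of the same rules (illustration only; the authoritative check is the zod refinement itself):

```typescript
// Standalone copy of the domain-pattern rules described above, for illustration.
function isValidDomainPattern(val: string): boolean {
  if (val.startsWith('*.')) {
    const domain = val.slice(2)
    // After "*." there must be a valid domain with at least one more dot:
    // "*.example.com" is valid, "*.com" is too broad.
    if (!domain.includes('.') || domain.startsWith('.') || domain.endsWith('.')) {
      return false
    }
    const parts = domain.split('.')
    return parts.length >= 2 && parts.every((p) => p.length > 0)
  }
  // Reject any other use of wildcards (e.g., "*", "foo.*")
  if (val.includes('*')) return false
  // Plain domains need a dot and no leading/trailing dot
  return val.includes('.') && !val.startsWith('.') && !val.endsWith('.')
}
```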
@@ -50,7 +51,7 @@ const domainPatternSchema = z.string().refine(
const filesystemPathSchema = z.string().min(1, 'Path cannot be empty')
/**
 * Network configuration schema for validation
*/
export const NetworkConfigSchema = z.object({
allowedDomains: z.array(domainPatternSchema).describe('List of allowed domains (e.g., ["github.com", "*.npmjs.org"])'),
@@ -61,16 +62,17 @@ export const NetworkConfigSchema = z.object({
})
/**
 * Filesystem configuration schema for validation
*/
export const FilesystemConfigSchema = z.object({
allowRead: z.array(filesystemPathSchema).describe('Paths allowed for reading'),
denyRead: z.array(filesystemPathSchema).describe('Paths denied for reading'),
allowWrite: z.array(filesystemPathSchema).describe('Paths allowed for writing'),
denyWrite: z.array(filesystemPathSchema).describe('Paths denied for writing (takes precedence over allowWrite)'),
})
/**
 * Configuration schema for ignoring specific sandbox violations
* Maps command patterns to filesystem paths to ignore violations for.
*/
export const IgnoreViolationsConfigSchema = z.record(
@@ -79,7 +81,7 @@ export const IgnoreViolationsConfigSchema = z.record(
).describe('Map of command patterns to filesystem paths to ignore violations for. Use "*" to match all commands')
/**
 * Main configuration schema for Sandbox Runtime validation
*/
export const SandboxRuntimeConfigSchema = z.object({
network: NetworkConfigSchema.describe('Network restrictions configuration'),


@@ -4,10 +4,7 @@ import type { SocksProxyWrapper } from './socks-proxy.js'
import { logForDebugging } from '../utils/debug.js'
import { getPlatform, type Platform } from '../utils/platform.js'
import * as fs from 'fs'
import type { SandboxRuntimeConfig } from './sandbox-config.js'
import type {
SandboxAskCallback,
FsReadRestrictionConfig,
@@ -211,7 +208,7 @@ async function initialize(
if (enableLogMonitor && getPlatform() === 'macos') {
logMonitorShutdown = startMacOSSandboxLogMonitor(
sandboxViolationStore.addViolation.bind(sandboxViolationStore),
config.ignoreViolations?.commands,
)
logForDebugging('Started macOS sandbox log monitor')
}
@@ -347,12 +344,16 @@ function getAllowUnixSockets(): string[] | undefined {
return config?.network?.allowUnixSockets
}
function getAllowAllUnixSockets(): boolean | undefined {
return config?.network?.allowAllUnixSockets
}
function getAllowLocalBinding(): boolean | undefined {
return config?.network?.allowLocalBinding
}
function getIgnoreViolations(): Record<string, string[]> | undefined {
return config?.ignoreViolations?.commands
}
function getEnableWeakerNestedSandbox(): boolean | undefined {
@@ -415,6 +416,7 @@ async function wrapWithSandbox(command: string): Promise<string> {
writeConfig: getFsWriteConfig(),
needsNetworkRestriction: true,
allowUnixSockets: getAllowUnixSockets(),
allowAllUnixSockets: getAllowAllUnixSockets(),
allowLocalBinding: getAllowLocalBinding(),
ignoreViolations: getIgnoreViolations(),
})
@@ -431,6 +433,7 @@ async function wrapWithSandbox(command: string): Promise<string> {
readConfig: getFsReadConfig(),
writeConfig: getFsWriteConfig(),
enableWeakerNestedSandbox: getEnableWeakerNestedSandbox(),
allowAllUnixSockets: getAllowAllUnixSockets(),
})
default:
@@ -553,7 +556,7 @@ function annotateStderrWithSandboxFailures(
command: string,
stderr: string,
): string {
if (!config) {
return stderr
}
@@ -590,6 +593,7 @@ function getLinuxGlobPatternWarnings(): string[] {
// Check filesystem paths for glob patterns
const allPaths = [
...config.filesystem.allowRead,
...config.filesystem.denyRead,
...config.filesystem.allowWrite,
...config.filesystem.denyWrite,
@@ -628,7 +632,6 @@ export interface ISandboxManager {
getNetworkRestrictionConfig(): NetworkRestrictionConfig
getAllowUnixSockets(): string[] | undefined
getAllowLocalBinding(): boolean | undefined
getIgnoreViolations(): IgnoreViolationsConfig | undefined
getEnableWeakerNestedSandbox(): boolean | undefined
getProxyPort(): number | undefined
getSocksProxyPort(): number | undefined
@@ -659,7 +662,6 @@ export const SandboxManager: ISandboxManager = {
getNetworkRestrictionConfig,
getAllowUnixSockets,
getAllowLocalBinding,
getIgnoreViolations,
getEnableWeakerNestedSandbox,
getProxyPort,
getSocksProxyPort,


@@ -281,7 +281,19 @@ export const NetworkConfigSchema = z
.array(z.string())
.optional()
.describe(
'Allow Unix domain sockets for local IPC (SSH agent, Docker, etc.). Provide an array of specific paths. Defaults to blocking if not specified. ' +
'IMPORTANT: On Linux, this configuration is not supported.',
),
allowAllUnixSockets: z
.boolean()
.optional()
.describe(
'Allow all Unix domain socket connections without restrictions. ' +
'On Linux, this disables the seccomp filter that blocks Unix sockets and allows sandboxing without seccomp dependencies (gcc/clang/libseccomp-dev). ' +
'On macOS, this allows all Unix socket paths. ' +
'WARNING: This significantly reduces sandbox security by allowing arbitrary Unix socket connections. ' +
'Only enable if Unix socket access is required and the security trade-off is acceptable. ' +
'Default: false (secure).',
),
allowLocalBinding: z
.boolean()


@@ -118,7 +118,10 @@ export function getDefaultWritePaths(): string[] {
'/dev/tty',
'/dev/dtracehelper',
'/dev/autofs_nowait',
'/tmp/claude',
'/private/tmp/claude',
path.join(homeDir, '.npm/_logs'),
path.join(homeDir, '.claude/debug'),
'.',
]
@@ -304,9 +307,9 @@ export function generateProxyEnvVars(
httpProxyPort?: number,
socksProxyPort?: number,
): string[] {
const envVars: string[] = [`SANDBOX_RUNTIME=1`, `TMPDIR=/tmp/claude`]
// If no proxy ports provided, return minimal env vars
if (!httpProxyPort && !socksProxyPort) {
return envVars
}


@@ -1,42 +0,0 @@
import { execFile } from 'child_process'
import { promisify } from 'util'
const execFilePromise = promisify(execFile)
/**
* Simple wrapper around execFile that doesn't throw on non-zero exit codes
* Simplified version for standalone sandbox use
*/
export async function execFileNoThrow(
file: string,
args: string[],
options: { timeout?: number; cwd?: string } = {},
): Promise<{ stdout: string; stderr: string; code: number }> {
try {
const result = await execFilePromise(file, args, {
timeout: options.timeout || 10000,
cwd: options.cwd,
maxBuffer: 10 * 1024 * 1024, // 10MB
})
return {
stdout: result.stdout,
stderr: result.stderr,
code: 0,
}
} catch (error: unknown) {
// execFile throws on non-zero exit, but we want to return the result
if (error && typeof error === 'object' && 'code' in error) {
return {
stdout: (error as { stdout?: string }).stdout || '',
stderr: (error as { stderr?: string }).stderr || '',
code: typeof error.code === 'number' ? error.code : 1,
}
}
// For other errors (like ENOENT), return error info
return {
stdout: '',
stderr: error instanceof Error ? error.message : String(error),
code: 1,
}
}
}

src/utils/settings.ts (new file, 333 lines)

@@ -0,0 +1,333 @@
import * as fs from 'fs'
import * as path from 'path'
import * as os from 'os'
import { z } from 'zod'
import { mergeWith } from 'lodash-es'
import { SandboxConfigSchema } from '../sandbox/sandbox-schemas.js'
import { getPlatform } from './platform.js'
import { logForDebugging } from './debug.js'
// Tool name constants
export const WEB_FETCH_TOOL_NAME = 'WebFetch'
export const FILE_EDIT_TOOL_NAME = 'Edit'
export const FILE_READ_TOOL_NAME = 'Read'
/**
* Permission rule structure
*/
export type PermissionRule = {
toolName: string
ruleContent?: string
}
/**
* Zod schema for sandbox settings
*/
const SandboxSettingsSchema = z.object({
permissions: z
.object({
allow: z.array(z.string()).optional(),
deny: z.array(z.string()).optional(),
ask: z.array(z.string()).optional(),
})
.optional(),
sandbox: SandboxConfigSchema.optional(),
})
/**
* Minimal settings structure for sandbox
*/
export type SandboxSettings = z.infer<typeof SandboxSettingsSchema>
/**
* Setting source types
*/
export type SettingSource =
| 'userSettings'
| 'projectSettings'
| 'localSettings'
| 'policySettings'
| 'flagSettings'
export type EditableSettingSource =
| 'userSettings'
| 'projectSettings'
| 'localSettings'
// Session-level cache for settings
let sessionSettingsCache: SandboxSettings | null = null
// Store the --settings flag path
let flagSettingsPath: string | undefined
/**
* Set the path for flag-based settings (e.g., from --settings flag)
*/
export function setFlagSettingsPath(path: string | undefined): void {
flagSettingsPath = path
resetSettingsCache()
}
/**
* Get the managed settings file path based on platform
*/
function getManagedSettingsFilePath(): string {
switch (getPlatform()) {
case 'macos':
return '/Library/Application Support/ClaudeCode/managed-settings.json'
case 'windows':
return 'C:\\ProgramData\\ClaudeCode\\managed-settings.json'
default:
return '/etc/claude-code/managed-settings.json'
}
}
/**
* Get file path for a specific setting source
*/
export function getSettingsFilePathForSource(
source: SettingSource,
): string | undefined {
const cwd = process.cwd()
const homeDir = os.homedir()
switch (source) {
case 'userSettings':
return path.join(homeDir, '.claude', 'settings.json')
case 'projectSettings':
return path.join(cwd, '.claude', 'settings.json')
case 'localSettings':
return path.join(cwd, '.claude', 'settings.local.json')
case 'policySettings':
return getManagedSettingsFilePath()
case 'flagSettings':
return flagSettingsPath
}
}
/**
* Parse permission rule string into structured format
* Format: "ToolName(rule)" or "ToolName"
*/
export function permissionRuleValueFromString(
ruleString: string,
): PermissionRule {
const match = ruleString.match(/^([^(]+)(?:\(([^)]*)\))?$/)
if (!match) {
throw new Error(`Invalid permission rule format: ${ruleString}`)
}
const [, toolName, ruleContent] = match
return {
toolName: toolName?.trim() || '',
ruleContent: ruleContent?.trim(),
}
}
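As an illustration, the `"ToolName(rule)"` / `"ToolName"` format accepted by this parser behaves as follows (standalone copy of the logic above, not an import):

```typescript
// Standalone copy of the permission-rule parser above, for illustration.
function parseRule(ruleString: string): { toolName: string; ruleContent?: string } {
  // Matches "ToolName" or "ToolName(rule content)"
  const match = ruleString.match(/^([^(]+)(?:\(([^)]*)\))?$/)
  if (!match) {
    throw new Error(`Invalid permission rule format: ${ruleString}`)
  }
  const [, toolName, ruleContent] = match
  return {
    toolName: toolName?.trim() || '',
    ruleContent: ruleContent?.trim(),
  }
}
```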
/**
* Load settings from a single file
*/
function loadSettingsFile(filePath: string): SandboxSettings | null {
try {
if (!fs.existsSync(filePath)) {
return null
}
const content = fs.readFileSync(filePath, 'utf-8')
if (content.trim() === '') {
return null
}
const data = JSON.parse(content)
// Validate with Zod
const result = SandboxSettingsSchema.safeParse(data)
if (!result.success) {
// Loud error to stderr
console.error(`\n❌ Settings validation error in: ${filePath}`)
console.error('Details:')
result.error.issues.forEach(issue => {
const pathStr = issue.path.length > 0 ? issue.path.join('.') : 'root'
console.error(` - ${pathStr}: ${issue.message}`)
})
console.error('')
// Also log for debugging
logForDebugging(
`Validation failed for ${filePath}: ${result.error.message}`,
{ level: 'error' },
)
return null
}
logForDebugging(
`Loaded from ${filePath}: ${JSON.stringify(result.data, null, 2)}`,
)
return result.data
} catch (error) {
// Loud error to stderr
console.error(`\n❌ Failed to parse settings file: ${filePath}`)
if (error instanceof SyntaxError) {
console.error(`JSON syntax error: ${error.message}`)
} else {
console.error(
`Error: ${error instanceof Error ? error.message : String(error)}`,
)
}
console.error('')
// Also log for debugging
logForDebugging(`Failed to read ${filePath}: ${error}`, {
level: 'error',
})
return null
}
}
/**
* Merge two arrays and deduplicate
*/
function mergeArrays<T>(arr1: T[], arr2: T[]): T[] {
return Array.from(new Set([...arr1, ...arr2]))
}
/**
* Deep merge two settings objects using lodash mergeWith
* Arrays are concatenated and deduplicated
* Objects are recursively deep merged
*/
function mergeSettings(
base: SandboxSettings,
override: SandboxSettings,
): SandboxSettings {
return mergeWith(base, override, (objValue: unknown, srcValue: unknown) => {
// Custom merge for arrays: concatenate and deduplicate
if (Array.isArray(objValue) && Array.isArray(srcValue)) {
return mergeArrays(objValue, srcValue)
}
// For non-arrays, let lodash handle the default deep merge behavior
return undefined
})
}
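The merge semantics above (arrays concatenate and deduplicate; plain objects merge recursively; scalars are overridden) can be sketched without lodash. This is an illustration of the behavior, not the implementation used here:

```typescript
// Dependency-free sketch of the settings merge behavior described above.
type Json = { [k: string]: unknown }

function mergeSettingsSketch(base: Json, override: Json): Json {
  const out: Json = { ...base }
  for (const [key, src] of Object.entries(override)) {
    const cur = out[key]
    if (Array.isArray(cur) && Array.isArray(src)) {
      // Arrays: concatenate and deduplicate
      out[key] = Array.from(new Set([...cur, ...src]))
    } else if (
      cur && src && typeof cur === 'object' && typeof src === 'object' &&
      !Array.isArray(cur) && !Array.isArray(src)
    ) {
      // Plain objects: recurse
      out[key] = mergeSettingsSketch(cur as Json, src as Json)
    } else {
      // Scalars (and mismatched shapes): override wins
      out[key] = src
    }
  }
  return out
}
```

Note that lodash's `mergeWith` mutates its first argument, whereas this sketch returns a fresh object.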
/**
* Reset the session-level settings cache
*/
export function resetSettingsCache(): void {
sessionSettingsCache = null
}
/**
* Get settings for a specific source
*/
export function getSettingsForSource(
source: SettingSource,
): SandboxSettings | null {
const settingsFilePath = getSettingsFilePathForSource(source)
if (!settingsFilePath) {
return null
}
return loadSettingsFile(settingsFilePath)
}
/**
* Update settings for a specific source
*/
export function updateSettingsForSource(
source: EditableSettingSource,
settings: SandboxSettings,
): void {
const filePath = getSettingsFilePathForSource(source)
if (!filePath) {
return
}
try {
// Create the directory if needed
const dir = path.dirname(filePath)
if (!fs.existsSync(dir)) {
fs.mkdirSync(dir, { recursive: true })
}
// Load existing settings
const existingSettings = loadSettingsFile(filePath) || {}
// Merge with new settings
const updatedSettings = mergeSettings(existingSettings, settings)
// Write to file
fs.writeFileSync(filePath, JSON.stringify(updatedSettings, null, 2) + '\n')
// Invalidate cache
resetSettingsCache()
} catch (error) {
logForDebugging(`Failed to write ${filePath}: ${error}`, {
level: 'error',
})
}
}
/**
* Load settings from disk without using cache
*/
function loadSettingsFromDisk(): SandboxSettings {
// Define setting sources in priority order (lowest to highest)
const sources: SettingSource[] = [
'userSettings',
'projectSettings',
'localSettings',
'policySettings',
]
// Add flagSettings if a path was provided
if (flagSettingsPath) {
sources.push('flagSettings')
}
let merged: SandboxSettings = {}
// Merge settings from each source
for (const source of sources) {
const settings = getSettingsForSource(source)
if (settings) {
merged = mergeSettings(merged, settings)
}
}
logForDebugging(`Final merged settings: ${JSON.stringify(merged, null, 2)}`)
return merged
}
/**
* Get merged settings from all sources with session-level caching
* Merges in priority order:
* 1. User settings (~/.claude/settings.json)
* 2. Project settings ($CWD/.claude/settings.json)
* 3. Local settings ($CWD/.claude/settings.local.json)
* 4. Policy settings (platform-specific managed settings)
* 5. Flag settings (from --settings flag if provided)
*
* Settings are cached for the session. Call resetSettingsCache() to invalidate.
*/
export function getSettings(): SandboxSettings {
// Use cached result if available
if (sessionSettingsCache !== null) {
return sessionSettingsCache
}
// Load from disk and cache the result
sessionSettingsCache = loadSettingsFromDisk()
return sessionSettingsCache
}
/**
* Get the filesystem implementation (for dependency injection/testing)
*/
export function getFsImplementation() {
return fs
}


@@ -1,14 +1,16 @@
import { describe, it, expect, beforeAll, afterAll } from 'bun:test'
import { spawnSync, spawn } from 'node:child_process'
import { existsSync, unlinkSync, mkdirSync, rmSync, statSync, readFileSync, writeFileSync, readdirSync } from 'node:fs'
import { tmpdir } from 'node:os'
import { join, dirname } from 'node:path'
import { fileURLToPath } from 'node:url'
import { getPlatform } from '../../src/utils/platform.js'
import { SandboxManager } from '../../src/sandbox/sandbox-manager.js'
import type { SandboxRuntimeConfig } from '../../src/sandbox/sandbox-config.js'
import { generateSeccompFilter } from '../../src/sandbox/generate-seccomp-filter.js'
/**
* Create a minimal test configuration for the sandbox with example.com allowed
*/
function createTestConfig(): SandboxRuntimeConfig {
return {
@@ -17,6 +19,7 @@ function createTestConfig(): SandboxRuntimeConfig {
deniedDomains: [],
},
filesystem: {
allowRead: [],
denyRead: [],
allowWrite: [],
denyWrite: [],
@@ -24,13 +27,93 @@ function createTestConfig(): SandboxRuntimeConfig {
}
}
function skipIfNotLinux(): boolean {
return getPlatform() !== 'linux'
}
// ============================================================================
// Helper Functions for BPF File Management
// ============================================================================
/**
* Temporarily hide BPF files to force JIT compilation
* Returns a map of file paths to their contents for later restoration
*/
function hideBpfFiles(): Map<string, Buffer> {
const backups = new Map<string, Buffer>()
// Hide BPF files from both vendor/ (source) and dist/vendor/ (runtime)
const seccompDirs = [
join(process.cwd(), 'vendor', 'seccomp'),
join(process.cwd(), 'dist', 'vendor', 'seccomp'),
]
for (const vendorSeccompDir of seccompDirs) {
if (!existsSync(vendorSeccompDir)) {
continue
}
// Find all BPF files in seccomp/*/unix-block.bpf
const archDirs = readdirSync(vendorSeccompDir, { withFileTypes: true })
.filter(dirent => dirent.isDirectory())
.map(dirent => dirent.name)
for (const arch of archDirs) {
const bpfPath = join(vendorSeccompDir, arch, 'unix-block.bpf')
if (existsSync(bpfPath)) {
// Backup file contents
const contents = readFileSync(bpfPath)
backups.set(bpfPath, contents)
// Delete the file
unlinkSync(bpfPath)
console.log(`Hidden BPF file: ${bpfPath}`)
}
}
}
return backups
}
/**
* Restore BPF files from backups
*/
function restoreBpfFiles(backups: Map<string, Buffer>): void {
for (const [path, contents] of backups.entries()) {
writeFileSync(path, contents)
console.log(`Restored BPF file: ${path}`)
}
}
/**
* Assert that the sandbox is using pre-compiled BPF files
*/
function assertPrecompiledBpfInUse(): void {
const bpfPath = generateSeccompFilter()
expect(bpfPath).toBeTruthy()
expect(bpfPath).toContain('/vendor/seccomp/')
expect(existsSync(bpfPath!)).toBe(true)
console.log(`✓ Verified using pre-compiled BPF: ${bpfPath}`)
}
/**
* Assert that the sandbox is using JIT-compiled BPF files
*/
function assertJitBpfInUse(): void {
const bpfPath = generateSeccompFilter()
expect(bpfPath).toBeTruthy()
expect(bpfPath).toContain('/tmp/claude-seccomp-')
expect(bpfPath).toContain('.bpf')
expect(existsSync(bpfPath!)).toBe(true)
// Verify it was recently created (within last 10 seconds)
const stats = statSync(bpfPath!)
const age = Date.now() - stats.mtimeMs
expect(age).toBeLessThan(10000)
console.log(`✓ Verified using JIT-compiled BPF: ${bpfPath}`)
}
// ============================================================================
@@ -44,12 +127,10 @@ describe('Sandbox Integration Tests', () => {
let socketServer: any = null
beforeAll(async () => {
if (skipIfNotLinux()) {
return
}
// Create test directory
if (!existsSync(TEST_DIR)) {
mkdirSync(TEST_DIR, { recursive: true })
@@ -79,12 +160,12 @@ describe('Sandbox Integration Tests', () => {
socketServer.on('error', reject)
})
// Initialize sandbox
await SandboxManager.initialize(createTestConfig())
})
afterAll(async () => {
if (skipIfNotLinux()) {
return
}
@@ -107,265 +188,573 @@ describe('Sandbox Integration Tests', () => {
await SandboxManager.reset()
})
// ==========================================================================
// Scenario 1: With Pre-compiled BPF
// ==========================================================================
describe('With Pre-compiled BPF', () => {
beforeAll(() => {
if (skipIfNotLinux()) {
return
}
console.log('\n=== Testing with Pre-compiled BPF ===')
assertPrecompiledBpfInUse()
})
describe('Unix Socket Restrictions', () => {
it('should block Unix socket connections with seccomp', async () => {
if (skipIfNotLinux()) {
return
}
// Wrap command with sandbox
const command = await SandboxManager.wrapWithSandbox(
`echo "Test message" | nc -U ${TEST_SOCKET_PATH}`
)
const result = spawnSync(command, {
shell: true,
encoding: 'utf8',
timeout: 5000,
})
// Should fail due to seccomp filter blocking socket creation
const output = (result.stderr || result.stdout || '').toLowerCase()
// Different netcat versions report the error differently
const hasExpectedError = output.includes('operation not permitted') ||
output.includes('create unix socket failed')
expect(hasExpectedError).toBe(true)
expect(result.status).not.toBe(0)
})
})
describe('Network Restrictions', () => {
it('should block HTTP requests to non-allowlisted domains', async () => {
if (skipIfNotLinux()) {
return
}
const command = await SandboxManager.wrapWithSandbox(
'curl -s http://blocked-domain.example'
)
const result = spawnSync(command, {
shell: true,
encoding: 'utf8',
timeout: 5000,
})
const output = (result.stderr || result.stdout || '').toLowerCase()
expect(output).toContain('blocked by network allowlist')
})
it('should block HTTP requests to anthropic.com (not in allowlist)', async () => {
if (skipIfNotLinux()) {
return
}
// Use --max-time to timeout quickly, and --show-error to see proxy errors
const command = await SandboxManager.wrapWithSandbox(
'curl -s --show-error --max-time 2 https://www.anthropic.com'
)
const result = spawnSync(command, {
shell: true,
encoding: 'utf8',
timeout: 3000,
})
// The proxy blocks the connection, causing curl to timeout or fail
// Check that the request did not succeed
const output = (result.stderr || result.stdout || '').toLowerCase()
const didFail = result.status !== 0 // a null status (timeout/signal) is also !== 0
expect(didFail).toBe(true)
// The output should either contain an error or be empty (timeout)
// It should NOT contain successful HTML response
expect(output).not.toContain('<!doctype html')
expect(output).not.toContain('<html')
})
it('should allow HTTP requests to allowlisted domains', async () => {
if (skipIfNotLinux()) {
return
}
// Note: example.com should be in the allowlist via .claude/settings.json
const command = await SandboxManager.wrapWithSandbox(
'curl -s http://example.com'
)
const result = spawnSync(command, {
shell: true,
encoding: 'utf8',
timeout: 10000,
})
// Should succeed and return HTML
const output = result.stdout || ''
expect(result.status).toBe(0)
expect(output).toContain('Example Domain')
})
})
describe('Filesystem Restrictions', () => {
it('should block writes outside current working directory', async () => {
if (skipIfNotLinux()) {
return
}
const testFile = join(tmpdir(), 'sandbox-blocked-write.txt')
// Clean up if exists
if (existsSync(testFile)) {
unlinkSync(testFile)
}
const command = await SandboxManager.wrapWithSandbox(
`echo "should fail" > ${testFile}`
)
const result = spawnSync(command, {
shell: true,
encoding: 'utf8',
cwd: TEST_DIR,
timeout: 5000,
})
// Should fail with read-only file system error
const output = (result.stderr || result.stdout || '').toLowerCase()
expect(output).toContain('read-only file system')
expect(existsSync(testFile)).toBe(false)
})
it('should allow writes within current working directory', async () => {
if (skipIfNotLinux()) {
return
}
// Ensure test directory exists
if (!existsSync(TEST_DIR)) {
mkdirSync(TEST_DIR, { recursive: true })
}
const testFile = join(TEST_DIR, 'allowed-write.txt')
const testContent = 'test content from sandbox'
// Clean up if exists
if (existsSync(testFile)) {
unlinkSync(testFile)
}
const command = await SandboxManager.wrapWithSandbox(
`echo "${testContent}" > allowed-write.txt`
)
const result = spawnSync(command, {
shell: true,
encoding: 'utf8',
cwd: TEST_DIR,
timeout: 5000,
})
// Debug output if failed
if (result.status !== 0) {
console.error('Command failed:', command)
console.error('Status:', result.status)
console.error('Stdout:', result.stdout)
console.error('Stderr:', result.stderr)
console.error('CWD:', TEST_DIR)
console.error('Test file path:', testFile)
}
// Should succeed
expect(result.status).toBe(0)
expect(existsSync(testFile)).toBe(true)
// Verify content
const content = Bun.file(testFile).text()
expect(await content).toContain(testContent)
// Clean up
if (existsSync(testFile)) {
unlinkSync(testFile)
}
})
it('should allow reads from anywhere', async () => {
if (skipIfNotLinux()) {
return
}
// Try reading from home directory
const command = await SandboxManager.wrapWithSandbox(
'head -n 5 ~/.bashrc'
)
const result = spawnSync(command, {
shell: true,
encoding: 'utf8',
timeout: 5000,
})
// Should succeed (assuming .bashrc exists)
expect(result.status).toBe(0)
// If .bashrc exists, should have some content
if (existsSync(`${process.env.HOME}/.bashrc`)) {
expect(result.stdout).toBeTruthy()
}
})
})
describe('Command Execution', () => {
it('should execute basic commands successfully', async () => {
if (skipIfNotLinux()) {
return
}
const command = await SandboxManager.wrapWithSandbox('echo "Hello from sandbox"')
const result = spawnSync(command, {
shell: true,
encoding: 'utf8',
timeout: 5000,
})
expect(result.status).toBe(0)
expect(result.stdout).toContain('Hello from sandbox')
})
it('should handle complex command pipelines', async () => {
if (skipIfNotLinux()) {
return
}
const command = await SandboxManager.wrapWithSandbox(
'echo "line1\nline2\nline3" | grep line2'
)
const result = spawnSync(command, {
shell: true,
encoding: 'utf8',
timeout: 5000,
})
expect(result.status).toBe(0)
expect(result.stdout).toContain('line2')
expect(result.stdout).not.toContain('line1')
})
})
})
// ==========================================================================
// Scenario 2: With JIT-compiled BPF
// ==========================================================================
describe('With JIT-compiled BPF', () => {
let bpfBackups: Map<string, Buffer> = new Map()
beforeAll(async () => {
if (skipIfNotLinux()) {
return
}
console.log('\n=== Testing with JIT-compiled BPF ===')
// Hide pre-compiled BPF files to force JIT compilation
bpfBackups = hideBpfFiles()
// Reset sandbox to clear any cached BPF paths
await SandboxManager.reset()
await SandboxManager.initialize(createTestConfig())
// Verify JIT mode is active
assertJitBpfInUse()
})
afterAll(async () => {
if (skipIfNotLinux()) {
return
}
// Restore pre-compiled BPF files
restoreBpfFiles(bpfBackups)
// Reset sandbox again to restore normal behavior
await SandboxManager.reset()
await SandboxManager.initialize(createTestConfig())
})
describe('Pre-generated BPF Files', () => {
it('should generate BPF files at runtime when pre-compiled files are missing', async () => {
if (skipIfNotLinux()) {
return
}
// Generate BPF filter and verify it's in /tmp/
const bpfPath = generateSeccompFilter()
expect(bpfPath).toBeTruthy()
expect(bpfPath).toContain('/tmp/claude-seccomp-')
expect(existsSync(bpfPath!)).toBe(true)
console.log(`✓ Generated runtime BPF file: ${bpfPath}`)
// Verify it's a reasonable size (should be similar to pre-compiled)
const stats = statSync(bpfPath!)
expect(stats.size).toBeGreaterThan(50)
expect(stats.size).toBeLessThan(200)
console.log(`✓ BPF file is ${stats.size} bytes`)
})
})
describe('Unix Socket Restrictions', () => {
it('should block Unix socket connections with seccomp', async () => {
if (skipIfNotLinux()) {
return
}
// Wrap command with sandbox
const command = await SandboxManager.wrapWithSandbox(
`echo "Test message" | nc -U ${TEST_SOCKET_PATH}`
)
const result = spawnSync(command, {
shell: true,
encoding: 'utf8',
timeout: 5000,
})
// Should fail due to seccomp filter blocking socket creation
const output = (result.stderr || result.stdout || '').toLowerCase()
// Different netcat versions report the error differently
const hasExpectedError = output.includes('operation not permitted') ||
output.includes('create unix socket failed')
expect(hasExpectedError).toBe(true)
expect(result.status).not.toBe(0)
})
})
describe('Network Restrictions', () => {
it('should block HTTP requests to non-allowlisted domains', async () => {
if (skipIfNotLinux()) {
return
}
const command = await SandboxManager.wrapWithSandbox(
'curl -s http://blocked-domain.example'
)
const result = spawnSync(command, {
shell: true,
encoding: 'utf8',
timeout: 5000,
})
const output = (result.stderr || result.stdout || '').toLowerCase()
expect(output).toContain('blocked by network allowlist')
})
expect(result.status).toBe(0)
expect(result.stdout).toContain('line2')
expect(result.stdout).not.toContain('line1')
it('should block HTTP requests to anthropic.com (not in allowlist)', async () => {
if (skipIfNotLinux()) {
return
}
// Use --max-time to timeout quickly, and --show-error to see proxy errors
const command = await SandboxManager.wrapWithSandbox(
'curl -s --show-error --max-time 2 https://www.anthropic.com'
)
const result = spawnSync(command, {
shell: true,
encoding: 'utf8',
timeout: 3000,
})
// The proxy blocks the connection, causing curl to timeout or fail
// Check that the request did not succeed
const output = (result.stderr || result.stdout || '').toLowerCase()
const didFail = result.status !== 0 // a null status (timeout/signal) is also !== 0
expect(didFail).toBe(true)
// The output should either contain an error or be empty (timeout)
// It should NOT contain successful HTML response
expect(output).not.toContain('<!doctype html')
expect(output).not.toContain('<html')
})
it('should allow HTTP requests to allowlisted domains', async () => {
if (skipIfNotLinux()) {
return
}
// Note: example.com should be in the allowlist via .claude/settings.json
const command = await SandboxManager.wrapWithSandbox(
'curl -s http://example.com'
)
const result = spawnSync(command, {
shell: true,
encoding: 'utf8',
timeout: 10000,
})
// Should succeed and return HTML
const output = result.stdout || ''
expect(result.status).toBe(0)
expect(output).toContain('Example Domain')
})
})
describe('Filesystem Restrictions', () => {
it('should block writes outside current working directory', async () => {
if (skipIfNotLinux()) {
return
}
const testFile = join(tmpdir(), 'sandbox-blocked-write-jit.txt')
// Clean up if exists
if (existsSync(testFile)) {
unlinkSync(testFile)
}
const command = await SandboxManager.wrapWithSandbox(
`echo "should fail" > ${testFile}`
)
const result = spawnSync(command, {
shell: true,
encoding: 'utf8',
cwd: TEST_DIR,
timeout: 5000,
})
// Should fail with read-only file system error
const output = (result.stderr || result.stdout || '').toLowerCase()
expect(output).toContain('read-only file system')
expect(existsSync(testFile)).toBe(false)
})
it('should allow writes within current working directory', async () => {
if (skipIfNotLinux()) {
return
}
// Ensure test directory exists
if (!existsSync(TEST_DIR)) {
mkdirSync(TEST_DIR, { recursive: true })
}
const testFile = join(TEST_DIR, 'allowed-write-jit.txt')
const testContent = 'test content from sandbox with JIT BPF'
// Clean up if exists
if (existsSync(testFile)) {
unlinkSync(testFile)
}
const command = await SandboxManager.wrapWithSandbox(
`echo "${testContent}" > allowed-write-jit.txt`
)
const result = spawnSync(command, {
shell: true,
encoding: 'utf8',
cwd: TEST_DIR,
timeout: 5000,
})
// Debug output if failed
if (result.status !== 0) {
console.error('Command failed:', command)
console.error('Status:', result.status)
console.error('Stdout:', result.stdout)
console.error('Stderr:', result.stderr)
console.error('CWD:', TEST_DIR)
console.error('Test file path:', testFile)
}
// Should succeed
expect(result.status).toBe(0)
expect(existsSync(testFile)).toBe(true)
// Verify content
const content = Bun.file(testFile).text()
expect(await content).toContain(testContent)
// Clean up
if (existsSync(testFile)) {
unlinkSync(testFile)
}
})
it('should allow reads from anywhere', async () => {
if (skipIfNotLinux()) {
return
}
// Try reading from home directory
const command = await SandboxManager.wrapWithSandbox(
'head -n 5 ~/.bashrc'
)
const result = spawnSync(command, {
shell: true,
encoding: 'utf8',
timeout: 5000,
})
// Should succeed (assuming .bashrc exists)
expect(result.status).toBe(0)
// If .bashrc exists, should have some content
if (existsSync(`${process.env.HOME}/.bashrc`)) {
expect(result.stdout).toBeTruthy()
}
})
})
describe('Command Execution', () => {
it('should execute basic commands successfully', async () => {
if (skipIfNotLinux()) {
return
}
const command = await SandboxManager.wrapWithSandbox('echo "Hello from sandbox with JIT BPF"')
const result = spawnSync(command, {
shell: true,
encoding: 'utf8',
timeout: 5000,
})
expect(result.status).toBe(0)
expect(result.stdout).toContain('Hello from sandbox with JIT BPF')
})
it('should handle complex command pipelines', async () => {
if (skipIfNotLinux()) {
return
}
const command = await SandboxManager.wrapWithSandbox(
'echo "line1\nline2\nline3" | grep line2'
)
const result = spawnSync(command, {
shell: true,
encoding: 'utf8',
timeout: 5000,
})
expect(result.status).toBe(0)
expect(result.stdout).toContain('line2')
expect(result.stdout).not.toContain('line1')
})
})
})
})


@@ -0,0 +1,595 @@
import { describe, it, expect, beforeAll, afterAll } from 'bun:test'
import { spawnSync } from 'node:child_process'
import { existsSync, statSync } from 'node:fs'
import { tmpdir } from 'node:os'
import { join } from 'node:path'
import { getPlatform } from '../../src/utils/platform.js'
import {
generateSeccompFilter,
cleanupSeccompFilter,
hasSeccompDependenciesSync,
getApplySeccompExecPath,
} from '../../src/sandbox/generate-seccomp-filter.js'
import {
wrapCommandWithSandboxLinux,
hasLinuxSandboxDependenciesSync,
} from '../../src/sandbox/linux-sandbox-utils.js'
function skipIfNotLinux(): boolean {
return getPlatform() !== 'linux'
}
function skipIfNotAnt(): boolean {
return process.env.USER_TYPE !== 'ant'
}
describe('Seccomp Dependencies', () => {
it('should check for seccomp dependencies', () => {
if (skipIfNotLinux()) {
return
}
const hasDeps = hasSeccompDependenciesSync()
expect(typeof hasDeps).toBe('boolean')
// If we have dependencies, we should have both compiler and libseccomp
if (hasDeps) {
const gccResult = spawnSync('which', ['gcc'], { stdio: 'ignore' })
const clangResult = spawnSync('which', ['clang'], { stdio: 'ignore' })
expect(gccResult.status === 0 || clangResult.status === 0).toBe(true)
}
})
it('should check for Linux sandbox dependencies', () => {
if (skipIfNotLinux()) {
return
}
const hasDeps = hasLinuxSandboxDependenciesSync()
expect(typeof hasDeps).toBe('boolean')
// Should always check for bwrap and socat
if (hasDeps) {
const bwrapResult = spawnSync('which', ['bwrap'], { stdio: 'ignore' })
const socatResult = spawnSync('which', ['socat'], { stdio: 'ignore' })
expect(bwrapResult.status).toBe(0)
expect(socatResult.status).toBe(0)
// For ANT users, should also check seccomp dependencies
if (process.env.USER_TYPE === 'ant') {
expect(hasSeccompDependenciesSync()).toBe(true)
}
}
})
it('should be memoized to avoid repeated checks', () => {
if (skipIfNotLinux()) {
return
}
// Call multiple times - should be fast due to memoization
const result1 = hasSeccompDependenciesSync()
const result2 = hasSeccompDependenciesSync()
const result3 = hasSeccompDependenciesSync()
expect(result1).toBe(result2)
expect(result2).toBe(result3)
})
})
describe('Seccomp Filter Generation', () => {
let filterPath: string | null = null
const generatedFilters: string[] = []
afterAll(() => {
// Clean up all generated filter files
for (const path of generatedFilters) {
try {
cleanupSeccompFilter(path)
} catch {
// Ignore cleanup errors
}
}
})
it('should generate a valid BPF filter file', () => {
if (skipIfNotLinux() || skipIfNotAnt()) {
return
}
if (!hasSeccompDependenciesSync()) {
return
}
filterPath = generateSeccompFilter()
if (filterPath) {
generatedFilters.push(filterPath)
}
expect(filterPath).toBeTruthy()
expect(filterPath).toMatch(/\.bpf$/)
expect(filterPath).toContain(tmpdir())
// Verify the file exists
expect(existsSync(filterPath!)).toBe(true)
// Verify the file has content (BPF bytecode)
const stats = statSync(filterPath!)
expect(stats.size).toBeGreaterThan(0)
// BPF programs should be a multiple of 8 bytes (struct sock_filter is 8 bytes)
expect(stats.size % 8).toBe(0)
})
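The multiple-of-8 assertion above follows from the kernel's BPF instruction layout: each instruction is a `struct sock_filter` (u16 `code`, u8 `jt`, u8 `jf`, u32 `k`), which packs to exactly 8 bytes. A sketch of encoding one instruction, assuming a little-endian host (seccomp filters are written in native byte order):

```typescript
// Sketch: pack one struct sock_filter into its 8-byte wire layout.
// encodeSockFilter is an illustrative helper, not part of this module.
function encodeSockFilter(code: number, jt: number, jf: number, k: number): Uint8Array {
  const buf = new ArrayBuffer(8)
  const view = new DataView(buf)
  view.setUint16(0, code, true) // u16 code: opcode, e.g. BPF_RET | BPF_K
  view.setUint8(2, jt)          // u8 jt: jump-if-true offset
  view.setUint8(3, jf)          // u8 jf: jump-if-false offset
  view.setUint32(4, k, true)    // u32 k: immediate operand
  return new Uint8Array(buf)
}

// A one-instruction program: return SECCOMP_RET_ALLOW (0x7fff0000)
const insn = encodeSockFilter(0x06 /* BPF_RET | BPF_K */, 0, 0, 0x7fff0000)
console.log(insn.length) // 8 — so any valid program's size is a multiple of 8
```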
it('should generate unique filter files on each call', () => {
if (skipIfNotLinux() || skipIfNotAnt()) {
return
}
if (!hasSeccompDependenciesSync()) {
return
}
const filter1 = generateSeccompFilter()
const filter2 = generateSeccompFilter()
if (filter1) generatedFilters.push(filter1)
if (filter2) generatedFilters.push(filter2)
expect(filter1).toBeTruthy()
expect(filter2).toBeTruthy()
// Should generate different filenames (timestamped)
expect(filter1).not.toBe(filter2)
})
it('should return null when dependencies are missing', () => {
if (skipIfNotLinux() || skipIfNotAnt()) {
return
}
if (hasSeccompDependenciesSync()) {
// Can't test this case if dependencies are available
return
}
const filter = generateSeccompFilter()
expect(filter).toBeNull()
})
it('should clean up filter files', () => {
if (skipIfNotLinux() || skipIfNotAnt()) {
return
}
if (!hasSeccompDependenciesSync()) {
return
}
const filter = generateSeccompFilter()
expect(filter).toBeTruthy()
expect(existsSync(filter!)).toBe(true)
cleanupSeccompFilter(filter!)
expect(existsSync(filter!)).toBe(false)
})
it('should handle cleanup of non-existent files gracefully', () => {
if (skipIfNotLinux()) {
return
}
const fakePath = '/tmp/nonexistent-filter.bpf'
expect(() => cleanupSeccompFilter(fakePath)).not.toThrow()
})
})
describe('Apply Seccomp Helper', () => {
it('should compile the apply-seccomp-and-exec helper', () => {
if (skipIfNotLinux() || skipIfNotAnt()) {
return
}
if (!hasSeccompDependenciesSync()) {
return
}
const helperPath = getApplySeccompExecPath()
expect(helperPath).toBeTruthy()
// Verify the file exists and is executable
expect(existsSync(helperPath!)).toBe(true)
const stats = statSync(helperPath!)
expect(stats.size).toBeGreaterThan(0)
// Check if file is executable (Unix permission check)
const mode = stats.mode
const isExecutable = (mode & 0o111) !== 0
expect(isExecutable).toBe(true)
})
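The `(mode & 0o111)` check above tests all three execute bits in one mask. A minimal sketch of the same logic on sample `st_mode` values:

```typescript
// 0o111 masks the execute bits: owner (0o100), group (0o010), other (0o001).
// Any one of them set means the file is executable by someone.
function isExecutable(mode: number): boolean {
  return (mode & 0o111) !== 0
}

console.log(isExecutable(0o100755)) // true:  -rwxr-xr-x regular file
console.log(isExecutable(0o100644)) // false: -rw-r--r-- has no execute bits
console.log(isExecutable(0o100100)) // true:  owner-only execute still counts
```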
it('should cache compiled helper binary', () => {
if (skipIfNotLinux() || skipIfNotAnt()) {
return
}
if (!hasSeccompDependenciesSync()) {
return
}
// Call multiple times - should return same cached path
const helper1 = getApplySeccompExecPath()
const helper2 = getApplySeccompExecPath()
expect(helper1).toBe(helper2)
})
it('should store helper in cache directory', () => {
if (skipIfNotLinux() || skipIfNotAnt()) {
return
}
if (!hasSeccompDependenciesSync()) {
return
}
const helperPath = getApplySeccompExecPath()
expect(helperPath).toBeTruthy()
const cacheDir = join(tmpdir(), 'claude', 'seccomp-cache')
expect(helperPath).toContain(cacheDir)
expect(helperPath).toContain('apply-seccomp-and-exec')
})
})
describe('USER_TYPE Gating', () => {
it('should only generate seccomp filters for ANT users', () => {
if (skipIfNotLinux()) {
return
}
if (!hasSeccompDependenciesSync()) {
return
}
if (process.env.USER_TYPE === 'ant') {
// ANT users should get seccomp filters
const filter = generateSeccompFilter()
expect(filter).toBeTruthy()
if (filter) {
cleanupSeccompFilter(filter)
}
} else {
// Non-ANT users - filter generation should still work for testing
// but won't be used in production sandbox commands
expect(true).toBe(true)
}
})
it('should only apply seccomp in sandbox for ANT users', async () => {
if (skipIfNotLinux()) {
return
}
if (!hasLinuxSandboxDependenciesSync()) {
return
}
const testCommand = 'echo "test"'
const wrappedCommand = await wrapCommandWithSandboxLinux({
command: testCommand,
hasNetworkRestrictions: false,
hasFilesystemRestrictions: false,
})
if (process.env.USER_TYPE === 'ant' && hasSeccompDependenciesSync()) {
// ANT users should have seccomp helper in command
expect(wrappedCommand).toContain('apply-seccomp-and-exec')
} else {
// Non-ANT users should not have seccomp
expect(wrappedCommand).not.toContain('apply-seccomp-and-exec')
}
})
})
describe('Socket Filtering Behavior', () => {
  let filterPath: string | null = null

  beforeAll(() => {
    if (skipIfNotLinux() || skipIfNotAnt()) {
      return
    }
    if (!hasSeccompDependenciesSync()) {
      return
    }
    filterPath = generateSeccompFilter()
  })

  afterAll(() => {
    if (filterPath) {
      cleanupSeccompFilter(filterPath)
    }
  })

  it('should block Unix socket creation (SOCK_STREAM)', async () => {
    if (skipIfNotLinux() || skipIfNotAnt() || !filterPath) {
      return
    }
    const testCommand = `python3 -c "import socket; s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM); print('Unix socket created')"`
    const wrappedCommand = await wrapCommandWithSandboxLinux({
      command: testCommand,
      hasNetworkRestrictions: false,
      hasFilesystemRestrictions: false,
    })
    const result = spawnSync('bash', ['-c', wrappedCommand], {
      stdio: 'pipe',
      timeout: 5000,
    })
    expect(result.status).not.toBe(0)
    const stderr = result.stderr?.toString() || ''
    expect(stderr.toLowerCase()).toMatch(
      /permission denied|operation not permitted/,
    )
  })

  it('should block Unix socket creation (SOCK_DGRAM)', async () => {
    if (skipIfNotLinux() || skipIfNotAnt() || !filterPath) {
      return
    }
    const testCommand = `python3 -c "import socket; s = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM); print('Unix datagram created')"`
    const wrappedCommand = await wrapCommandWithSandboxLinux({
      command: testCommand,
      hasNetworkRestrictions: false,
      hasFilesystemRestrictions: false,
    })
    const result = spawnSync('bash', ['-c', wrappedCommand], {
      stdio: 'pipe',
      timeout: 5000,
    })
    expect(result.status).not.toBe(0)
    const stderr = result.stderr?.toString() || ''
    expect(stderr.toLowerCase()).toMatch(
      /permission denied|operation not permitted/,
    )
  })

  it('should allow TCP socket creation (IPv4)', async () => {
    if (skipIfNotLinux() || skipIfNotAnt() || !filterPath) {
      return
    }
    const testCommand = `python3 -c "import socket; s = socket.socket(socket.AF_INET, socket.SOCK_STREAM); print('TCP socket created')"`
    const wrappedCommand = await wrapCommandWithSandboxLinux({
      command: testCommand,
      hasNetworkRestrictions: false,
      hasFilesystemRestrictions: false,
    })
    const result = spawnSync('bash', ['-c', wrappedCommand], {
      stdio: 'pipe',
      timeout: 5000,
    })
    expect(result.status).toBe(0)
    expect(result.stdout?.toString()).toContain('TCP socket created')
  })

  it('should allow UDP socket creation (IPv4)', async () => {
    if (skipIfNotLinux() || skipIfNotAnt() || !filterPath) {
      return
    }
    const testCommand = `python3 -c "import socket; s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM); print('UDP socket created')"`
    const wrappedCommand = await wrapCommandWithSandboxLinux({
      command: testCommand,
      hasNetworkRestrictions: false,
      hasFilesystemRestrictions: false,
    })
    const result = spawnSync('bash', ['-c', wrappedCommand], {
      stdio: 'pipe',
      timeout: 5000,
    })
    expect(result.status).toBe(0)
    expect(result.stdout?.toString()).toContain('UDP socket created')
  })

  it('should allow IPv6 socket creation', async () => {
    if (skipIfNotLinux() || skipIfNotAnt() || !filterPath) {
      return
    }
    const testCommand = `python3 -c "import socket; s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM); print('IPv6 socket created')"`
    const wrappedCommand = await wrapCommandWithSandboxLinux({
      command: testCommand,
      hasNetworkRestrictions: false,
      hasFilesystemRestrictions: false,
    })
    const result = spawnSync('bash', ['-c', wrappedCommand], {
      stdio: 'pipe',
      timeout: 5000,
    })
    expect(result.status).toBe(0)
    expect(result.stdout?.toString()).toContain('IPv6 socket created')
  })
})
describe('Two-Stage Seccomp Application', () => {
  it('should allow network infrastructure to run before filter', async () => {
    if (skipIfNotLinux() || skipIfNotAnt()) {
      return
    }
    if (!hasLinuxSandboxDependenciesSync()) {
      return
    }
    // This test verifies that the socat processes can start successfully
    // even though they use Unix sockets, because they run before the filter
    const testCommand = 'echo "test"'
    const wrappedCommand = await wrapCommandWithSandboxLinux({
      command: testCommand,
      hasNetworkRestrictions: false,
      hasFilesystemRestrictions: false,
    })
    // Command should include both socat and the seccomp helper
    if (hasSeccompDependenciesSync()) {
      expect(wrappedCommand).toContain('socat')
      expect(wrappedCommand).toContain('apply-seccomp-and-exec')
      // The socat should come before the apply-seccomp-and-exec
      const socatIndex = wrappedCommand.indexOf('socat')
      const seccompIndex = wrappedCommand.indexOf('apply-seccomp-and-exec')
      expect(socatIndex).toBeGreaterThan(-1)
      expect(seccompIndex).toBeGreaterThan(-1)
      expect(socatIndex).toBeLessThan(seccompIndex)
    }
  })

  it('should execute user command with filter applied', async () => {
    if (skipIfNotLinux() || skipIfNotAnt()) {
      return
    }
    if (!hasLinuxSandboxDependenciesSync() || !hasSeccompDependenciesSync()) {
      return
    }
    // User command tries to create Unix socket - should fail
    const testCommand = `python3 -c "import socket; socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)"`
    const wrappedCommand = await wrapCommandWithSandboxLinux({
      command: testCommand,
      hasNetworkRestrictions: false,
      hasFilesystemRestrictions: false,
    })
    const result = spawnSync('bash', ['-c', wrappedCommand], {
      stdio: 'pipe',
      timeout: 5000,
    })
    // Should fail due to seccomp filter
    expect(result.status).not.toBe(0)
  })
})
describe('Sandbox Integration', () => {
  it('should handle commands without network or filesystem restrictions', async () => {
    if (skipIfNotLinux()) {
      return
    }
    if (!hasLinuxSandboxDependenciesSync()) {
      return
    }
    const testCommand = 'echo "hello world"'
    const wrappedCommand = await wrapCommandWithSandboxLinux({
      command: testCommand,
      hasNetworkRestrictions: false,
      hasFilesystemRestrictions: false,
    })
    // Should still wrap the command even without restrictions
    expect(wrappedCommand).toBeTruthy()
    expect(typeof wrappedCommand).toBe('string')
  })

  it('should wrap commands with filesystem restrictions', async () => {
    if (skipIfNotLinux()) {
      return
    }
    if (!hasLinuxSandboxDependenciesSync()) {
      return
    }
    const testCommand = 'ls /'
    const wrappedCommand = await wrapCommandWithSandboxLinux({
      command: testCommand,
      hasNetworkRestrictions: false,
      hasFilesystemRestrictions: true,
    })
    expect(wrappedCommand).toBeTruthy()
    expect(wrappedCommand).toContain('bwrap')
  })

  it('should include seccomp for ANT users with dependencies', async () => {
    if (skipIfNotLinux()) {
      return
    }
    if (!hasLinuxSandboxDependenciesSync()) {
      return
    }
    const testCommand = 'echo "test"'
    const wrappedCommand = await wrapCommandWithSandboxLinux({
      command: testCommand,
      hasNetworkRestrictions: false,
      hasFilesystemRestrictions: false,
    })
    const isAnt = process.env.USER_TYPE === 'ant'
    const hasSeccomp = hasSeccompDependenciesSync()
    if (isAnt && hasSeccomp) {
      expect(wrappedCommand).toContain('apply-seccomp-and-exec')
    } else {
      expect(wrappedCommand).not.toContain('apply-seccomp-and-exec')
    }
  })
})
describe('Error Handling', () => {
  it('should handle cleanup errors gracefully', () => {
    if (skipIfNotLinux()) {
      return
    }
    // Try to clean up invalid paths
    expect(() => cleanupSeccompFilter('')).not.toThrow()
    expect(() => cleanupSeccompFilter('/invalid/path/filter.bpf')).not.toThrow()
    expect(() => cleanupSeccompFilter('/tmp/nonexistent.bpf')).not.toThrow()
  })

  it('should handle multiple cleanup calls on same file', () => {
    if (skipIfNotLinux() || skipIfNotAnt()) {
      return
    }
    if (!hasSeccompDependenciesSync()) {
      return
    }
    const filter = generateSeccompFilter()
    if (!filter) {
      return
    }
    cleanupSeccompFilter(filter)
    // Second cleanup should not throw
    expect(() => cleanupSeccompFilter(filter)).not.toThrow()
  })
})
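The socket-filtering tests above all wrap the same `python3 -c` probe. A standalone sketch of that probe (not part of the test suite) shows the behavior the assertions check for: outside the sandbox, AF_UNIX creation succeeds; with the unix-block filter applied, `socket()` returns EPERM, which Python surfaces as `PermissionError`, while AF_INET sockets work in both cases.

```python
import socket

results = []

# Without the seccomp filter this succeeds; with the unix-block filter
# applied, socket() fails with EPERM and Python raises PermissionError.
try:
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    results.append('unix: created')
    s.close()
except PermissionError:
    results.append('unix: blocked')

# TCP (AF_INET) sockets are allowed by the filter in either case.
t = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
results.append('tcp: created')
t.close()

print(results)
```

Running this under `apply-seccomp-and-exec.py` with the generated filter is exactly the "permission denied" path the SOCK_STREAM and SOCK_DGRAM tests assert on.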

111
vendor/seccomp-src/apply-seccomp-and-exec.py vendored Executable file

@@ -0,0 +1,111 @@
#!/usr/bin/env python3
"""
Apply seccomp filter and exec command
This helper script loads a compiled seccomp BPF filter, applies it to the
current process using prctl, and then execs the specified command. This enables
two-stage seccomp application: infrastructure code runs without the filter,
then the user command runs with the filter active.
Usage:
./apply-seccomp-and-exec.py <filter-file> -- <command> [args...]
The filter file should contain a compiled BPF program (struct sock_fprog).
"""
import sys
import os
import ctypes
import ctypes.util
# Constants
PR_SET_NO_NEW_PRIVS = 38
PR_SET_SECCOMP = 22
SECCOMP_MODE_FILTER = 2
# Define sock_filter structure (8 bytes)
class sock_filter(ctypes.Structure):
_fields_ = [
("code", ctypes.c_uint16),
("jt", ctypes.c_uint8),
("jf", ctypes.c_uint8),
("k", ctypes.c_uint32),
]
# Define sock_fprog structure
class sock_fprog(ctypes.Structure):
_fields_ = [
("len", ctypes.c_uint16),
("filter", ctypes.POINTER(sock_filter)),
]
def load_filter(path):
"""Load BPF filter from file"""
try:
with open(path, 'rb') as f:
data = f.read()
except IOError as e:
print(f"Error: Failed to open filter file {path}: {e}", file=sys.stderr)
sys.exit(1)
# Verify size is valid
filter_size = ctypes.sizeof(sock_filter)
if len(data) == 0 or len(data) % filter_size != 0:
print(f"Error: Invalid filter file size: {len(data)}", file=sys.stderr)
sys.exit(1)
# Parse filter data into array
num_filters = len(data) // filter_size
filter_array = (sock_filter * num_filters)()
ctypes.memmove(filter_array, data, len(data))
# Create fprog structure
prog = sock_fprog()
prog.len = num_filters
prog.filter = ctypes.cast(filter_array, ctypes.POINTER(sock_filter))
return prog, filter_array # Keep array alive
def main():
if len(sys.argv) < 4:
print(f"Usage: {sys.argv[0]} <filter-file> -- <command> [args...]", file=sys.stderr)
print("\nApplies seccomp filter and execs the command", file=sys.stderr)
sys.exit(1)
# Check for separator
if sys.argv[2] != '--':
print("Error: Expected '--' as second argument", file=sys.stderr)
sys.exit(1)
filter_path = sys.argv[1]
command_argv = sys.argv[3:]
# Load the BPF filter
prog, filter_array = load_filter(filter_path)
# Load libc
libc = ctypes.CDLL(ctypes.util.find_library('c'), use_errno=True)
# Set no_new_privs (required for unprivileged processes)
ret = libc.prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)
if ret < 0:
errno = ctypes.get_errno()
print(f"Error: Failed to set no_new_privs: {os.strerror(errno)}", file=sys.stderr)
sys.exit(1)
# Apply the seccomp filter
ret = libc.prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, ctypes.byref(prog), 0, 0)
if ret < 0:
errno = ctypes.get_errno()
print(f"Error: Failed to apply seccomp filter: {os.strerror(errno)}", file=sys.stderr)
sys.exit(1)
# Filter is now active - exec the command
try:
os.execvp(command_argv[0], command_argv)
except OSError as e:
print(f"Error: Failed to exec {command_argv[0]}: {e}", file=sys.stderr)
sys.exit(1)
if __name__ == '__main__':
main()
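The size validation in `load_filter` relies on `struct sock_filter` packing to exactly 8 bytes with no padding, so a valid filter file must be a non-empty multiple of 8. That invariant can be checked standalone (the helper name `looks_like_bpf_program` is hypothetical, for illustration only):

```python
import ctypes
import os
import tempfile

class sock_filter(ctypes.Structure):
    _fields_ = [
        ("code", ctypes.c_uint16),
        ("jt", ctypes.c_uint8),
        ("jf", ctypes.c_uint8),
        ("k", ctypes.c_uint32),
    ]

FILTER_SIZE = ctypes.sizeof(sock_filter)  # 2 + 1 + 1 + 4 = 8 bytes, no padding

def looks_like_bpf_program(path):
    """Same check as load_filter: non-empty and a whole number of instructions."""
    size = os.path.getsize(path)
    return size > 0 and size % FILTER_SIZE == 0

# A fake 3-instruction program passes the size check; a truncated one fails.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x00" * (3 * FILTER_SIZE))
    path = f.name
ok = looks_like_bpf_program(path)

with open(path, "wb") as f:
    f.write(b"\x00" * (3 * FILTER_SIZE - 1))
truncated_ok = looks_like_bpf_program(path)
os.unlink(path)

print(ok, truncated_ok)  # → True False
```

This is only the structural check; the helper does not (and cannot cheaply) verify that the bytes are a semantically valid BPF program — the kernel rejects invalid programs at `prctl(PR_SET_SECCOMP, ...)` time.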

97
vendor/seccomp-src/seccomp-unix-block.c vendored Normal file

@@ -0,0 +1,97 @@
/*
* Seccomp BPF filter generator to block Unix domain socket creation
*
* This program generates a seccomp-bpf filter that blocks the socket() syscall
* when called with AF_UNIX as the domain argument. This prevents creation of
* Unix domain sockets while allowing all other socket types (AF_INET, AF_INET6, etc.)
* and all other syscalls.
*
* The filter is exported in a format compatible with bubblewrap's --seccomp flag.
*
* SECURITY LIMITATION - 32-bit x86 (ia32):
* TODO: This filter does NOT block socketcall() syscall, which is a security issue
* on 32-bit x86 systems. On ia32, the socket() syscall doesn't exist - instead,
* all socket operations are multiplexed through socketcall():
* - socketcall(SYS_SOCKET, [AF_UNIX, ...]) - can bypass this filter
* - socketcall(SYS_SOCKETPAIR, [AF_UNIX, ...]) - can bypass this filter
*
* To fix this, we need to add conditional rules that:
* 1. Check if socketcall() exists on the current architecture (32-bit x86 only)
* 2. Block socketcall(SYS_SOCKET, ...) when first arg of sub-call is AF_UNIX
* 3. Block socketcall(SYS_SOCKETPAIR, ...) when first arg of sub-call is AF_UNIX
*
* This requires inspecting the arguments passed to socketcall, which is more
* complex BPF logic. For now, 32-bit x86 is not supported.
*
* Compilation:
* gcc -o seccomp-unix-block seccomp-unix-block.c -lseccomp
*
* Usage:
* ./seccomp-unix-block <output-file>
*
* Dependencies:
* - libseccomp (libseccomp-dev package on Debian/Ubuntu)
*/
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <seccomp.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <sys/types.h>
int main(int argc, char *argv[]) {
  scmp_filter_ctx ctx;
  int rc;

  if (argc != 2) {
    fprintf(stderr, "Usage: %s <output-file>\n", argv[0]);
    return 1;
  }
  const char *output_file = argv[1];

  /* Create seccomp context with default action ALLOW */
  ctx = seccomp_init(SCMP_ACT_ALLOW);
  if (ctx == NULL) {
    fprintf(stderr, "Error: Failed to initialize seccomp context\n");
    return 1;
  }

  /* Add rule to block socket(AF_UNIX, ...) */
  /* socket() syscall signature: int socket(int domain, int type, int protocol) */
  /* arg0 = domain (AF_UNIX = 1) */
  rc = seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(socket), 1,
                        SCMP_A0(SCMP_CMP_EQ, AF_UNIX));
  if (rc < 0) {
    fprintf(stderr, "Error: Failed to add seccomp rule: %s\n", strerror(-rc));
    seccomp_release(ctx);
    return 1;
  }

  /* Export the filter to a file */
  int fd = open(output_file, O_CREAT | O_WRONLY | O_TRUNC, 0600);
  if (fd < 0) {
    fprintf(stderr, "Error: Failed to open output file: %s\n", strerror(errno));
    seccomp_release(ctx);
    return 1;
  }

  rc = seccomp_export_bpf(ctx, fd);
  if (rc < 0) {
    fprintf(stderr, "Error: Failed to export seccomp filter: %s\n", strerror(-rc));
    close(fd);
    seccomp_release(ctx);
    return 1;
  }

  /* Clean up */
  close(fd);
  seccomp_release(ctx);
  return 0;
}

BIN
vendor/seccomp/arm64/unix-block.bpf vendored Normal file

Binary file not shown.

BIN
vendor/seccomp/x64/unix-block.bpf vendored Normal file

Binary file not shown.