A file-level deny listed before its ancestor directory deny in the denyRead
array was effectively wiped: the /dev/null mask landed first, then the
ancestor's tmpfs replaced it, then allowRead re-bound the project dir — the
file ended up readable.
Normalize, then sort by segment count before the mount loop. Ancestors are
processed first (tmpfs + re-binds), and descendant file masks layer on top.
User-specified order no longer matters.
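The ordering fix can be sketched as follows (function name hypothetical; the real code sorts the normalized denyRead entries before the mount loop):

```typescript
// Sketch: order deny paths so ancestors mount before descendants.
// Segment count, not string length, decides depth.
function sortBySegmentCount(paths: string[]): string[] {
  return [...paths].sort(
    (a, b) =>
      a.split("/").filter(Boolean).length - b.split("/").filter(Boolean).length,
  );
}
```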
Two fixes for Linux denyRead precedence.
The #190 reorder (denyWrite after denyRead, so .git/hooks ro-binds
survive a tmpfs over an ancestor) introduced a regression: when the same
file is in both denyRead and denyWrite, denyRead's --ro-bind /dev/null
mask now lands before denyWrite's --ro-bind <host> <host>, which undoes
the mask. Track masked files and skip those dests when emitting
denyWriteArgs — the /dev/null bind already makes them read-only.
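A minimal sketch of the skip (names hypothetical; in the real code the masked set is collected while emitting the denyRead args):

```typescript
// Sketch: remember dests already masked with /dev/null by denyRead,
// and skip them when emitting denyWrite ro-binds — re-binding the host
// path over the mask would undo the denyRead protection.
function buildDenyWriteArgs(
  denyWrite: string[],
  maskedByDenyRead: Set<string>,
): string[] {
  const args: string[] = [];
  for (const dest of denyWrite) {
    if (maskedByDenyRead.has(dest)) continue; // /dev/null mask is already read-only
    args.push("--ro-bind", dest, dest);
  }
  return args;
}
```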
Separately: a file-level denyRead was silently skipped when allowRead
covered its parent directory (startsWith check matched). Narrow the
skip to exact matches so denyRead: ['.env'] + allowRead: ['.'] keeps
the .env deny.
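The narrowed check, sketched (function name hypothetical):

```typescript
// Sketch: only skip a denyRead entry when allowRead lists the exact same
// path, not an ancestor.
// Old, too-broad check: allowReadPaths.some((a) => denyPath.startsWith(a))
function shouldSkipDenyRead(denyPath: string, allowReadPaths: string[]): boolean {
  return allowReadPaths.includes(denyPath);
}
```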
Thanks to kyuz0 (#194) for reporting both.
* Isolate seccomp workload in nested PID namespace and block io_uring
apply-seccomp now creates a nested user+PID+mount namespace before applying
the seccomp filter. The user command runs as PID 2 under a non-dumpable PID 1
reaper, with /proc remounted so only the inner process tree is visible. This
prevents the sandboxed command from ptracing or patching the unfiltered bwrap
init, bash wrapper, or socat helpers via /proc/N/mem, regardless of the host's
kernel.yama.ptrace_scope setting. Namespace setup failure aborts rather than
silently degrading.
The BPF filter now also blocks io_uring_setup/enter/register. IORING_OP_SOCKET
(Linux 5.19+) creates sockets without going through socket(), and seccomp
cannot inspect SQEs in the shared ring, so denying ring creation entirely is
the only safe option.
The filter generator now accepts an optional target-arch argument so a single
builder can emit both x64 and arm64 filters. Prebuilt binaries and filters are
regenerated for both architectures.
* Pass CAP_SYS_ADMIN to apply-seccomp and clear ambient caps before exec
apply-seccomp needs CAP_SYS_ADMIN to unshare PID+mount namespaces. The
original approach obtained it via unshare(CLONE_NEWUSER), but on hosts
where an LSM restricts unprivileged user namespaces (Ubuntu 24.04 with
AppArmor defaults), the nested userns is created without capabilities
and the setgroups write fails.
bwrap now passes --cap-add CAP_SYS_ADMIN (scoped to its user namespace)
so apply-seccomp can unshare directly. The nested-userns path remains as
a fallback for standalone invocation.
apply-seccomp clears the ambient capability set after remounting /proc,
so the sandboxed command's execve drops to zero capabilities and cannot
umount /proc to reveal the outer mount underneath. Two new tests cover
CapEff=0 and umount denial.
* chore: bump version to 0.0.44
* Add --unshare-user so --cap-add works with setuid bwrap
Setuid bwrap rejects --cap-add from non-root because it would grant
real host capabilities. --unshare-user forces user-namespace mode so
the capability is scoped to that namespace and the flag is accepted.
* Disable AppArmor userns restriction in CI instead of using setuid bwrap
Setuid bwrap rejects --cap-add from non-root, so that path is a dead end.
Instead, disable kernel.apparmor_restrict_unprivileged_userns in CI so
apply-seccomp's nested-userns path works without any bwrap cooperation.
This matches what production Ubuntu 24.04 users need to do anyway, now
documented in the README.
* Exit inner init as soon as the worker exits
reap_until was waiting for all children, including orphaned background
processes reparented to PID 1, which hung the sandbox when the user
command backgrounded something long-running and then exited. Return
immediately when the worker terminates; PID 1 exiting tears down the
namespace and SIGKILLs any stragglers.
* Defer bwrap mount point cleanup until all concurrent sandboxes finish
When two sandboxed commands run concurrently and one finishes first,
cleanupBwrapMountPoints() was deleting mount point files that the
still-running sandbox depended on. Deleting the mountpoint's
dentry on the host detaches the bind mount in the child namespace
(the dentry is unhashed, so path lookup no longer finds the mount),
so the deny rule stops applying inside the still-running sandbox.
Add an active-sandbox counter: wrapCommandWithSandboxLinux()
increments it, cleanupBwrapMountPoints() decrements it and defers
file deletion until the counter reaches zero. A {force: true} option
bypasses the counter for process-exit and reset().
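The counter's behavior, as a minimal sketch (state and names hypothetical; the real code deletes the files rather than returning them):

```typescript
let activeSandboxes = 0;
const pendingDeletions: string[] = [];

// wrapCommandWithSandboxLinux() side
function onSandboxStart(): void {
  activeSandboxes++;
}

// cleanupBwrapMountPoints() side: returns the paths safe to delete now
function onSandboxCleanup(
  files: string[],
  opts: { force?: boolean } = {},
): string[] {
  activeSandboxes = Math.max(0, activeSandboxes - 1);
  pendingDeletions.push(...files);
  if (activeSandboxes > 0 && !opts.force) return []; // another sandbox still needs them
  return pendingDeletions.splice(0);
}
```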
Also bumps version to 0.0.45.
---------
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
* Fix allowRead carve-outs when denyRead covers filesystem root
denyRead: ['/'] + allowRead: [<project>] denied everything on both
platforms (follow-up to #166, closes #10).
macOS: (deny file-read* (subpath "/")) blocks the root inode itself; no
allowWithinDeny subpath covers "/", so dyld SIGABRTs before exec. Emit
(allow file-read* (literal "/")) so path traversal through root works.
Exposes `ls /` dirent names but no subtree contents.
Linux: two issues. --tmpfs / wipes every prior mount (ro-bind /, write
binds, denyWrite ro-binds), and the carve-out prefix check
startsWith('/' + '/') never matches. Expand a root deny into its
children (minus /proc, /dev, /sys) so the existing per-dir tmpfs +
re-bind logic applies. Also re-bind any allowWrite paths that land
under a tmpfs'd deny dir (previously they went read-only), and buffer
denyWrite ro-binds until after denyRead processing so .git/hooks
protection survives a tmpfs over an ancestor.
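The expansion step, sketched (the real code reads the root directory; entries are injected here for testability):

```typescript
// Sketch: expand a denyRead of "/" into its immediate children, excluding
// the pseudo-filesystems bwrap manages itself, so the existing per-dir
// tmpfs + re-bind logic applies to each child instead of wiping everything.
const ROOT_EXCLUDES = new Set(["proc", "dev", "sys"]);

function expandRootDeny(rootEntries: string[]): string[] {
  return rootEntries
    .filter((name) => !ROOT_EXCLUDES.has(name))
    .map((name) => "/" + name);
}
```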
* Dedup denyWrite entries post-normalization to prevent bwrap failure
Two denyWrite entries that converge to the same path after
normalizePathForSandbox() produced a duplicate
--ro-bind /dev/null <dest>. On the second bind, <dest> is a char device
(the first bind's mount); bwrap's ensure_file() only short-circuits on
S_ISREG, so it falls through to creat() and fails on the now-read-only
mount. Every sandboxed command errors out.
Common trigger: same path specified in two config surfaces that feed
into denyWrite, e.g. a file-edit permission deny and a
sandbox.filesystem.denyWrite entry for the same file.
linuxGetMandatoryDenyPaths() already does [...new Set(denyPaths)] but
that's pre-normalization — ~/.foo and the expanded absolute path differ
as strings there.
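The fix amounts to deduplicating after normalization (sketch; normalize stands in for normalizePathForSandbox()):

```typescript
// Sketch: dedup after normalization so "~/.foo" and its expanded absolute
// form collapse to one entry — a Set over the raw strings cannot catch them.
function dedupeNormalized(
  paths: string[],
  normalize: (p: string) => string,
): string[] {
  return [...new Set(paths.map(normalize))];
}
```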
* Wire allow-read and wrap-with-sandbox tests into CI
test:integration only listed integration.test.ts explicitly; these two
files were never run despite containing sandbox-exec/bwrap integration
tests. The root-deny and dedup tests added in this PR need CI coverage.
* Fix Linux test assumptions: seccomp binary path, ro-bind src+dest count
Root deny hides the repo's vendor/seccomp/ dir, so apply-seccomp
can't load inside the sandbox and bash returns 127. Bypass seccomp
with allowAllUnixSockets: true — socket blocking is orthogonal here.
Dedup assertion was off by 2x: --ro-bind <p> <p> contains <p> twice.
The fix works (was 4 occurrences, now 2).
* Narrow allowRead skip check to writes actually re-bound under this tmpfs
The skip check introduced in 3cdb468 was too broad: it skipped any
allowPath under any allowWrite, not just writes that were re-bound in
the current tmpfs iteration. With allowWrite as an ancestor of denyRead
(e.g. allowWrite: [~], denyRead: [~/.ssh], allowRead: [~/.ssh/known_hosts])
the write path isn't wiped and isn't re-bound, but the skip check still
matched — known_hosts was left sitting in the empty tmpfs.
Narrow the predicate: only skip if the write path itself is under the
tmpfs'd deny dir (i.e. it was re-bound just above).
* Add upstream/parent HTTP proxy support to sandbox
When the sandbox runs in an environment that requires an HTTP proxy for
outbound internet access (e.g. corporate networks), the sandbox's own
HTTP and SOCKS proxies must chain through that upstream rather than
connecting directly.
- New parent-proxy module: config resolution (explicit config falling
back to HTTP_PROXY/HTTPS_PROXY/NO_PROXY env), NO_PROXY matching with
hostname-suffix and CIDR support, and a CONNECT-tunnel helper
- HTTP proxy: direct-path CONNECT and plain requests now tunnel through
the parent when configured; NO_PROXY and loopback still bypass
- SOCKS proxy: custom connection handler routes through parent HTTP
CONNECT instead of direct net.connect()
- Config schema: new network.parentProxy field with http/https/noProxy
- Tests: unit tests for resolution/NO_PROXY/URL selection plus an e2e
tunnel test verifying requests chain through a recording parent proxy
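The resolution order can be sketched like this (field names taken from this description; the actual schema may differ):

```typescript
// Sketch: explicit config wins, otherwise fall back to the conventional
// environment variables.
interface ParentProxyConfig {
  http?: string;
  https?: string;
  noProxy?: string;
}

function resolveParentProxy(
  config: ParentProxyConfig | undefined,
  env: Record<string, string | undefined>,
): ParentProxyConfig {
  return {
    http: config?.http ?? env.HTTP_PROXY,
    https: config?.https ?? env.HTTPS_PROXY,
    noProxy: config?.noProxy ?? env.NO_PROXY,
  };
}
```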
* Address review: security fixes, net.BlockList, unify CONNECT paths
Security fixes:
- Validate destHost in openConnectTunnel to prevent CRLF injection via
SOCKS5 DOMAINNAME (allowlist bypass + credential theft vector)
- Forward reconstructed absolute-URI (not raw req.url) to parent proxy,
closing URL-parser differential bypass
- Strip hop-by-hop and proxy-authorization headers before forwarding
- Redact userinfo from parent proxy URLs in debug logs
Correctness fixes:
- Handle proxy close during CONNECT handshake (no longer hangs forever)
- Add 30s timeout and 16KB header cap on CONNECT negotiation
- Strip brackets from IPv6 URL.hostname before netConnect/tlsConnect
- Skip port-stripping for IPv6 literals in NO_PROXY entries
- Reject empty/malformed CIDR suffixes (10.0.0.0/ no longer becomes /0)
- Bracket IPv6 destHost in CONNECT authority-form
- Omit SNI servername when proxy host is an IP literal
- Anchor status-line regex (accept 2xx, reject 200 in reason-phrase)
- Extend loopback bypass to full 127/8 and v4-mapped ::ffff:127/104
Refactors:
- Replace hand-rolled CIDR matching (parseCidr/ipInCidr/bitsMatch/
expandV6, ~70 lines) with net.BlockList
- Extract generic openConnectTunnel helper; mitm and parent CONNECT
paths now share one implementation (~70 lines deduplicated)
- http-proxy.ts CONNECT handler reduced to a single try/await/pipe
block for all three routes (mitm/parent/direct)
Tests: 33 unit + 2 integration, including regression tests for each
security fix.
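How net.BlockList replaces the hand-rolled CIDR matching, in sketch form (entry parsing simplified to IPv4 "addr/prefix"):

```typescript
import { BlockList } from "node:net";

// Sketch: NO_PROXY CIDR entries loaded into a BlockList; check() then does
// the subnet matching that parseCidr/ipInCidr used to do by hand.
function buildNoProxyCidrList(entries: string[]): BlockList {
  const list = new BlockList();
  for (const entry of entries) {
    const [addr, prefix] = entry.split("/");
    if (addr && prefix && /^\d+$/.test(prefix)) {
      list.addSubnet(addr, Number(prefix), "ipv4");
    } // malformed suffixes like "10.0.0.0/" are rejected, not treated as /0
  }
  return list;
}
```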
* Second-round review fixes: null-byte bypass, socket leaks, IPv6 CONNECT
CRITICAL:
- Block null-byte allowlist bypass: SOCKS5 DOMAINNAME is raw bytes;
'evil.com\x00.allowed.com' passed .endsWith('.allowed.com') but DNS
truncated at the null and connected to evil.com. Now validated via
isValidHost at both the SOCKS ruleset validator and filterNetworkRequest
(defence in depth).
HIGH:
- Pause socket before removing data listener in openConnectTunnel: leaving
the stream flowing meant unshift'd trailing bytes (TLS ServerHello) could
be dropped before the caller's pipe() attached.
- Abort upstream dial on client disconnect (both HTTP and SOCKS): previously
leaked in-flight sockets when the client RST'd mid-CONNECT. New dialDirect
helper gives the direct path the same 30s timeout as tunnelled paths.
- Tear down proxyReq on client close in the HTTP request handler; destroy
res on mid-stream error instead of leaving it hung.
- Parse IPv6 CONNECT targets ([::1]:443) correctly; split(':') was
returning 400 for all IPv6.
- Forward the CONNECT head buffer to upstream (pipelined TLS ClientHello
was being dropped).
MEDIUM:
- Strip bracketed IPv6 and store bare IPs in NO_PROXY as BlockList entries
so they actually match.
- Accept schemeless HTTP_PROXY (proxy.corp:3128) like curl; reject
non-http(s) schemes with a clear error.
- CONNECT is always treated as HTTPS for HTTPS_PROXY selection (was
port==443 only).
- Plain HTTP no longer falls back to HTTPS_PROXY (matches curl).
- Drop unused _port param from shouldBypassParentProxy; document that
NO_PROXY port suffixes are host-matched only.
- Add content-length to hop-by-hop strip list (TE.CL desync hardening).
- Strip headers named in Connection: per RFC 7230.
LOW:
- Use removeListener instead of removeAllListeners('close').
- Fix misleading comment in parseNoProxy CIDR branch.
Tests: +11 regression tests (null-byte, CRLF, schemeless URL, Connection
header stripping, content-length stripping).
* Third-round review: zone-ID bypass, CL header, error-on-abort race
CRITICAL:
- Block IPv6 zone-ID allowlist bypass: '::ffff:127.0.0.1%x.github.com'
passed isIP() (Node accepts dotted zone IDs), passed
.endsWith('.github.com'), then connected to 127.0.0.1 when the OS
discarded the bogus scope. isValidHost now rejects '%' outright, and
matchesDomainPattern refuses wildcard-match on IP literals as a second
layer.
Correctness:
- Revert content-length from hop-by-hop set. It's end-to-end per RFC 7230;
stripping it forced chunked encoding on all forwarded bodies, breaking
HTTP/1.0 upstreams and CL-requiring servers. Node's llhttp already blocks
the TE+CL smuggling vector this was meant to guard against.
- Attach error handler before upstream.destroy() on client-abort: the
openConnectTunnel resolver removes its own error listener, so a late RST
could fire unhandled and crash the process.
- proxyAuthHeader: catch malformed percent-encoding rather than throwing
synchronously into the SOCKS callback (decodeURIComponent('%ZZ') throws).
- SOCKS now treats all tunnels as HTTPS for parent-proxy selection (was
port==443 only), so SSH/git/etc route through HTTPS_PROXY correctly.
- dialDirect: handle 'close' event for parity with openConnectTunnel.
- NO_PROXY: parse '[v6]:port' form (bracket-strip was failing on ']' not
being last char).
- isValidHost: accept underscore for real-world DNS records (_dmarc,
_acme-challenge).
Tests: +2 regression tests for zone-ID bypass and underscore acceptance;
content-length test updated to assert preservation.
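A regex-based illustration of the domain-name checks described across these review rounds (this is not the actual isValidHost; IP literals are handled separately in the real code):

```typescript
// Illustrative only: reject null bytes, CRLF, '%' zone IDs, and anything
// outside hostname characters; underscores stay legal for DNS records
// like _dmarc and _acme-challenge.
function looksLikeValidDomainName(host: string): boolean {
  if (host.length === 0 || host.length > 253) return false;
  return /^[A-Za-z0-9._-]+$/.test(host);
}
```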
* Fourth-round review: inet_aton canonicalization, state lifecycle, response headers
MEDIUM:
- Add canonicalizeHost and apply before allowlist matching: WHATWG URL
normalizes inet_aton shorthand (127.1, 2130706433, 0x7f.0.0.1) and IPv6
compression so string comparisons agree with what getaddrinfo() will
dial. Without this, '2852039166' dodged a denylist entry for
'169.254.169.254' and obscured user prompts.
- Strip hop-by-hop headers on responses too (was request-only): prevents
upstream Proxy-Authenticate/Connection/Transfer-Encoding leaking to
the sandboxed client.
State lifecycle:
- updateConfig() now re-resolves parentProxy for consistency with config.
- reset() clears parentProxy alongside other module state.
- resolveParentProxy returns undefined when both URLs fail to parse,
rather than a husk object that logs misleadingly.
Robustness:
- Check req.socket.destroyed after filter await in the request handler
(client may have disconnected during the filter, leaking proxyReq).
- Guard res.writeHead(500) with headersSent check in the outer catch.
- SOCKS listen() now rejects on bind error instead of hanging.
- Wrap sendStatus('HOST_UNREACHABLE') in try/catch (socket-closed race).
Tests: +5 for canonicalizeHost covering inet_aton, IPv6, trailing dot.
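The canonicalization trick, sketched: the WHATWG URL parser applies the same inet_aton rules that getaddrinfo() does (the function body is an illustration of the approach, not the actual code):

```typescript
// Sketch: lean on the WHATWG URL parser to canonicalize hosts the same way
// the resolver will interpret them (inet_aton shorthand, IPv6 compression).
function canonicalizeHost(host: string): string {
  try {
    return new URL(`http://${host}`).hostname;
  } catch {
    return host; // not parseable as a host; match on the raw string
  }
}
```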
* Apply review suggestions: dedupe ParentProxyConfig, handle password-only auth
- Import ParentProxyConfig from sandbox-config.ts instead of duplicating
the interface locally.
- proxyAuthHeader: check both username and password before returning
undefined — http://:secret@proxy is a valid URL with empty username.
* fix: set GIT_SSH_COMMAND on Linux so git over SSH resolves DNS via proxy
On Linux, the sandbox runs inside an isolated network namespace
(--unshare-net) with no DNS. Previously GIT_SSH_COMMAND was only set
on macOS (using BSD nc), so git push/fetch over SSH on Linux failed with
"Could not resolve hostname".
Use socat's PROXY: address type (HTTP CONNECT) against the HTTP proxy
bridge on port 3128. socat is already a required Linux dependency, and
PROXY: works on all socat versions (unlike SOCKS5-CONNECT which needs
>= 1.8.0).
Also bump version to 0.0.41 (and sync package-lock.json which had
drifted to 0.0.39).
Fixes #161
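The shape of the resulting GIT_SSH_COMMAND, sketched (host and port assumed; the real value is built from the sandbox's proxy config):

```typescript
// Sketch: socat's PROXY address issues an HTTP CONNECT through the sandbox's
// HTTP proxy bridge, which performs DNS on the far side of the netns.
// ssh substitutes %h/%p with the target host and port.
function buildGitSshCommand(proxyHost: string, proxyPort: number): string {
  return `ssh -o ProxyCommand='socat - PROXY:${proxyHost}:%h:%p,proxyport=${proxyPort}'`;
}
```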
* test: add integration tests for git over SSH through sandbox proxy
Adds two tests covering the GIT_SSH_COMMAND fix for #161:
1. Verifies GIT_SSH_COMMAND is set inside the Linux sandbox and routes
through socat PROXY (HTTP CONNECT).
2. Runs git ls-remote over SSH against github.com and asserts DNS
resolution succeeds. Uses /dev/null as the SSH identity so the
expected outcome is 'Permission denied (publickey)' -- reaching
that error proves TCP connect + SSH handshake worked, while
'Could not resolve hostname' would indicate regression.
* feat: support argv0 in RipgrepConfig
Adds an optional argv0 field to RipgrepConfig for invoking multicall
binaries that dispatch based on argv[0]. When set, spawn() is used
instead of execFile() since execFile doesn't support overriding argv[0].
The existing execFile code path is unchanged when argv0 is not provided.
Also bumps version to 0.0.40.
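Why spawn(): it accepts an argv0 option that execFile() lacks. A sketch using spawnSync for brevity (the real code uses async spawn; /bin/sh stands in for a multicall binary):

```typescript
import { spawnSync } from "node:child_process";

// Sketch: run a binary under a different argv[0] — the mechanism multicall
// binaries use to dispatch on their invocation name.
function runWithArgv0(file: string, args: string[], argv0?: string): string {
  const opts = { encoding: "utf8" as const, ...(argv0 ? { argv0 } : {}) };
  const res = spawnSync(file, args, opts);
  return (res.stdout ?? "").trim();
}
```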
* refactor: unify ripGrep on spawn + stream/consumers
Drop execFile and use spawn for both argv0 and non-argv0 paths:
- text() from node:stream/consumers collects stdout/stderr as Promise<string>
- spawn({ timeout }) handles the 10s timeout natively
- Promise.all([stdout, stderr, close]) reads naturally as async/await
Drops the 20MB maxBuffer cap — the only caller scans for dangerous files
at bounded depth, so runaway output is not a realistic concern.
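The unified shape, sketched (binary path and arguments are placeholders):

```typescript
import { spawn } from "node:child_process";
import { text } from "node:stream/consumers";
import { once } from "node:events";

// Sketch: one spawn() path for both argv0 and non-argv0 callers. text()
// drains stdout/stderr as promises; spawn's own timeout option handles the
// 10s limit natively, and the maxBuffer cap is dropped deliberately.
async function run(bin: string, args: string[], argv0?: string) {
  const child = spawn(bin, args, { argv0, timeout: 10_000 });
  const [stdout, stderr] = await Promise.all([
    text(child.stdout!),
    text(child.stderr!),
    once(child, "close"),
  ]);
  return { stdout, stderr };
}
```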
Adds an optional allowRead field to the filesystem config that re-allows
read access within regions blocked by denyRead. allowRead takes
precedence over denyRead, which is intentionally the opposite of write
where denyWrite takes precedence over allowWrite. This enables
workspace-only filesystem access patterns like denyRead: ["/Users"],
allowRead: ["."] without breaking system paths.
macOS: emits additional (allow file-read*) rules after deny rules,
relying on Seatbelt's last-rule-wins semantics.
Linux: after mounting tmpfs over denied directories, re-binds allowed
subdirectories with --ro-bind so they become readable again.
The field is optional with no default behavior change — existing configs
work identically.
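A config of the kind described (field names from this changelog; the exact schema may differ):

```typescript
// Workspace-only reads: deny the home tree, re-allow the project.
const filesystem = {
  denyRead: ["/Users"],
  allowRead: ["."], // allowRead wins over denyRead (opposite of writes)
  allowWrite: ["."],
  denyWrite: [".git/hooks"], // denyWrite wins over allowWrite
};
```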
wrapWithSandbox() and generateFilesystemArgs() did not strip trailing
/** from allowWrite and denyWrite paths before generating bubblewrap
--bind mounts. fs.existsSync() failed on the literal glob path (e.g.
"/home/user/project/**") and silently skipped the mount, leaving the
directory read-only under --ro-bind / /.
macOS was unaffected because generateWriteRules() converts globs to
Seatbelt regex patterns. The same config produced correct results on
macOS but broken results on Linux.
The fix applies removeTrailingGlobSuffix() and containsGlobChars()
filtering to allowWrite and denyWrite in both wrapWithSandbox() (the
SandboxManager API path) and generateFilesystemArgs() (defense in
depth), matching the existing treatment of denyRead paths and the
logic in getFsWriteConfig().
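The two helpers named above, sketched (implementations assumed):

```typescript
// Sketch: "/home/user/project/**" strips to "/home/user/project", which
// exists and can be bind-mounted; paths still containing glob characters
// after stripping are filtered out rather than passed to bwrap.
function removeTrailingGlobSuffix(p: string): string {
  return p.replace(/\/\*\*$/, "");
}

function containsGlobChars(p: string): boolean {
  return /[*?[\]{}]/.test(p);
}
```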
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
When allowedDomains is set, the sandbox enters restricted network mode.
The previous implementation used (allow network* (subpath "/")) to allow
Unix sockets, but socket(AF_UNIX, SOCK_STREAM, 0) is a system-socket
operation that doesn't reference a filesystem path, so (subpath ...) can't
match it. This caused Gradle (FileLockContentionHandler), Docker, and other
tools that create Unix domain sockets to fail with:
java.net.SocketException: Operation not permitted
The fix uses three explicit Seatbelt rules instead:
1. (allow system-socket (socket-domain AF_UNIX)) - for socket() creation
2. (allow network-bind (local unix-socket ...)) - for bind() operations
3. (allow network-outbound (remote unix-socket ...)) - for connect() operations
This properly separates the socket creation syscall (which has no path
context) from the bind/connect operations (which reference paths).
Fixes: Gradle builds failing in sandbox with allowedDomains configured
Fixes: Docker socket failures in sandbox with allowedDomains configured
* security: warn and skip symlink write paths pointing outside boundaries
bwrap follows symlinks when doing bind mounts, so if a user configures
an allowWrite path that is a symlink pointing to an unexpected location,
that target location would become writable.
For example, if ./src is a symlink to /etc, configuring allowWrite: ['./src']
would make /etc writable through the symlink.
This change:
- Detects when a write path is a symlink pointing outside expected boundaries
- Prints a warning to inform the user
- Skips the path instead of making the unexpected target writable
Fixes potential symlink-based sandbox escape in write path configuration.
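A sketch of the check with the resolver injected for testability (the real code calls fs.realpathSync directly, and its boundary semantics are more involved):

```typescript
// Sketch: a configured write path is unsafe when it resolves somewhere else
// (i.e. it is a symlink) AND the resolved target escapes the boundary.
function isSafeWritePath(
  configuredPath: string,
  boundary: string,
  resolve: (p: string) => string,
): boolean {
  // realpath never returns trailing slashes; trim before comparing to avoid
  // false symlink positives on paths like "/proj/src/"
  const normalized = configuredPath.replace(/\/+$/, "");
  const resolved = resolve(normalized);
  if (resolved === normalized) return true; // not a symlink
  return resolved === boundary || resolved.startsWith(boundary + "/");
}
```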
* test: add unit and integration tests for symlink write path detection, bump to 0.0.38
* fix: trim trailing slashes before symlink comparison in write path check
realpathSync never returns trailing slashes, but normalizedPath may have
one, causing a false mismatch that incorrectly treats the path as a
symlink and skips it. Strip trailing slashes before comparing.
Add test to reproduce the trailing slash issue.
---------
Co-authored-by: ollie-anthropic <ollie@anthropic.com>
When .git is a file (worktrees) or doesn't exist, the mandatory deny for
.git/hooks caused bwrap failures or turned .git into a /dev/null file.
Three fixes: (1) skip denies when a path ancestor is a file, (2) mount
empty directories instead of /dev/null for intermediate non-existent
components, (3) only add .git/hooks and .git/config denies when .git is
a directory. Also updates cleanup to handle empty directory mount points.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Modern runtimes like Java create IPv6 dual-stack sockets by default.
When binding such a socket to 127.0.0.1, the kernel represents the
address as ::ffff:127.0.0.1 (IPv4-mapped IPv6). macOS Seatbelt's
"localhost" filter only matches 127.0.0.1 and ::1, not the
IPv4-mapped variant, causing bind() to fail with EPERM.
Seatbelt only supports two host values in IP filters: "localhost"
and "*". Since we can't specify ::ffff:127.0.0.1 explicitly, change
to (local ip "*:*"). This is safe because the (local ip) filter
matches the LOCAL endpoint of connections — internet-bound traffic
originates from non-loopback interfaces, so it remains blocked by
the (deny default) rule.
Fixes: https://github.com/anthropics/claude-code/issues/18545
PR #80 hardened the sandbox by mounting /dev/null over non-existent deny
paths to prevent their creation, but this caused bwrap to leave empty
"ghost dotfiles" on the host (issue #85), which PR #91 reverted. This
re-introduces the protection with proper cleanup: mount points are
tracked and removed via cleanupBwrapMountPoints(). A new lightweight
cleanupAfterCommand() API is exposed on SandboxManager for callers to
invoke after each command, and the srt CLI calls it on child exit.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add test/utils/which.test.ts for Bun environment testing
- Add test/utils/which-node-test.mjs for Node.js fallback testing
- Update test/sandbox/linux-dependency-error.test.ts to mock
globalThis.Bun.which directly instead of using mock.module
- Update test/sandbox/seccomp-filter.test.ts to use whichSync
- Add src/utils/which.ts with whichSync function that uses Bun.which
in Bun runtime and falls back to spawnSync('which', ...) in Node.js
- Replace spawnSync('which', ...) calls with whichSync in:
- src/sandbox/linux-sandbox-utils.ts (dependency checks, shell lookup)
- src/sandbox/macos-sandbox-utils.ts (shell lookup)
- src/sandbox/sandbox-manager.ts (ripgrep check)
- src/utils/ripgrep.ts (hasRipgrepSync)
This avoids spawning a new process for 'which' lookups when running
in Bun, as Bun.which is a native built-in function.
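The dual-runtime lookup, sketched (simplified; the real whichSync may differ in signature and error handling):

```typescript
import { spawnSync } from "node:child_process";

// Sketch: use Bun's native Bun.which when running under Bun, otherwise fall
// back to spawning `which` once per lookup under Node.js.
function whichSync(cmd: string): string | null {
  const bun = (globalThis as any).Bun;
  if (bun?.which) return bun.which(cmd);
  const res = spawnSync("which", [cmd], { encoding: "utf8" });
  return res.status === 0 ? res.stdout.trim() : null;
}
```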
The previous example called reset() immediately after spawning the
child process, which would shut down proxy servers before the child
process completes. This caused the sandboxed command to fail.
Move reset() inside the 'exit' event callback to ensure cleanup
happens after the child process terminates.
This function runs `npm root -g` which spawns a subprocess. Since the result
is stable for the process lifetime and the function is called from multiple
fallback paths, caching avoids redundant process spawns.
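The memoization, sketched with the runner injected (the real function shells out to `npm root -g` directly):

```typescript
// Sketch: cache the first result for the process lifetime; subsequent calls
// from any fallback path reuse it instead of spawning npm again.
function makeCachedNpmRoot(run: () => string): () => string {
  let cached: string | undefined;
  return () => {
    if (cached === undefined) cached = run();
    return cached;
  };
}
```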
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>