The WASM Sandbox Model

WebAssembly is a bytecode format that executes inside a virtual machine embedded in the browser (or a standalone runtime like Wasmtime or WasmEdge). The security model has several layers:

  • Type safety: Every WASM function has a declared type signature. Indirect function calls are validated against the expected type at the call site: you cannot call a function with the wrong argument types.
  • Structured control flow: WASM has no arbitrary jumps (no equivalent of jmp to an arbitrary address). Control flow is structured into loops, blocks, and branches. Return-oriented programming within WASM is much harder because there are no arbitrary instruction sequences to chain.
  • Validation: Every WASM module undergoes a validation pass before execution. A module that fails validation is rejected. This eliminates many exploit primitives that rely on invalid machine code.
  • Memory isolation: WASM memory is distinct from the JavaScript heap and engine internals. A bug in WASM cannot directly corrupt the browser's V8 heap or access JavaScript objects (without explicit imports).
  • Same-origin policy: WASM modules run in the security context of their originating page. Cross-origin access restrictions apply as with JavaScript.
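The validation pass described above can be exercised directly from the JavaScript WebAssembly API (available in browsers and Node.js). A minimal sketch, using the smallest possible module:

```typescript
// Smallest valid module: the "\0asm" magic number plus binary version 1.
const minimalModule = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // "\0asm" magic
  0x01, 0x00, 0x00, 0x00, // binary format version 1
]);

// The same bytes with a corrupted magic number.
const corrupted = Uint8Array.from(minimalModule);
corrupted[0] = 0xff;

// WebAssembly.validate runs the validation pass without instantiating:
// a module that fails it can never reach execution.
console.log(WebAssembly.validate(minimalModule)); // true
console.log(WebAssembly.validate(corrupted));     // false
```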

Linear Memory: Power and Risk

WASM uses a flat, linear memory model: a contiguous byte array that the module can read and write freely within its allocated range. This is what allows C/C++ code compiled to WASM to manage its own heap and stack. But it also means that bugs in the WASM code can corrupt data within that linear memory space.

The critical point: within linear memory, there is no ASLR (Address Space Layout Randomisation). The WASM heap, the compiler-managed shadow stack, and any data structures laid out in linear memory sit at deterministic offsets from the base. An attacker who finds a buffer overflow within WASM linear memory has a predictable layout to target: they know exactly where the shadow stack begins and where data structures are allocated. (Return addresses are the exception: they live on the engine's protected call stack, outside linear memory, so classic return-address overwrites do not apply.)

This matters for C/C++ code compiled to WASM using Emscripten or similar toolchains. Legacy C code with buffer overflow vulnerabilities does not become memory-safe by being compiled to WASM: the overflow still exists, and within linear memory the layout is more predictable than in native code with ASLR. The overflow is contained within the WASM sandbox, but it can corrupt the WASM application's data and potentially manipulate its behaviour.

No ASLR in WASM linear memory: If you compile C/C++ code to WASM to "sandbox" it from memory safety concerns, you are isolating it from the host process, but you are not eliminating the memory corruption bugs within the WASM module. Those bugs are now easier to exploit because the layout is predictable.
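The predictability can be demonstrated from JavaScript with a bare WebAssembly.Memory. The buffer and flag offsets below are hypothetical stand-ins for the fixed layout a compiler would emit; the point is that they are identical on every run:

```typescript
// One 64 KiB page of linear memory, viewed as a flat byte array.
const memory = new WebAssembly.Memory({ initial: 1 });
const bytes = new Uint8Array(memory.buffer);

// Hypothetical layout: a 16-byte input buffer followed directly by a
// one-byte flag. With no ASLR, these offsets never change between runs.
const BUF_OFFSET = 0;
const BUF_SIZE = 16;
const FLAG_OFFSET = BUF_OFFSET + BUF_SIZE;

bytes[FLAG_OFFSET] = 0; // flag starts cleared

// A 17-byte write into the 16-byte buffer: the overflowing byte lands on
// the flag at a fully predictable address.
const attackerInput = new Uint8Array(BUF_SIZE + 1).fill(0x41);
bytes.set(attackerInput, BUF_OFFSET);

console.log(bytes[FLAG_OFFSET]); // 65 (0x41): the adjacent flag was corrupted
```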

Spectre and Timing Attacks

WASM was one of the primary vectors through which Spectre attacks were demonstrated to be practical in browsers. The attack uses speculative execution to leak data from memory that the code should not be able to access, and WASM's performance characteristics make the timing measurements required for Spectre more accurate than with JavaScript.

Browser vendors responded by reducing timer resolution across all web APIs (performance.now() granularity was reduced, SharedArrayBuffer was temporarily disabled), and by enabling site isolation (each cross-origin site in its own process). These mitigations make Spectre attacks much harder to execute reliably in practice, but not impossible.

The residual risk for WASM specifically: high-performance WASM applications that implement their own timing loops (for cryptographic operations or game engines, for example) create side channels that may have higher resolution than the reduced-precision browser APIs. Security-sensitive WASM code should be reviewed for timing side channels in the same way as native cryptographic implementations.
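As an example of what such a review looks for: an early-exit byte comparison leaks how many leading bytes matched through its running time. A minimal constant-time sketch (shown in TypeScript for illustration; the same pattern applies to source compiled to WASM):

```typescript
// Constant-time byte comparison: the loop always runs to the end, so the
// running time does not depend on where the first mismatch occurs.
function constantTimeEqual(a: Uint8Array, b: Uint8Array): boolean {
  if (a.length !== b.length) return false;
  let diff = 0;
  for (let i = 0; i < a.length; i++) {
    diff |= a[i] ^ b[i]; // accumulate differences instead of returning early
  }
  return diff === 0;
}
```

The design choice is that the data-dependent branch is replaced by an accumulated OR, so identical-length inputs always take the same number of operations.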

WASM as a Malware Vector

WASM modules can contain fully functional malware: the browser sandbox constrains what they can do to the host OS, but within the browser they can use all available JavaScript APIs. Cryptomining modules were the first widely deployed WASM malware: compressed WASM modules with obfuscated code running Monero mining algorithms silently in visitor browsers, using CPU resources without visible indication.

WASM-based malware evades traditional signature-based detection because WASM bytecode is binary and obfuscatable in ways that differ from JavaScript. Deobfuscation tools for WASM malware analysis are less mature than their JavaScript equivalents. A malicious WASM module can also, through its JavaScript glue code and imported host functions, download and execute JavaScript dynamically, use the full browser API surface, make network requests, access cookies and local storage, and interact with the DOM.

Observed in the wild: WASM-based cryptominers, keyloggers that hook DOM events, click-fraud bots running in WASM workers, and credential-harvesting modules embedded in compromised CMS plugins are all documented. The binary format itself provides a baseline of obfuscation that delays analysis.

WASM is also increasingly used as a loader: the WASM module's primary function is to download and evaluate conventional JavaScript payloads or to decrypt embedded shellcode. The WASM module itself looks benign; it is the downloaded payload that performs the malicious action. This splits the malware across two separate files and makes static analysis of the WASM module alone insufficient.

WASI and Server-Side Security

WASI (WebAssembly System Interface) extends WASM to server-side execution with controlled access to host resources (files, network, clocks). WASM runtimes like Wasmtime, WasmEdge, and Wasmer implement WASI and are increasingly used for sandboxing server-side plugins, edge functions, and microservices.

The capability model is the key security property: a WASM module running under WASI only has access to the host resources explicitly granted by the host process. File system access is capability-gated: the module can only access directories that were explicitly opened and passed to it. Network access requires an explicit socket capability.

Security risks in WASI deployments: overly permissive capability grants (giving the module access to the entire filesystem when it only needs one directory), bugs in the WASM runtime itself (Wasmtime has had several security-relevant CVEs), and supply chain risks in the WASM module being executed (same as any third-party code).

Detection and Defenses

  • Content Security Policy for WASM: The 'wasm-unsafe-eval' source keyword (in the script-src directive) controls whether WASM can be compiled from bytes at runtime. If your application does not use dynamic WASM compilation, omit this keyword. For known WASM modules, verify the module's hash with integrity metadata.
  • WASM module scanning: Tools such as wasm-decompile (from the WABT toolkit) and the Binaryen toolchain can lift WASM to readable IR for static analysis. Automated scanning for known cryptomining patterns and obfuscation techniques is available in commercial WAF and bot detection products.
  • Subresource Integrity for WASM: If loading WASM modules from CDNs or third parties, pass integrity metadata (the module's hash) in the fetch call's options to verify the response. This prevents a compromised CDN from serving a modified module.
  • Memory safety for compiled WASM: When compiling C/C++ to WASM, enable AddressSanitizer (ASAN) in development and stack overflow checks in production builds; Emscripten supports both.
  • WASI capability minimisation: Grant the minimum set of WASI capabilities required. Audit capability grants as carefully as you audit IAM policies.
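The Subresource Integrity point above can be sketched as follows. The URL and digest are placeholders; with integrity metadata set, the fetch itself rejects before any bytes reach the compiler if the response does not match the hash:

```typescript
// Fetch a third-party WASM module with integrity metadata: a compromised
// CDN serving modified bytes causes the fetch promise to reject.
async function loadVerifiedModule(
  url: string,
  sriHash: string,
): Promise<WebAssembly.Module> {
  const response = await fetch(url, { integrity: sriHash });
  // Streaming compilation; requires the application/wasm content type.
  return WebAssembly.compileStreaming(response);
}

// Hypothetical usage: the hash is computed offline over the known-good build.
// loadVerifiedModule("https://cdn.example.com/module.wasm",
//                    "sha384-<base64 digest of the trusted module>");
```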

The bottom line: WASM is safer than native code for untrusted module execution; the isolation guarantees are real. The risks lie in misunderstanding what "safe" means: memory corruption within WASM linear memory, timing side channels, and WASM as a delivery mechanism for conventional web-based attacks.