What unsafe Does
Rust's safety guarantees (no dangling pointers, no data races, no use-after-free, no buffer overflows) are enforced by the borrow checker at compile time. The borrow checker rejects code that cannot be statically proven safe. This is the correct approach to memory safety, but it has a cost: some legitimate operations that are difficult to express in safe Rust require stepping outside the borrow checker's model.
The unsafe keyword provides that escape hatch. Inside an unsafe {} block, five additional capabilities are available: raw pointer dereferencing, calling unsafe functions (including C FFI), accessing or modifying mutable static variables, implementing unsafe traits, and accessing union fields. These five operations are exactly the ones the compiler cannot verify; every other rule, including borrow checking, still applies inside the block.
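A minimal sketch of the first capability, raw pointer dereferencing (the function name is illustrative). Note that creating the raw pointer is safe; only the dereference needs the unsafe block:

```rust
// Creating a raw pointer is safe; only dereferencing it requires
// an unsafe block.
fn increment_via_raw(x: &mut i32) {
    let p: *mut i32 = x;
    // SAFETY: p was just derived from a live exclusive reference,
    // so it is valid, aligned, and unaliased for this write.
    unsafe { *p += 1 };
}

fn main() {
    let mut v = 41;
    increment_via_raw(&mut v);
    assert_eq!(v, 42);
}
```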
unsafe is not a hint; it is a contract. When you write unsafe code, you are asserting to the compiler that you have verified the safety invariants the compiler cannot check. If that assertion is wrong, if your manual verification is incorrect, the result is undefined behaviour, and undefined behaviour in Rust can corrupt memory, produce incorrect results, or enable exploitation, exactly as in C.
The important nuance is that unsafe code in a crate can produce unsoundness that affects safe calling code. If a safe public API is implemented using unsafe code that violates invariants, callers who write only safe Rust can trigger undefined behaviour. This is called a soundness bug: the crate's safe API is unsound because the unsafe implementation is wrong.
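A minimal sketch of such a soundness bug (function names are hypothetical). The signature is safe, so nothing stops a safe caller from passing an empty slice and triggering undefined behaviour:

```rust
// UNSOUND: the signature is safe, so any caller may pass an empty
// slice, and get_unchecked(0) then reads out of bounds. Safe
// callers can trigger undefined behaviour -- a soundness bug.
pub fn first_byte(data: &[u8]) -> u8 {
    unsafe { *data.get_unchecked(0) }
}

// Sound alternative: the "non-empty" invariant is surfaced in the
// return type instead of silently assumed.
pub fn first_byte_checked(data: &[u8]) -> Option<u8> {
    data.first().copied()
}
```

The fix costs one branch; the bug costs an out-of-bounds read reachable from 100% safe code.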
unsafe in the Wild
Rust's standard library itself contains a significant amount of unsafe code; this is expected and heavily audited. More relevant for application security teams are unsafe blocks in third-party crates. Analysis of the top 500 crates on crates.io consistently finds that a substantial fraction contain unsafe code, often in performance-critical paths, FFI wrappers, or low-level data structure implementations.
The pattern that creates the most security risk is unsafe code that is correct for the inputs the original author tested but fails for edge cases that can be triggered by external input. An unsafe indexing optimisation that skips bounds checking is correct when the index is computed from trusted data but becomes an out-of-bounds read when the index is derived from user input that exceeds the bounds the author assumed.
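The same "tested inputs only" pattern appears with validity assertions other than bounds checks. A hedged sketch (function names are hypothetical): from_utf8_unchecked is correct for the ASCII protocol fields an author tested, but becomes undefined behaviour the moment an attacker supplies bytes that are not valid UTF-8:

```rust
// Correct for the ASCII field values the author tested, but
// from_utf8_unchecked asserts UTF-8 validity without checking:
// attacker-controlled bytes that are invalid UTF-8 make this UB.
fn field_as_str(bytes: &[u8]) -> &str {
    unsafe { std::str::from_utf8_unchecked(bytes) }
}

// Hardened version: validate, and reject invalid input.
fn field_as_str_checked(bytes: &[u8]) -> Option<&str> {
    std::str::from_utf8(bytes).ok()
}
```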
FFI Boundaries
Foreign Function Interface code (Rust calling C libraries or being called from C) is almost universally unsafe. The type and memory safety guarantees of both languages only apply within each language; at the boundary, the programmer must manually ensure that data layouts match, lifetimes are honoured, and the C code's ownership expectations are met. FFI boundary bugs are a significant source of Rust CVEs in crates that wrap C libraries.
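A minimal FFI sketch, assuming Rust 2021 edition and a platform that links the C standard library: the extern declaration is a promise the compiler cannot check against the C header, and every call site must uphold C's expectations by hand.

```rust
use std::ffi::CString;
use std::os::raw::c_char;

// This declaration must match the C signature exactly; the Rust
// compiler cannot verify it against the C header. A wrong type
// here is undefined behaviour at the call site.
extern "C" {
    fn strlen(s: *const c_char) -> usize;
}

fn main() {
    let msg = CString::new("hello").expect("no interior NUL");
    // SAFETY: msg is a valid, NUL-terminated C string that
    // outlives the call, and strlen does not retain the pointer.
    let len = unsafe { strlen(msg.as_ptr()) };
    assert_eq!(len, 5);
}
```

CString handles the NUL termination C requires; passing a Rust &str pointer directly would violate strlen's contract.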
Common unsafe Bug Classes
- Incorrect lifetime in unsafe transmutation: std::mem::transmute reinterprets bytes as a different type, bypassing type checking. Incorrect transmutes that extend the lifetime of a reference beyond its actual validity produce use-after-free conditions. The borrow checker would catch this in safe code; transmute silently allows it.
- Aliasing violations with raw pointers: Rust's aliasing rules (one mutable reference XOR any number of shared references) prevent data races in safe code. Raw pointer code that creates multiple mutable aliases to the same memory, common in intrusive data structure implementations, can produce undefined behaviour including data corruption and race conditions in multi-threaded code.
- Uninitialized memory reads: MaybeUninit::assume_init() is unsafe because it asserts that uninitialized memory is valid for a type. If the code path that initialises the memory can be skipped (early return, error path), calling assume_init produces undefined behaviour through reading garbage memory.
- Missing bounds checks in slice operations: Safe Rust slice indexing panics on out-of-bounds access. Unsafe slice methods like get_unchecked skip this check. If the index is externally influenced, this is a classic buffer over-read vulnerability with potential for information disclosure.
- Drop order invariants: Unsafe code that manually controls memory layout must correctly handle drop order. Incorrect assumptions about when destructors run can produce use-after-free if a value is accessed after it has been dropped but before the memory is reclaimed.
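The uninitialized-memory class above can be sketched concretely (function name is illustrative). The assume_init assertion is only true because of the loop immediately before it; any early return spliced between the two would silently turn the assertion false:

```rust
use std::mem::MaybeUninit;

fn init_buffer() -> [u8; 4] {
    let mut buf = MaybeUninit::<[u8; 4]>::uninit();
    let p = buf.as_mut_ptr() as *mut u8;
    // Every byte must be written before assume_init. An early
    // return inserted between this loop and assume_init would
    // make the "fully initialised" assertion false -- UB.
    for i in 0..4 {
        unsafe { p.add(i).write(i as u8) };
    }
    // SAFETY: all 4 bytes were initialised by the loop above.
    unsafe { buf.assume_init() }
}

fn main() {
    assert_eq!(init_buffer(), [0, 1, 2, 3]);
}
```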
Undefined Behaviour in Rust
Rust's undefined behaviour rules are similar to C/C++ in scope: undefined behaviour gives the compiler license to generate any code, including code that does not resemble the source at all. LLVM, which Rust uses as a backend, applies aggressive optimisations that assume undefined behaviour cannot occur, which means undefined behaviour in source code can produce entirely unintuitive compiled output.
The Miri tool (the Rust interpreter for detecting undefined behaviour) can detect most categories of UB in test code. However, UB that only occurs on specific input values or in specific thread interleavings may not be triggered during testing. This is the same fundamental problem as C/C++ UB: extensive testing does not guarantee the absence of UB in unsafe code.
Auditing unsafe Code
The security review process for unsafe Rust code should be more thorough than for safe Rust, with specific attention to the following questions for every unsafe block:
- What invariant is the programmer asserting? Every unsafe operation asserts something that the compiler cannot check. Make that assertion explicit in a comment. If you cannot state the invariant, you cannot verify it is upheld.
- Can external input influence the safety of the invariant? If the unsafe block handles data that can be externally influenced (user input, network data, file content), verify that all paths from external input to the unsafe operation either validate the input or are provably safe regardless of input value.
- What is the failure mode if the invariant is violated? Classify the unsafe block by its worst-case failure mode: panic (best case), data corruption, information disclosure, or code execution. Higher-risk categories warrant more rigorous review.
- Can the unsafe code be removed? With modern Rust and the standard library, many uses of unsafe code from older crates can be replaced with safe equivalents that have been added to the language or standard library since. The best unsafe code review finding is eliminating the unsafe block.
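The first audit question, making the asserted invariant explicit, has a widely used comment convention: a # Safety section on the unsafe function documenting the caller's obligation, and a SAFETY: comment at each unsafe operation explaining why the obligation holds. A minimal sketch (function name is illustrative):

```rust
/// Returns the first element without a bounds check.
///
/// # Safety
/// Callers must guarantee `v` is non-empty.
unsafe fn head(v: &[u32]) -> u32 {
    // SAFETY: the caller contract above guarantees index 0 is in
    // bounds, so get_unchecked cannot read past the slice.
    *v.get_unchecked(0)
}

fn main() {
    let data = [7, 8, 9];
    // SAFETY: data is non-empty (three elements).
    let first = unsafe { head(&data) };
    assert_eq!(first, 7);
}
```

If a reviewer cannot fill in the SAFETY: comment from the surrounding code, that is itself a finding.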
Mitigations
- Run Miri on all unsafe code in test suites: Miri is the most effective tool for detecting undefined behaviour in unsafe Rust. Add cargo miri test to your CI pipeline. It will not catch all UB (particularly UB that depends on specific runtime conditions) but it catches a significant fraction at low cost.
- Use cargo-geiger to quantify unsafe usage in dependencies: cargo-geiger reports unsafe line counts for every crate in your dependency tree. This gives you visibility into how much of your dependency surface uses unsafe code and helps prioritise security review effort.
- Fuzz unsafe code paths with cargo-fuzz: Fuzzing is particularly effective for finding the edge cases that cause unsafe invariants to break. Focus fuzzing effort on public APIs whose implementations contain unsafe code and on FFI boundary code.
- Prefer crates with #![forbid(unsafe_code)]: When evaluating dependencies, prefer crates that commit to no unsafe code with the forbid attribute. This is not always possible for low-level infrastructure crates, but for business logic and application-layer dependencies it is a reasonable requirement.
- Audit unsafe code at dependency update time: When a dependency updates its unsafe code, review the change. Security vulnerabilities in unsafe Rust code are often introduced in "performance improvements" that remove safety checks.
- Use AddressSanitizer and ThreadSanitizer in testing: Compile with -Z sanitizer=address and -Z sanitizer=thread (nightly Rust) to catch memory safety errors and data races that Miri might miss, particularly in concurrent code.
Writing Rust eliminates an enormous class of memory safety bugs compared to C or C++. But the unsafe keyword is a genuine exit from those guarantees, not a cosmetic annotation. Treat unsafe blocks in your codebase and your dependencies with the same scrutiny you would apply to equivalent C code, because when the invariants are wrong, the result is equivalent to C.