TOCTOU: Time-of-Check to Time-of-Use
TOCTOU (Time-of-Check to Time-of-Use) is the canonical race condition pattern. The application checks a condition (does this user have enough credits? is this coupon code unused? is the user's balance positive?) and then, in a separate operation, acts on the result. Between the check and the action the state can change, or another concurrent request can perform the same check before either has updated the state.
The classic web-application scenario: two requests arrive simultaneously for the same resource. Request A checks: "is this discount code used? No." Request B checks: "is this discount code used? No." Both receive "No" because the state has not yet been updated. Both proceed to apply the discount. Both succeed. The discount has been applied twice to the same code.
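The window is easy to reproduce. Below is a deliberately vulnerable in-memory sketch: the dict stands in for a database row, the sleep for request-processing latency, and a barrier releases both "requests" at the same instant.

```python
import threading
import time

# Deliberately vulnerable: an in-memory dict stands in for a database
# row, and the sleep stands in for request-processing latency.
coupon = {"code": "SAVE20", "used": False}
applied = []
barrier = threading.Barrier(2)

def redeem(request_id):
    barrier.wait()                 # both "requests" arrive together
    if not coupon["used"]:         # time of check
        time.sleep(0.05)           # window between check and use
        coupon["used"] = True      # time of use: too late
        applied.append(request_id)

threads = [threading.Thread(target=redeem, args=(r,)) for r in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(applied))  # 2: the single-use coupon was redeemed twice
```

Both threads pass the check before either performs the write, so both redemptions succeed.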
Limit Bypass Attacks
Any per-user limit enforced with a read-check-write pattern is potentially exploitable. This includes: invitation quotas ("you can invite 5 users"), API rate limits implemented in application code, file upload counts, concurrent session limits, and trial period resource caps.
The exploitation technique is straightforward. Automate N parallel requests using Python's concurrent.futures, JavaScript's Promise.all(), Burp Suite's Turbo Intruder, or any HTTP client that supports concurrent requests. Time them to arrive at the server simultaneously. If the check and the write are not atomic, some fraction of the parallel requests will pass the check before the state is updated.
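A minimal harness for this in Python uses ThreadPoolExecutor with a threading.Barrier so every worker fires at once. The function name fire_in_parallel and its callable argument are illustrative; in a real test the callable would wrap an actual HTTP client call.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def fire_in_parallel(request_fn, n):
    """Run request_fn n times, releasing all workers simultaneously.

    request_fn stands in for an HTTP call (e.g. a wrapper around an
    HTTP client's post method); the Barrier holds every worker until
    the full batch is ready.
    """
    barrier = threading.Barrier(n)

    def timed_call():
        barrier.wait()             # synchronised start for all n workers
        return request_fn()

    with ThreadPoolExecutor(max_workers=n) as pool:
        futures = [pool.submit(timed_call) for _ in range(n)]
        return [f.result() for f in futures]

# Example: 20 simultaneous calls against a stand-in for the target endpoint
results = fire_in_parallel(lambda: "HTTP 200", 20)
print(results.count("HTTP 200"))  # 20
```

The barrier is the important detail: submitting requests in a loop without one lets earlier requests complete before later ones start, which hides the race.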
Financial impact in bug bounty: race conditions in payment systems, where a user can withdraw more than their balance by firing concurrent withdrawal requests, are routinely rated Critical (CVSS 9.0+) and pay out some of the largest bug bounty rewards in financial and fintech programmes.
Database-Level Races
Even when application-level logic appears correct, database isolation levels determine when concurrent transactions see each other's committed writes. PostgreSQL defaults to READ COMMITTED, under which a transaction sees data committed by other transactions as of each individual read, not as of the start of the transaction. (MySQL's InnoDB defaults to the stricter REPEATABLE READ, but plain SELECTs there are still non-locking snapshot reads, so check-then-act sequences race all the same.)
Under READ COMMITTED, if Transaction A reads the coupon, Transaction B reads the same coupon before A commits, and both then update it, both succeed. The second update silently overwrites the first. This is the lost update problem.
SERIALIZABLE isolation prevents this: it guarantees that concurrent transactions produce the same result as some sequential execution. But it comes with overhead and the risk of serialisation failures that require retry logic. The surgical alternative is SELECT ... FOR UPDATE or equivalent pessimistic locking: the first transaction to read the row acquires a row-level lock, and other transactions attempting to lock the same row block until it commits or rolls back.
The pattern is to push the conditional logic into the UPDATE statement itself and check the affected row count. Zero rows affected means the condition was false at update time: another concurrent request won the race. This works correctly regardless of application-level concurrency because the atomicity guarantee is provided by the database.
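Sketched with sqlite3 standing in for a production database, the check moves inside the UPDATE and the row count becomes the success signal:

```python
import sqlite3

# sqlite3 stands in for the production database; the technique is the same.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE coupons (code TEXT PRIMARY KEY, used INTEGER)")
conn.execute("INSERT INTO coupons VALUES ('SAVE20', 0)")
conn.commit()

def redeem(code):
    # The check lives inside the UPDATE: only an unused row can match.
    cur = conn.execute(
        "UPDATE coupons SET used = 1 WHERE code = ? AND used = 0", (code,))
    conn.commit()
    return cur.rowcount == 1       # 0 rows affected: lost the race

print(redeem("SAVE20"))  # True  (first redemption wins)
print(redeem("SAVE20"))  # False (condition already false at update time)
```

There is no separate SELECT for a concurrent request to slip past; the database evaluates the condition and applies the write in one atomic statement.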
The HTTP/2 Single-Packet Attack
James Kettle's 2023 research (PortSwigger) demonstrated that HTTP/2 multiplexing dramatically improves the reliability of race condition exploitation. In HTTP/1.1, sending parallel requests requires multiple TCP connections, and network jitter between connections introduces timing variance that often causes one request to arrive significantly before others.
HTTP/2 multiplexes multiple requests over a single TCP connection. By buffering the request frames and sending them in a single TCP packet, you can guarantee that all requests arrive at the server simultaneously, within a single network round-trip. The server receives every request at effectively the same instant, eliminating network jitter and maximising the chance of hitting the race window.
This technique is built into Burp Suite's "Single-packet attack" feature. It works against any HTTP/2 endpoint and makes previously "unreliable" race conditions reliably exploitable. The implication: race conditions that were theoretically exploitable but practically difficult are now straightforwardly exploitable from a laptop.
Detection and Testing
Testing for race conditions requires sending multiple concurrent requests and observing whether any bypass occurred. Tools:
- Burp Suite Repeater / Turbo Intruder – send N identical requests in parallel; use the single-packet attack for HTTP/2 targets
- Python concurrent.futures – script concurrent requests with ThreadPoolExecutor and submit(), synchronising the start with a threading.Barrier
- Go goroutines with sync.WaitGroup – many teams write dedicated race condition test harnesses in Go for maximum concurrency
Code review indicators: look for patterns where a SELECT is followed by conditional logic followed by an UPDATE, INSERT, or DELETE without the check being inside an atomic database operation. Any read-modify-write sequence where the read and write are separate queries is a candidate.
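In code, the indicator looks like the following sqlite3 sketch (redeem_unsafely is an illustrative handler name): a SELECT, conditional logic, then a separate UPDATE.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE coupons (code TEXT PRIMARY KEY, used INTEGER)")
conn.execute("INSERT INTO coupons VALUES ('SAVE20', 0)")
conn.commit()

# ANTI-PATTERN: the read and the write are separate statements, so two
# concurrent callers can both pass the check before either one writes.
def redeem_unsafely(code):
    row = conn.execute(
        "SELECT used FROM coupons WHERE code = ?", (code,)).fetchone()
    if row and row[0] == 0:        # check ...
        conn.execute(              # ... then act: this gap is the race
            "UPDATE coupons SET used = 1 WHERE code = ?", (code,))
        conn.commit()
        return True
    return False
```

Single-threaded, this behaves correctly, which is exactly why it survives review; the bug only appears when two callers occupy the gap between the SELECT and the UPDATE at once.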
The Right Fixes
- Atomic database operations: Move the condition into the UPDATE/INSERT statement. Use affected row count to detect concurrent modification.
- Database-level unique constraints: For one-per-user resources, a UNIQUE constraint on (resource_id, user_id) makes duplicate inserts fail at the database level regardless of application logic.
- Pessimistic locking with SELECT FOR UPDATE: For complex multi-step operations where the condition cannot be expressed in a single UPDATE, hold a row lock for the duration of the transaction.
- Redis atomic operations: For distributed rate limiting and counter enforcement, use Redis INCR + EXPIRE patterns or Lua scripts that execute atomically. Do not implement rate limits with GET-then-SET.
- Idempotency keys: For payment and booking flows, require a client-generated idempotency key with each request. The server deduplicates on the key, so even if a request is replayed or races, the outcome is applied once.
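The unique-constraint fix from the list above can be sketched with sqlite3 (redeem_once is an illustrative name): the database, not the application, rejects the duplicate.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE redemptions (
    coupon_id INTEGER,
    user_id   INTEGER,
    UNIQUE (coupon_id, user_id))""")

def redeem_once(coupon_id, user_id):
    # However many application instances race on the same pair, the
    # UNIQUE constraint lets exactly one INSERT through.
    try:
        with conn:                 # commits on success, rolls back on error
            conn.execute("INSERT INTO redemptions VALUES (?, ?)",
                         (coupon_id, user_id))
        return True
    except sqlite3.IntegrityError:
        return False

print(redeem_once(1, 42))  # True
print(redeem_once(1, 42))  # False (the database rejects the duplicate)
```

Treating the constraint violation as a normal "already redeemed" outcome, rather than an error, keeps the handler simple and race-proof.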
Testing requirement: Add race condition tests to your integration test suite using concurrent HTTP clients. A test that fires 20 parallel coupon-redemption requests for a single-use coupon and asserts exactly 1 succeeded is worth more than any SAST rule against this class of bug.
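Such a test can be sketched end-to-end with the standard library alone; here sqlite3 stands in for the application's database and a thread pool for 20 concurrent HTTP clients.

```python
import sqlite3
import threading
from concurrent.futures import ThreadPoolExecutor

# isolation_level=None puts sqlite3 in autocommit mode, so each UPDATE
# is its own atomic transaction.
conn = sqlite3.connect(":memory:", check_same_thread=False,
                       isolation_level=None)
conn.execute("CREATE TABLE coupons (code TEXT PRIMARY KEY, used INTEGER)")
conn.execute("INSERT INTO coupons VALUES ('ONCE', 0)")

N = 20
barrier = threading.Barrier(N)
api_lock = threading.Lock()        # one shared sqlite3 connection needs
                                   # serialised access; against a real
                                   # server each client has its own

def attempt_redemption():
    barrier.wait()                 # all 20 "requests" fire at once
    with api_lock:
        cur = conn.execute(
            "UPDATE coupons SET used = 1 WHERE code = 'ONCE' AND used = 0")
    return cur.rowcount            # 1 for the winner, 0 for everyone else

with ThreadPoolExecutor(max_workers=N) as pool:
    futures = [pool.submit(attempt_redemption) for _ in range(N)]
    wins = sum(f.result() for f in futures)

print(wins)  # 1: exactly one of the 20 parallel redemptions succeeded
```

The assertion belongs in the test, not the happy path: if a refactor ever splits the check out of the UPDATE, wins climbs above 1 and the suite fails.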