DNS Rebinding: How the Same-Origin Policy Gets Bypassed
The browser's Same-Origin Policy (SOP) is the primary mechanism that prevents a page loaded from evil.com from reading responses from internal-api.company.com. SOP keys on the tuple of (scheme, hostname, port); requests to different origins are opaque by default.
DNS rebinding undermines SOP not by breaking the policy itself, but by changing what the policy thinks the origin is. The attack proceeds in two phases:
- Initial resolution: The victim visits a page at attacker.com. The DNS response for attacker.com legitimately points to the attacker's server (e.g., 203.0.113.10). The attacker's server serves a JavaScript payload.
- Rebinding: The DNS record's TTL expires (set to an extremely short value, often 0 or 1 second). The attacker's DNS server then changes the A record for attacker.com to point to a private IP, for example 192.168.1.1. The JavaScript is still running in the browser under the origin attacker.com.
Now when the JavaScript makes an XHR to http://attacker.com/api/data, the browser resolves attacker.com again, and this time gets the internal IP. The request goes to the internal host at 192.168.1.1. The SOP is not violated from the browser's perspective, because the origin hostname hasn't changed. The response is readable by the JavaScript.
TTL manipulation is the key lever. DNS resolvers may cache a record for up to its TTL, so a TTL of 0 or 1 second means the record becomes eligible for re-resolution almost immediately. Many stub resolvers and browsers re-resolve on demand, and attackers exploit this to trigger re-resolution exactly when needed.
Full Attack Walkthrough
Let's walk through a concrete scenario: a developer has a local service (say a Jupyter notebook server, a local Kubernetes dashboard, or a development API) running on localhost:8080 without authentication, because "it's only local".
1. The attacker registers rebind.attacker.com with a DNS server they control. The record initially resolves to their external server (203.0.113.10) with TTL=1.
2. The victim visits http://rebind.attacker.com:8080 via a phishing link. The initial resolution returns 203.0.113.10, and the attacker's server on port 8080 serves a page whose JavaScript payload sets a timer for 2 seconds, then begins polling http://rebind.attacker.com:8080/api/.
3. The attacker's DNS server changes the A record for rebind.attacker.com to 127.0.0.1. When the browser re-resolves the name (TTL expired), it gets 127.0.0.1.
4. The polling request to http://rebind.attacker.com:8080/api/ is now directed to 127.0.0.1:8080. The browser sees this as a same-origin request, and the response is returned to the JavaScript.

Targeting Specific Internal Ranges
Beyond localhost, attackers can target predictable internal addresses. Home routers are typically at 192.168.0.1 or 192.168.1.1 and often have unauthenticated admin interfaces. Corporate internal services in the 10.0.0.0/8 range can be probed if the victim is on a corporate network.
Tools like Singularity of Origin automate DNS rebinding attacks and include pre-built payloads for common targets (Jupyter, Spring Boot Actuator, Kubernetes dashboards, IoT admin UIs).
Using DNS Rebinding to Bypass SSRF Protections
Server-side SSRF protections typically work by resolving the target URL's hostname and checking whether the resulting IP is in a blocklist (RFC 1918, loopback, link-local). If it is, the request is blocked before being sent.
DNS rebinding bypasses this check through a time-of-check to time-of-use (TOCTOU) race at the DNS layer:
The attacker's DNS server serves a valid public IP for the first resolution (passing the check), then returns the target private IP for the second resolution (used for the actual connection). The window between the two is typically milliseconds, but attackers can use DNS servers that respond differently based on a counter or timing signal.
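The counter-based responder behaviour described above can be sketched in a few lines. This is an illustrative toy, not a real DNS server; the class name and IPs are hypothetical:

```python
# Sketch of attacker-side resolution logic: answer the first (validation)
# lookup with a harmless public IP, then steer subsequent lookups to the
# internal target. A real attack would wire this into a DNS server.
class RebindingResolver:
    def __init__(self, public_ip: str, private_ip: str) -> None:
        self.public_ip = public_ip
        self.private_ip = private_ip
        self.queries = 0  # counter driving the flip

    def answer(self, name: str) -> str:
        """Return the A record to serve, flipping after the first query."""
        self.queries += 1
        # First query: the SSRF filter sees a public IP and passes the check.
        # Later queries: the actual connection is steered to the private IP.
        return self.public_ip if self.queries == 1 else self.private_ip


resolver = RebindingResolver("203.0.113.10", "192.168.1.1")
print(resolver.answer("rebind.attacker.com"))  # validation lookup: public IP
print(resolver.answer("rebind.attacker.com"))  # connection lookup: private IP
```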
The correct defence is to resolve the hostname exactly once, verify the IP, and then connect to that verified IP directly, bypassing DNS entirely for the connection step. In Python: resolve with socket.getaddrinfo, verify, then use the raw IP in the connection.
Correct SSRF-Safe Resolution
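A minimal sketch of the resolve-once-then-pin approach, using only the standard library. It handles plain HTTP only and trims error handling; HTTPS would additionally need SNI and certificate hostname checks:

```python
import ipaddress
import socket
from http.client import HTTPConnection
from urllib.parse import urlparse


def resolve_and_verify(hostname: str) -> str:
    """Resolve once; reject if ANY returned address is private/loopback/etc."""
    infos = socket.getaddrinfo(hostname, None, proto=socket.IPPROTO_TCP)
    for *_, sockaddr in infos:
        ip = ipaddress.ip_address(sockaddr[0])
        if (ip.is_private or ip.is_loopback or ip.is_link_local
                or ip.is_reserved or ip.is_multicast):
            raise ValueError(f"{hostname} resolves to forbidden address {ip}")
    # Every returned address has been checked; pin the first one.
    return infos[0][4][0]


def safe_get(url: str) -> bytes:
    """Fetch url, connecting to the pre-verified IP so that a second DNS
    lookup cannot swap in a private address (closes the TOCTOU window)."""
    parsed = urlparse(url)
    if parsed.scheme != "http":
        raise ValueError("this sketch handles plain http only")
    ip = resolve_and_verify(parsed.hostname)
    conn = HTTPConnection(ip, parsed.port or 80, timeout=5)
    # Connect by IP, but preserve the original Host header for virtual hosting.
    conn.request("GET", parsed.path or "/", headers={"Host": parsed.netloc})
    return conn.getresponse().read()
```

Note that the connection is opened against the literal IP string; the hostname appears only in the Host header, so no second resolution ever happens.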
Cloud Metadata Service Attacks
The AWS Instance Metadata Service (IMDS) is available at 169.254.169.254. This link-local address is accessible from any process running on an EC2 instance and returns the instance's IAM role credentials without authentication. It is the primary target of SSRF attacks against cloud infrastructure.
IMDSv1 had no authentication: any HTTP GET to http://169.254.169.254/latest/meta-data/iam/security-credentials/ returned the role name, and a second request returned the actual credentials (AccessKeyId, SecretAccessKey, Token). Many production applications remain vulnerable.
IMDSv2 requires a PUT request with a X-aws-ec2-metadata-token-ttl-seconds header to first obtain a session token, which must then be passed in subsequent requests. This prevents simple SSRF exploitation because most SSRF vulnerabilities only allow GET requests or don't allow custom headers.
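The v2 flow can be sketched with the standard library. The request-building helpers below are illustrative; the header names are the documented IMDSv2 headers, and the live fetch (commented out) only works from an actual EC2 instance:

```python
import urllib.request

IMDS = "http://169.254.169.254"


def imds_token_request(ttl_seconds: int = 21600) -> urllib.request.Request:
    """Build the IMDSv2 token request: a PUT carrying the TTL header.
    An SSRF primitive that can only issue GETs cannot produce this."""
    return urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )


def imds_get_request(path: str, token: str) -> urllib.request.Request:
    """Build a metadata GET carrying the session token."""
    return urllib.request.Request(
        f"{IMDS}{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )


# On an EC2 instance this would fetch live role credentials:
# token = urllib.request.urlopen(imds_token_request(), timeout=2).read().decode()
# creds = urllib.request.urlopen(
#     imds_get_request("/latest/meta-data/iam/security-credentials/", token),
#     timeout=2,
# ).read()
```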
IMDSv2 is not universally enforced. Unless the HttpTokens setting is explicitly set to required on the instance or at the account level, IMDSv1 still works alongside IMDSv2. Audit your instances.
GCP and Azure Equivalents
Google Cloud's metadata server is at 169.254.169.254 and also at metadata.google.internal. It requires a Metadata-Flavor: Google header, which provides some SSRF protection because standard URL-fetch vulnerabilities don't add custom headers. However, server-side SSRF via code that explicitly sets headers remains viable.
Azure's IMDS is at 169.254.169.254 and requires a Metadata: true header. The same caveat applies.
DNS Hijacking: Persistent Infrastructure Compromise
Where DNS rebinding is a transient, session-scoped attack targeting browser victims, DNS hijacking is a persistent infrastructure attack targeting DNS records themselves. The attacker modifies authoritative DNS records (changing A records, MX records, or NS delegations) to redirect traffic permanently.
Attack Vectors
The most common paths to DNS hijacking are:
- Registrar account compromise: Phishing or credential-stuffing the domain registrar account allows changing nameserver delegations. All traffic for the domain routes through attacker-controlled nameservers indefinitely.
- DNS provider API key theft: Route53, Cloudflare, and other DNS providers expose APIs. A leaked API key with zone-write permissions allows arbitrary record modification without touching the registrar.
- Subdomain takeover: DNS records pointing to deprovisioned cloud resources (an old S3 bucket, an old Heroku app) can be claimed by anyone who provisions the same resource name. The CNAME still resolves; the new owner controls the endpoint.
- Registrar transfer abuse: Attackers use social engineering or forged documentation to transfer domain ownership to a different registrar under their control.
Subdomain Takeover at Scale
Subdomain takeover deserves particular attention because it is both common and automatable. The pattern:
- Enumerate the organisation's subdomains and collect their CNAME targets.
- Identify CNAMEs pointing at claimable cloud resources (S3 buckets, Heroku apps, and similar) that no longer exist.
- Provision the resource under the same name; the dangling CNAME now serves attacker content under the organisation's subdomain.
Tools like subjack, nuclei (with takeover templates), and dnsReaper automate discovery of vulnerable CNAMEs across an organisation's entire DNS footprint.
Cloud DNS-Specific Risks
Cloud environments introduce DNS risks that don't exist in traditional infrastructure.
Split-Horizon DNS and VPC Resolution
AWS Route53 Private Hosted Zones create split-horizon DNS: internal names resolve to private IPs within a VPC, and the same or different names resolve to public IPs outside it. Misconfigured associations can expose internal-only endpoints to external DNS queries, or fail to route internal traffic through private endpoints.
A common misconfiguration: a private hosted zone for internal.company.com not associated with all VPCs that need it. Resources in those VPCs fall through to public DNS and either fail to resolve or reach public endpoints that shouldn't be used for internal traffic.
IAM Permissions for DNS
Route53 zone write access should be treated as sensitive as S3 bucket write access or EC2 control plane access. A compromised IAM role with route53:ChangeResourceRecordSets can redirect any hostname in a hosted zone, including MX records (to intercept email), TXT records (to create fraudulent DKIM/SPF entries or take over domain verification), and A/CNAME records (to redirect web traffic).
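As an illustration of scoping those permissions, an IAM policy statement can grant record writes on a single hosted zone only. The zone ID below is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowWritesToOneZoneOnly",
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets",
        "route53:ListResourceRecordSets"
      ],
      "Resource": "arn:aws:route53:::hostedzone/Z0EXAMPLE12345"
    }
  ]
}
```

A role holding this policy can modify records in that one zone but cannot touch any other zone in the account.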
DNS over HTTPS and Visibility Loss
DoH bypasses traditional DNS monitoring. Endpoints that use DoH (Firefox and Chrome do by default in some configurations) will resolve names through a DoH provider rather than the corporate resolver. This blinds DNS-based security controls and logging. Enterprise DNS filtering solutions need to handle DoH explicitly, either by blocking known DoH providers or by deploying a managed DoH endpoint.
Defences Against DNS Rebinding
DNS Rebind Protection in Resolvers
Many DNS resolvers (dnsmasq, BIND, pfSense) have a rebind-protection or stop-dns-rebind option that drops responses where a public-domain name resolves to a private IP. This is the most effective single control:
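For example, in dnsmasq (option names per the dnsmasq manual; the whitelisted domain is illustrative):

```
# /etc/dnsmasq.conf: drop upstream answers that point into private ranges
stop-dns-rebind

# Permit RFC 1918 answers only for domains that legitimately need them,
# e.g. a split-horizon internal zone:
rebind-domain-ok=/internal.company.com/
```

pfSense exposes the equivalent behaviour as its "DNS Rebind Check" setting, and BIND can achieve the same effect with response-policy zones.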
Host Header Validation
Services on internal hosts should validate the Host header. When a rebinding attack occurs, the browser sends requests with the attacker's domain name in the Host header β not the actual internal hostname. Rejecting requests with unexpected Host values blocks rebinding:
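A minimal sketch of this check using only the standard library; the allowed hostnames are illustrative placeholders for whatever names the service is actually reachable under:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hostnames under which this service legitimately expects to be addressed.
ALLOWED_HOSTS = {"localhost:8080", "127.0.0.1:8080", "notebook.internal:8080"}


def host_allowed(host_header):
    """A rebinding attack arrives with the attacker's domain in Host,
    so anything outside the allowlist is rejected."""
    return host_header is not None and host_header.lower() in ALLOWED_HOSTS


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if not host_allowed(self.headers.get("Host")):
            self.send_error(403, "Forbidden: unexpected Host header")
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")


# To serve: HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```

Web frameworks offer the same control natively (e.g. Django's ALLOWED_HOSTS setting); the point is that the check must happen before any request handling.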
Private Network Access (Chrome Policy)
The Private Network Access specification (formerly CORS-RFC1918) adds a preflight check before a public-origin page can access private-network hosts. Chrome has shipped this for localhost as of Chrome 94 and for private ranges in subsequent versions. Internal services that need to be accessible from browsers must opt in via the Access-Control-Allow-Private-Network: true response header β and should only do so with deliberate intent.
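Illustratively, the preflight exchange looks like the following; the internal host and path are hypothetical, and the header names follow the PNA draft:

```
OPTIONS /api/status HTTP/1.1
Host: printer.internal
Origin: https://dashboard.example.com
Access-Control-Request-Method: GET
Access-Control-Request-Private-Network: true

HTTP/1.1 204 No Content
Access-Control-Allow-Origin: https://dashboard.example.com
Access-Control-Allow-Methods: GET
Access-Control-Allow-Private-Network: true
```

If the internal service omits the Access-Control-Allow-Private-Network header, the browser blocks the request before it is sent.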
Securing Cloud DNS
- Enable MFA on registrar accounts and DNS provider accounts. These are crown jewels.
- Use registrar lock (also called domain lock or clientTransferProhibited) to prevent unauthorised transfers.
- Scope Route53 IAM permissions to specific hosted zones using resource-level conditions.
- Audit CNAME records quarterly for references to deprovisioned cloud resources.
- Enable DNSSEC for critical domains to make record tampering detectable.
Detection Patterns
Detecting rebinding attacks in progress is difficult from the server side: the traffic looks like normal browser requests. Detection works better at the DNS and network layers.
DNS Logging and Anomaly Detection
Log all DNS queries at the resolver level. Alert on:
- Domains with TTL values of 0 or 1 second; legitimate domains rarely use these.
- Domains that resolve to a public IP initially and then to a private IP within a short window (minutes).
- Domains registered very recently (less than 24 hours old) being queried by multiple internal clients.
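The public-to-private flip rule can be sketched as a small stateful check over resolver logs. The window size is an illustrative tuning knob, and the IPs in the demo are arbitrary (note that Python's ipaddress module classifies documentation ranges like 203.0.113.0/24 as non-global, so a genuinely public address is used here):

```python
import ipaddress
from collections import defaultdict

FLIP_WINDOW_SECONDS = 300  # illustrative: "within a short window (minutes)"


class RebindDetector:
    def __init__(self):
        # name -> list of (timestamp, resolved address) observations
        self.history = defaultdict(list)

    def observe(self, ts, name, ip):
        """Record one resolution; return True if this name flipped from a
        public address to a private/loopback one within the window."""
        addr = ipaddress.ip_address(ip)
        flagged = False
        if addr.is_private or addr.is_loopback:
            for prev_ts, prev_addr in self.history[name]:
                if ts - prev_ts <= FLIP_WINDOW_SECONDS and prev_addr.is_global:
                    flagged = True
                    break
        self.history[name].append((ts, addr))
        return flagged


det = RebindDetector()
det.observe(0.0, "rebind.attacker.com", "8.8.8.8")           # public answer
print(det.observe(2.0, "rebind.attacker.com", "127.0.0.1"))  # True: flip
```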
Subdomain Takeover Monitoring
Automate CNAME dangling-record detection as part of your asset inventory process. DNS Reaper and Nuclei both have CI-friendly output modes:
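The core check those tools perform can be sketched in a few lines: for each CNAME in the zone, test whether the target still resolves. The zone entries below are illustrative, and NXDOMAIN is only one takeover signal; some services (e.g. S3) resolve but return a "no such resource" error, which dedicated tools also fingerprint:

```python
import socket


def cname_dangles(cname_target):
    """A CNAME target that no longer resolves is a takeover candidate:
    if it names a claimable cloud resource, anyone who provisions that
    resource name controls the subdomain."""
    try:
        socket.getaddrinfo(cname_target, None)
        return False  # target still resolves
    except socket.gaierror:
        return True   # NXDOMAIN / no address: investigate this record


# In practice these records would come from a zone export.
zone_cnames = {
    "assets.company.com": "old-bucket.s3.amazonaws.com",
    "app.company.com": "decommissioned-app.herokuapp.com",
}
for name, target in zone_cnames.items():
    if cname_dangles(target):
        print(f"TAKEOVER CANDIDATE: {name} -> {target}")
```

Running a check like this (or the dedicated tools) on every CI run or on a schedule turns dangling-record discovery from an annual audit into a continuous control.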
DNS is infrastructure, not plumbing. The assumption that "DNS just works" is how organisations end up with six-month-old CNAME records pointing to deprovisioned services that an attacker claimed last Tuesday.