What Cryptographic Failures Really Means
In the OWASP Top 10 2017, this category was called "Sensitive Data Exposure", which sounds like a consequence, not a cause. The 2021 rename to "Cryptographic Failures" is more accurate: the failure is in how (or whether) data is cryptographically protected. The exposure is just what happens downstream.
Cryptographic failures cover a wide range of issues: using no encryption where you should, using broken algorithms that attackers can crack, mismanaging encryption keys, misconfiguring TLS, and hashing passwords with algorithms designed for speed rather than resistance. These aren't exotic vulnerabilities – they're extremely common, and in our experience they show up in the majority of large codebases we scan.
OWASP A02:2021: Cryptographic Failures moved up from third place to second in 2021, reflecting how often it appears in breach reports. It's behind only Broken Access Control.
The Most Dangerous Mistakes
These are the patterns we see most frequently in production codebases – and the ones that lead directly to breaches when exploited.
Using MD5 or SHA-1 for passwords
MD5 and SHA-1 are general-purpose hash functions built for speed. A modern GPU can brute-force MD5 at billions of guesses per second, and precomputed rainbow tables make unsalted hashes even cheaper to reverse. If you're storing passwords as MD5 hashes, an attacker who gets your database will have plaintext passwords within hours.
```python
import hashlib

# WRONG – MD5 is not a password hash; it's broken for this use case
def store_password(password: str) -> str:
    return hashlib.md5(password.encode()).hexdigest()

# Also wrong – SHA-256 is still a fast hash, not suitable for passwords
def store_password_sha(password: str) -> str:
    return hashlib.sha256(password.encode()).hexdigest()
```
Hardcoded encryption keys
Encryption is only as strong as your key management. A hardcoded key in source code is effectively a known key – anyone with repo access (including attackers who breach your Git history) has it.
```python
# Hardcoded key – this ends up in version control
SECRET_KEY = "my-super-secret-key-1234"
DATABASE_ENCRYPTION_KEY = "aes-key-hardcoded-here"

# Also a problem: predictable fallback keys
import os
SECRET_KEY = os.environ.get("SECRET_KEY", "fallback-dev-key")  # fallback used in prod
```
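A safer pattern is to fail closed when the key is missing rather than silently falling back to a baked-in dev value. A minimal sketch (the `SECRET_KEY` variable name is illustrative; in production the value would be injected by a secrets manager):

```python
import os

def load_secret_key() -> bytes:
    """Load the encryption key from the environment; never fall back silently."""
    key = os.environ.get("SECRET_KEY")
    if not key:
        # Fail closed: better to refuse to start than to run with a known key
        raise RuntimeError("SECRET_KEY is not set; refusing to start")
    return key.encode()
```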
Cleartext connection strings
Database credentials and API keys in cleartext config files, environment variable dumps in logs, or connection strings embedded in code are a category of their own that frequently overlaps with crypto failures.
We've seen this repeatedly: During SAST scans of microservice architectures, it's common to find database passwords in cleartext inside Kubernetes ConfigMaps (not Secrets), or encryption keys logged at startup for "debugging purposes" that never got removed.
TLS/HTTPS Pitfalls
Just enabling HTTPS doesn't mean your TLS configuration is secure. The common pitfalls go beyond certificate management.
Accepting invalid certificates
Developers often disable certificate validation to get things working in dev – and that code makes it to production.
```python
import requests

# WRONG – disables certificate verification entirely
response = requests.get("https://api.example.com/data", verify=False)

# Fixed – always verify; use a custom CA bundle if needed
response = requests.get("https://api.example.com/data", verify=True)
# or with a custom CA: verify="/path/to/ca-bundle.crt"
```
Using deprecated TLS versions
TLS 1.0 and 1.1 are formally deprecated (RFC 8996). They're vulnerable to protocol-level attacks such as BEAST and to POODLE-style padding attacks against their CBC cipher modes. Your servers should only negotiate TLS 1.2 and 1.3. Check your Nginx/Apache/ALB config – old defaults can persist for years.
Weak cipher suites
Even on TLS 1.2, weak cipher suites (RC4, DES, export-grade ciphers, NULL cipher) can be negotiated if your config allows them. Use tools like SSL Labs to test your public endpoints and tighten cipher suite configuration explicitly.
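Both pitfalls can also be enforced at the application layer. A sketch using Python's standard `ssl` module – the cipher string shown is one reasonable modern choice, not the only valid one:

```python
import ssl

# Build a client context that refuses anything below TLS 1.2
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Restrict TLS 1.2 cipher suites to forward-secret AEAD suites
# (TLS 1.3 suites are not affected by set_ciphers)
ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")
```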
Internal services matter too: Internal APIs, service meshes, and database connections often skip TLS entirely because "nobody can reach them from outside." Internal network compromise happens – mTLS for service-to-service communication is worth the investment.
Encryption at Rest
Encrypting data at rest protects against storage theft – someone walking away with a hard drive or cloud snapshot. The most common failure isn't skipping encryption entirely; it's encrypting at the wrong layer.
Volume-level vs field-level encryption
Disk encryption (LUKS, BitLocker, cloud-provider volume encryption) protects against physical theft but does nothing against an attacker who already has application access. For truly sensitive data – PII, payment card numbers, SSNs, health data – field-level encryption at the application layer gives you meaningful protection even if the database is compromised.
```python
from cryptography.fernet import Fernet
import os

# Key should come from KMS or Vault, not from source code
key = os.environ["FIELD_ENCRYPTION_KEY"].encode()
cipher = Fernet(key)

def encrypt_pii(value: str) -> str:
    return cipher.encrypt(value.encode()).decode()

def decrypt_pii(token: str) -> str:
    return cipher.decrypt(token.encode()).decode()
```
Authenticated encryption
Use authenticated encryption modes (AES-GCM, ChaCha20-Poly1305) rather than AES-CBC or AES-ECB. ECB mode in particular is a classic mistake – it leaks patterns in the plaintext through identical ciphertext blocks. AES-GCM gives you both confidentiality and integrity in one primitive.
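A minimal AES-GCM sketch using the `cryptography` package (the `encrypt`/`decrypt` helper names are illustrative). Note the fresh random 96-bit nonce per message and the automatic integrity check on decryption:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

key = AESGCM.generate_key(bit_length=256)  # in production: from KMS/Vault
aesgcm = AESGCM(key)

def encrypt(plaintext: bytes, aad: bytes = b"") -> bytes:
    nonce = os.urandom(12)                       # unique nonce per message
    return nonce + aesgcm.encrypt(nonce, plaintext, aad)

def decrypt(blob: bytes, aad: bytes = b"") -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    # Raises InvalidTag if the ciphertext or AAD was tampered with
    return aesgcm.decrypt(nonce, ciphertext, aad)
```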
Password Hashing Done Right: bcrypt and Argon2
Passwords need a completely different class of hash function – one specifically designed to be slow and memory-intensive so that brute-force is computationally expensive even with dedicated hardware.
```python
import bcrypt
from argon2 import PasswordHasher
from argon2.exceptions import VerifyMismatchError

# bcrypt – battle-tested, widely available
def hash_password_bcrypt(password: str) -> str:
    salt = bcrypt.gensalt(rounds=12)  # cost 12 is a sensible baseline today
    return bcrypt.hashpw(password.encode(), salt).decode()

def verify_bcrypt(password: str, hashed: str) -> bool:
    return bcrypt.checkpw(password.encode(), hashed.encode())

# Argon2id – OWASP recommended, winner of the Password Hashing Competition
ph = PasswordHasher(time_cost=2, memory_cost=65536, parallelism=2)

def hash_password_argon2(password: str) -> str:
    return ph.hash(password)

def verify_argon2(password: str, hashed: str) -> bool:
    try:
        return ph.verify(hashed, password)
    except VerifyMismatchError:
        return False
```
Argon2id is the current OWASP recommendation. bcrypt is an acceptable alternative for legacy systems. scrypt is also fine. If you're using PBKDF2, use at least 600,000 iterations with SHA-256. Never use MD5, SHA-1, or SHA-256 alone – they're not designed for this.
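If PBKDF2 is what your environment mandates (e.g. FIPS compliance), the standard library covers it at the recommended iteration count. A sketch with illustrative helper names:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # OWASP-recommended floor for PBKDF2-HMAC-SHA256

def hash_password_pbkdf2(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_pbkdf2(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```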
The work factor matters: bcrypt's cost parameter doubles the work with each increment. Cost 12 means 2^12 iterations. Bump this as hardware gets faster – modern hardware can handle cost 13-14 without noticeable UX impact at login time.
Key Management
Your encryption is only as secure as how you manage the keys. Poor key management is often what turns "encrypted data" into "data an attacker can easily decrypt."
- Use a secrets manager: AWS Secrets Manager, HashiCorp Vault, Azure Key Vault. Never store keys in code, config files, or environment variable files checked into source control.
- Rotate keys regularly: Plan for key rotation from day one – it's painful to retrofit. Support multiple active keys (current + N previous) so you can re-encrypt data progressively.
- Separate keys by purpose: Database encryption key, JWT signing key, session encryption key – these should all be different. Compromise of one shouldn't compromise everything.
- Audit key access: Who or what can retrieve decryption keys? Key access should be logged, alertable, and reviewed.
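One way to get purpose-separated keys without managing many independent secrets is to derive them from a single master key with HKDF, using the `info` parameter as a purpose label. A sketch with the `cryptography` package (the purpose labels are illustrative):

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

master_key = os.urandom(32)  # in practice: fetched from the secrets manager

def derive_key(purpose: bytes) -> bytes:
    # The info parameter binds each derived key to exactly one purpose;
    # a new HKDF instance is required per derivation
    return HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=purpose,
    ).derive(master_key)

jwt_key = derive_key(b"jwt-signing")
db_key = derive_key(b"db-field-encryption")
```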
Envelope encryption: Cloud KMS services use envelope encryption – data is encrypted with a Data Encryption Key (DEK), and the DEK is encrypted with a Key Encryption Key (KEK) managed by the KMS. This way the long-lived key never touches your application memory directly.
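The DEK/KEK flow can be sketched locally with two Fernet keys standing in for the KMS; in a real deployment the wrap/unwrap calls go to the KMS API and the KEK never leaves it:

```python
from cryptography.fernet import Fernet

# KEK would live inside the KMS; generated locally only for illustration
kek = Fernet(Fernet.generate_key())

def encrypt_with_envelope(plaintext: bytes) -> tuple[bytes, bytes]:
    dek_key = Fernet.generate_key()           # fresh DEK per object
    ciphertext = Fernet(dek_key).encrypt(plaintext)
    wrapped_dek = kek.encrypt(dek_key)        # the KMS "wrap" call in real life
    return wrapped_dek, ciphertext            # store both; DEK itself is discarded

def decrypt_with_envelope(wrapped_dek: bytes, ciphertext: bytes) -> bytes:
    dek_key = kek.decrypt(wrapped_dek)        # the KMS "unwrap" call
    return Fernet(dek_key).decrypt(ciphertext)
```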
Detection with SAST
Static analysis can identify cryptographic failures reliably because the bad patterns are concrete and code-visible – specific function calls, specific algorithm names, specific import paths.
What SAST catches well in this category:
- Use of `hashlib.md5()`, `hashlib.sha1()` for security-sensitive operations
- Hardcoded secrets: string literals that match key patterns or API key formats, or that are assigned to variables named `secret`, `key`, `password`
- `verify=False` in HTTP client calls
- Use of DES, RC4, or ECB mode in cipher construction
- Cleartext protocols: `http://` connections for sensitive endpoints
- Random number generation using the `random` module instead of `secrets` for security purposes
Running SAST in CI means every pull request is checked. We've seen teams go from "periodic crypto audit" to "every PR checked" and catch things like a developer generating session tokens with the random module instead of secrets.token_hex() – a subtle difference most reviewers would miss.
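The difference is easy to see side by side; a small sketch:

```python
import random
import secrets

# WRONG – Mersenne Twister output is predictable; with enough observed
# tokens an attacker can reconstruct the generator state and forge sessions
weak_token = "%032x" % random.getrandbits(128)

# Right – CSPRNG-backed token generation
session_token = secrets.token_hex(16)  # 32 hex chars, 128 bits of entropy
```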
Prevention Checklist
- Classify your data – know which fields are sensitive and need field-level encryption vs which just need volume encryption
- Use Argon2id or bcrypt for passwords – never MD5, SHA-1, or SHA-256 alone
- Enforce TLS 1.2+ everywhere – including internal service-to-service traffic
- Always verify TLS certificates – no `verify=False`, no `InsecureSkipVerify`
- Use authenticated encryption – AES-GCM over AES-CBC, never AES-ECB
- Store secrets in a secrets manager – not in code, not in .env files committed to git
- Rotate keys – build rotation support from day one
- Use the `secrets` module for security tokens – not `random`
- Run SAST in CI – catch crypto failures before they reach production
- Test TLS config with SSL Labs – regularly, for public-facing services
Find Cryptographic Failures in Your Code
AquilaX SAST detects weak algorithms, hardcoded keys, and insecure TLS configuration across your entire codebase – automatically, on every pull request.
Start Free Scan