The comment shows up in code reviews all the time: “We moved it to env vars, we’re good.” A database password that had been sitting in source code is now in an environment variable. Everyone approves. The PR (a proposed code change submitted for review before it is merged into the main codebase) merges. Case closed.
Except nothing actually changed. The secret still exists, readable, in plain text. It just lives somewhere else now.
Imagine someone tells you “don’t leave your house key under the doormat.” Smart. So instead, you write “key is taped behind the mailbox” on a sticky note. Then you leave that sticky note on your desk. You mention it in your team chat. It shows up in the build logs. You haven’t solved the problem. You’ve just moved the sticky note.
That’s what a .env file is. It’s a sticky note with your secret’s hiding spot written on it. The secret is out of the code. It’s in a .env file, or a build pipeline variable, or a container environment. Job done. Move on.
Except the secret didn’t disappear. It just moved. And the place it moved to isn’t any safer.
Why environment variables feel safe
The appeal of environment variables is that they separate configuration from code. Your source code gets shared and deployed everywhere. The secret stays local, on the machine, never checked in. The sticky note stays in your house instead of going to the office.
Secrets in source code end up in version control history permanently. Even if you delete the line, it’s in a previous commit. Anyone with repository access can find it. Bots actively scan public repositories for accidentally committed credentials, and they find them within seconds.
Environment variables solve that problem. But that’s where the protection ends.
The .env file problem
In practice, environment variables on a developer’s machine usually live in a .env file in the project root. Every framework has a library that reads this file and injects the values into the runtime environment.
The .env file is supposed to be in your .gitignore (a file that tells version control to skip certain files when saving code history). And it usually is. But “usually” isn’t “always.”
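Under the hood, these libraries do something very simple. Here is a stripped-down sketch of the core idea (the function names are invented for illustration; real libraries such as python-dotenv also handle quoting, multiline values, and variable interpolation):

```python
import os

def parse_env(text):
    """Parse KEY=VALUE lines into a dict, skipping blanks,
    comments, and anything that isn't a key/value pair."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return values

def load_env(path=".env"):
    """Inject the parsed values into this process's environment,
    without overwriting variables that are already set."""
    with open(path) as f:
        for key, value in parse_env(f.read()).items():
            os.environ.setdefault(key, value)
```

Notice what this does not do: no encryption, no access control, no logging. It reads a plain-text file and copies its contents into the process environment. That is the whole mechanism.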
A quick search on any public code hosting platform will show you hundreds of thousands of .env files that were accidentally committed. Database URLs with passwords. Payment processor API (Application Programming Interface) keys. Cloud provider credentials. All sitting in public repositories, indexed by search engines, available to anyone. That’s the sticky note blowing off your desk and landing on the sidewalk.
Even when the .env file is properly ignored, it’s still a plain-text file on every developer’s laptop. If the laptop is compromised, the credentials are compromised. If the developer copies the file to set up a new machine and sends it over team chat, the values are sitting on that chat provider’s servers now. If someone builds a container image in a directory that contains the .env file, the credentials get baked into that image (a snapshot of the application used to create new running copies).
But there’s a leak nobody saw coming. Developers using AI coding assistants grant these tools access to their project directories. If the assistant can read your codebase, it can read your .env file. That sticky note on your desk? Your AI assistant just read it too. Whether those values end up in logs, suggestions, or training data depends entirely on the tool’s data handling practices. Most developers never think to ask.
The .env file isn’t a security mechanism. It’s a convenience that happens to keep credentials out of one specific place. It does nothing to protect them everywhere else they end up.
So the secret escapes your laptop. The build system handles this better. Right?
Build pipelines
Credentials leak from build systems more than from the code itself. I’ve seen this at multiple companies, and it’s always the same boring mistake. These pipelines need to push to production, access databases, call external APIs. So teams store credentials in the pipeline’s settings, the same way you might save a password in your browser.
That’s a step up from hardcoding them. But build systems expose credentials more reliably than the code itself ever does.
Build logs are the biggest culprit. Environment variables show up in debugging output when a pipeline is misconfigured. A failing test dumps the full environment. Someone echoes a variable to confirm it was set. And just like that, the value is in the build log, which might be accessible to everyone on the team, or in some cases, publicly visible. The sticky note just fell off your desk and landed in the hallway. Anyone walking by can read it.
Most build platforms try to mask values in logs by replacing them with asterisks. But masking is best effort. If the value is Base64 encoded (reformatted into text that looks like gibberish but converts right back to the original), the masking might not catch it. The same goes for URL-encoded values, values split across multiple lines, or values embedded in a JSON object (JavaScript Object Notation, a standard text format for structuring and exchanging data).
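To see why masking is best effort, here is a toy version of it in Python. The secret value and the masking rule are invented for this example; real CI platforms are more sophisticated, but the failure mode is the same:

```python
import base64

SECRET = "s3cr3t-api-key"

def mask(log_line, secret=SECRET):
    # Best-effort masking, the way build platforms do it:
    # replace the literal secret value with asterisks.
    return log_line.replace(secret, "***")

plain = f"auth failed with key {SECRET}"
encoded = "header: Basic " + base64.b64encode(SECRET.encode()).decode()

print(mask(plain))    # literal value caught: "auth failed with key ***"
print(mask(encoded))  # the Base64 form sails straight through, intact
```

The masker only knows the literal string it was told to hide. Any reversible transformation of that string, Base64, URL encoding, a split across two lines, produces output the masker has never seen.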
And then there are the credentials that live in build pipelines for years. The cloud provider key that was added by an engineer who left the company three years ago. Nobody remembers what it’s for. Nobody dares to delete it because something might break. It’s never been rotated. It still works.
If the build system can’t keep the sticky note hidden, at least production is locked down. Right?
The illusion of encryption
Production is where the sticky note gets its most convincing disguise. Think of containers as sealed shipping containers for software: standardized packages that bundle an application and everything it needs to run. Containers typically receive credentials through environment variables injected at startup. Those credentials show up in container configuration files, in orchestration platform specs (tools that manage and coordinate groups of containers running together), and in managed container service manifests, and they get set in the container’s environment the moment it starts.
The sticky note is now pinned inside a locked room. That’s better. But every process running inside that room can read it. The value shows up in the container’s internal memory, visible to anyone who knows where to look. It appears when you inspect the container’s configuration. If you’re running any other software alongside your application in the same environment, it can read every environment variable too.
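A small Python illustration of that last point. Environment variables are process-global, so any code loaded into your application can read all of them; the function name below is invented for the example:

```python
import os

os.environ["DB_PASSWORD"] = "hunter2"  # injected at container start

def some_imported_dependency():
    # Any code running in the same process -- an error-reporting
    # SDK, a compromised package, a plugin -- sees the whole
    # environment, not just "its own" variables.
    return dict(os.environ)

leaked = some_imported_dependency()
print("DB_PASSWORD" in leaked)  # True
```

There is no per-variable permission model here. If a dependency can execute at all, it can read everything the environment holds.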
Orchestration platforms have their own “secret” storage, but the name is misleading. The values are Base64 encoded, which, as we saw earlier, is just a formatting trick, not a security measure. Anyone with read access to the system’s configuration can decode them instantly. The label says “secret.” The reality says “slightly obscured plain text.”
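Decoding one of those “secrets” takes a single line. The value below is made up, but the mechanics are exactly what Base64 gives you:

```python
import base64

# What orchestration-platform "secret" storage actually holds:
manifest_value = "aHVudGVyMg=="

password = base64.b64decode(manifest_value).decode()
print(password)  # hunter2 -- encoding, not encryption
```

No key, no authentication, no secret knowledge required. Base64 exists to make binary data safe to transport as text, not to hide anything.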
When you stop writing things down
So where does the sticky note analogy break down? When you stop writing things down at all.
A dedicated secrets management system doesn’t leave credentials sitting in files, environment variables, or configuration specs. Your application reaches into a locked vault at the moment it needs the value, uses it, and the vault closes again. No sticky note. Nothing written down. Nothing left on a desk or in a hallway.
Building that kind of infrastructure takes real investment. You need access rules, operational knowledge, and the discipline to maintain it. But it addresses the actual problem instead of just moving the value to a different plain-text location.
What does that vault look like in practice? It stores values encrypted (scrambled and unreadable without the right key), so even someone who reaches the storage itself gets nothing usable. It controls access individually: not “everyone on the team” but each specific application or person that genuinely needs the credential. It rotates credentials automatically, so a leaked value has a short shelf life. And every access gets logged, so you can answer “who read this, and when?” at any moment.
If you can’t answer that question, you don’t have secrets management. You have storage with extra steps.
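To make those properties concrete, here is a deliberately tiny in-memory sketch, not a real secrets manager (real systems such as HashiCorp Vault add encryption at rest, network authentication, and automatic rotation). All names are invented; it demonstrates only per-identity access control and the audit trail:

```python
from datetime import datetime, timezone

class TinyVault:
    def __init__(self):
        self._secrets = {}   # name -> value
        self._acl = {}       # name -> identities allowed to read it
        self._audit = []     # (timestamp, identity, name, allowed)

    def put(self, name, value, allowed_identities):
        self._secrets[name] = value
        self._acl[name] = set(allowed_identities)

    def get(self, name, identity):
        allowed = identity in self._acl.get(name, set())
        # Every attempt is recorded, successful or not.
        self._audit.append((datetime.now(timezone.utc), identity, name, allowed))
        if not allowed:
            raise PermissionError(f"{identity} may not read {name}")
        return self._secrets[name]

    def who_read(self, name):
        # Answers "who read this, and when?" at any moment.
        return [(ts, who) for ts, who, n, ok in self._audit if n == name and ok]
```

The difference from an environment variable is structural: the value is handed out per request, per identity, with a record, instead of sitting readable in every process that happens to share the room.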
Not every team is ready to build that overnight. But there are stepping stones. Encrypting .env files means a stolen file is useless without the decryption key. Most cloud providers offer built-in secret storage with encryption and access controls at the platform level. And most build pipelines can run automated checks that catch accidentally exposed credentials before they reach production.
None of these are the vault. But all of them are better than a sticky note taped to the underside of a desk.
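As one example of those automated checks, a pre-merge scan can start as a handful of regular expressions. The patterns below are illustrative only; real scanners such as gitleaks or trufflehog ship hundreds of rules plus entropy heuristics:

```python
import re

# Illustrative credential patterns, not a complete rule set.
PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS-style access key ID
    re.compile(r"(?i)(password|secret|api_key)\s*=\s*['\"][^'\"]{8,}['\"]"),
]

def scan(text):
    """Return (line number, line) for every line that matches
    a known credential pattern."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in PATTERNS):
            hits.append((lineno, line.strip()))
    return hits
```

Wire something like this into the pipeline so it fails the build on a match, and an accidentally committed credential gets caught before it ever reaches a log or a repository history.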
The uncomfortable question
The rule shouldn’t be “don’t put secrets in your code.” That’s a start, but it’s not enough. The rule should be “know where every secret lives, who can access it, when it was last rotated, and what happens if it’s compromised.”
Most teams can answer the first part. Very few can answer the rest.
How many credentials does your team have right now that nobody can account for? Keys set by people who left. Values that haven’t been rotated in years. .env files copied across laptops, pasted into chat threads, cached in tools no one’s audited. If one of those leaked tomorrow, would you even know which one it was?
Previously on Off White Paper: The Number Dispenser looked at what protects your system when too many requests show up at once. This post is about the credentials sitting behind those doors.