What a week. A handful of widely trusted open-source packages shipped with supply chain compromises baked in. Half the industry is still figuring out what they were running. I have thoughts. They’ll keep.
But today, we’re talking about dialog boxes.
I timed myself once. One of those “Allow Access?” screens popped up and I clicked through it in two seconds. I didn’t read a single word.
I thought about that when a colleague asked me to help him figure out why his inbox was acting strange. Someone had access to his email. He hadn’t been hacked, not in the way you’d think. Turns out he’d authorized an app months earlier through one of those “Continue with Google” screens. He didn’t remember doing it. He definitely didn’t remember giving it permission to read every message in his inbox.
That’s the thing about these screens. You’ve clicked through them too. You’re signing up for a new app. It asks you to “Continue with Google.” A screen pops up. It says something like “This app wants to access your email address and profile information.” You click “Allow” before the page even finishes loading. Everyone does this. Every single time.
That screen is an OAuth (Open Authorization, a standard that lets one service grant limited access to another on your behalf) consent dialog. It’s supposed to be the moment you make an informed decision about what you’re sharing and with whom. In practice, it’s a speed bump that nobody slows down for.
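For the curious, here’s roughly what the app constructs before that screen ever renders. This is a minimal Python sketch against Google’s documented OAuth 2.0 authorization endpoint; the client ID and redirect URI are made-up placeholders.

```python
# A sketch of the authorization URL an app builds before you ever see
# the consent screen. Endpoint and parameter names follow Google's
# OAuth 2.0 flow; credential values are placeholders.
from urllib.parse import urlencode

AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth"

params = {
    "client_id": "1234567890-example.apps.googleusercontent.com",  # placeholder
    "redirect_uri": "https://app.example.com/oauth/callback",      # placeholder
    "response_type": "code",
    # These scope strings are what the dialog translates into
    # "This app wants to access your email address and profile information."
    "scope": "openid email profile",
    "access_type": "offline",  # also asks for a refresh token (more on that later)
}

print(f"{AUTH_ENDPOINT}?{urlencode(params)}")
```

Everything you’re deciding on is packed into that `scope` parameter. The dialog is just its rendering.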
Think of it like handing your car keys to a valet. You’re expecting them to park the car. But the claim ticket, if you’d actually read it, might say they can also open the trunk, adjust the mirrors, and keep a copy of the key for as long as they want. You didn’t read the ticket. You just wanted to get to dinner.
The fine print on the claim ticket
Behind that dialog, the app you’re signing into talks to an identity provider (the service that knows who you are, like Google, Microsoft, or GitHub). It asks for permission to access certain things on your behalf. The industry calls those “things” scopes.
A scope might be as narrow as “read your email address” or as broad as “manage your entire Google Drive.” One claim ticket says “park in the garage, engine off.” Another says “full vehicle access, including trunk, glovebox, and the right to make copies of the key.” The consent screen is supposed to show you which ticket the app is writing. Your job is to decide if that level of access feels reasonable.
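The scope strings themselves are public, and they’re blunter than the consent screen lets on. Here are real Gmail and Drive scopes, narrowest to broadest:

```python
# Actual Google scope strings, ordered from narrow to broad. The consent
# screen summarizes these in one line each; the strings are what the app
# really asks for.
GMAIL_SCOPES = [
    "https://www.googleapis.com/auth/gmail.readonly",  # read every message
    "https://www.googleapis.com/auth/gmail.send",      # send mail as you
    "https://mail.google.com/",                        # full mailbox access
]

DRIVE_SCOPES = [
    "https://www.googleapis.com/auth/drive.file",      # only files the app created or you opened with it
    "https://www.googleapis.com/auth/drive.readonly",  # read everything in your Drive
    "https://www.googleapis.com/auth/drive",           # read, write, and delete everything
]
```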
But most users don’t know what a scope is. They don’t understand the difference between “read” and “manage.” They see a wall of text, recognize the logo of a company they trust, and click Allow.
My colleague is sharp. He works in tech. And he still didn’t realize the app he’d authorized could read his email. If he missed it, who’s catching it?
Every valet is printing “full access”
On the other side of the screen, developers often request more scopes than they actually need. Not out of malice. Out of convenience.
“We might need calendar access later, so let’s request it now.” “We need to read emails, but the API (Application Programming Interface, the way two software systems talk to each other) bundles read and send into the same scope, so we’ll just request both.” “The documentation says we need this scope, but it’s not clear which one maps to our use case, so we’ll request all of them.”
It’s the equivalent of every valet company printing “full vehicle access” on every claim ticket, even when all they need to do is park the car. Over time, applications accumulate permissions the way a valet’s key drawer fills up with copies that should have been destroyed months ago. Nobody goes back to clean them out. Everything still works. Reducing scopes might break something. So the answer, as always, is to do nothing.
The consent screen sits between users and developers, quietly failing at the one job it was designed to do.
The vest looked real
You’ve handed your keys to so many valets at so many restaurants that you stopped reading the ticket years ago. Security researchers call it consent fatigue. It’s what happens when you ask people to make too many trust decisions: they stop making them. And attackers know this.
Phishing attacks that abuse the consent process work terrifyingly well. An attacker creates a legitimate-looking app, gives it a trustworthy name, and requests dangerous scopes like “read all your email” or “access your files.” The user sees a familiar-looking consent dialog, assumes it’s safe because it’s coming from Google or Microsoft, and clicks Allow. Someone in a valet vest standing outside a restaurant they don’t work for. The vest looks right. The ticket looks real. You hand over the keys without a second thought.
Pulling this off used to require real effort. Building a convincing app, designing a landing page, writing persuasive copy. Each fake valet operation was handcrafted. Today, large language models (the AI technology behind chatbots and content generators) let an attacker generate hundreds of unique, polished phishing apps with human-quality branding in an afternoon. The claim ticket still looks the same. But there are a lot more people in vests now.
The user just authorized a stranger to read their inbox. No password was stolen. No malware was installed. They did it themselves. The system designed to protect them held the door open.
That’s what happened to my colleague. An app that looked like a productivity tool, authorized through a screen he’d clicked through a thousand times before. Months later, someone was quietly reading his email.
The key drawer nobody cleans
When you click Allow, the app receives a token. An access token is a short-lived pass that lets the app act on your behalf right now. A refresh token works on a longer timeline: it’s a key that lets the app keep getting new passes without asking you again. Either way, that token represents your consent. As long as the app holds it, it can keep accessing your data.
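If you’re wondering what the app actually receives, here’s a rough sketch of the exchange against Google’s token endpoint. Every credential value is a placeholder.

```python
# A sketch of the token exchange behind the Allow button, using Google's
# token endpoint. All codes and secrets below are placeholders.
import requests

TOKEN_ENDPOINT = "https://oauth2.googleapis.com/token"

# Step 1: after you click Allow, the app trades a one-time code
# from the redirect for tokens.
resp = requests.post(TOKEN_ENDPOINT, data={
    "grant_type": "authorization_code",
    "code": "one-time-code-from-the-redirect",          # placeholder
    "client_id": "app-client-id",                       # placeholder
    "client_secret": "app-client-secret",               # placeholder
    "redirect_uri": "https://app.example.com/oauth/callback",
})
tokens = resp.json()
# tokens["access_token"]  -> the short-lived pass (expires in ~an hour)
# tokens["refresh_token"] -> the long-lived key

# Step 2: months later, without ever showing you another screen,
# the app mints itself a fresh pass.
resp = requests.post(TOKEN_ENDPOINT, data={
    "grant_type": "refresh_token",
    "refresh_token": tokens["refresh_token"],
    "client_id": "app-client-id",
    "client_secret": "app-client-secret",
})
new_access_token = resp.json()["access_token"]
```

Notice what’s missing from step 2: you. The refresh loop runs indefinitely, no consent screen required.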
Most people never revisit the apps they’ve authorized. Go to your Google account and look at “Third-party apps with account access.” You’ll probably find apps you signed up for years ago, apps you forgot existed, apps that still have permission to read your email or access your files.
Every one of those is a valet that still has a copy of your car key. If any of them gets broken into, the attacker inherits every permission you granted. And unlike a stolen password, there’s no prompt telling you to change it. The access just continues, silently, until someone notices or the token finally expires. Most of those tokens never do.
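Revoking, at least, is cheap. Against Google’s revocation endpoint it’s a single request (the token value here is a placeholder):

```python
# Destroying a grant is one HTTP call. Google's revocation endpoint
# accepts either an access token or a refresh token; revoking the
# refresh token kills the whole grant.
import requests

requests.post(
    "https://oauth2.googleapis.com/revoke",
    params={"token": "the-refresh-token-you-want-dead"},  # placeholder
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
```

One call to take the key back. The hard part isn’t revoking; it’s remembering the key exists.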
What the claim ticket leaves off
Here’s what the claim ticket doesn’t tell you. It doesn’t say how long the valet keeps your key. It doesn’t say what happens if the valet company gets robbed. Good luck finding the difference between “read your email” and “read every email in your inbox, including password reset links and two-factor codes (those six-digit verification texts you get when logging in).” The ticket presents all of these as the same kind of decision. They’re not even close.
And it gets worse the closer you look. “Read and manage your Google Drive” as a single permission is like handing someone your keys and your title. Reading a specific folder is very different from deleting everything in your account. The ticket doesn’t make that distinction. So every user makes the same choice regardless of what they’re actually giving away.
Then there’s the part nobody thinks about: consent doesn’t expire. If you haven’t visited a restaurant in six months, they shouldn’t still have a copy of your key. But most authorized apps keep their tokens indefinitely, long after you’ve forgotten they exist. The screen communicates permissions without communicating risk. It lists what the valet can do without ever making you feel the weight of what you’re handing over.
Now scale that up. Organizations have no fleet view of every valet key ever issued across the company. Which apps were given keys, what do those keys unlock, how recently were they used, and has any valet started accessing parts of the car that weren’t on the original ticket? Most organizations can’t answer a single one of those questions. That’s not a gap in their security. That’s a gap in the system’s design.
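Google Workspace admins can get partway there with the Admin SDK’s Directory API, which lists the third-party tokens issued for each user. A sketch, assuming you already hold an admin access token with the user-security scope; the token and email below are placeholders.

```python
# A sketch of per-user OAuth grant auditing via the Admin SDK Directory
# API. Assumes `admin_token` is an already-obtained access token with the
# admin.directory.user.security scope.
import requests

def list_oauth_grants(admin_token: str, user_email: str) -> list[dict]:
    """Return the third-party OAuth tokens issued for one user."""
    resp = requests.get(
        f"https://admin.googleapis.com/admin/directory/v1/users/{user_email}/tokens",
        headers={"Authorization": f"Bearer {admin_token}"},
    )
    resp.raise_for_status()
    return resp.json().get("items", [])

for grant in list_oauth_grants("ya29.admin-token", "colleague@example.com"):  # placeholders
    print(grant.get("displayText"), grant.get("scopes"))
```

That answers “which apps, with what keys” for one user at a time. “How recently were they used” and “has the valet wandered off the ticket” still require tooling most organizations don’t have.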
The trust handshake nobody shakes
And this isn’t just about humans anymore. AI agents are beginning to go through this consent process on their own. Software that can browse, read, and take actions on your behalf without you in the loop. When an AI assistant asks for access to your Gmail or your calendar, who’s actually granting consent? You click Allow, but the entity using your data is a model, not a person. You’re handing the claim ticket to something that doesn’t know it’s in a parking garage. The consent framework was built for human-to-human trust. It has no answer for what happens when the thing on the other side of the screen isn’t human.
We built a system that asks for your permission, then trained you to give it without thinking. Now we’re building AI that won’t bother asking at all.
My colleague revoked the app’s access. But the app had been reading his email for six months before he thought to check. Six months of silence where everything looked normal.
I still don’t read those screens. I doubt you do either.
Previously on Off White Paper: The Flimsy Wristband and The Self-Signed Permission Slip explored the tokens that prove who you are. This post looks at the moment you decide to hand that proof to someone else.