Digital Safety

What an IDOR Bug Looks Like From the Inside: A Bug Bounty Walkthrough That Will Change How You Use Web Apps

Insecure Direct Object Reference (IDOR) is one of the most common and most damaging classes of web vulnerability in 2026. It is also one of the easiest to find. Here is how an IDOR is discovered from a bug bounty hunter's perspective, what it tells you about the apps you use every day, and how to protect yourself when developers get it wrong.

adhen prasetiyo

Burp Suite proxy interface showing an HTTP request with a numeric ID parameter being modified, illustrating the manual testing workflow used by bug bounty hunters to discover IDOR vulnerabilities

Most of the data leaks you read about in 2026 have the same root cause. It isn't a zero-day. It isn't a sophisticated nation-state actor. It's a missing if-statement.

The technical name is Insecure Direct Object Reference, or IDOR. It sits in the OWASP Top 10 under "Broken Access Control," which has been the number one category of web vulnerability in the world for several years running. I've been hunting for IDORs as a bug bounty researcher long enough that I can usually predict, within thirty seconds of opening a new application, where the IDORs are likely to live.

This article walks you through how an IDOR is discovered from my perspective, why this class of bug is so widespread, and what the discovery process should teach you about the apps you trust with your data. I'll keep it accessible — you don't need to be a developer to follow along.

What an IDOR actually is

Imagine a hospital that gives every patient a numbered file. Patient 1042 is you. Your file contains your medical history, your lab results, your insurance information.

The receptionist at the front desk has a rule: anyone who shows up and asks for file 1042 gets file 1042. The receptionist does not check ID. The receptionist does not check whether the person asking is patient 1042 or a stranger who just guessed a number.

That is IDOR. The application has a way of identifying records — by ID number, by username, by email, by some other identifier — and exposes those identifiers to the user, but does not check whether the user is authorized to access the specific record they're asking for. The lookup works. The authorization check is missing or broken.

In web terms, it usually looks like this. You log into your account on some service. You navigate to a page that shows your profile, and the URL says something like:

https://example.com/api/users/1042/profile

The number 1042 is your user ID. You open the developer tools, change the 1042 to 1041, and reload. If the application sends you back the profile of user 1041, including their email, their phone number, their billing address, and their order history — you have just found an IDOR.

That is the entire bug. There is no exotic exploitation. There is no buffer overflow. The application simply trusts the number you typed.
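That misplaced trust fits in a few lines of code. Here is a toy, in-memory sketch of the vulnerable lookup — hypothetical names and data, not any real application's code:

```python
# Toy stand-in for the real database.
PROFILES = {
    1041: {"email": "alice@example.com", "phone": "+1-555-0141"},
    1042: {"email": "bob@example.com", "phone": "+1-555-0142"},
}

def get_profile(session_user_id, requested_id):
    """The IDOR pattern: the lookup works, the authorization check is missing."""
    # Nothing here compares requested_id against session_user_id.
    return PROFILES.get(requested_id)

# Logged in as user 1042, asking for user 1041's record:
leaked = get_profile(session_user_id=1042, requested_id=1041)
# leaked is now Alice's email and phone number.
```

The fix is a single comparison before the return: if `requested_id` does not belong to `session_user_id`, refuse the request.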

How a bug bounty hunter actually finds these

My workflow on a new target — let's say a SaaS application that has a public bug bounty program — looks roughly like this. I'll describe it generically rather than pointing at specific programs because of disclosure rules.

First, I create two accounts. Not one. Two. A bug bounty hunter without two accounts is a bug bounty hunter who is going to miss most of the IDORs. I sign up as Account A with one email, and Account B with a separate email, ideally on a different domain so the application doesn't link them. I make sure both accounts have some real data in them — a profile photo, a phone number, maybe some sample content. The data needs to be distinguishable. If A has a photo of a cat and B has a photo of a dog, and I see B's dog photo while logged in as A, I know exactly what just happened.

Second, I open Burp Suite or any HTTP proxy and route my browser traffic through it. This lets me see every request the application sends, with full detail. Most of what looks magical about web app pentesting is just looking at the actual HTTP traffic that's already flowing.

Third, I log in as Account A and use the application normally for fifteen or twenty minutes. I edit my profile. I upload a file. I send a message. I create some content. I delete something. While I do this, Burp is recording every request. At the end of the session, I have a complete map of what API endpoints exist and what parameters they take.

Fourth — this is the real work — I look at every request that contains a numeric ID, a UUID, an email address, a filename, or any other identifier. For each one, I ask one question: what happens if I change this identifier to point at Account B's data while still logged in as Account A?

This is where the bug almost always lives. A surprising number of production applications, including ones at companies you would expect to know better, fail this test on at least one endpoint.
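The step-four check can be scripted. This sketch is hypothetical: `fetch` stands in for whatever HTTP call the target application uses (in practice, a proxied request carrying Account A's session cookie), and the fake backend exists only so the example runs offline:

```python
def check_endpoint_for_idor(fetch, token_a, resource_ids_of_b):
    """While authenticated as Account A, request resources known to
    belong to Account B. Any successful response is a finding."""
    findings = []
    for rid in resource_ids_of_b:
        status, body = fetch(token_a, rid)
        if status == 200:
            findings.append((rid, body))
    return findings

# Fake backend with the bug, so the sketch runs without a network.
_DB = {
    "draft-9246": {"owner": "B", "text": "B's draft"},
    "draft-9247": {"owner": "A", "text": "A's draft"},
}

def fake_fetch(token, resource_id):
    record = _DB.get(resource_id)
    if record is None:
        return 404, None
    return 200, record  # no ownership check: this is the IDOR

hits = check_endpoint_for_idor(fake_fetch, "session-token-A", ["draft-9246"])
# hits contains B's draft even though we authenticated as A.
```

This is why the two accounts need distinguishable data: the finding is only convincing when you can point at B's content arriving in A's session.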

Let me give you an example of what a successful IDOR finding looks like in practice.

A worked example, sanitized

A few years back I was looking at a content management application. Logged in as Account A, I uploaded a draft article with the URL:

GET /api/v2/drafts/9247?include=content

The response contained the full text of my draft, my author ID, and a few internal flags. Standard.

Then I changed 9247 to 9246. I expected a 403 Forbidden, or a 404 Not Found, or at worst an empty response. What I got was the full text of someone else's draft article. A draft from a real user account, complete with their author ID, their internal notes, and a timestamp showing they were actively editing it that morning.

I spent the next hour iterating through draft IDs in a small range, just enough to confirm the pattern was systematic and not a one-off configuration error. It was systematic. The /drafts/ endpoint was checking that I was logged in, but it was not checking that the draft I was requesting belonged to me.

I wrote up the report with reproduction steps, impact analysis, and a recommendation. The fix on the developer's end was three lines of code: before returning the draft, verify that the draft's author_id matches the requesting user's id. The vulnerability was deployed to production for an unknown number of months before I found it. The triage team confirmed the bug, paid the bounty, and rolled out a fix within 72 hours.
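The three-line fix the report recommended looks roughly like this — a hedged sketch with made-up names, since I never saw the real codebase:

```python
DRAFTS = {
    9246: {"author_id": 7, "text": "someone else's draft"},
    9247: {"author_id": 3, "text": "my draft"},
}

def get_draft(current_user_id, draft_id):
    draft = DRAFTS.get(draft_id)
    # The fix: verify ownership before returning anything.
    if draft is None or draft["author_id"] != current_user_id:
        return {"status": 404}  # 404 rather than 403 avoids confirming the draft exists
    return {"status": 200, "draft": draft}

get_draft(current_user_id=3, draft_id=9247)  # 200: my own draft
get_draft(current_user_id=3, draft_id=9246)  # 404: the check now fires
```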

This is a typical IDOR story. Easy to find, easy to fix, devastating in scope while it exists.

Why this class of bug is so common

IDORs are everywhere because of how modern web frameworks work.

When a developer writes an API endpoint like "get this draft article," the framework makes the database part trivial. One line of code fetches the draft by ID. The authorization part — "and verify the user requesting this draft is allowed to see it" — has to be written separately, and there is no compiler that will warn you if you forget. The bug is a sin of omission rather than a sin of commission.

This is why IDORs cluster around features that were rushed, around endpoints added in the last few sprints before a deadline, around internal admin APIs that someone exposed accidentally, around mobile apps where the same backend serves both the user-facing and admin-facing UIs. The team building the feature is focused on making the feature work, not on the negative space around it.

It's also why IDORs in 2026 are no longer just numeric IDs. Modern apps use UUIDs for primary keys specifically to make ID guessing harder. But UUIDs don't actually fix IDOR — they just hide it. If an attacker can find a way to enumerate UUIDs (through a list endpoint, a public profile, an API search, or by being in the same chat group as the target), the underlying vulnerability is identical. UUIDs are obscurity, not security.

What this means for you as a regular user

You cannot personally fix IDORs in the apps you use. That is the developer's job. But the existence of this class of bug should change a few of your habits.

Do not assume that an account on a service is private just because it's password-protected. Sensitive data is often one missing if-statement away from being readable by any other authenticated user on the platform.

Minimize the data you upload. The principle of least exposure is your defense against bugs you don't know about. Don't store a scan of your KTP (Indonesian national ID) in your password manager's notes. Don't upload high-resolution photos of your driver's license to a delivery app's profile. The data that doesn't exist on the server cannot be exfiltrated when an IDOR ships.

Use different email addresses for different services. If an IDOR on Service X exposes your full profile, an attacker who finds your email there cannot immediately pivot to Service Y. Email aliases through Apple Hide My Email or SimpleLogin do this for free.

Follow the public security disclosures of services you depend on. When a company publicly discloses a fixed IDOR, that's the time to consider rotating any sensitive data you stored there during the affected period. Most companies don't do this disclosure well, but the ones that do — Cloudflare, GitHub, GitLab — set a standard worth paying attention to.

What this means if you build software

If you write code, the workflow that prevents IDORs is short.

In every endpoint that accepts an identifier as input, before returning data, ask: does the currently authenticated user have the right to access this specific object? Don't just rely on the fact that they're authenticated.

Use object-level permission helpers in your framework. Django has django-guardian. Rails has Pundit and CanCanCan. Modern Node.js stacks have CASL. These libraries make it harder to forget the authorization step because they make it the default rather than the optional layer.

Write tests that cover the unauthorized-access case. For every endpoint that takes an ID, write a test that creates two users, has User A try to access User B's resource, and asserts that the response is 403. If the test passes, the IDOR cannot ship. This single discipline kills entire categories of bugs before they reach production.
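Sketched as a pytest-style test — the endpoint and fixtures here are toy stand-ins for your real test setup:

```python
def api_get_draft(user_id, draft_id, drafts):
    """Toy endpoint under test; returns an HTTP-like status code."""
    draft = drafts.get(draft_id)
    if draft is None or draft["author_id"] != user_id:
        return 403
    return 200

def test_user_cannot_read_another_users_draft():
    drafts = {1: {"author_id": 100, "text": "User A's draft"}}
    # User B (id 200) requests User A's draft (id 1): must be refused.
    assert api_get_draft(200, 1, drafts) == 403
    # Sanity check: the owner can still read their own draft.
    assert api_get_draft(100, 1, drafts) == 200

test_user_cannot_read_another_users_draft()
```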

Add authorization checks at the database query layer when you can. Instead of "fetch draft 9247, then check ownership," write the query as "fetch draft 9247 where author_id = current_user.id". If the row doesn't exist for that user, the query returns nothing, and there is no path through the code that returns someone else's data.
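In SQL terms, the scoped query looks like this — a runnable sqlite sketch; the table and column names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE drafts (id INTEGER PRIMARY KEY, author_id INTEGER, body TEXT)")
conn.execute("INSERT INTO drafts VALUES (9246, 7, 'their draft')")
conn.execute("INSERT INTO drafts VALUES (9247, 3, 'my draft')")

def fetch_draft(conn, draft_id, current_user_id):
    """Ownership is part of the WHERE clause: rows that aren't yours
    simply don't exist from your point of view."""
    row = conn.execute(
        "SELECT body FROM drafts WHERE id = ? AND author_id = ?",
        (draft_id, current_user_id),
    ).fetchone()
    return row[0] if row else None

fetch_draft(conn, 9247, current_user_id=3)  # 'my draft'
fetch_draft(conn, 9246, current_user_id=3)  # None: invisible, not forbidden
```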

A closing thought

The reason I find IDORs interesting, even after years of hunting them, is that they remind me how much of security is about the boring details. There are no brilliant exploits in most of my reports. There's a number that should have been checked and wasn't.

The applications you trust with your data are written by humans, on deadlines, in teams that turn over, on top of frameworks that make the easy thing easy and the secure thing optional. The next time you read a news story about a leak of "all user records" from a company, remember that the technical cause was almost certainly a missing if-statement on an API endpoint somebody added in a hurry.

That's not a reason to panic. It's a reason to choose carefully which services you give your data to, to keep your data minimal where you can, and to support — through bug bounties, through responsible reporting, through public pressure — the companies that take this class of bug seriously.


Written by

adhen prasetiyo

Adhen Prasetiyo is an independent security researcher and the editor of BioProfileMe. He writes about cybersecurity, online scams, privacy risks, account security, and practical digital safety for everyday users.