
I Found a Security Nightmare Inside a Company's 10-Year-Old PHP App
Benjamin Looi / May 5, 2026
Some friends told me their company was hiring. The pay sounded decent, the stack wasn't glamorous but it was steady work, and — most importantly — I'd get to see my friends every day at the office. I went through the interviews, felt good about them, and then came the salary conversation. The number they came back with was noticeably short of what I needed. We couldn't agree, and that should have been the end of the story.
It wasn't.
Because I'm a software engineer, and software engineers, when left unsupervised near a system they're curious about, will inevitably start poking around. That's not a character flaw. It's just how we're wired. We can't help it. We see a door and we wonder what's on the other side — not out of malice, but out of pure, incurable curiosity.
So I had a look. And what I found was, to put it gently, a relic.
The Application
The company runs an internal PHP application that's been serving thousands of employees for over ten years. Ten years is a long time. Think about where PHP was a decade ago. Think about what "best practices" meant back then. Now imagine that application has been maintained — loosely — ever since, by a rotating cast of developers who each left their own little landmines buried in the codebase before moving on.
Here's what I found:
// findings.log
✗ PHP version: severely outdated (end-of-life, no security patches)
✗ phpMyAdmin: publicly exposed, no IP restriction, no 2FA
✗ Password hashing: MD5 / plain SHA1 — circa 2005
✗ Directory listing: enabled on the public web root
✗ Hardcoded DB credentials and API keys in plain .php files
✗ Login form: no rate limiting, wide open to brute force
✗ SQL queries built by string concatenation — textbook injection bait
✗ User uploads stored in a public directory, no file type validation
⚠ Single server, no redundancy — one crash takes everything down
⚠ No staging environment — changes go directly to production
⚠ Deployments via FTP, or direct file edits on the live server
⚠ No version control. A shared network drive.
⚠ Copy-pasted logic repeated across dozens of files
✓ Application is running. Technically.
Let that list sink in for a moment. This isn't a side project on a developer's laptop. This is a live, production system that thousands of people log into every single working day. Their data — credentials, internal records, who knows what else — sits behind security practices the industry abandoned before some of today's junior developers were old enough to have a GitHub account.
A Closer Look at the Horrors
Credentials in Plain Sight
A .php file holds the database username and password as plain text strings, and quite possibly the API key to a third-party service too. These aren't buried deep; they're often right at the top of a file, copy-pasted from developer to developer across the years. If anyone with read access to those files ever had bad intentions, the entire database was already theirs to take.
The Login Form Was an Open Invitation
There was no rate limiting on the login form. None. You could sit there and try ten thousand password combinations and the server would dutifully process every single one. Combined with the outdated hashing, a leaked credential dump would be trivially crackable. This isn't a theoretical risk — credential stuffing attacks are automated and indiscriminate. They don't need to know your company exists to find you.
SQL Injection, the Classic
The queries were built by stitching user input directly into SQL strings. Something like:
$query = "SELECT * FROM users WHERE username = '" . $_POST['username'] . "'";
This is the kind of code that appears in security tutorials — as the example of what not to do. It was everywhere. A single crafted input could have exposed, modified, or deleted the entire database. This vulnerability has been documented, warned about, and taught against since the late 1990s.
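The fix for concatenated queries has been standard for two decades: prepared statements with bound parameters. A minimal sketch using PDO — the connection details here are placeholders, not the app's real configuration:

```php
<?php
// Hypothetical connection details -- placeholders, not the app's real config.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'app_user', 'secret', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

// The user-supplied value never touches the SQL string itself.
$stmt = $pdo->prepare('SELECT * FROM users WHERE username = :username');
$stmt->execute([':username' => $_POST['username'] ?? '']);
$user = $stmt->fetch(PDO::FETCH_ASSOC);
```

The driver sends the query and the value separately, so a crafted input like `' OR '1'='1` arrives as a literal string, not as SQL.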
Files Uploaded. Validated? Never.
Uploaded files went into a public directory with no file type validation, meaning anyone could upload a .php file and — depending on server configuration — execute it. That's not a stretch. That's a well-known, well-documented attack vector. The fix is a handful of lines. The oversight had been there for years.
One Server. No Net.
The entire application ran on a single server. No load balancer, no failover, no redundancy. If that machine had a bad day — hardware fault, a botched update, a full disk — every one of those thousands of employees would lose access. Instantly. Completely. And because deployments happened via FTP or live file edits on the production server, a single developer pushing a change at the wrong moment could be the disaster. There was no safety net because nobody had thought to build one.
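As for the upload hole described above, the handful-of-lines fix is roughly: whitelist extensions, verify the real MIME type server-side, and store files under a generated name outside the web root. A sketch — the allowed-type list and storage path are illustrative assumptions, not the app's actual values:

```php
<?php
// Illustrative allow-list and storage path -- adjust to the real app.
$allowed = ['jpg' => 'image/jpeg', 'png' => 'image/png', 'pdf' => 'application/pdf'];
$uploadDir = '/var/app-data/uploads';  // outside the public web root

$file = $_FILES['document'] ?? null;
if (!$file || $file['error'] !== UPLOAD_ERR_OK) {
    http_response_code(400);
    exit('Upload failed.');
}

$ext = strtolower(pathinfo($file['name'], PATHINFO_EXTENSION));
// Check the actual file contents, not just the client-supplied filename.
$mime = (new finfo(FILEINFO_MIME_TYPE))->file($file['tmp_name']);

if (!isset($allowed[$ext]) || $allowed[$ext] !== $mime) {
    http_response_code(415);
    exit('File type not allowed.');
}

// Generated name: an uploaded "shell.php" can never keep its executable name.
$dest = $uploadDir . '/' . bin2hex(random_bytes(16)) . '.' . $ext;
move_uploaded_file($file['tmp_name'], $dest);
```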
The Code Itself
Beyond the security issues, the codebase told the story of a decade of quiet neglect. Logic was copied and pasted — not abstracted, not shared, just duplicated — across dozens of files, each copy drifting slightly from the others as different developers made different small changes over the years. When a bug existed in that shared logic, it existed in twenty places simultaneously, and fixing it meant hunting each one down manually.
There was no version control. Not an outdated Git setup, not an old SVN repository — nothing. Changes were tracked, if at all, by whoever happened to remember making them. The codebase had no history. There was no way to answer the question "what changed last Tuesday?" because nothing had ever recorded the answer.
The scariest part wasn't finding the problems. The scariest part was realising that to the people maintaining it, none of this felt unusual. This was just how the system was.
The Part Where I Told Someone
I wrote up my findings and brought them to the CEO. This felt like the right call — not the IT team directly, because the IT team was, in some ways, part of the story. The CEO was attentive, took it seriously, and asked me to put together a formal proposal for how to address it.
I outlined a phased modernisation plan: getting everything into version control immediately, containerising the environment, incrementally upgrading PHP, migrating to bcrypt for password hashing, locking down phpMyAdmin behind strict access controls, introducing a staging environment, and establishing a basic security review cycle. Nothing revolutionary — just the fundamentals.
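The bcrypt migration in that plan doesn't require a big-bang password reset: legacy hashes can be upgraded transparently at each user's next successful login. A sketch of the idea — the column names and the MD5 legacy check are assumptions, not the app's actual schema:

```php
<?php
// Sketch: upgrade a legacy hash to bcrypt on successful login.
// $row is assumed to come from the users table; field names are illustrative.
function verifyAndUpgrade(PDO $pdo, array $row, string $password): bool
{
    $stored = $row['password_hash'];

    // Modern path: the hash is already bcrypt.
    if (password_verify($password, $stored)) {
        return true;
    }

    // Legacy path: plain MD5, as found in the original codebase.
    if (hash_equals($stored, md5($password))) {
        // Rehash with bcrypt and overwrite the legacy hash in place.
        $new = password_hash($password, PASSWORD_BCRYPT);
        $stmt = $pdo->prepare('UPDATE users SET password_hash = :h WHERE id = :id');
        $stmt->execute([':h' => $new, ':id' => $row['id']]);
        return true;
    }

    return false;
}
```

Within a few months of normal usage, most active accounts migrate themselves, and the remaining legacy hashes can be force-expired.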
The CEO forwarded my proposal to the IT team.
The IT team read it and confirmed they could implement everything in it.
This is where the story gets its punchline.
The Proposal Becomes a Negotiation Tactic
Because the IT team could supposedly do all of it, the company felt no urgency to bring in someone new at a higher package. My own proposal had been used as evidence that they didn't need me.
The thinking, as best I could reconstruct it: we already have people who can fix this, so we don't need to pay a premium for someone who identified that it needed fixing.
A note worth sitting with: the issues I documented had existed, undisturbed, for years. The team capable of fixing them had been there the whole time. The problem was never a lack of ability — it was a complete lack of awareness, prioritisation, and a culture that had simply stopped questioning why things were the way they were.
I left without an offer. Whether any of the fixes ever got implemented — I genuinely don't know.
What This Story Is Really About
Legacy systems don't rot dramatically. There's no alarm, no warning light. They decay gradually, over years, maintained by people who inherit something they didn't build and learn to work around its edges rather than inside its foundations. Every hack, every workaround, every "we'll fix this later" becomes load-bearing. The technical debt compounds quietly until the codebase isn't a product anymore — it's an ecosystem, one that everyone is afraid to disturb.
The real danger isn't even the exposed phpMyAdmin or the SQL injection or the hardcoded credentials, alarming as all of those are. The real danger is the knowledge gap — the growing distance between where the industry stands today and what the team maintaining the system believes is normal. When you've been swimming in the same water long enough, you stop noticing the temperature.
And this is, critically, not a technical failure. It's an organisational one.
Tech debt is invisible on a balance sheet. It doesn't show up as a line item. Leadership looks at a system that's "running fine" and sees no cost — because the cost hasn't arrived yet. It arrives in one of two ways: gradually, as the system becomes increasingly difficult and expensive to change, or suddenly, as a breach, an outage, or a compliance failure that nobody saw coming because nobody was looking.
What Should Have Happened (And Still Should)
Version control should have been non-negotiable on day one. Without it, you cannot know what changed, when, or who changed it. Every change to that codebase for ten years happened in the dark.
A staging environment is not a luxury. It is the minimum bar for responsible deployment. "We push to production and see what happens" is not a release process — it's gambling with a live system.
Credentials belong in environment variables, not source files. This is not advanced knowledge. It is covered in the first chapter of every modern web framework's documentation.
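Moving credentials out of source files can be as small as this: read them from the environment and refuse to start if they're missing. The variable names here are illustrative:

```php
<?php
// Illustrative variable names; the real app would define its own.
$dbHost = getenv('DB_HOST');
$dbUser = getenv('DB_USER');
$dbPass = getenv('DB_PASS');

if ($dbHost === false || $dbUser === false || $dbPass === false) {
    // Better to refuse to start than to fall back to a hardcoded secret.
    throw new RuntimeException('Database credentials are not configured.');
}

$pdo = new PDO("mysql:host={$dbHost};dbname=app", $dbUser, $dbPass);
```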
Parameterised queries have been the standard for SQL since the early 2000s. There is no longer any excuse, in any language, for concatenating user input into a database query.
Rate limiting a login form is a single middleware configuration. It takes minutes. It closes the door on an entire category of attack.
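Even without a framework, a crude but effective limiter fits in a dozen lines. This sketch counts login attempts per IP in APCu (a shared in-memory cache); the thresholds and the choice of APCu are assumptions, not what the app would necessarily use:

```php
<?php
// Crude per-IP limiter: at most 5 attempts per 15-minute window.
// Assumes the APCu extension is enabled; thresholds are illustrative.
function loginAllowed(string $ip, int $max = 5, int $windowSecs = 900): bool
{
    $key = 'login_attempts_' . $ip;
    $attempts = apcu_fetch($key) ?: 0;
    if ($attempts >= $max) {
        return false;               // locked out until the window expires
    }
    apcu_store($key, $attempts + 1, $windowSecs);
    return true;
}

if (!loginAllowed($_SERVER['REMOTE_ADDR'])) {
    http_response_code(429);        // Too Many Requests
    exit('Too many login attempts. Try again later.');
}
```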
None of this is exotic knowledge. This is first-year web security curriculum. Which is exactly what makes finding it in a production system — serving thousands of people, every day — so quietly unsettling.
The organisation I visited isn't unusual. That's the point. Variations of this system exist inside thousands of companies right now — quietly serving employees, quietly accumulating risk, quietly waiting for the day someone with less benign intentions than a curious engineer decides to have a look.
If any of this sounds familiar, go and check. Ask whether there's a .git folder anywhere in the project. Ask what happens if the server goes down tonight. The most expensive systems to fix are always the ones where nobody ever asked those questions.
Thanks for reading! 😁
