Playbook for Hardening Legacy PHP
This is my practical follow-up to my post on threat modeling legacy PHP in constrained environments.
That post is more about mindset, prioritization, and how to think about risk when the system is messy but the business relies on it.
This one is the hands-on version. It is the kind of outline I come back to at the start of a new project where the codebase is fragile, the DevOps story is rudimentary at best, and nobody is getting six months to clean things up before security work starts.
In smaller teams, the goal is usually to move fast, fix what is already known to be broken, and work out what actually needs attention first.
My first assessment
My first pass on a legacy PHP system is usually an inventory pass.
Some of this can be assessed directly from the codebase, configuration, and host. Some of it I need to ask the team about, because ownership, deployment, and backup or recovery mechanisms usually do not live in the application itself.
I want a rough map of:
- PHP runtime details, including web vs CLI versions, loaded extensions, and configuration differences
- web server type and version
- application, framework, or CMS version
- Composer packages and their versions
- database engine and version
- public entry points
- admin routes and privileged functionality
- scheduled jobs
- writable directories
- file upload paths
- outbound email setup
- external integrations, callback endpoints, and trust boundaries
- session storage behavior and where sessions actually live
- available logs and how to access them
- backup and restore mechanism
- TLS and certificate setup
- reverse proxy or load balancer behavior if the app sits behind one
- deployment method and rollback path
- where configuration and secrets actually live
- whether there is a staging or test environment and what state it is in
- which checks exist before a change goes live
- who actually owns deployment, credentials, and alerts
- where documentation is located, if there is any
This is basic, but it already tells me a lot about the state of things. Often nobody has the whole picture anymore. The people who used to know it may not even be there.
I am not looking for perfect documentation here. I just want to avoid assuming that things considered fundamental in a modern environment actually exist here. Without that, it is easy to spend time assessing the wrong layer of risk.
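Much of the runtime part of that map can be collected directly on the host. A minimal sketch, assuming shell access; the binaries and paths are assumptions that vary per system:
# CLI PHP version, loaded extensions, and which ini files are in effect
php -v
php -m
php --ini
# web server and database versions; the binary names here are assumptions
apache2 -v 2>/dev/null || nginx -v
mysql --version
# direct Composer dependencies, if the project uses Composer at all
composer show --direct
Keep in mind that the web SAPI can load a different php.ini than the CLI, so the CLI output alone is not the full picture.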
I have seen environments with long conversations about “future architecture” while a backup archive sat under the web root and a forgotten admin script had no authentication. I have also seen log files grow for years without rotation or monitoring, gaps in version control, and old libraries nobody knew were still in use.
What I usually prioritize first
The exact order changes, but the pattern is pretty stable.
1. Reduce exposed surface area
Before adding anything fancy, I want less attack surface.
In the first week, simple removals and cleanup often buy the most risk reduction.
That usually means:
- removing forgotten scripts and backups from web-accessible paths
- disabling debug routes and test endpoints
- restricting admin panels by IP where possible
- moving dangerous maintenance utilities out of public reach
- reviewing which directories are writable by the application
After enough years, convenience tends to produce the biggest security liabilities.
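A first sweep for leftovers can be a single find over the web root. A sketch; the path and extension list are assumptions to adapt:
# likely leftovers in publicly served directories, oldest context included
find /var/www/myapp/public -type f \
    \( -name '*.bak' -o -name '*.old' -o -name '*.sql' \
       -o -name '*.zip' -o -name '*.tar.gz' -o -name '*~' \) \
    -printf '%TY-%Tm-%Td %p\n' | sort
Anything that shows up here gets moved out of the web root or deleted, not just renamed.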
2. Fix the easy, high-impact authentication issues
If authentication is weak, everything behind it is weak.
On older systems, auth may be split across different parts of the application, and there may be many more entry points than anyone would design today.
Things I usually look at early:
- admin panel exposure
- password reset behavior
- session fixation and session regeneration
- shared accounts
- weak role boundaries
- default, weak, or rarely rotated credentials
Even if you cannot redo the identity layer, you can often still reduce the number of entry points, make them share the same authentication logic, add IP restrictions around admin areas, and tighten credential rules.
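Because the auth logic is often scattered, broad searches are a good way to find all the pieces. A sketch with ripgrep; the patterns are heuristics, and the function names in the second search are guesses to adapt to the codebase:
# password handling, including legacy hashing that may need migration
rg -n --glob '*.php' '(password_(hash|verify)|md5\s*\(|sha1\s*\()'
# places that decide whether a request counts as authenticated
rg -n --glob '*.php' '(\$_SESSION\[|is_logged_in|isAdmin)'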
3. Get dependency and release visibility under control
On older PHP systems, dependencies are often less obvious than they should be. Composer was not part of every PHP workflow for a long time, especially before 2015, and libraries may simply have been unpacked and included manually.
I want to know:
- which packages are installed
- which are abandoned
- which are pinned to very old versions
- which are actually used
- whether the application depends on unsupported framework versions
- whether production was built from Composer, copied by hand, or assembled in some other creative way
Dependency blindness is not acceptable. If you cannot answer what is installed and how it gets to production, you are going to miss easily avoidable security issues.
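Where Composer is in use, most of these questions have built-in answers. A sketch, assuming Composer 2.4 or newer for the audit command:
# direct dependencies and their installed versions
composer show --direct
# packages with newer releases, a rough staleness signal
composer outdated
# known security advisories for installed packages (Composer >= 2.4)
composer audit
Manually vendored libraries will not show up here, which is exactly why the build path to production needs checking too.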
4. Make degradation easier to notice
Detection matters even more in constrained environments because prevention is never perfect and there is usually no dedicated monitoring or security team waiting nearby.
I want visibility into:
- site reachability
- unusual HTTP failures
- recent application errors
- disk usage
- CPU and memory pressure
- log spikes
- job failures
- session anomalies
- certificate expiry
- outbound mail oddities
This is one reason I built tools like MATA: in many of these environments, full observability stacks were unrealistic, but a simple monitoring endpoint still goes a long way.
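A minimal sketch of that idea, as a cron-driven reachability check; the URL and recipient are placeholders:
#!/usr/bin/env bash
# any HTTP error status or timeout triggers a mail
url='https://example.com/'
if ! curl -fsS --max-time 10 -o /dev/null "$url"; then
    {
        printf 'To: ops@example.com\n'
        printf 'Subject: [legacy-php] %s unreachable\n' "$url"
        printf '\ncurl check failed at %s\n' "$(date -u)"
    } | /usr/sbin/sendmail -t
fi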
5. Verify backups
A backup that has never been restored is not enough.
For legacy apps, backup review is part of hardening. I want to know whether a rollback is actually possible and who can perform it, without relying on one person remembering a manual process from two years ago.
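Restore tests need a human and a schedule, but backup freshness can be watched automatically in the meantime. A sketch as a cron job, assuming dumps land in a known directory; cron's MAILTO then delivers the output on failure:
# alert if no dump newer than 24 hours exists; the path is an assumption
find /var/backups/myapp -name '*.sql.gz' -mtime -1 | grep -q . \
    || { echo 'no backup newer than 24h in /var/backups/myapp'; exit 1; }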
6. Review delivery and deployment
I am not looking for perfect platform engineering here. But if security is treated as a one-time fix instead of a continuous effort, we are missing the point.
That usually means:
- moving away from ad hoc file uploads toward a basic repeatable deployment path
- documenting where configuration and secrets live, and keeping them out of version control
- making sure code, config, and backup changes have an owner
- adding at least one cheap pre-release check or smoke test
This does not need to be fancy. It just needs to be reliable enough to not stand in the way of security updates.
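That one cheap pre-release check can start as small as a syntax lint plus a manifest validation. A sketch, assuming a staging checkout to run it against:
#!/usr/bin/env bash
set -u
cd /var/www/myapp-staging || exit 1
# composer validate checks composer.json for consistency problems
composer validate --no-check-publish || exit 1
# php -l parses each file without executing it; any parse error fails
# the gate (rerun without the redirect to see details)
find . -name '*.php' -not -path './vendor/*' -print0 \
    | xargs -0 -n1 php -l >/dev/null || exit 1
echo 'pre-release checks passed'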
If I only get one week
The schedule depends on access and team availability, but if I only get a short window, the first week would usually look something like this:
Day 1-2: build the map
- identify the app, framework, PHP version, and major dependencies
- map public entry points, admin routes, upload paths, and writable directories
- find scheduled jobs, backup jobs, and mail-sending paths
- identify who owns deployment, credentials, DNS/TLS, and receives alerts
- confirm how logs are accessed and whether there is any staging environment at all
The goal here is orientation and figuring out who to talk to. A quick search helps with the upload and file-handling part of the map:
# file operations and upload-related behavior
rg -n --glob '*.php' '(fopen|file_get_contents|unlink|move_uploaded_file|\$_FILES)'
Day 3: remove obvious exposure
- remove or block forgotten scripts, backups, and test files under the web root
- disable debug functionality in production
- restrict admin panels and maintenance utilities
- review the most dangerous writable paths
- check for phpinfo() and similar footguns
This is often the easiest place to reduce risk quickly.
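The phpinfo() sweep in particular is quick to script; these patterns are starting points rather than an exhaustive list:
# phpinfo pages and debug output toggles left enabled in code
rg -n --glob '*.php' 'phpinfo\s*\('
rg -n --glob '*.php' "ini_set\s*\(\s*['\"]display_errors"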
Day 4: review auth, dependencies, and secrets
- review the auth layer: login, password reset, and session handling
- find where secrets and configuration are stored
- create a dependency inventory and flag unsupported or abandoned components
- note how code and dependencies actually get into production
This usually shows whether the application risk is mostly code-level, operational, or both. Searches like these help locate the relevant code:
# include / require hotspots
rg -n --glob '*.php' '(include|require)(_once)?\s*\('
# session initialization
rg -n --glob '*.php' 'session_start\s*\('
Day 5: add visibility and verify recovery
- review application error logs and recurring job failures
- confirm backups, retention, and who can perform a restore
- walk through a restore or rollback path on paper
- write down the highest-priority next actions
Week one should produce a usable starting point for week two.
What I want at the end of week one
If the first week went reasonably well, a short report should allow somebody else to pick things up without starting from zero.
Usually that means:
- a dependency snapshot
- a list of public entry points and admin routes worth reviewing
- a list of writable directories and upload paths
- named owners for deployment, backups, TLS, and alerts
- a shortlist of immediate fixes
- a shortlist of follow-up automation tasks
- a rough note on what looks brittle
That is enough to drive the next round of work.
That next round often includes reviewing obvious data-flow and query risks such as direct use of request input, unsafe SQL construction, weak validation, and risky file handling.
# request input hotspots
rg -n --glob '*.php' '\$_(GET|POST|REQUEST|COOKIE|FILES)'
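A companion search for the SQL side of that review; both patterns are rough heuristics and will produce noise:
# string-built SQL and legacy mysql_* query calls
rg -n --glob '*.php' '(mysql_query|mysqli_query|->query\s*\()'
rg -n --glob '*.php' '(SELECT|INSERT INTO|UPDATE|DELETE FROM).*\$'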
tracepack
I built tracepack, a small Go CLI for quickly scanning codebases with YAML profiles and saving the results as Markdown.
For this kind of legacy PHP assessment, that is useful in two ways:
- footprint gives a compact overview of the codebase
- summary runs reusable searches or commands and saves the output as a review bundle
The bundled default profile is php-legacy, so it is handy for quickly collecting things like request input hotspots, session handling, include and require relationships, file operations, and likely config or secret locations.
It is intentionally lightweight rather than a full static analyzer. The value is fast orientation, repeatable searches, and artifacts that are easy to review or share.
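The exact flags may change, so treat this as a hypothetical invocation and check the repository for current usage, but a run is roughly:
# hypothetical invocation; see the repository for current flags
tracepack footprint
tracepack summary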
Repository: github.com/xarcdotdev/xarc-tracepack
Useful low-friction automation
In these environments, small automation that reduces blind spots is usually more valuable than ambitious automation that nobody maintains.
Useful examples:
- nightly dependency inventory export: Record dependency versions somewhere predictable so changes and vulnerable packages are easier to spot.
- basic web root change detection: A checksum, file listing diff, or simple integrity check is often enough to notice unexpected changes in publicly served directories.
- certificate expiry alerts: Cheap, boring, and absolutely worth it wherever it makes sense.
- disk pressure and job failure alerts: Many incidents people first describe as “security problems” are really failures of robustness, visibility, and system hygiene.
- scheduled smoke tests for critical paths: A login path, admin path, checkout flow, or key API endpoint tested on a schedule can catch breakage early.
- mail volume or anomaly checks: Especially useful where old apps can be abused for spam or phishing and nobody notices until reputation damage shows up.
- backup job success or failure notification: It is better to know that last night’s backup failed before you need it.
- a minimal pre-release gate: Even one or two checks before deployment (for example, composer audit, a linter, or a smoke test) can keep easy mistakes out of production.
This is obviously not a full DevSecOps platform. But simple guardrails that are cheap and easy to keep running are usually the better fit here.
A cron job as simple as running composer audit can already improve visibility.
A minimal Bash example could look like this:
#!/usr/bin/env bash
set -u

# single-application check; adjust the path per system
cd /var/www/myapp || exit 1

# skip hosts where Composer is not installed at all
if ! command -v composer >/dev/null 2>&1; then
    exit 0
fi

# composer audit exits non-zero when advisories are found
if ! output="$(composer audit --no-interaction 2>&1)"; then
    {
        printf 'To: ops@example.com\n'
        printf 'Subject: [legacy-php] composer audit findings on %s\n' "$(hostname)"
        printf '\n'
        printf 'Directory: %s\n\n' "$(pwd)"
        printf '%s\n' "$output"
    } | /usr/sbin/sendmail -t
fi
If needed, this can easily be scaled across multiple applications by wrapping the same idea around a small loop over known project directories or composer.lock files.
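A sketch of that loop, assuming the applications live under /var/www and cron delivers the output:
#!/usr/bin/env bash
# run composer audit in every project under /var/www that has a lockfile
set -u
for lock in /var/www/*/composer.lock; do
    [ -e "$lock" ] || continue
    dir="$(dirname "$lock")"
    if ! output="$(cd "$dir" && composer audit --no-interaction 2>&1)"; then
        printf '== %s ==\n%s\n\n' "$dir" "$output"
    fi
done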
A practical hardening checklist
This is the kind of checklist I find useful for real-world legacy PHP systems.
Application
- Identify framework, CMS, or app version
- Inventory Composer dependencies
- Remove unused packages and plugins
- Compare web and CLI PHP versions, extensions, and relevant php.ini settings
- Identify public entry points and admin routes
- Disable debug mode in production
- Search for forgotten scripts, test files, phpinfo() pages, and backups in the web root
- Review file upload handling and executable upload risk
- Review password reset, session, and cookie behavior
- Review obvious data-flow and query risks around request input and SQL construction
- Verify access control around admin and privileged functionality
Host and deployment
- Review writable directories and permissions
- Verify TLS is current and auto-renewal works
- Identify cron jobs and scheduled scripts
- Document deployment method and rollback path
- Document where configuration and secrets live
- Review application database privileges and reduce them where possible
- Confirm deployment ownership, credential ownership, and alert ownership
- Confirm secrets are not stored carelessly in public or shared locations
- Review whether old releases or archives remain web-accessible
- Note whether staging or test exists and document its limitations
- Add at least one repeatable pre-release check or smoke test
Monitoring and detection
- Ensure application and server logs are accessible without ad hoc manual downloading
- Alert on HTTP downtime and repeated failures
- Alert on disk pressure
- Track certificate expiry
- Review application error logs regularly
- Track recent log spikes or new recurring errors
- Record dependency versions for change detection
- Alert on unexpected web root or dependency changes
- Monitor scheduled job failures
- Review outbound mail behavior for abuse indicators
Recovery
- Confirm backups exist
- Confirm what is included in backups
- Confirm retention period
- Test a restore path
- Document who can perform recovery and where credentials live
- Document a rollback path for application code and configuration
- Make sure recovery does not depend on one person’s memory
Process
- Decide which vulnerabilities or incidents trigger immediate action
- Define who gets alerted and how
- Define who owns releases, alerts, and incident response
- Keep a minimal incident checklist
- Record known deployment constraints and staging gaps
- Document known exceptions so future reviews stay realistic