Migrating from a Shared Hosting PHP App to Modern Infrastructure
A surprising number of established European businesses still run their core software on shared hosting accounts that were provisioned in the early 2010s. The application is PHP 7.2 (sometimes 5.6) running under suPHP on a Plesk control panel, deployed by dragging files into FileZilla, with the only "backup strategy" being whatever the hosting provider happens to do at three in the morning. There is no Git history. There are three folders called /admin_old/, /admin_old2/, and /admin_FINAL/. The only person who ever understood the deployment workflow took a job in Munich in 2019.
This is not a rare edge case. A meaningful share of the German Mittelstand, French B2B service companies, and family-owned European e-commerce operations runs exactly this kind of stack. The application generates real revenue. It also makes every change risky, every audit unanswerable, and every developer interview difficult.
A successful PHP shared hosting migration to modern infrastructure—containerized, version-controlled, deployed by a pipeline—does not require a rewrite. It requires a deliberate sequence of small, reversible steps that keep the application running throughout. This post walks through the playbook we apply at Wolf-Tech when modernizing this category of legacy PHP deployment.
Why Shared Hosting Becomes a Liability
The reason teams stay on shared hosting is straightforward: it works. The application serves customers, the monthly bill is small, and changing anything carries the risk of breaking the only revenue source the business has. Inertia is rational.
What changes the calculation is when the hidden costs catch up. Shared hosting accounts typically cap PHP at whatever versions the provider supports, which lag two or three releases behind. By 2026, an account running PHP 7.4 has been on an unsupported runtime for more than three years (security support for 7.4 ended in November 2022), and the security patches the application depends on are no longer being shipped. Composer dependencies refuse to install because they require PHP 8.1+. Modern libraries are off the table.
Operationally, the lack of version control means that every change is a manual file edit on a production server, and the only way to know what changed is to compare timestamps. Rollbacks require a backup that may or may not exist. There is no staging environment because creating one would mean another hosting account to manage. Multiple developers cannot work on the codebase simultaneously without coordinating over chat to avoid overwriting each other's edits.
These limitations do not show up as a single dramatic failure. They show up as steadily slower feature delivery, growing reluctance to touch anything, and eventually an inability to hire competent developers because no senior engineer wants to work this way. By the time a business decides to modernize, the application has often accumulated years of accidental complexity that makes the migration harder than it would have been earlier.
Step One: Get the Codebase Under Version Control Before Changing Anything
The first move in any PHP shared hosting migration is to establish a source of truth for the code. Before anything else changes—no PHP version upgrade, no infrastructure work, no refactoring—the application needs to live in Git, with the production server as the canonical starting point.
The mechanics are mundane but worth doing carefully. Use SFTP or rsync to pull the entire document root and any related folders (cron scripts, includes outside the web root, configuration files) to a local working directory. Initialize a Git repository, create a .gitignore that excludes runtime folders (uploaded user files, cache directories, logs), and commit everything as the first commit with a message like "Initial import from production, 2026-04-22."
```bash
# Pull the application from shared hosting
rsync -avz --exclude='cache/' --exclude='logs/' --exclude='uploads/' \
    user@oldhost.example.com:/var/www/vhosts/app/httpdocs/ ./app-import/

cd app-import
git init
git add .
git commit -m "Initial import from production, 2026-04-22"
git remote add origin git@github.com:company/legacy-app.git
git push -u origin main
```
The next commit should be a README.md documenting what was discovered during the import: the PHP version in production, the database server name and version, the cron jobs registered in the control panel, any third-party services the application calls, and any environment variables or configuration constants. This document is the migration's anchor—it records the world as it is before anyone tries to change it.
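A lightweight way to make this discovery document concrete is to start it from a skeleton and fill in the blanks as facts surface during the import. The headings and values below are illustrative placeholders, not a prescribed format:

```bash
# Sketch of the discovery README; every value is a placeholder to be
# replaced with what the import actually revealed.
cat > README.md <<'EOF'
# Legacy App: Production Baseline (imported 2026-04-22)

## Runtime
- PHP version in production: (confirm with phpinfo() or php -v)
- Web server: (e.g. Apache + suPHP via Plesk)

## Database
- Server hostname: (from the application's config files)
- Server version: (e.g. MariaDB 10.3)

## Cron jobs
- (copy each entry from the control panel, with its schedule)

## External services
- (SMTP relay, payment APIs, third-party endpoints the app calls)

## Configuration
- (environment variables / defined constants and where they live)
EOF
```

Committing this file alongside the initial import means every later decision can be checked against the recorded baseline.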
Crucially, do not refactor anything yet. The temptation to clean up the obvious mess—the duplicate folders, the dead code, the inline credentials—is strong. Resist it. Refactoring before you have a baseline makes it impossible to know whether a bug was introduced by the refactor or existed before. The codebase needs to be reproducible exactly as it ran in production before any cleanup begins.
Step Two: Containerize Without Changing the Application
The second step is to get the application running locally and in a controlled environment, identically to how it runs on the shared host. The right tool for this is Docker, configured to match the production runtime as closely as possible.
Containerization is the step where most modernization efforts go wrong, usually by attempting too much at once. The goal here is not to upgrade PHP, switch web servers, or restructure the application. The goal is to produce a Docker image that runs the existing application unchanged, on the same PHP version, with the same extensions, against a database that mirrors production, so that the team can develop and test without touching the live server.
A pragmatic Dockerfile for this stage looks like this:
```dockerfile
# Match the production PHP version exactly, even if it is old
FROM php:7.4-apache

# Install the same extensions the shared host had enabled
RUN apt-get update && apt-get install -y \
        libpng-dev libjpeg-dev libfreetype6-dev libzip-dev unzip \
    && docker-php-ext-configure gd --with-freetype --with-jpeg \
    && docker-php-ext-install gd mysqli pdo_mysql zip opcache

# Match production php.ini settings (memory_limit, upload sizes, timezone)
COPY docker/php.ini /usr/local/etc/php/php.ini

# Enable Apache mod_rewrite if the legacy app uses .htaccess
RUN a2enmod rewrite

# Copy the application code
COPY . /var/www/html/

# Match file ownership the shared host used
RUN chown -R www-data:www-data /var/www/html
```
Run this with a docker-compose.yml that pairs the application container with a MySQL or MariaDB container at the same major version as production, seeded with a sanitized snapshot of the production database. The first time the application boots and serves a real page in this environment, the team has crossed a meaningful threshold: the application now exists somewhere other than the shared hosting account, and changes can be tested before deployment.
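A minimal compose file for this stage might look like the sketch below. The service names, port mapping, credentials, and seed-dump path are all illustrative; the one non-negotiable detail is pinning the database image to production's major version:

```yaml
# docker-compose.yml (development mirror of the shared host; names and
# credentials are placeholders)
services:
  app:
    build: .
    ports:
      - "8080:80"
    volumes:
      - ./:/var/www/html          # live-edit the code locally
    depends_on:
      - db
  db:
    image: mariadb:10.3           # match production's major version
    environment:
      MYSQL_DATABASE: app_production
      MYSQL_USER: app
      MYSQL_PASSWORD: devpassword
      MYSQL_ROOT_PASSWORD: devroot
    volumes:
      # Official MySQL/MariaDB images import *.sql from this directory
      # on first boot, so a sanitized dump becomes the seed data:
      - ./db/sanitized_dump.sql:/docker-entrypoint-initdb.d/seed.sql:ro
      - dbdata:/var/lib/mysql
volumes:
  dbdata:
```

The bind-mounted code volume is a development convenience; the production image built by the Dockerfile above bakes the code in instead.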
This is also the moment to discover the implicit dependencies that the shared host quietly provided. Missing PHP extensions, file permissions assumed by the application, paths hard-coded to the old hosting provider's directory structure, mail configuration that worked because the shared host had a local SMTP relay—all of these surface during containerization. Document each one and either fix it in the container configuration or note it as a deployment requirement for the new infrastructure.
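A quick way to surface missing PHP extensions is to diff the module list on each side. The real commands run `php -m` over SSH and inside the container; the heredoc lists below are sample data standing in for that output so the comparison itself is concrete:

```bash
# Real capture (run these against your actual hosts):
#   ssh user@oldhost.example.com php -m | sort > host_extensions.txt
#   docker compose exec app php -m | sort > container_extensions.txt
#
# Sample data standing in for the real output:
sort > host_extensions.txt <<'EOF'
curl
gd
imap
mysqli
soap
EOF
sort > container_extensions.txt <<'EOF'
curl
gd
mysqli
EOF

# Lines unique to the first file = extensions the shared host had
# that the container is still missing:
comm -23 host_extensions.txt container_extensions.txt
```

With the sample data this prints `imap` and `soap`, exactly the kind of quietly-provided dependency that otherwise only surfaces as a fatal error on some rarely-visited page.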
Step Three: Set Up Parallel Modern Infrastructure
With the application containerized, the next step is to provision the target infrastructure. The destination depends on the application's scale and the team's operational maturity, but for most European mid-size PHP applications the right starting point is a single virtual server at a provider with European data centers (Hetzner, OVH, Scaleway, or AWS Frankfurt) running Docker, fronted by Nginx and Let's Encrypt-issued TLS certificates, with a managed database service for MySQL or PostgreSQL.
The architecture deliberately avoids over-engineering. Kubernetes, multi-region deployments, and serverless platforms are tempting but unnecessary at this stage. The migration's success depends on minimizing the number of variables changing simultaneously. A single VPS running Docker Compose is enormously more capable than the shared hosting account it is replacing, and it provides a foundation that can evolve toward more sophisticated architectures later.
The managed database is the one place where it is worth spending money rather than self-hosting. Backup, point-in-time recovery, version upgrades, and replication are operational concerns that managed services solve well and that nobody on a small team wants to handle manually at three in the morning. RDS, Hetzner's managed PostgreSQL, or DigitalOcean's managed MySQL all work well for this profile.
The migration of the database itself deserves explicit planning. For a small database (under 10 GB), a mysqldump export from the shared host followed by an import into the managed instance is sufficient. For larger databases or applications with low tolerance for write downtime, MySQL replication from the old database to the new one—running for a few days before cutover—lets the new instance catch up incrementally and reduces cutover time to seconds:
```bash
# On the shared host (if SSH access is available).
# --master-data requires binary logging on the source; on mysqldump
# 8.0.26+ the flag is spelled --source-data instead.
mysqldump --single-transaction --master-data=2 \
    --databases app_production > app_dump.sql

# On the new managed database
mysql -h new-db.example.com -u admin -p < app_dump.sql

# Configure replication if cutover needs to be near-instant
# (provider-specific; consult the managed DB documentation)
```
For shared hosts that do not allow SSH access (common with Plesk and cPanel), use the control panel's native backup tools or the database export interface. Verify the import by running representative queries from the application against the new database before any production traffic is involved.
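Before importing, it is also worth checking that the dump file itself transferred intact: mysqldump writes a `-- Dump completed` footer only when it finishes cleanly, so a missing footer means a truncated export. A synthetic dump stands in for the real `app_dump.sql` below:

```bash
# Synthetic stand-in for the real app_dump.sql:
cat > app_dump.sql <<'EOF'
-- MySQL dump 10.13
CREATE TABLE `orders` (`id` int NOT NULL);
CREATE TABLE `customers` (`id` int NOT NULL);
-- Dump completed on 2026-04-22 02:14:07
EOF

# A dump without the completion footer was cut off mid-transfer:
if tail -n 1 app_dump.sql | grep -q 'Dump completed'; then
    echo "dump looks complete"
else
    echo "dump appears truncated" >&2
fi

# Rough structural inventory to compare against the live schema:
grep -c 'CREATE TABLE' app_dump.sql
```

Comparing the `CREATE TABLE` count against `SHOW TABLES` on the shared host is a thirty-second check that has caught more than one silently half-finished export.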
Step Four: CI/CD That Replaces FTP Deployment
The single most impactful change in this migration is replacing manual FTP deploys with an automated CI/CD setup driven by Git. Once code lives in version control and runs in containers, deploying becomes a matter of building an image, pushing it to a registry, and instructing the production server to pull the new version.
GitHub Actions is the pragmatic choice for most teams because it integrates with the Git repository the codebase already lives in. A minimal workflow for a Dockerized PHP application:
```yaml
# .github/workflows/deploy.yml
name: Build and Deploy

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    # GITHUB_TOKEN needs packages:write to push to ghcr.io
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4

      - name: Log in to container registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ghcr.io/company/legacy-app:${{ github.sha }}

      - name: Trigger deployment
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.PROD_HOST }}
          username: deploy
          key: ${{ secrets.DEPLOY_SSH_KEY }}
          script: |
            cd /opt/app
            export IMAGE_TAG=${{ github.sha }}
            docker compose pull
            docker compose up -d --no-deps app
            docker image prune -f
This pipeline runs in a few minutes, produces a Docker image tagged with the commit SHA, and deploys it to the production server with zero manual intervention. Rollbacks become a one-line operation: SSH to the server, set IMAGE_TAG to the previous commit's SHA, and run docker compose up -d again.
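The `export IMAGE_TAG` in the deploy script only has an effect if the compose file on the server actually interpolates that variable. A sketch of what `/opt/app/docker-compose.yml` might contain (service name, port, and paths are illustrative); writing `IMAGE_TAG=<sha>` into `/opt/app/.env` instead of exporting it makes the current tag survive across SSH sessions, which is exactly what a later rollback needs:

```yaml
# /opt/app/docker-compose.yml on the production server (sketch).
# Compose substitutes ${IMAGE_TAG} from the environment or from a
# .env file in this directory; the :-latest default is only a
# fallback for a fresh server.
services:
  app:
    image: ghcr.io/company/legacy-app:${IMAGE_TAG:-latest}
    restart: unless-stopped
    ports:
      - "127.0.0.1:8080:80"   # Nginx on the host proxies to this
```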
The discipline this enforces is significant. Every change now lives in a Git commit with a message and an author. Every deployment is reproducible from the commit that produced it. Multiple developers can work on the codebase without overwriting each other. The team that previously dreaded any production change now ships several times per week.
Step Five: DNS Cutover With a Defined Rollback Window
The final step is moving production traffic from the shared hosting account to the new infrastructure. This is the only step that genuinely cannot be done partially—at some point, the DNS record needs to point at the new server.
The cutover plan that minimizes risk has four elements. First, lower the DNS TTL on the application's hostname to 300 seconds (5 minutes) at least 48 hours before the cutover, so that DNS changes propagate quickly and rollback is fast. Second, schedule the cutover for the lowest-traffic hour of the week, which for most European B2B applications is Sunday between 02:00 and 04:00 CET. Third, keep both environments running in parallel for at least 48 hours after the cutover, with the shared host kept warm and ready to receive traffic again if needed. Fourth, monitor error rates, response times, and database connections continuously during and after the cutover.
If the new database was set up as a replica, stop replication and promote it to primary at the cutover moment, then change the DNS record. If the database was migrated by mysqldump, take a brief maintenance window (typically 15–30 minutes) to do a final sync, then cut over.
The rollback condition needs to be defined in advance. A typical threshold is "if 5xx error rate exceeds 1% for more than 10 minutes, or median response time more than doubles, roll back to the shared host immediately." Defining this before the cutover prevents the panicked "should we roll back?" debate at 03:00 when judgment is at its worst.
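The 5xx threshold in that rollback condition is easy to script rather than eyeball. The sketch below computes the error rate over the tail of an access log in the common "combined" format, where the status code is the ninth whitespace-separated field; a synthetic four-line log stands in for the real file:

```bash
# Synthetic access log in combined format (status code is field 9):
cat > access.log <<'EOF'
1.2.3.4 - - [22/Apr/2026:03:01:00 +0200] "GET / HTTP/1.1" 200 512 "-" "curl"
1.2.3.4 - - [22/Apr/2026:03:01:01 +0200] "GET /a HTTP/1.1" 200 512 "-" "curl"
1.2.3.4 - - [22/Apr/2026:03:01:02 +0200] "GET /b HTTP/1.1" 502 512 "-" "curl"
1.2.3.4 - - [22/Apr/2026:03:01:03 +0200] "GET /c HTTP/1.1" 200 512 "-" "curl"
EOF

# 5xx rate over the most recent window; flag it when the agreed
# 1% threshold is exceeded.
tail -n 1000 access.log | awk '
    { total++; if ($9 >= 500 && $9 < 600) errors++ }
    END {
        rate = (total > 0) ? 100 * errors / total : 0
        printf "5xx rate: %.2f%% (%d of %d requests)\n", rate, errors, total
        if (rate > 1) print "threshold exceeded - evaluate rollback"
    }'
# → 5xx rate: 25.00% (1 of 4 requests)
```

Run from cron or a watch loop during the cutover window, a check like this turns the pre-agreed rollback rule into a signal rather than a judgment call at 03:00.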
Common Mistakes in Shared Hosting Migrations
The single most common mistake is upgrading PHP at the same time as moving infrastructure. Migrating from shared hosting to Docker and from PHP 7.4 to PHP 8.3 simultaneously combines two independent risks into one event. Do them sequentially: containerize on the existing PHP version first, cut over to the new infrastructure, then upgrade PHP as a separate, smaller migration with its own testing cycle.
The second common mistake is treating the migration as the moment to fix everything. Inline credentials, missing tests, the duplicate admin_old/ folders, the function called final_v3_real_use_this()—all of these are real problems that deserve to be addressed. None of them belong in the migration commit. Resolve to fix them after the application is running on modern infrastructure, where every fix can be deployed in minutes and rolled back if it breaks something.
The third common mistake is underinvesting in observability before cutover. The shared host probably had no real monitoring beyond uptime checks. The new infrastructure should ship with structured logging (Monolog writing JSON), basic metrics (server CPU, memory, response times), and uptime monitoring from at least one external location. Without this, a problem after cutover is invisible until customers report it.
A focused legacy code optimization review before starting the migration typically identifies most of these traps in advance and produces a sequenced plan that avoids them.
What Modernization Unlocks
The strategic value of completing this migration is not the new infrastructure itself—it is the changes the new infrastructure makes possible. With the application in Git and deployed by CI/CD, hiring a second developer becomes feasible without operational chaos. With Docker, creating a staging environment is a matter of running the same image against a separate database. With managed backups and point-in-time recovery, the founder can stop worrying about the 03:00 disk failure scenario.
These capabilities then enable the next phase of work: incremental refactoring, framework adoption (often Symfony for PHP applications at this scale), test coverage on critical business logic, and eventually the kind of feature velocity that the business has not seen since launch. None of this is possible while the application still lives on shared hosting. All of it becomes routine once the migration is complete.
If your business is running a revenue-generating PHP application on shared hosting and you are weighing the risk of modernization against the cost of continuing as-is, Wolf-Tech offers a structured migration assessment that produces a sequenced, low-risk plan tailored to your application's specific dependencies and constraints. Contact us at hello@wolf-tech.io or visit wolf-tech.io for a free initial consultation.

