PHP is dead. Long live PHP. Microservices are dead. Long live microservices.
Every few years, the discourse cycle resets. PHP is dying. Microservices are over-engineered. Serverless was a mistake. The pendulum swings, the hot takes accumulate, and somewhere in between the actual engineers are quietly shipping.
DomainDash (my multi-region SSL, uptime, and DNS monitoring SaaS) is built on PHP and microservices. It is also, I'd argue, one of the cleanest architectural decisions I've made. Here's why.
The problem with "PHP is dead"
The criticism isn't really about PHP. It's about the era PHP represented: monolithic apps, shared hosting, global mutable state, $GLOBALS soup. Modern PHP (and specifically Laravel) is none of those things.
Laravel gives you a first-class job queue, a mature ORM, a clean service container, excellent WebSocket support via Reverb, and an ecosystem that moves fast without breaking things. When I need a control plane — something that orchestrates work, manages state, exposes APIs, handles auth, sends notifications, and maintains a relational model of the world — Laravel is exceptional at all of it.
The key word is control plane. Laravel doesn't do the heavy lifting in DomainDash. It decides what gets lifted, when, and by whom.
The problem with "microservices are dead"
The backlash here is legitimate, but misaimed. The failure mode of microservices isn't the pattern itself; it's the operational complexity people dragged in alongside it. Service meshes, distributed tracing across 14 repos, shared databases with ownership ambiguity, versioned inter-service contracts that nobody maintains. That's not microservices. That's microservices plus every anti-pattern simultaneously.
DomainDash has three microservices: an SSL checker, an uptime checker, and a DNS checker. Each is a Rust binary deployed as an AWS Lambda. They do one thing, they do it fast, and they have no persistent state. They are not a distributed system. They are functions with a network address.
Laravel as the control plane
The Laravel application runs on ECS, served by FrankenPHP (Caddy with a PHP runtime embedded directly in the binary), with Laravel Octane keeping the application bootstrapped in memory between requests. If your mental model of PHP is "spins up, handles request, dies", this isn't that. The application boots once, stays warm, and handles requests with latency characteristics closer to a Node.js or Go service than traditional PHP-FPM. The performance difference in practice is stark.
Laravel Horizon manages the queue workers. When a check is due (say, an SSL certificate check on example.com from three regions), Horizon dispatches jobs onto the queue. Each job encodes the check parameters, the target region, and a signed callback URL.
That's it from Laravel's perspective. It schedules the work, fires it off, and moves on. It doesn't maintain a connection to Sydney or Frankfurt waiting for a result.
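To make the dispatch side concrete, here's a language-agnostic sketch in Python of what such a job payload might look like. `CHECK_SECRET`, `CALLBACK_BASE`, and the field names are illustrative, not DomainDash's actual schema:

```python
import hashlib
import hmac
import json

# Illustrative values — in the real system the secret is scoped
# to the individual check invocation, not a global constant.
CHECK_SECRET = b"per-invocation-secret"
CALLBACK_BASE = "https://app.example.com/webhooks/check-result"

def build_check_job(check_id: str, check_type: str, target: str, region: str) -> dict:
    """Build the payload a queued job hands to a regional Lambda:
    the check parameters, the target region, and an HMAC-signed callback URL."""
    signature = hmac.new(CHECK_SECRET, check_id.encode(), hashlib.sha256).hexdigest()
    return {
        "check_id": check_id,
        "type": check_type,          # "ssl" | "uptime" | "dns"
        "target": target,
        "region": region,
        "callback_url": f"{CALLBACK_BASE}/{check_id}?sig={signature}",
    }

job = build_check_job("chk_123", "ssl", "example.com", "ap-southeast-2")
print(json.dumps(job, indent=2))
```

The important property is that the payload is self-describing: the Lambda needs nothing beyond what's in the job to do its work and report back.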
Rust Lambdas as the data plane
The checker microservices are Rust binaries in a Cargo workspace monorepo (ssl-checker, uptime-checker, dns-checker), each deployed as a Lambda function across up to five AWS regions.
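A workspace like that is typically declared in a root `Cargo.toml` along these lines — the member names come from the post; the rest is a plausible sketch, not the project's actual file:

```toml
# Cargo.toml at the workspace root
[workspace]
members = ["ssl-checker", "uptime-checker", "dns-checker"]
resolver = "2"
```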
Rust here is deliberate. Lambda cold starts matter less when your function is a ~5MB binary rather than a Node or Python runtime bootstrapping a dependency tree. The checkers are also doing real network I/O: TLS handshakes, DNS resolution, HTTP probing. Rust's async story with Tokio is genuinely excellent for this kind of work.
The Lambdas are invoked by the Horizon jobs via the AWS SDK. Each Lambda receives its check parameters, runs the probe from its regional vantage point, and then has a result it needs to get home.
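To show what a checker boils down to, here's a minimal sketch of an SSL expiry probe — written against Python's standard library for brevity, rather than the Rust/Tokio the checkers actually use:

```python
import socket
import ssl

def check_certificate(host: str, port: int = 443, timeout: float = 5.0) -> dict:
    """Open a TLS connection and report certificate expiry — the essence of
    an SSL check, minus the regional vantage point and async plumbing."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # `notAfter` arrives as a string like "May  9 00:00:00 2027 GMT";
    # convert it to a Unix timestamp for comparison and alerting.
    expires_at = ssl.cert_time_to_seconds(cert["notAfter"])
    return {"host": host, "expires_at_unix": expires_at}
```

Calling `check_certificate("example.com")` performs a real handshake from wherever the code runs — which is exactly why running it as a Lambda in five regions gives you five distinct vantage points for free.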
The webhook ingest pattern
Here's the part I'm most pleased with: how results get back to Laravel.
The naive approach would be to have the Lambda write directly to the database. But that means Lambda needs a database connection. TimescaleDB is in a single region. You now have Lambdas in ap-southeast-2 opening TCP connections to eu-west-1, fighting connection limits, requiring complex VPC peering or RDS Proxy configuration, and coupling your data plane to your persistence layer.
Instead, each job includes a signed callback URL — an HMAC-authenticated webhook endpoint on the Laravel application. When the Lambda finishes its check, it POSTs the result to that URL and exits.
The webhook receiver does as little as possible. It validates the HMAC signature, deserialises the payload, pushes it onto the Horizon queue, and returns a 200. That's the entire request lifecycle for the web process, measured in single-digit milliseconds. The actual metric persistence, alert evaluation, and notification dispatch happen asynchronously in a separate ECS task running the Horizon worker.
This separation matters for scaling. The web service and the worker service scale independently on ECS. Inbound webhook traffic spikes — say, a batch of checks returning simultaneously — don't create backpressure on the web process. They queue. PHP's horizontal scaling story, which has always been one of its genuine strengths, applies cleanly here: more workers means more throughput, with no shared state to coordinate.
The HMAC signature approach is straightforward: the job signs the payload with a secret scoped to that check invocation, and the webhook receiver verifies it before processing. A replayed or forged webhook goes nowhere.
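A minimal sketch of that sign-and-verify round trip, again in Python for brevity (the real code lives in Rust on the Lambda side and PHP on the receiver side; the serialisation scheme here is an assumption):

```python
import hashlib
import hmac
import json

def sign_payload(secret: bytes, payload: dict) -> str:
    """Lambda side: sign the canonically-serialised result before POSTing it home."""
    body = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify_payload(secret: bytes, payload: dict, signature: str) -> bool:
    """Receiver side: constant-time comparison, so a forged or tampered
    payload is rejected before any processing happens."""
    expected = sign_payload(secret, payload)
    return hmac.compare_digest(expected, signature)
```

Note that `hmac.compare_digest` avoids timing side channels, and that replay protection in this scheme comes from scoping the secret to the individual check invocation — which this sketch doesn't show — rather than from the signature itself.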
The benefits of this pattern compound:
- No multi-region database connections. The Lambdas are stateless and connectionless as far as persistence is concerned.
- No VPC complexity. Lambdas call a public HTTPS endpoint. That's it.
- Natural audit trail. The signed webhook payload is a first-class event. You can log it, replay it, or inspect it without touching the database.
- Laravel owns consistency. All writes go through one application layer. No distributed write logic to reason about.
What this isn't
This isn't a distributed system in the microservices-conference sense. There's no service discovery, no inter-service RPC, no shared schema. The Lambdas don't know about each other. They don't share state. They receive a job, do a network check, POST a result, and disappear.
It's closer to the Unix philosophy applied to cloud infrastructure: small tools that do one thing well, composed by a coordinator that understands the big picture. Laravel is make. The Lambdas are the compilers.
Why PHP, really
Because the alternative for the control plane would be a Node.js or Python application that does the same job with worse queuing primitives, a weaker ecosystem, and more ceremony. The parts of DomainDash that are genuinely hard — multi-region network probing, low-latency execution, binary protocol handling — are written in Rust. The parts that are about orchestration, state, users, and billing are written in Laravel, which has been solving those problems for over a decade.
PHP being "uncool" is one of the better moats a practical engineer can exploit. The talent pool is large, the framework is mature, the hosting options are abundant. The engineers who wrote PHP off because it wasn't the current cool thing are one fewer set of competitors to worry about.
The stack in summary
| Layer | Technology | Why |
|---|---|---|
| Control plane | Laravel + Horizon | Queue, state, auth, API, billing |
| Runtime | FrankenPHP + Caddy + Octane | Persistent process model, fast cold starts |
| Job dispatch | Horizon → AWS Lambda SDK | Scheduled, multi-region invocation |
| Data plane | Rust Lambdas (Cargo workspace) | Fast, stateless, regional execution |
| Result ingest | HMAC-signed webhooks → Horizon queue | No cross-region DB connections, lightweight web process |
| Worker | Separate ECS task (Horizon) | Independent scaling, async processing |
| Persistence | TimescaleDB (TigerData) | Time-series metrics, single region |
| Cache | ElastiCache Redis | Rate limits, session, alert dedup |
Dead languages and dead patterns have a funny habit of running the internet. DomainDash is a bet on boring technology doing interesting things — and so far, that bet is paying off.