This isn’t your grandma’s or grandpa’s LAMP stack. This is WordPress hoisted into RAM like dropping a dragster engine into a Honda Civic, running so goddamn hard it forgets what a hard drive even is. Most devs spend their days duct-taping plugins and praying they can move the needle on their PageSpeed score. Me? I boot entire stacks straight from the void, loaded into memory, ready to die and be reborn in milliseconds.
You don’t need this. Nobody needs this. But maybe you’re here because you’re sick of latency. Sick of excuses. Sick of infrastructure that doesn’t scream when it runs.
There are two variants of “Rick.” There’s the “Fred Rogers, but for business” who cares a great deal about you as a person, who gets things done with a smile; then there’s “The Hat Man” – the one you see when you take too much Benadryl, the unhinged entity menacingly lurking just on the edge of your peripheral vision. This particular creation is brought to you by The Hat Man.
If you don’t want to read my mad ramblings, go straight to the Ansible Playbook for Running WordPress from RAM (On My GitHub).
Background
Over the heap of absolutely terrible projects I have had to endure in my career, and there have been many, absolutely nothing has tried my patience quite like WordPress sites inherited from previous developers. Absolute garbage heaps riddled with neglect and skill issues.
As I’ve gone about making vehicle fleets from junkyards, I have learned a lot about performance tuning WordPress through the lens of being a Linux server administrator first and a WordPress dev second. WordPress itself can be made to run fast, and there are plugins that can get you there – but real speed comes from tuning the server. I’ve had people fight me tooth and nail that cache is the only way to make WordPress fast. None of those people know the first thing about administrating farms of servers at scale. Just tossing a bunch of cache plugins at WordPress doesn’t do shit, and doing so leaves a lot of performance tweaks on the table.
There is a vast array of hosting companies out there that specialize in WordPress hosting, but even they only take their stack performance tuning so far. The more prominent WordPress hosts basically set up your site to run headless. Sure, headless WordPress is great, and it has a lot of performance advantages over tossing a WordPress site up on a shared hosting platform, but no matter how you slice it, the server still has a role! That is where all the stuff that matters happens. This is 2025: people still need to fill out forms, Lindsey or Steve still need to log in and update content, and if you sell stuff, the checkout process needs state.
Now, how do you make speed gains without trying to do surgery on your codebase with a chainsaw? Go ahead, take that old WordPress site you just inherited, or built, and watch what happens when you update some or all of the plugins. The site breaks so fucking hard that a black hole opens up and whole cities get sucked into the event horizon that was once you updating that weird WooCommerce plugin to the latest version. Might as well have stuck a fork in an electrical outlet.
I have been through this cycle at least 100 times. Inherit a shitty old WordPress site with enough technical debt to bankrupt small nations, and the client needs the thing to run fast yesterday. And these weren’t tiny little mom-and-pop operations; no, these were enterprise WordPress sites with custom integrations to things like ERP systems that went extinct when the Chicxulub asteroid made the dinosaurs taste the fucking sun. Most of the time, I’ve had to sit on these sites and keep the lights on until either the whole site could be refactored, or until they finished a dependent in-house project to move away from some legacy system talking to their website. The agency I was working for that did this kind of work ran into undue financial hardship and I was one of the first cast into the pit, but recently I undertook a project that inspired me to stop thinking about the traditional stack the way we WordPress devs have all become accustomed to. Sometimes it takes one of these moments to really rethink what you are doing, and why. I decided that I didn’t want to be a sheep to the established way of doing things and to completely rethink the stack. So I did, and I did it from the perspective of a sysadmin who builds for raw speed.
Why Should I Run WordPress From RAM?
TL;DR: Because latency is theft
RAM is orders of magnitude faster than even the fastest disk array. We’re talking nanoseconds versus milliseconds. That’s a million times faster, give or take, depending on how deep you are in the hardware rabbit hole. Your SSDs, even the slick new NVMe drives, are still waiting for the bus while RAM is already halfway down the freeway with the engine redlined and the trunk full of dynamite. You ever try to serve a hot website off a cold disk? It’s like asking a junkie to run a marathon. RAM doesn’t wait. RAM doesn’t beg. RAM rips data out of the void and slams it into the CPU before the OS even knows what’s happening. Everything else is just latency and lies.
How does WordPress work? Easy. A request slams into your server at 443 like a drunk at last call. Apache, NGINX, LiteSpeed—whatever bouncer you’ve got posted at the door—grabs the request, nods, and shuffles it back to PHP. That’s the real muscle. PHP wakes up WordPress, assembles the requested page like a greasy diner short-order cook, and fires it back down the pipe to whoever knocked in the first place.
But here’s the rub: that innocent little page load just took a goddamn road trip. It hit the filesystem, spun up some disk reads, poked the database (which itself hits the disk again), and clawed its way into RAM just in time to get shoved into the CPU and belched out through the network card. Every request. Every single time.
And no, I don’t care if you’re rocking SSDs, NVMe, or some RAID cluster your cousin slapped together because he knows a thing or two about computers. Disk I/O is still a step. Still a delay. Still a bottleneck that drags your site through the mud every single time someone clicks. It’s not just inefficient, it’s hostile. And it must be eliminated.
So here’s my philosophy: fuck disk I/O. Run the whole damn thing from RAM. No more spinning plates. No more waiting rooms. Just load it into memory and let it scream. Don’t worry about persistence. We’ll handle that later. Maybe.
Overview Of The Tune
Let’s break down the tune.
This is not just a WordPress install. This is a brass-knuckle brawl between performance and everything that dares to slow it down. Every component of this system is deployed with intent – not to conform, but to outperform. We’re not using Docker or Kubernetes. We’re not waiting on apt. We’re not clicking around in cPanel like it’s 2007. We’re provisioning bare metal with Ansible and building every critical part of the stack with a blowtorch and a soldering iron.
RAMDisk-backed WordPress
The WordPress root is not served from disk. It’s mounted in a tmpfs-backed RAMDisk created at provision time by Ansible. Why?
- Because traditional disk I/O – whether it’s SATA SSDs, NVMe, or even ZFS – adds latency to every request.
- Because even well-cached files still require filesystem-level permission checks, read cycles, and sometimes fragmentation reassembly.
- Because if WordPress only lives in RAM, it executes like it’s possessed.
At provision, Ansible pulls your known-good site archive (a tarball from object storage like R2 or S3) and explodes it into a volatile tmpfs mount, something like /mnt/wordpress. The NGINX root is then pointed directly at that RAMDisk. If the machine reboots, that data is gone. That’s not a bug, that’s the point. This site is built to die and boot fast, not to linger and decay.
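A minimal sketch of that provisioning step in Ansible – the mount size, bucket name, and paths here are placeholders, not the exact values from my playbook:

```yaml
# Sketch: mount a tmpfs RAMDisk and explode the site archive into it.
# Paths, size, and the bucket URL are illustrative placeholders.
- name: Mount tmpfs RAMDisk for the WordPress root
  ansible.posix.mount:
    path: /mnt/wordpress
    src: tmpfs
    fstype: tmpfs
    opts: size=2g,noatime,mode=0755
    state: mounted

- name: Pull the known-good site tarball from object storage
  ansible.builtin.command: rclone copy r2:my-bucket/wordpress-site.tar.gz /tmp/

- name: Explode the archive into RAM
  ansible.builtin.unarchive:
    src: /tmp/wordpress-site.tar.gz
    dest: /mnt/wordpress
    remote_src: true
    owner: www-data
    group: www-data
```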
Persistence happens outside the hot path. You back it up with a scheduled rclone sync job, or you don’t. Either way, your live path is 100% memory-bound, and it’s fast enough to embarrass commercial hosts.
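If you do bother with backups, a scheduled sync is all it takes. Roughly, wired in through Ansible’s cron module – the remote name and schedule are assumptions:

```yaml
# Sketch: periodic sync of the RAMDisk back to object storage.
# "r2:my-bucket" and the 15-minute schedule are placeholders.
- name: Sync the WordPress RAMDisk to object storage
  ansible.builtin.cron:
    name: "rclone sync wordpress ramdisk"
    minute: "*/15"
    job: "rclone sync /mnt/wordpress r2:my-bucket/wordpress-live --fast-list --transfers 8"
```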
RAMDisk-Backed MariaDB
Yes, you read that right. The database is in RAM too. No disk writes. No journaling. No safety net. Just pure, unadulterated transient speed.
We provision MariaDB to store all its data files—including ibdata, logs, and table storage—on a dedicated tmpfs volume, usually mounted at something like /mnt/mariadb. That means:
- All tables are read and written directly in memory
- There is zero disk I/O on queries or transactions
- Temporary tables never touch physical storage
- Indexes, joins, cache pages—everything exists only in RAM
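Roughly, the provisioning looks like this – a sketch that assumes a tmpfs at /mnt/mariadb and a config drop-in; size, paths, and file names are placeholders:

```yaml
# Sketch: put the MariaDB datadir on a tmpfs volume.
# Size, paths, and the drop-in filename are illustrative.
- name: Mount tmpfs RAMDisk for MariaDB data
  ansible.posix.mount:
    path: /mnt/mariadb
    src: tmpfs
    fstype: tmpfs
    opts: size=1g,noatime,mode=0750
    state: mounted

- name: Point the MariaDB datadir at the RAMDisk
  ansible.builtin.copy:
    dest: /etc/mysql/mariadb.conf.d/99-ramdisk.cnf
    content: |
      [mysqld]
      datadir = /mnt/mariadb
    mode: "0644"
  notify: Restart mariadb   # assumes a "Restart mariadb" handler exists elsewhere in the play

- name: Initialize the datadir in RAM if it is empty
  ansible.builtin.command: mariadb-install-db --user=mysql --datadir=/mnt/mariadb
  args:
    creates: /mnt/mariadb/mysql
```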
Why Do This?
Because every disk write is a tax on throughput. Because page loads shouldn’t be bottlenecked by a storage controller. And because this stack is engineered for speed first, durability second – and even then, only if you ask nicely.
We are not relying on filesystem caching. We are building the database on top of memory itself.
How Do We Not Lose Everything?
We treat MariaDB like a live performance:
- Backups are externalized. Ansible wires in rclone or mysqldump to sync hot snapshots to object storage (S3-compatible), either on a schedule or triggered manually.
- Restores are idempotent. If the machine dies, Ansible reboots the RAMDisk, pulls the latest backup, and drops it back into place. Ten seconds later, you’re back in business.
- Binlogs can optionally go to disk. If you really want to enable minimal durability, you can mount /var/log/mysql on persistent disk while keeping the actual data files in RAM.
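As a rough example, the snapshot job can be as dumb as a cron entry that dumps and ships – the bucket, schedule, and filenames below are assumptions, and credentials are presumed to live in /root/.my.cnf:

```yaml
# Sketch: periodic hot snapshot of the in-RAM database to object storage.
# Bucket, schedule, and filenames are placeholders.
- name: Snapshot MariaDB and ship it to object storage
  ansible.builtin.cron:
    name: "mariadb ram snapshot"
    minute: "*/10"
    job: >
      mysqldump --all-databases --single-transaction
      | gzip > /tmp/db-snapshot.sql.gz
      && rclone copy /tmp/db-snapshot.sql.gz r2:my-bucket/db/
```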
But if you’re doing this right, the goal isn’t to keep your database alive forever. It’s to serve the fastest damn dynamic pages WordPress has ever rendered.
Why It Matters
Traditional hosts split app logic from database storage. We collapse both into memory. The result is a WordPress stack where:
- PHP reads and writes to MySQL faster than most Redis cache layers
- Queries resolve before a traditional host has even hit the disk queue
- The stack cold boots from zero to full site in under a minute
Debian 12 on AMD EPYC (Or Xeon)
Debian 12 gives us a clean, stable, modern base with predictable behavior and long-term support. The default packages aren’t bleeding-edge, which is good. We don’t want a distro trying to be clever—we’re handling cleverness ourselves in Ansible.
Running on AMD EPYC is a deliberate choice. EPYC offers:
- High memory bandwidth – critical for RAM-based hosting.
- Large L3 caches – excellent for high-frequency page builds.
- Efficient power draw per core – especially important if you’re scaling horizontally.
- IOMMU and NUMA awareness – handy for fine-tuning performance if you grow this out into cluster territory.
Intel is fine, but EPYC hits that sweet spot for serious throughput without the thermal tantrums.
NGINX Manually Compiled
We do not apt install nginx. We do not trust maintainers with our runtime. Instead, we:
- Pull source from the nginx-quiche branch, Cloudflare’s implementation of QUIC and HTTP/3.
- Compile with custom modules, including:
- Brotli: for pre-compressed static delivery, slashing payloads by 15–25% compared to gzip.
- ngx_devel_kit + ngx_http_lua_module: enables Lua scripting inside config files, which I can use to inline CSS/JS, perform auth logic, or manipulate headers dynamically – without calling out to PHP.
- Strip away modules we don’t need: mail, stream, autoindex, etc. Smaller binary = faster startup and fewer CVEs.
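For a rough idea of what that looks like as Ansible tasks – the source paths and flag list are a sketch, not the exact configure line from my playbook:

```yaml
# Sketch: configure and compile a trimmed-down NGINX with Brotli and Lua.
# Source directories and module paths are placeholders.
# (lua-nginx-module also needs LUAJIT_LIB / LUAJIT_INC exported; omitted here for brevity.)
- name: Configure the custom NGINX build
  ansible.builtin.command:
    cmd: >
      ./configure
      --prefix=/usr/local/nginx
      --with-http_ssl_module
      --with-http_v2_module
      --with-http_v3_module
      --add-module=../ngx_brotli
      --add-module=../ngx_devel_kit
      --add-module=../lua-nginx-module
      --without-mail_pop3_module
      --without-mail_imap_module
      --without-mail_smtp_module
      --without-http_autoindex_module
    chdir: /usr/local/src/nginx

- name: Compile and install
  ansible.builtin.command:
    cmd: make -j{{ ansible_processor_vcpus }} install
    chdir: /usr/local/src/nginx
```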
The result is a lean, purpose-built binary tuned for:
- RAM-speed static asset delivery
- Low-latency TLS handshakes (HTTP/3 cuts out TCP entirely)
- Dynamic request logic without routing through WordPress
You also override the systemd unit file to point to /usr/local/nginx/sbin/nginx, so your custom binary doesn’t get stepped on by a package upgrade.
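One way to do that override with Ansible – the drop-in directory and file name follow systemd’s convention, but treat the exact contents as a sketch:

```yaml
# Sketch: systemd drop-in so the service uses the custom binary.
- name: Create the override directory for nginx.service
  ansible.builtin.file:
    path: /etc/systemd/system/nginx.service.d
    state: directory
    mode: "0755"

- name: Point the unit at the locally compiled binary
  ansible.builtin.copy:
    dest: /etc/systemd/system/nginx.service.d/override.conf
    content: |
      [Service]
      ExecStart=
      ExecStart=/usr/local/nginx/sbin/nginx -g 'daemon off;'
    mode: "0644"

- name: Reload systemd so the override takes effect
  ansible.builtin.systemd:
    daemon_reload: true
```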
This approach makes your NGINX feel less like software and more like firmware.
Provision RAM-Resident WordPress with Ansible
Ansible runs the show once—at provision time. It:
- Installs dependencies
- Downloads NGINX and module source code
- Compiles everything with your desired flags
- Writes nginx.conf, mime.types, fastcgi_params, and the Lua logic file
- Mounts /mnt/wordpress as tmpfs
- Extracts the tarball from object storage into RAM
- Configures PHP-FPM and MariaDB
- Runs MariaDB from its own RAMDisk as well
- Places the systemd unit overrides
There is no runtime orchestration, no reconfiguration happening behind your back. You declare what the box should be, and Ansible builds it once, like flashing a BIOS.
This approach makes your infrastructure predictable, declarative, and fast to recover. Blow away the node? No problem. Re-run the playbook and call it a day.
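If you want a mental model of the playbook’s shape, it’s roughly this – the role names below are invented for illustration; the real task breakdown lives in the repo:

```yaml
# Sketch of the playbook's overall shape. Role names are illustrative,
# not the actual role names from the repository.
- hosts: wordpress_ram
  become: true
  roles:
    - base_packages      # dependencies, build tools
    - nginx_from_source  # pull, configure, compile, unit override
    - ramdisk_wordpress  # tmpfs mount + tarball extraction
    - ramdisk_mariadb    # tmpfs datadir + config drop-in
    - php_fpm            # pool tuning for the RAM-resident docroot
    - backups            # rclone / mysqldump cron jobs
```

Rebuilding a dead node is then just a matter of pointing ansible-playbook at a fresh host in your inventory and letting it flash the box all over again.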