WordPress + Nginx FastCGI cache: full-page caching at the server level

Nginx can cache rendered WordPress HTML in shared memory and on disk and serve it to subsequent visitors without ever invoking PHP. This article walks through a production-ready FastCGI cache configuration: zone setup, the bypass rules that keep dashboards and carts uncached, stampede protection, purging via the Nginx Cache plugin, and how to read $upstream_cache_status to confirm the cache is doing its job.

A WordPress caching plugin generates HTML inside PHP and asks the web server to serve it. The Nginx FastCGI cache works one layer earlier: nginx caches the response from PHP-FPM in shared memory and on disk, then serves the next matching request straight from that cache without starting a single PHP worker. For an uncached request the chain is nginx -> PHP-FPM -> WordPress -> MySQL -> WordPress -> nginx -> visitor. For a cached request it is nginx -> visitor. PHP simply does not run. This guide configures that cache from scratch on a self-managed server, covers the bypass rules WordPress and WooCommerce need to stay correct, and explains how to purge it when content changes.

How Nginx FastCGI cache differs from a WordPress caching plugin

The two are not the same tool wearing different clothes. They sit at completely different layers of the request path, and confusing them is the single most common mistake I see when reviewing WordPress server setups.

A WordPress caching plugin (WP Rocket, W3 Total Cache, WP Super Cache, LiteSpeed Cache) runs inside PHP. WordPress boots, the plugin intercepts the response, and on a cache hit it writes a static HTML file or uses output buffering so a later request can short-circuit some of the PHP execution. Some plugins go further with .htaccess or nginx rewrite rules that bypass PHP entirely on a hit, but the cache file itself was generated by WordPress.

The Nginx FastCGI cache lives in nginx's ngx_http_fastcgi_module. It treats every PHP-FPM upstream response as cacheable HTTP content. When a request matches a cached entry, nginx serves it directly from a keys zone in shared memory plus a file on disk and never touches PHP-FPM. When it does not match, nginx forwards the request to PHP, captures the response, stores it for next time, and returns it. The caching plugin is doing application-layer caching. FastCGI caching is doing server-layer caching, before WordPress is even loaded. For a deeper view of where this layer sits relative to object caches and CDN edges, see how WordPress caching actually works.

Configure the cache zone in nginx.conf

The cache zone is declared once in the http context, not inside a server or location block. Add this near the top of /etc/nginx/nginx.conf or in a snippet under /etc/nginx/conf.d/:

fastcgi_cache_path /var/cache/nginx/wordpress
    levels=1:2
    keys_zone=WORDPRESS:100m
    inactive=60m
    max_size=1g
    use_temp_path=off;

What each parameter does, per the fastcgi_cache_path reference:

  • levels=1:2 creates a two-level directory hierarchy. Nginx hashes the cache key with MD5 and uses the last character of the hash as the first subdirectory and the two preceding characters as the second. A cached entry ends up at a path like /var/cache/nginx/wordpress/c/29/b7f54b2df7773722d382f4809d65029c. The hierarchy keeps any single directory from holding tens of thousands of files.
  • keys_zone=WORDPRESS:100m reserves 100 MB of shared memory for the key index. The nginx docs note that 1 MB holds roughly 8,000 keys, so 100 MB gives you headroom for around 800,000 cached URLs. This is the in-memory index; the actual cached responses live on disk.
  • inactive=60m evicts entries that have not been accessed in 60 minutes, regardless of whether they are still fresh by TTL. Entries that are accessed reset the timer.
  • max_size=1g caps total on-disk usage at 1 GB. Once that ceiling is hit, the nginx cache manager process evicts the least recently used entries.
  • use_temp_path=off writes temporary files directly into the cache directory instead of the separate fastcgi_temp_path location. Without it, nginx writes each response to the temp path first and then moves it into the cache; if the two directories sit on different filesystems, that move becomes a full copy rather than a cheap rename, which is slow and can fail across mount points.
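You can reproduce the directory math by hand. The sketch below hashes a cache key the way nginx does and builds the levels=1:2 path; the key string is a made-up example shaped like the fastcgi_cache_key defined later in this article.

```shell
# Recompute the on-disk path nginx derives from a cache key with levels=1:2.
# The key is a hypothetical example following $scheme$request_method$host$request_uri.
key='httpsGETyoursite.nl/about/'

# nginx names the cache file after the MD5 of the key
hash=$(printf '%s' "$key" | md5sum | awk '{print $1}')

l1=${hash: -1}      # level 1: the last hex character
l2=${hash: -3:2}    # level 2: the two characters before it

path="/var/cache/nginx/wordpress/$l1/$l2/$hash"
echo "$path"
```

Running this for any key shows exactly which file nginx would create, which is handy when you want to inspect or delete a single cached entry by hand.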

Create the directory and give it to the user nginx runs as:

sudo mkdir -p /var/cache/nginx/wordpress
sudo chown www-data:www-data /var/cache/nginx/wordpress
sudo chmod 750 /var/cache/nginx/wordpress

On RHEL-family systems the user is nginx, not www-data. Adjust accordingly.
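If you are not sure which user applies, you can read it out of the effective configuration instead of guessing. A small sketch, assuming nginx is installed and nginx -T is permitted; it falls back to www-data when no user directive is found:

```shell
# Read the worker user from the effective nginx configuration; fall back to
# www-data (the Debian/Ubuntu default) if nginx is absent or declares no user.
ngx_user=$(nginx -T 2>/dev/null | awk '$1 == "user" {gsub(/;/, "", $2); print $2; exit}')
ngx_user=${ngx_user:-www-data}
echo "chown ${ngx_user}:${ngx_user} /var/cache/nginx/wordpress"
```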

Define the cache key and the WordPress bypass rules

Inside the server block for your WordPress site, declare the key and a $skip_cache variable that flips to 1 whenever caching would be unsafe:

server {
    listen 443 ssl http2;
    server_name yoursite.nl www.yoursite.nl;
    root /var/www/yoursite.nl/public;
    index index.php;

    # Cache key: scheme + method + host + URI prevents cross-protocol
    # and cross-vhost collisions, and keeps GET and POST in separate slots.
    fastcgi_cache_key "$scheme$request_method$host$request_uri";

    set $skip_cache 0;

    # Never cache POST or non-empty query strings.
    if ($request_method = POST)        { set $skip_cache 1; }
    if ($query_string != "")           { set $skip_cache 1; }

    # Never cache the WordPress dashboard, login, XML-RPC,
    # the PHP entry points, feeds, or sitemap.
    if ($request_uri ~* "(/wp-admin/|/wp-login\.php|/xmlrpc\.php|wp-.*\.php|/feed/|sitemap.*\.xml)") {
        set $skip_cache 1;
    }

    # Never cache responses to logged-in users, recent commenters,
    # or anyone with a WooCommerce session.
    if ($http_cookie ~* "wordpress_logged_in|comment_author|woocommerce_items_in_cart|woocommerce_cart_hash|wp_woocommerce_session") {
        set $skip_cache 1;
    }

    # Never cache the WooCommerce cart, checkout, or my-account.
    if ($request_uri ~* "^/(?:cart|checkout|my-account)") {
        set $skip_cache 1;
    }

    # ... location blocks below
}

A few things deserve a closer look.

Why these cookies. WordPress sets wordpress_logged_in_<hash> for any authenticated visitor and comment_author_<hash> for anyone who recently left a comment, and WooCommerce sets woocommerce_items_in_cart, woocommerce_cart_hash, and wp_woocommerce_session_<hash> for anyone with cart state. If you cache the response for a logged-in admin once and serve it to anonymous visitors, every visitor sees the admin toolbar at the top of the page. If you cache a WooCommerce cart page, the next visitor inherits the previous shopper's cart count. Cookie-based bypass is the only reliable way to keep that from happening. A Cache-Control header sent by PHP can stop nginx from storing a response, but it cannot stop nginx from serving an already-cached anonymous copy to a logged-in visitor, because the serve-or-bypass decision happens before the request ever reaches PHP.
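Because a typo in the cookie regex silently turns into cached admin pages, it is worth sanity-checking the pattern outside nginx before deploying. A sketch using grep's case-insensitive extended regexes, which behave closely enough to nginx's ~* operator for this purpose; the check helper is ours, not part of any tool:

```shell
# The cookie pattern from the server block above.
cookie_re='wordpress_logged_in|comment_author|woocommerce_items_in_cart|woocommerce_cart_hash|wp_woocommerce_session'

check() {  # prints BYPASS if this Cookie header value would skip the cache
    if printf '%s' "$1" | grep -qiE "$cookie_re"; then echo BYPASS; else echo CACHE; fi
}

check 'wordpress_logged_in_a1b2=admin'      # → BYPASS
check 'woocommerce_items_in_cart=2'         # → BYPASS
check '_ga=GA1.2.123; cookieconsent=1'      # → CACHE
```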

Why exclude all query strings. The cache key already includes $request_uri, so two URLs that differ only in query string get separate cache entries. The reason to exclude them anyway is that WordPress uses query strings for previews, password-protected posts, search, paginated comments, and tracking parameters. Caching ?preview=true would let one editor's draft leak to public visitors. If you have a high-traffic site with stable tracking parameters (utm_source, etc.), strip them upstream rather than letting them fragment your cache.

Why exclude wp-.*\.php. This catches wp-cron.php, wp-trackback.php, wp-activate.php, and any other WordPress entry point that should never be cached because it produces side effects.
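The if chain above works, but the same logic can be expressed with map blocks in the http context, which many nginx configurations prefer because map is evaluated lazily and sidesteps the well-known pitfalls of if. A sketch of an equivalent setup, under the assumption that you then remove the set and if lines from the server block:

```nginx
# http context: derive $skip_cache without if chains.
map $request_method $skip_method { default 0; POST 1; }
map $query_string   $skip_query  { default 1; ""   0; }
map $request_uri    $skip_uri {
    default 0;
    "~*(/wp-admin/|/wp-login\.php|/xmlrpc\.php|wp-.*\.php|/feed/|sitemap.*\.xml)" 1;
    "~*^/(?:cart|checkout|my-account)" 1;
}
map $http_cookie $skip_cookie {
    default 0;
    "~*wordpress_logged_in|comment_author|woocommerce_items_in_cart|woocommerce_cart_hash|wp_woocommerce_session" 1;
}
# Combine: any non-zero component skips the cache.
map "$skip_method$skip_query$skip_uri$skip_cookie" $skip_cache {
    default 1;
    "0000"  0;
}
```

The final map concatenates the four component flags and only yields 0 when all of them are 0, which mirrors the or-semantics of the original if chain.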

Activate the cache in the PHP location block

The cache is opted into per location, using fastcgi_cache to point at the keys zone declared in http. This assumes you already have a working location ~ \.php$ block that passes to PHP-FPM; if you are building the server block from scratch, start with WordPress on Nginx: server block configuration for the base permalink and security rules, then layer the cache directives below on top.

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass unix:/run/php/php8.2-fpm.sock;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

    # Activate the WORDPRESS cache zone defined in nginx.conf.
    fastcgi_cache              WORDPRESS;
    fastcgi_cache_valid        200      1h;
    fastcgi_cache_valid        301 302  10m;
    fastcgi_cache_valid        404      1m;

    # Honour the bypass variable.
    fastcgi_cache_bypass       $skip_cache;
    fastcgi_no_cache           $skip_cache;

    # Cache stampede protection.
    fastcgi_cache_lock         on;
    fastcgi_cache_lock_age     5s;

    # Serve stale on upstream failure and refresh in the background.
    fastcgi_cache_use_stale         error timeout updating http_500 http_503;
    fastcgi_cache_background_update on;

    # Debug header (restrict or remove in production).
    add_header X-Cache-Status $upstream_cache_status always;
}

fastcgi_cache_valid sets the TTL per response code. 200 1h caches successful responses for an hour. 404 1m caches not-found responses for a minute, which is short enough that a newly published post becomes visible quickly without overloading PHP with repeated 404 lookups during a probe scan. The add_header ... always flag makes nginx attach the header even on error responses, which is critical for debugging.

fastcgi_cache_bypass and fastcgi_no_cache are not the same directive. fastcgi_cache_bypass causes nginx to fetch from the upstream and skip serving from cache, but the response can still be stored. fastcgi_no_cache prevents storage in the first place. For WordPress you want both to evaluate the same $skip_cache variable so that a logged-in dashboard hit is neither served from cache nor poisons the cache for the next anonymous visitor.

PHP-FPM sizing changes when FastCGI caching is in front

This is the part teams underestimate. With a properly configured FastCGI cache, PHP only runs on cache misses: brand new URLs, the first hit after a TTL expiry, anything matching the bypass rules, and admin traffic. The math for pm.max_children shifts accordingly. The standard sizing formula in the PHP-FPM documentation is (available RAM - OS overhead) / average worker memory, but that formula assumes every request hits PHP. With FastCGI caching, the relevant number is peak uncached concurrency, not peak total concurrency.

For a 2 GB server with about 512 MB of OS and database overhead and roughly 50 MB per worker, the math without caching is (2048 - 512) / 50 ≈ 30 workers. With caching active, 10 to 15 workers is usually enough for a moderately trafficked site, because the only requests reaching PHP are admin work, the first hit after expiry, and the WooCommerce-style pages you have explicitly excluded. Set pm.max_requests = 500 to recycle workers periodically and keep slow memory leaks from accumulating in long-lived processes. For most WordPress setups pm = dynamic is the right choice; pm = ondemand saves RAM on low-traffic servers but pays cold-start latency on the first request after idle. For the full mechanics of pool exhaustion and the symptoms a saturated pool produces, see what PHP workers are and why they get exhausted in WordPress.
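The arithmetic is easy to script when you want to re-run it for a different server size. A sketch with the numbers from the paragraph above as inputs:

```shell
# pm.max_children estimate: (RAM - overhead) / per-worker memory, all in MB.
ram_mb=2048        # total server RAM
overhead_mb=512    # OS + database overhead
per_worker_mb=50   # average PHP-FPM worker resident memory

max_children=$(( (ram_mb - overhead_mb) / per_worker_mb ))
echo "pm.max_children ≈ $max_children"   # → pm.max_children ≈ 30
```

With FastCGI caching in front, treat that result as an upper bound and size down toward your measured peak uncached concurrency.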

Verify the cache with X-Cache-Status

$upstream_cache_status is defined in ngx_http_upstream_module and has been available since nginx 0.8.3. It is populated for any upstream type, including FastCGI. Once you add the add_header X-Cache-Status $upstream_cache_status always; line, every response carries one of these values:

Value         Meaning
MISS          Not in cache; nginx fetched it from PHP and stored it.
HIT           Served from cache; PHP did not run.
BYPASS        A fastcgi_cache_bypass rule matched; PHP ran.
EXPIRED       The entry was in cache but past its TTL; PHP re-rendered it.
STALE         A stale entry was served because the upstream was unavailable.
UPDATING      A stale entry was served while a background refresh ran.
REVALIDATED   The entry was revalidated against PHP via a conditional request.

The verification flow is straightforward. From a different machine, request a public page twice and inspect the header:

curl -sI https://yoursite.nl/about/ | grep -i x-cache-status
# X-Cache-Status: MISS

curl -sI https://yoursite.nl/about/ | grep -i x-cache-status
# X-Cache-Status: HIT

You will know the cache is wired correctly when the second request returns HIT and you see no entry in your PHP-FPM access log for that URL. One caveat: curl -I sends a HEAD request, and because the cache key includes $request_method, HEAD and GET populate separate cache entries; to exercise the same entry a browser will hit, send a GET instead with curl -s -o /dev/null -D - https://yoursite.nl/about/. Hit https://yoursite.nl/wp-admin/ or any WooCommerce cart URL and the header should show BYPASS. Log in to WordPress and request a public post: BYPASS, because the wordpress_logged_in_* cookie matched.

Important: do not leave X-Cache-Status exposed publicly in production. It tells anyone who looks at your headers exactly which URLs are cached and which paths bypass the cache, which is useful reconnaissance for an attacker probing for cache poisoning. Either restrict the header to trusted IPs with a map directive, or remove it once you have verified the configuration.
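The map-based restriction mentioned above can look like the sketch below; 192.0.2.10 is a placeholder for your own address. nginx skips any add_header whose value evaluates to an empty string, so everyone except the trusted IP simply gets no header:

```nginx
# http context: only the trusted IP sees the cache status.
map $remote_addr $cache_debug {
    default       "";
    192.0.2.10    $upstream_cache_status;
}

# Then, in the PHP location block, replace the debug header with:
# add_header X-Cache-Status $cache_debug always;
```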

Stampede protection: fastcgi_cache_lock and fastcgi_cache_lock_age

When a popular cached entry expires, every concurrent request for it would normally race to repopulate the cache, all hitting PHP simultaneously. On a busy site this is the "thundering herd" or "cache stampede" problem and it can take down a WordPress backend in seconds. Two directives prevent it.

fastcgi_cache_lock on was introduced in nginx 1.1.12. When enabled, only one request at a time is allowed to populate a given cache key. The other concurrent requests wait until either the cache is filled or the lock times out. fastcgi_cache_lock_age 5s, introduced in nginx 1.7.8, then says: if the request that holds the lock has not finished within five seconds, allow exactly one additional request through to PHP. This stops the cache from stalling indefinitely if the upstream PHP process hangs. The combination gives you single-flight cache fills with a safety valve.

There is a third directive, fastcgi_cache_lock_timeout, default 5s, which controls how long a waiting request waits before it gives up and is sent through to PHP uncached. Since nginx 1.7.8, responses from a request that broke through after the lock timeout are not stored in the cache. That is the right behaviour: if the lock timed out, something is wrong, and you do not want to cache a possibly malformed response.

The fastcgi_cache_use_stale updating and fastcgi_cache_background_update on pair (the latter introduced in nginx 1.11.10) is the second half of stampede defence. With both enabled, when an entry expires the first request triggers a background subrequest to PHP to refresh the entry, and meanwhile every concurrent request, including the one that triggered the refresh, gets the stale version immediately. Only the background subrequest sees PHP. The user-visible latency on cache expiry drops to zero.

There is a subtle gotcha with background updates: the background subrequest does not carry the original request's cookies or auth headers. If your bypass rules are not configured correctly and a cached entry was for a logged-in user, the background refresh will fetch the anonymous version and store it under what should have been the user-specific entry. Get the cookie-based bypass rules right before enabling background update.

Purging the cache when content changes

Nginx has no idea that a WordPress post was just published. The FastCGI cache has no native invalidation hook into WordPress. By default, the only ways an entry leaves the cache are: TTL expiry (fastcgi_cache_valid), inactivity eviction (inactive), LRU eviction at max_size, or explicit purge. If you publish a post and do nothing, visitors will not see it until the cache TTL passes. That is not a bug; it is the trade-off you accepted in exchange for never invoking PHP on cache hits.

There are two practical approaches to purging.

Option 1: the Nginx Cache plugin by Till Krüss. This is the path I recommend for most WordPress sites because it works with stock open-source nginx and requires no custom modules. The Nginx Cache plugin, version 1.0.7 as of November 2024, hooks into WordPress save events (post publish, post update, post delete) and purges the cache by deleting the contents of the cache directory through the WordPress Filesystem API. Configuration lives under Tools > Nginx in the WordPress dashboard: you point it at the cache zone path, in this example /var/cache/nginx/wordpress, and that is it.

The mechanism to understand: this plugin purges the entire cache zone, not individual URLs. Every content update wipes every cached page on the site. For a site with a small number of pages and infrequent updates, that is fine. For a high-traffic site with frequent publishing, it means you periodically lose the entire cache and the next wave of visitors all become cache misses simultaneously, which is exactly the stampede problem fastcgi_cache_lock exists to handle. For the plugin to work, PHP must be able to write into the nginx cache directory, which means either nginx and PHP-FPM run as the same user (the default on Debian and Ubuntu, both as www-data) or PHP's user has write access to the cache path. The Filesystem API also has to operate without prompting for FTP credentials, which means define('FS_METHOD', 'direct'); in wp-config.php if it is not already set.

Option 2: per-URL purging with ngx_cache_purge. Nginx Plus has a commercial fastcgi_cache_purge directive that takes a key and removes that exact entry. For open-source nginx, the third-party ngx_cache_purge module by FRiCKLE provides the same capability, but it requires recompiling nginx with the module statically linked. If you go this route, the plugin you want is one that calls a purge endpoint on nginx for specific URLs (the post that changed, the homepage, the category archives, the sitemap) instead of nuking everything. The complexity is real: you are now maintaining a custom nginx build, an upgrade path that has to rebuild the module against new nginx versions, and configuration that has to keep the WordPress and nginx sides in sync. Worth it for a high-traffic publication; overkill for a small business site.

You can verify the plugin's purge worked by hitting a cached URL after publishing a new post and checking the X-Cache-Status header. The first request after the purge should show MISS, the second HIT.

Monitoring cache hit ratio from access logs

The simplest way to know whether your cache is actually doing anything useful is to log $upstream_cache_status on every request and aggregate it. Add a custom log format in nginx.conf:

log_format cache_log '$remote_addr - $upstream_cache_status [$time_local] '
                     '"$request" $status $body_bytes_sent';

server {
    # ...
    access_log /var/log/nginx/wordpress-cache.log cache_log;
}

Reload nginx, let the log fill for a while, then count by status:

awk '{print $3}' /var/log/nginx/wordpress-cache.log | sort | uniq -c | sort -rn

A healthy WordPress site with FastCGI caching should show HIT dominating, with BYPASS corresponding to admin traffic and excluded paths. If MISS is high, your TTLs are too short, your inactive value is too aggressive, or max_size is too small and the cache is being evicted faster than it fills. If EXPIRED is high relative to HIT, the TTL is too short for the publishing rhythm. If you see STALE regularly, your PHP backend is unstable and you should investigate why upstream errors are reaching nginx in the first place.

A useful baseline to aim for on a content site that does not change every hour is 90% or higher HIT rate during normal traffic. WooCommerce stores and any site with a lot of logged-in traffic will be lower because more requests legitimately bypass the cache.
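To turn the raw counts into percentages you can compare against that baseline, one more awk pass works. A sketch assuming the cache_log format above, where field 3 is $upstream_cache_status; adjust the field number if your format differs. The summarize wrapper is ours, for convenience:

```shell
# Percentage breakdown of cache statuses from a cache_log-format access log.
summarize() {
    awk '{ count[$3]++; total++ }
         END { for (s in count)
                   printf "%-12s %6d  %5.1f%%\n", s, count[s], 100 * count[s] / total }' "$@"
}

# Usage against the live log:
# summarize /var/log/nginx/wordpress-cache.log
```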

Troubleshooting stale content and missing exclusions

A few patterns I see often.

Visitors are still seeing the old version of a post after publishing. Either the Nginx Cache plugin is not configured (check Tools > Nginx in the dashboard) or PHP cannot write to the cache directory. Tail /var/log/nginx/error.log while triggering a save in WordPress and look for permission errors. Verify that the directory is owned by the same user PHP-FPM runs as and that FS_METHOD is set to direct in wp-config.php. As a temporary workaround you can manually clear the cache with sudo find /var/cache/nginx/wordpress -mindepth 1 -delete.

The admin toolbar appears at the top of public pages for anonymous visitors. A logged-in response was cached and is being served to everyone. The cookie-based bypass is missing or wrong. Check that if ($http_cookie ~* "wordpress_logged_in") is present in your server block, then purge the cache once. This usually happens when someone copies a generic nginx config and forgets the cookie regex. To prevent recurrence, log in as admin, hit a public URL, and check the X-Cache-Status header: it must say BYPASS. If it says HIT or MISS, your bypass is broken.

The WooCommerce cart count is wrong for visitors. Same root cause: a cart-state response was cached. Verify that woocommerce_items_in_cart, woocommerce_cart_hash, and wp_woocommerce_session are all present in the cookie regex and that ^/(?:cart|checkout|my-account) is in the URI exclusion. Purge once and verify with curl.

X-Cache-Status is missing from responses. Your server block does not include add_header X-Cache-Status $upstream_cache_status always;, or another module is stripping headers. Check your full nginx configuration with nginx -T and look for the directive in the rendered output for the right server block.

MISS on every request, even for the same URL twice. The cache zone is not being written to. Common causes: the cache directory does not exist or is not writable by the nginx user, fastcgi_cache_path is declared inside a server block instead of http (which is invalid), the upstream response carries a Set-Cookie header (which prevents caching by default), or fastcgi_cache_valid does not match the response code. Check nginx -T for the cache path being declared at the right level, run sudo -u www-data touch /var/cache/nginx/wordpress/test to confirm writability, and look at the response with curl -sI to check whether PHP is sending unexpected Set-Cookie headers on pages that should be cacheable. A common WordPress culprit is a plugin that calls setcookie() on every page; you will need to either remove the plugin's call or use fastcgi_ignore_headers Set-Cookie (carefully, because it can mask real per-user state).

Complete final configuration

Here is the assembled configuration in the order it should appear, so you do not have to reconstruct it from the sections above.

/etc/nginx/nginx.conf (inside the http block):

fastcgi_cache_path /var/cache/nginx/wordpress
    levels=1:2
    keys_zone=WORDPRESS:100m
    inactive=60m
    max_size=1g
    use_temp_path=off;

log_format cache_log '$remote_addr - $upstream_cache_status [$time_local] '
                     '"$request" $status $body_bytes_sent';

/etc/nginx/sites-available/yoursite.nl:

server {
    listen 443 ssl http2;
    server_name yoursite.nl www.yoursite.nl;
    root /var/www/yoursite.nl/public;
    index index.php;

    access_log /var/log/nginx/yoursite-cache.log cache_log;

    fastcgi_cache_key "$scheme$request_method$host$request_uri";

    set $skip_cache 0;

    if ($request_method = POST)        { set $skip_cache 1; }
    if ($query_string != "")           { set $skip_cache 1; }
    if ($request_uri ~* "(/wp-admin/|/wp-login\.php|/xmlrpc\.php|wp-.*\.php|/feed/|sitemap.*\.xml)") {
        set $skip_cache 1;
    }
    if ($http_cookie ~* "wordpress_logged_in|comment_author|woocommerce_items_in_cart|woocommerce_cart_hash|wp_woocommerce_session") {
        set $skip_cache 1;
    }
    if ($request_uri ~* "^/(?:cart|checkout|my-account)") {
        set $skip_cache 1;
    }

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php8.2-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

        fastcgi_cache              WORDPRESS;
        fastcgi_cache_valid        200      1h;
        fastcgi_cache_valid        301 302  10m;
        fastcgi_cache_valid        404      1m;

        fastcgi_cache_bypass       $skip_cache;
        fastcgi_no_cache           $skip_cache;

        fastcgi_cache_lock         on;
        fastcgi_cache_lock_age     5s;

        fastcgi_cache_use_stale         error timeout updating http_500 http_503;
        fastcgi_cache_background_update on;

        # Remove or restrict before going live.
        add_header X-Cache-Status $upstream_cache_status always;
    }
}

Apply with:

sudo nginx -t && sudo systemctl reload nginx

Then install the Nginx Cache plugin in WordPress, point it at /var/cache/nginx/wordpress under Tools > Nginx, and you have a server-level full-page cache that bypasses PHP entirely on hits, refreshes itself in the background, defends against stampedes, respects WordPress and WooCommerce session state, and purges automatically when content changes.

Recurring server or deployment issues?

I help teams make production reliable with CI/CD, Kubernetes, and cloud—so fixes stick and deploys stop being stressful.

Explore DevOps consultancy
