504 Gateway Timeout in WordPress

A 504 Gateway Timeout means the proxy in front of your WordPress site waited too long for the application server to answer and gave up. This article explains the exact mechanism, the five real causes, how to tell them apart, and how to fix each one.

Your WordPress site shows a blank page with the text "504 Gateway Timeout", or Chrome reports "This page isn't working: HTTP ERROR 504". The network tab of your browser's developer tools (or any response inspector) shows status code 504. The error appears for visitors and for you, on every browser, on mobile and on desktop.

What a 504 actually means

RFC 9110 §15.6.5 defines 504 as: "the server did not receive a timely response from an upstream server it needed to access in order to complete the request." In a WordPress stack, that "upstream server" is almost always PHP-FPM, the long-running PHP daemon that runs WordPress code on behalf of nginx or Apache. The web server accepted the connection from your visitor, handed the request to PHP, waited for an answer, and a timer ran out before PHP came back with HTML. The web server then closed the connection with a 504.
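You can watch this from the command line. A minimal check, assuming a URL on your site that reproduces the error (the path below is a placeholder):

# Sketch: request the failing URL and print the status code and total time.
curl -s -o /dev/null -w 'status=%{http_code} time=%{time_total}s\n' 'https://example.com/slow-endpoint'

If the status is 504 and the time clusters around a round number (60, 100, or 300 seconds), that number is the timeout you are hitting, which narrows the cause immediately.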

A 504 is not the same as the other 5xx errors in this category, and the difference matters for diagnosis:

  • 504 Gateway Timeout: the upstream (PHP-FPM) was reachable but took too long. Read on.
  • 502 Bad Gateway: the upstream returned an invalid or empty response, often because the worker crashed mid-request.
  • 503 Service Unavailable: the upstream signaled that it cannot accept the request right now (overload, maintenance mode, rate limit).
  • 500 Internal Server Error: the application itself errored and returned a 500 to the proxy.

In short: a 502 is a corpse, a 503 is a "go away", a 500 is an admission of guilt, and a 504 is silence.

Common causes, ordered by likelihood

1. A single PHP request runs longer than fastcgi_read_timeout

This is the cause behind most 504s on a healthy site. nginx waits for PHP-FPM to send a response, and the time it will wait is set by fastcgi_read_timeout. The nginx documentation lists the default at 60 seconds. If your import job, REST endpoint, search query, or admin-ajax call takes longer than 60 seconds to produce its first byte, nginx closes the connection with a 504 even though PHP is still working in the background.
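To see whether your configuration overrides that default, grep for the directive (path assumes a standard Linux nginx layout):

# Sketch: find any fastcgi_read_timeout override; no output means the 60s default applies.
grep -R 'fastcgi_read_timeout' /etc/nginx/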

2. A single PHP request hits request_terminate_timeout

PHP-FPM has its own ceiling. request_terminate_timeout sets the time after which FPM kills the worker that is running a long request. The default is 0 (off), but most managed hosts set a real value (often 30 to 300 seconds). When FPM kills the worker, nginx sees the connection close abruptly and reports a 504 to the visitor (and a 502 in some configurations). This commonly looks identical to cause #1 from the visitor's side.
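You can check what your host set, assuming PHP 8.3 on a Debian-family layout (adjust the version in the path):

# Sketch: find the pool's request_terminate_timeout; no match or a value of 0 means the limit is off.
grep -R 'request_terminate_timeout' /etc/php/8.3/fpm/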

3. The PHP-FPM pool is fully saturated and the new request waits in the queue too long

Every WordPress site runs on a fixed pool of PHP-FPM workers. The pool ceiling is pm.max_children, and once every worker is busy, new requests queue up. If the queue is long enough, nginx times out waiting for any worker to free up and returns a 504. This is the upstream cause discussed in the PHP workers article. It usually shows up under traffic spikes, slow database queries, or a runaway plugin that holds workers hostage.

4. A reverse proxy or CDN cannot reach the origin in time

If the request passes through Cloudflare, a load balancer, or a separate reverse-proxy tier, the timeout is enforced at that layer instead of at nginx. Cloudflare's free and Pro plans, for example, give the origin 100 seconds before they return their own 504 page. A misconfigured firewall rule, a saturated origin, or a slow TLS handshake on the origin can each trigger a 504 at the edge while your origin nginx logs show nothing unusual.

5. The upstream is unreachable, not slow

Less common but worth ruling out: the proxy cannot open a TCP connection to the upstream at all, or the connection opens but no byte arrives before proxy_read_timeout (default 60s) elapses. This happens during a deploy, a PHP-FPM restart, or a network issue between two tiers of your stack.

Diagnose which cause applies

Run these checks before changing any setting. They are non-destructive and tell you exactly which of the five causes is yours.

Check 1: read the nginx error log. This is the single most important step. On a typical Linux host the log lives at /var/log/nginx/error.log (or in your hosting control panel under "error logs"). For a 504 you are looking for one of three patterns:

upstream timed out (110: Connection timed out) while reading response header from upstream

That is cause #1 or #2: PHP did not return data in time. The "while reading response header" part tells you nginx connected fine but then waited for headers.

upstream timed out (110: Connection timed out) while connecting to upstream

That is cause #5: nginx could not even open a connection to PHP-FPM. PHP-FPM is down, restarting, or misconfigured.

no live upstreams while connecting to upstream

That is cause #5 with multiple upstreams configured: every worker pool is marked dead.

You will know it worked when: you see the precise log line for the failing request and you can match the timestamp to the visitor report.
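A quick way to pull only the relevant lines, assuming the standard log path:

# Sketch: the most recent upstream timeouts, newest last.
grep 'upstream timed out' /var/log/nginx/error.log | tail -20
# The connect-phase variant that points at cause #5:
grep 'while connecting to upstream' /var/log/nginx/error.log | tail -5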

Check 2: read the PHP-FPM slowlog. PHP-FPM can dump a backtrace of any request that runs longer than a threshold. On most setups it lives at /var/log/php8.3-fpm.slow.log. The directive that controls it is request_slowlog_timeout. If the slowlog exists and contains an entry around the time of the 504, you have a smoking gun: it shows the file, function, and line where the request was hanging.

You will know it worked when: the backtrace points at a specific plugin file, theme function, or core call. Common culprits are external API calls without timeouts, oversized cron jobs, and search queries on un-indexed meta_value columns.
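If your setup has no slowlog yet, these pool directives enable it; the path, PHP version, and the 5-second threshold are illustrative, so adjust them for your stack:

; in /etc/php/8.3/fpm/pool.d/www.conf
slowlog = /var/log/php8.3-fpm.slow.log
request_slowlog_timeout = 5s

Restart PHP-FPM afterwards. A threshold around 5 seconds catches problem requests without flooding the log.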

Check 3: count active workers under load. While a 504 is happening, run ps -ef | grep 'php-fpm: pool' | grep -v grep | wc -l on the server. Compare the number to the pm.max_children value in your pool config (typically /etc/php/8.3/fpm/pool.d/www.conf). If active workers equals max_children, you are in cause #3. If active workers is well below the ceiling, the pool is not the bottleneck.

You will know it worked when: you can state with a number whether the pool was full or had headroom at the moment of the 504.
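A sketch that makes the comparison in one step, assuming PHP 8.3 paths on Debian/Ubuntu:

# Sketch: compare pool worker count to the configured ceiling during the incident.
active=$(ps -ef | grep 'php-fpm: pool' | grep -v grep | wc -l)
max=$(awk -F'= *' '/^pm.max_children/ {print $2}' /etc/php/8.3/fpm/pool.d/www.conf)
echo "active=${active} max_children=${max}"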

Check 4: bypass the CDN. If you use Cloudflare or a similar edge proxy, hit the origin IP directly (with a Host header override using curl --resolve) and time the response. If the origin answers fast and the edge returns a 504, the timeout is at the edge, not the origin: cause #4.

You will know it worked when: the origin direct request succeeds in under 10 seconds and only the edge URL returns 504.
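A sketch of the bypass, where 203.0.113.10 stands in for your real origin IP and /failing-path for the URL that 504s:

# Sketch: send the request straight to the origin while keeping the correct Host and TLS name.
curl --resolve example.com:443:203.0.113.10 -s -o /dev/null \
  -w 'status=%{http_code} time=%{time_total}s\n' 'https://example.com/failing-path'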

Solutions, per cause

Cause #1 fix: a long PHP request hit fastcgi_read_timeout

Two paths. The right fix is to make the request shorter. A WordPress page should answer in under a second; an admin task should answer in well under 30 seconds; anything beyond that belongs in WP-Cron, Action Scheduler, or a CLI command. Identify the slow request from the slowlog and fix the underlying code or query.

The temporary fix is to raise the timeout so the work can finish while you fix the real cause. Edit the nginx server block (or, on managed hosting, ask support to apply this and tell them which path needs it):

location ~ \.php$ {
    # ... existing fastcgi_pass and params ...
    fastcgi_read_timeout 300s;   # raise from default 60s
    fastcgi_send_timeout 300s;
}

Reload nginx with nginx -t && systemctl reload nginx. If you raise the nginx timeout, you must also raise PHP-FPM's (see the cause #2 fix below), otherwise FPM kills the worker before nginx gives up and you trade a 504 for a 502.

Verification: trigger the same request that produced the 504. It should now complete and return a 200 in the access log within the new window. The slowlog should still record the request so you do not forget to fix it properly.

Cause #2 fix: the worker hit request_terminate_timeout

Edit your FPM pool config (typically /etc/php/8.3/fpm/pool.d/www.conf) and raise the value:

request_terminate_timeout = 300s

The PHP max_execution_time ini setting is usually shorter and can be raised the same way:

php_admin_value[max_execution_time] = 300

Restart PHP-FPM with systemctl restart php8.3-fpm. The same warning as above applies in reverse: if FPM allows 300 seconds but nginx still cuts off at 60, the worker keeps running but the visitor still sees a 504. Match both numbers.

Verification: the same long request now finishes. The FPM error log no longer contains lines like WARNING: [pool www] child 12345, script '...' execution timed out (61.5 sec), terminating.

Cause #3 fix: the FPM pool was full

The right fix is rarely "add more workers". Adding workers to a saturated pool usually moves the symptom from a 504 to a slow site, because the workers are being held by the same slow query or external API. Read the dedicated article on PHP workers for the structural fix. The short version is:

  1. Find what is holding the workers (slowlog plus a process snapshot during the incident).
  2. Cache that page or endpoint at the edge so it does not hit PHP at all.
  3. Move long jobs out of the request lifecycle into Action Scheduler or WP-CLI.
  4. Only after those, raise pm.max_children if your CPU and RAM headroom allow it (the sizing sketch below gives a quick check).
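A rough way to check that headroom, assuming the Debian-style process name php-fpm8.3: average worker memory times pm.max_children should fit comfortably inside the RAM you can spare for PHP.

# Sketch: average resident memory per FPM process, in MB (includes the master, so treat it as a rough estimate).
ps --no-headers -o rss -C php-fpm8.3 | awk '{sum+=$1; n++} END {if (n) printf "processes=%d avg_rss_mb=%.0f\n", n, sum/(n*1024)}'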

Verification: the active worker count during peak is meaningfully below pm.max_children, and the same URL that 504'd before now returns a 200 in under 2 seconds.

Cause #4 fix: the edge proxy timed out

If you are on Cloudflare, the origin response time is the lever. The 100-second edge timeout is fixed on free, Pro, and Business plans. Either make the origin faster (the right fix) or move long-running endpoints to a background job. If you must process longer than 100 seconds for a specific path, the only sanctioned escape on Cloudflare is to bypass the proxy for that path (set the DNS record to "DNS only" instead of proxied) or move the endpoint to a separate hostname with a different proxy mode.

Verification: the same URL that 504'd through the edge now returns a 200 through the edge, and the origin access log shows a matching 200 with a response time under 100 seconds.

Cause #5 fix: the upstream was unreachable

Check that PHP-FPM is running with systemctl status php8.3-fpm. If it is masked, crashed, or in a restart loop, look at the FPM error log and the OOM killer log (dmesg | grep -i kill). A common pattern on small VPSes is that FPM was killed for memory pressure, started, accepted requests, and was killed again. The fix there is to lower pm.max_children so that a full pool fits in RAM, not to raise it.
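Three checks in one place, assuming PHP 8.3 service names on a systemd host:

# Sketch: is FPM running, was anything OOM-killed, and what did FPM log recently?
systemctl status php8.3-fpm --no-pager
dmesg -T | grep -iE 'out of memory|killed process' | tail -5
journalctl -u php8.3-fpm --since '1 hour ago' --no-pager | tail -20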

Verification: systemctl status php8.3-fpm shows active (running) and the uptime is older than your most recent 504.

When to escalate

If the steps above do not pinpoint the cause within 30 minutes, hand off to your host or developer. Have these ready, because the first thing they will ask for is exactly this list:

  • The exact URL or admin action that triggers the 504.
  • The time of day the error happens, with timezone, and whether it is reproducible or only under load.
  • Your hosting tier and stack (shared, VPS, managed, container; nginx or Apache; PHP version).
  • The matching line from the nginx error log (cause #1 vs #5).
  • The matching backtrace from the PHP-FPM slowlog if one exists.
  • The values of fastcgi_read_timeout, request_terminate_timeout, max_execution_time, and pm.max_children (the sketch below gathers them in one shot).
  • The list of plugins active on the site, especially any that were updated in the last 48 hours.
  • Whether the site is behind Cloudflare or a similar edge proxy, and whether the 504 also happens when you bypass it.

Send those in the first message. It saves a full round trip and routes the ticket straight to the right engineer.
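A sketch that collects the configuration values in one paste-ready block, assuming nginx plus PHP 8.3 on Debian/Ubuntu (adjust paths for your stack):

# Sketch: gather the timeout and pool values your host will ask for.
grep -R 'fastcgi_read_timeout' /etc/nginx/ 2>/dev/null
grep -RE 'request_terminate_timeout|pm.max_children' /etc/php/8.3/fpm/pool.d/ 2>/dev/null
grep -R 'max_execution_time' /etc/php/8.3/fpm/ 2>/dev/null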

How to prevent it from coming back

A persistent 504 is almost always a request that should never have been long in the first place. Three things keep the error rare on a healthy site:

  • Move slow work out of the request. Imports, exports, image regeneration, and any external API call belong in Action Scheduler or WP-Cron, not in a page load. If a request needs more than five seconds, it needs a job queue.
  • Cache uncached endpoints. A 504 in the middle of the night is often a search bot hitting an uncached search results page or an ?s= query. Either cache those URLs at the edge or block them from search crawlers (see the robots.txt sketch after this list).
  • Monitor worker saturation and the slowlog, not just CPU. Hosts that show "all green" on CPU dashboards can still serve 504s because PHP workers are waiting on a database, not on CPU. Worker counts near pm.max_children and entries in the request_slowlog_timeout output are the real signals.
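For the crawler case, a minimal robots.txt sketch, assuming default WordPress search URLs (well-behaved bots honor it; abusive ones need a firewall rule):

# Sketch: keep crawlers off uncached search results.
User-agent: *
Disallow: /?s=
Disallow: /search/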

If every request to your site answers in under one second cold, with no edge cache, a 504 should be impossible during normal operation. Anything else is the system telling you that something is structurally too slow, and the timeout is just where the symptom surfaces.

Want this to stop being your problem?

If outages or errors keep repeating, the fix is often consistency: updates, backups and monitoring that don't get skipped.

See WordPress maintenance
