Time to First Byte (TTFB) is the time between a browser starting a navigation and the first byte of the response arriving back. Google's web.dev guidance defines it exactly that way. On a WordPress site it is the first thing you can measure about a request, and if it is high the rest of the page cannot start loading until the server finishes its work.
What TTFB actually measures
TTFB is not a single phase. It is the sum of everything that happens before the response starts streaming: redirects, DNS lookup, the TCP connection, the TLS handshake, sending the request, the server processing it, and the first byte coming back. On WordPress that server processing is PHP-FPM picking up the request, WordPress booting, plugins loading, database queries running, and the HTML being built.
That bundling matters. Two sites on identical hardware can report very different numbers: one might be doing a 301 redirect over a cold DNS cache, the other might be hitting a cache-warm edge node. In Chrome DevTools, the Network panel's Timing tab labels the server-side portion as "Waiting (TTFB)" and breaks the surrounding phases out separately. "Waiting (TTFB)" in DevTools is specifically the waiting on the server, while the TTFB metric reported by field tools includes the connection phases too.
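To make the bundling concrete, here is a minimal sketch (in Python, with invented example durations) of how the phases sum into the single TTFB figure that field tools report, versus the server-only slice that DevTools labels "Waiting (TTFB)":

```python
# Hypothetical phase durations (ms) for one uncached WordPress request.
# The numbers are invented for illustration; only the structure matters.
phases = {
    "redirects": 0,
    "dns_lookup": 25,
    "tcp_connect": 30,
    "tls_handshake": 45,
    "request_send": 5,
    "server_processing": 620,  # PHP boot + plugins + DB queries + HTML build
    "first_byte_return": 15,
}

ttfb_ms = sum(phases.values())             # what field tools report
waiting_ms = phases["server_processing"]   # the DevTools "Waiting (TTFB)" slice

print(f"TTFB: {ttfb_ms} ms, of which server wait: {waiting_ms} ms")
```

On an uncached request the server_processing entry dominates the sum, which is why the same request can show different "TTFB" figures depending on which slice a given tool reports.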
What counts as a high TTFB
web.dev's current thresholds are concrete:
- Good: 800 ms or less
- Needs improvement: between 800 ms and 1,800 ms
- Poor: more than 1,800 ms
These are targets for the 75th percentile of real user measurements, not a single test run. A site that reports 300 ms locally but 1,200 ms for a visitor three time zones away is "good" in most regions and "needs improvement" in the others at the same time. TTFB is a distribution, not a number.
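Both ideas, the web.dev thresholds and the 75th-percentile framing, fit in a short sketch (the sample values are invented):

```python
import math

def classify_ttfb(ms):
    """Bucket a TTFB value using the web.dev thresholds."""
    if ms <= 800:
        return "good"
    if ms <= 1800:
        return "needs improvement"
    return "poor"

def p75(samples_ms):
    """75th percentile (nearest-rank) of real-user samples:
    the value the metric is actually judged on."""
    ordered = sorted(samples_ms)
    idx = max(0, math.ceil(len(ordered) * 0.75) - 1)
    return ordered[idx]

# Same site, two visitor populations (invented numbers):
nearby = [280, 300, 310, 350]
far_away = [900, 1100, 1200, 1400]
print(classify_ttfb(p75(nearby)))    # good
print(classify_ttfb(p75(far_away)))  # needs improvement
```

The same origin lands in two different bands at once, which is the "distribution, not a number" point in miniature.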
TTFB is also not a Core Web Vital. It is an upstream input to Largest Contentful Paint and First Contentful Paint. If your LCP is good, a TTFB in the "needs improvement" band is not by itself a problem. If your LCP is bad, TTFB is usually the first place to look.
Where the time in a WordPress TTFB actually goes
On an uncached request, almost all of TTFB is server processing. Network and handshake phases typically add 50 to 150 ms from the same continent. Everything above that is WordPress building the page.
- PHP boot and opcode loading. Every request needs WordPress core, every active plugin, and the theme in memory. PHP's OPcache extension stores the compiled bytecode in shared memory, so the compile cost is paid once rather than on every request. An OPcache that is disabled, undersized, or invalidated too often adds hundreds of milliseconds to every TTFB. The article on configuring OPcache correctly for WordPress walks through the directives that matter, a production-ready profile, and the cache-reset discipline deploys need.
- Database queries. A WordPress page build runs dozens of queries against wp_options (see [WordPress autoloaded data in wp_options](/en/knowledge-base/wordpress/performance/wordpress-autoloaded-data-wp-options/) for the dedicated deep-dive), wp_posts, wp_postmeta, and the term tables. A slow query, a missing index, or an N+1 pattern from a plugin turns into TTFB. See the slow database in WordPress article.
- Synchronous external calls. Plugins that contact a remote API during the page build (license checks, analytics, shipping) block PHP until the remote side answers. One slow API adds its entire latency to TTFB.
- Queue time in the PHP-FPM pool. When every worker is busy, the next request waits in a kernel socket queue before a worker even starts on it. That queue time is invisible to WordPress but shows up as raw TTFB. The PHP workers article explains the mechanism.
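The queueing effect in the last bullet can be sketched with a toy model (pure illustration, not PHP-FPM's actual scheduler):

```python
import heapq

def queue_waits(arrival_ms, service_ms, workers):
    """Per-request queue wait (ms) for a fixed-size worker pool.

    Toy model: each worker handles one request at a time and every
    request takes service_ms. Real PHP-FPM behaviour is messier, but
    the mechanism is the same: no free worker means raw TTFB grows.
    """
    free_at = [0.0] * workers            # when each worker next becomes free
    heapq.heapify(free_at)
    waits = []
    for t in arrival_ms:
        soonest = heapq.heappop(free_at)
        start = max(t, soonest)          # queue time if no worker is free
        waits.append(start - t)
        heapq.heappush(free_at, start + service_ms)
    return waits

# Five simultaneous requests, four workers, 500 ms of PHP work each:
# the fifth request spends 500 ms queued before its work even starts.
print(queue_waits([0, 0, 0, 0, 0], service_ms=500, workers=4))
```

That extra 500 ms never appears in any WordPress-level profile, because WordPress has not started running yet; it only shows up in the TTFB the visitor measures.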
On a fully-cached request (Varnish, nginx fastcgi_cache, LSCache, or a CDN edge), WordPress never runs and TTFB collapses to the network phases plus the cache lookup, typically well under 200 ms.
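One way to tell which path a given response took is to look at the cache headers the common stacks emit. This heuristic is a sketch: header names vary per host and CDN, so treat the checked names as assumptions to adapt, not a universal rule.

```python
def looks_like_cache_hit(headers):
    """Rough check for a full-page/edge cache hit from response headers.

    Conventions differ per stack: Varnish and many CDNs use X-Cache,
    LiteSpeed uses X-LiteSpeed-Cache, and a nonzero Age header usually
    means a shared cache served a stored copy.
    """
    h = {k.lower(): str(v).lower() for k, v in headers.items()}
    if "hit" in h.get("x-cache", ""):
        return True
    if h.get("x-litespeed-cache") == "hit":
        return True
    try:
        return int(h.get("age", "0")) > 0
    except ValueError:
        return False

print(looks_like_cache_hit({"X-Cache": "HIT from edge", "Age": "120"}))  # True
print(looks_like_cache_hit({"X-Cache": "MISS", "Age": "0"}))             # False
```

Pairing this check with a TTFB measurement separates "the cache is fast" from "the origin is fast", which matters for every pattern discussed below.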
Why TTFB bundles everything before the first byte
TTFB looks like an odd metric the first time you see it. Why lump DNS and TLS together with server processing? Because from the browser's point of view, that is exactly the wait before anything can happen. The browser cannot parse HTML, run scripts, or paint anything until the first byte arrives. Pre-TTFB time is dead time, whether the delay was a sluggish DNS resolver or a slow database.
This is also why caching dominates WordPress performance conversations. A full-page cache (see how WordPress caching works for the layered model) does not make WordPress faster. It skips WordPress entirely. TTFB drops because the server has less to do, not because it became faster at doing it.
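The "skips WordPress entirely" point can be shown in a few lines. This is a toy in-memory cache, not any real plugin's implementation; the 300 ms sleep stands in for the whole PHP-plus-database build:

```python
import time

page_cache = {}

def slow_build(path):
    """Stand-in for WordPress: PHP boot, plugins, DB queries, HTML render."""
    time.sleep(0.3)                      # pretend 300 ms of server work
    return f"<html>page for {path}</html>"

def handle(path):
    if path in page_cache:
        return page_cache[path]          # hit: WordPress never runs
    html = slow_build(path)              # miss: the full cost lands in TTFB
    page_cache[path] = html
    return html

t0 = time.perf_counter(); handle("/about/"); miss = time.perf_counter() - t0
t0 = time.perf_counter(); handle("/about/"); hit = time.perf_counter() - t0
print(f"miss: {miss*1000:.0f} ms, hit: {hit*1000:.0f} ms")
```

Nothing about slow_build got faster between the two calls; the second request simply never reached it.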
Practical implications for a WordPress site
The distribution of TTFB across a site tells you where to look. A few patterns come up often enough to recognize on sight:
- High on uncached pages, low on cached pages. The cache is doing its job and the bottleneck is in the PHP path. Tuning plugins, database indexes, and the worker pool pays off.
- High on logged-in pages, low on logged-out. The WordPress admin bypasses most caches. If your team lives in wp-admin and the front end is fast, this is expected until it slows to the point of blocking real work.
- High only under load. The server is fast in isolation but cannot hold up under concurrency. Usually PHP worker queue time or database contention. Compare with the high CPU usage article to rule out CPU saturation.
- High everywhere, regardless of page or load. Often a synchronous external API call in the page build, or a broken opcache forcing every request to recompile PHP from scratch.
What TTFB is NOT
Most confusion about "high TTFB" comes from treating it as something it is not.
- Not page load time. TTFB ends when the first byte arrives. HTML parsing, CSS, JavaScript, images, and fonts all still have to happen. A site with a 100 ms TTFB and a 4-second LCP has a front-end problem. A site with a 1,500 ms TTFB and a 2-second LCP has a server problem.
- Not a single number. TTFB varies per route, per cache state, per visitor location, and per load. One number from one synthetic test says almost nothing. For requests that bypass the page cache, a Redis object cache is the concrete fix for that uncacheable path. Field data from real users, broken down by percentile and geography, is the honest picture.
- Not a goal in itself. TTFB is a diagnostic. web.dev explicitly frames it that way: as long as LCP and the other Core Web Vitals are good, a TTFB in the middle band is not something to chase. The reason to care about TTFB is that a high one makes the rest of the metrics harder to achieve.
- Not something a CDN automatically fixes. A CDN drops TTFB for assets it can cache at the edge. For dynamic WordPress responses that bypass the edge cache (logged-in users, carts, checkouts, REST API calls), the CDN mostly adds a hop. The request still has to travel to the origin, run through PHP-FPM, and travel back. Expecting a CDN to rescue a slow WordPress origin on dynamic pages is the single most common misreading of what a CDN does.
- Not server response time alone. The metric includes DNS, TLS, and redirects. A site that chains two redirects before the real page loads carries the full cost of both inside TTFB, even if the server itself is instant.
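The redirect cost in the last point adds up linearly: each hop is a full request/response round trip that lands inside the reported TTFB before the real page is even requested. A back-of-the-envelope sketch with invented latencies:

```python
def ttfb_with_redirects(hop_ms, final_ms):
    """Total TTFB when the browser must follow each redirect hop in
    hop_ms before it can request the real response (final_ms)."""
    return sum(hop_ms) + final_ms

# http -> https -> www: two redirect hops at ~250 ms each,
# then a 400 ms real response. The server is "fast", the TTFB is not.
print(ttfb_with_redirects([250, 250], 400))  # 900
```

Collapsing the chain to a single redirect, or none, is often the cheapest TTFB win available because it requires no server tuning at all.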
Where to go next
If the high TTFB correlates with the admin feeling slow, the slow WordPress admin article covers why admin requests saturate resources first. If the whole site feels slow and you are not sure where the delay lives, the general "WordPress site is slow" article walks through how to narrow down the layer at fault. If the symptom is queueing rather than per-request slowness, the PHP workers article explains the worker pool model that produces TTFB spikes under load, and the PHP-FPM tuning guide walks through how to size and configure the pool correctly.