What is the best value for max_execution_time in PHP?

By default, PHP's maximum execution time is set to 30 seconds of CPU time (time spent in streams and system calls is excluded from this). Since your webserver can only handle a limited number of requests in parallel, this leads to an interesting question for performance and webserver throughput:

Is the maximum execution time of 30 seconds too large or too small?
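For context, the limit comes from the max_execution_time ini setting and can also be changed per script. A minimal sketch, with illustrative values:

    <?php
    // php.ini default: max_execution_time = 30

    // Read the currently active limit.
    echo ini_get('max_execution_time'); // "30" with a default web setup

    // Override it for the current script only; 0 disables the limit entirely.
    set_time_limit(60);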

As so often, there is no one-size-fits-all solution, and my quick Twitter poll suggests the same: several different strategies are all in regular use.

But the winning answer, “Higher to avoid timeouts”, has implications that you should consider.

If your webserver is near its limit of concurrent requests, a single script taking 30 seconds occupies a worker that could have served 300 fast requests of 100ms each in the same time, leading to HTTP 502 errors from the webserver or to queueing delays. But a script aborted because max_execution_time was reached can leave data in an inconsistent state if you are not careful.

In the worst case, someone who wants to harm you can request several long-running scripts in parallel to bring your webserver to its knees and make legitimate customers see 502 errors.

These conflicting scenarios show that a differentiated view is probably the best approach: configure max_execution_time differently for different endpoints and request types.
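One way to do that, assuming nginx in front of PHP-FPM (the locations and values here are only illustrative), is to pass a different PHP_VALUE per location:

    # nginx sketch: a strict limit for the API, a relaxed one for reports
    location /api/ {
        include fastcgi_params;
        fastcgi_param PHP_VALUE "max_execution_time=5";
        fastcgi_pass unix:/var/run/php-fpm.sock;
    }

    location /reports/ {
        include fastcgi_params;
        fastcgi_param PHP_VALUE "max_execution_time=60";
        fastcgi_pass unix:/var/run/php-fpm.sock;
    }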

We recently ran into this problem at Tideways, where a small number of relatively unimportant reporting requests jammed the many important, short API endpoints that accept data.

So there is no single perfect value. Instead, we solved this problem by applying the following heuristics when configuring our execution timeouts:

  1. Read requests (GET, HEAD) now have a lower default timeout, because they usually don’t cause inconsistencies when they are aborted by a timeout fatal error, and because they usually make up a large share of the app’s traffic (see the sketch after this list).

  2. Write requests (POST, …) default to a higher timeout, to avoid inconsistencies when the request is aborted during execution.

  3. We measured the latency of all endpoints, especially the 95th and 99th percentiles, the maximum duration and the number of requests, to get a feeling for what the best default values could be. Tideways itself collects these values per endpoint and over larger timespans.

  4. We increased the timeout for endpoints that can take longer than the default timeout, as long as they are low-traffic endpoints. Otherwise we need to optimize their performance.

  5. We decreased the timeout for endpoints that are fast but account for a very high share of requests, because they could quickly jam the webserver’s resources if their performance degrades.
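Put into code, heuristics 1, 2, 4 and 5 could look like the following front controller sketch; the routes, numbers and the $endpointTimeouts map are made up for illustration:

    <?php
    // Front controller sketch: pick a timeout per request before dispatching.

    // Hypothetical per-endpoint overrides, derived from measured percentiles.
    $endpointTimeouts = [
        '/api/track'      => 3,   // heuristic 5: fast, very high traffic
        '/reports/export' => 60,  // heuristic 4: slow, but low traffic
    ];

    $method = $_SERVER['REQUEST_METHOD'] ?? 'GET';
    $path   = parse_url($_SERVER['REQUEST_URI'] ?? '/', PHP_URL_PATH);

    // Heuristics 1 and 2: low default for reads, higher default for writes.
    $timeout = in_array($method, ['GET', 'HEAD'], true) ? 5 : 30;

    // Endpoint-specific overrides win over the method-based default.
    if (isset($endpointTimeouts[$path])) {
        $timeout = $endpointTimeouts[$path];
    }

    set_time_limit($timeout);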

These are pretty basic heuristics. There are more sophisticated solutions when you need to handle more complex timeout cases:

  1. If you have requests that are often slow and still execute in high numbers, consider running them in their own PHP-FPM pool with its own maximum number of parallel requests. This way they don’t affect other requests when they come in high numbers (see the pool sketch after this list).

  2. If you have endpoints that could write inconsistent data, but you want to limit their execution time to a low timeout anyway, consider writing your own timeout logic to have full control over potential cleanup (see the soft-timeout sketch after this list).

  3. Often slowness is caused by external services (HTTP) or databases (MySQL), in which case we would solve the problem by specifying client-side timeouts that can be handled gracefully (see the client-timeout sketch after this list).
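For the dedicated pool from point 1, a minimal PHP-FPM pool definition might look like this (the pool name, socket path and limits are all illustrative):

    ; pool.d/reports.conf - hypothetical pool for slow reporting requests
    [reports]
    user = www-data
    group = www-data
    listen = /var/run/php-fpm-reports.sock

    ; cap how many slow requests may run in parallel
    pm = static
    pm.max_children = 4

    ; this pool is allowed to run longer than the main one
    php_admin_value[max_execution_time] = 120

The webserver then routes only the slow endpoints to this pool’s socket, so at most four of them run concurrently while the main pool stays available for everything else.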
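For point 2, one possible shape of such self-managed timeout logic is a soft deadline checked at safe points, with the hard max_execution_time kept well above it as a safety net; the connection details, fetchWorkItems() and processItem() are hypothetical placeholders:

    <?php
    // Soft-timeout sketch: stop at a checkpoint we choose, so cleanup
    // (here: a transaction rollback) stays fully under our control.
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret');
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    set_time_limit(30);               // hard safety net
    $deadline = microtime(true) + 10; // our own, lower soft limit

    $pdo->beginTransaction();
    try {
        foreach (fetchWorkItems($pdo) as $item) { // hypothetical work source
            if (microtime(true) >= $deadline) {
                $pdo->rollBack();                 // leave no half-written state
                http_response_code(503);
                exit;
            }
            processItem($pdo, $item);             // hypothetical unit of work
        }
        $pdo->commit();
    } catch (Throwable $e) {
        $pdo->rollBack();
        throw $e;
    }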
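And for point 3, both curl and PDO support client-side timeouts; the URL, credentials and values below are made up:

    <?php
    // HTTP: bound both the connection attempt and the whole transfer.
    $ch = curl_init('https://api.example.com/export'); // hypothetical URL
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 2); // seconds to connect
    curl_setopt($ch, CURLOPT_TIMEOUT, 5);        // seconds for the full request
    $body = curl_exec($ch);

    if ($body === false) {
        // Degrade gracefully instead of burning max_execution_time:
        // serve cached data, queue a retry, show a partial page, ...
        error_log('upstream timeout/failure: ' . curl_error($ch));
    }
    curl_close($ch);

    // MySQL: for pdo_mysql, PDO::ATTR_TIMEOUT bounds the connection attempt.
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret', [
        PDO::ATTR_TIMEOUT => 2,
    ]);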

I will write about each of these individually in the next weeks. If you are interested, make sure to sign up for the newsletter to get notified when the posts are published.

Benjamin 06.12.2016