I asked a question on Twitter about why some of the recommended max_input_time settings seem to be ridiculously large. Some of the defaults I’ve seen have been upwards of 60 seconds. However, after thinking about it I was a little confused as to why a C program (which is what PHP ultimately is) would take so long to process string input.
The reason I was thinking about this was that I was looking for ways to protect PHP from denial-of-service attacks. Timeouts longer than necessary can exacerbate service availability problems, and while I received some responses, none of them contained data.
So I decided to get some data.
I ran the test on a local quad-core VM with about 1GB of memory. So clearly I wasn’t going to be pushing a lot of data through, but it would be enough to figure out what a typical PHP request would need.
I wrote a little test script using the ZF2 HTTP client which would simulate uploading a file, and gathered the elapsed time for sending the request. I then changed it to measure both read time and full request time. Read time only covers the window from when the request had been written to the network to when data came back. Since there was no data coming back, that should only have a small impact on the HTTP processing time.
The script I used was this:
```php
use Zend\Uri\Http;
use Zend\Http\Client;
use Zend\Loader\StandardAutoloader;

require_once 'Zend/Loader/StandardAutoloader.php';
$loader = new StandardAutoloader();
$loader->registerNamespace('Zend', __DIR__ . '/Zend');
$loader->register();

class HttpClient extends Client
{
    public $elapsed;

    protected function doRequest(Http $uri, $method, $secure = false, $headers = array(), $body = '')
    {
        $this->adapter->connect($uri->getHost(), $uri->getPort(), $secure);

        if ($this->config['outputstream']) {
            if ($this->adapter instanceof Client\Adapter\StreamInterface) {
                $stream = $this->openTempStream();
                $this->adapter->setOutputStream($stream);
            } else {
                throw new Exception\RuntimeException('Adapter does not support streaming');
            }
        }

        // HTTP connection
        $startTime = microtime(true);
        $this->lastRawRequest = $this->adapter->write(
            $method,
            $uri,
            $this->config['httpversion'],
            $headers,
            $body
        );
        $result = $this->adapter->read();
        $this->elapsed = microtime(true) - $startTime;

        return $result;
    }
}

for ($i = 0; $i < 200; $i += 20) {
//for ($i = 1; $i < 20; $i += 10) {
    $client = new HttpClient();
    $client->setUri('http://192.168.0.248/');
    $client->setMethod('POST');
    $client->setFileUpload('test.txt', 'somename', str_repeat('a', 1024 * 1024 * $i));
    $client->send();
    echo $i . 'MB took ' . $client->elapsed . "\n";
}
```
The read times for the file uploads were:

0MB took 0.6802020072937
20MB took 0.2431800365448
40MB took 0.015140056610107
60MB took 0.018751859664917
80MB took 0.02366304397583
100MB took 0.027199983596802
120MB took 0.18756008148193
140MB took 0.58918190002441
160MB took 0.62950801849365
180MB took 0.47761011123657
The full request times for each were:

0MB took 0.047544956207275
20MB took 0.10768604278564
40MB took 0.18601298332214
60MB took 0.27659296989441
80MB took 1.966460943222
100MB took 0.4365668296814
120MB took 1.0387809276581
140MB took 0.75083804130554
160MB took 1.340390920639
180MB took 1.0809261798859
But most PHP requests are not file uploads; they are URL-encoded form submissions. So let’s see what happens when we change the data being sent to a form submission.
1MB took 0.048841953277588
11MB took 0.32986307144165
21MB took 0.59214305877686
31MB took 0.66419100761414
41MB took 0.72057294845581
51MB took 0.76613712310791
61MB took 0.82655096054077
71MB took 0.91010904312134
81MB took 0.95742678642273
91MB took 0.99846816062927
101MB took 0.89947819709778
111MB took 0.72254300117493
121MB took 1.5053050518036
131MB took 6.4079310894012
141MB took 8.9290759563446
I stopped the test there because the system started swapping.
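To make the effect of swapping visible, it helps to look at the cost per megabyte rather than the raw totals. A quick, purely illustrative calculation over a few of the numbers from the run above (recomputed here in Python only for convenience; the sizes and times are taken directly from the test output):

```python
# Per-MB cost for selected URL-encoded form submissions from the run above.
# The jump at 131MB/141MB coincides with the ~1GB VM starting to swap.
timings = {
    11: 0.32986307144165,
    51: 0.76613712310791,
    101: 0.89947819709778,
    131: 6.4079310894012,
    141: 8.9290759563446,
}

per_mb = {mb: t / mb for mb, t in timings.items()}

for mb, cost in sorted(per_mb.items()):
    print(f"{mb}MB: {cost * 1000:.1f} ms/MB")
```

Up to about 101MB the per-MB cost stays roughly flat (and even drops as fixed overhead amortizes), then increases several-fold once the working set no longer fits in memory.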
*note* As you can tell from the times, there was a lot of other activity on the system causing significant variations in response time. You can expect a system under load to show similar variations.
So there are a couple of things we learned here.
- If your system handles simple HTTP requests (no file uploads or enormous form submissions), a max_input_time of 1 second should be sufficient, unless you are under significant load
- multipart/form-data processing seems to be MUCH more efficient than URL encoding from a memory usage standpoint (I was not expecting this)
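If your traffic matches the first bullet, the relevant php.ini directives can be tightened accordingly. This is only a sketch, not a universal recommendation; the specific values here are illustrative and should be tuned against measurements from your own workload:

```ini
; Seconds PHP may spend parsing request input (POST data, uploads).
; 60 is a common default; the tests above suggest simple requests
; finish parsing in well under 1 second.
max_input_time = 5

; Cap request body sizes so huge URL-encoded posts can't tie up memory.
; upload_max_filesize must be <= post_max_size for uploads to work.
post_max_size = 8M
upload_max_filesize = 2M
```

Limiting post_max_size matters here because, as the second batch of numbers shows, URL-encoded bodies get disproportionately expensive as they grow.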
*note* If you’re wondering why the second batch started at 1MB, it’s because of this change in the testing code:
```php
$client->setParameterPost(
    array(
        'test1' => str_repeat('a', 1024 * 1024 * ($i / 2)),
        'test2' => str_repeat('b', 1024 * 1024 * ($i / 2))
    )
);
```
Clearly I could not start at zero.