Segmentation fault due to stack overflow in PHP's Garbage Collector (GC) #75
Comments

> thanx for your reporting... it could help people.... just insert

@pk-fr:

> but it did not work: with max. stack size at 8192 (the standard value), I got a segmentation fault at the exact same place as before. It's as if the gc_collect_cycles() function did not have any effect at all...

> Thanx for your testing....
NOTE: The following is not a bug (although one can use it as an invitation to think about ways to reduce resource consumption in yakpro-po), so feel free to close it whenever you like. You might, however, want to include some of its information in your documentation, or README, to prepare users who obfuscate large projects.
Problem
Trying to obfuscate ~5000 PHP files of ~1000 lines each, yakpro-po stopped after processing ~1600 files with a simple (and frustrating)
Segmentation fault
No other messages were printed, apart from two lines in syslog.
However, rerunning yakpro-po would continue from the file where it had previously stopped, as if nothing had happened, for another 1500-1600 files, then stop at the next segmentation fault. A third run would continue from there to the end. The files produced this way were unusable, though: the information that yakpro-po normally saves in its own directories (the translation tables and the like) was lost in each segfault, so every run started the obfuscation from scratch, only with a different "start file" each time. This indicated that the problem was insufficient memory rather than anything else.
But the value of memory_limit in the php.ini file of PHP CLI (which is different from the one for PHP on the web server!) was high enough:
memory_limit = 4096M
and PHP did not complain about it, as it had previously with much lower settings:
PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 421888 bytes) in /usr/local/bin/yakpro-po/include/functions.php on line 391
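To confirm which limit actually applies, you can ask the CLI itself; a minimal check, assuming `php` is on the PATH (it prints a fallback message otherwise):

```shell
# Print the memory_limit seen by the PHP *CLI* (it reads its own php.ini,
# not the web server's). Degrades gracefully if php is not installed.
if command -v php >/dev/null 2>&1; then
    php -r 'echo "memory_limit = ", ini_get("memory_limit"), PHP_EOL;'
    php --ini | head -n 2   # shows which php.ini file the CLI actually loads
else
    echo "php not found on PATH"
fi
```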
Debugging
I was thus confronted (for the first time) with the question:
How is one supposed to debug segmentation faults on the PHP CLI?
I found the article Debugging Segfaults in PHP helpful: for the PHP CLI, start php from gdb with
gdb php
and, inside the gdb shell, run your script with your options, e.g.
run /usr/local/bin/yakpro-po original-dir -o destination-dir
When the segfault happens, gdb gives you the opportunity to type commands. Type:
bt
for 'backtrace'.
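The same session can also be run non-interactively; a sketch, assuming gdb is installed (the yakpro-po invocation is the one from above, adjust paths to your setup):

```shell
# Run php under gdb in batch mode: 'run' starts the script, and on a crash
# 'bt' prints the backtrace automatically, with no interactive shell needed.
# Guarded so the snippet degrades gracefully when gdb is absent.
if command -v gdb >/dev/null 2>&1; then
    gdb -batch -ex run -ex bt \
        --args php /usr/local/bin/yakpro-po original-dir -o destination-dir
else
    echo "gdb not installed"
fi
```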
Although I had not compiled PHP with debug support, this was enough to point me in the right direction about the cause of the segfault.
Reason
I had already put various 'echo's in place, in yakpro-po.php and (mainly)
include/functions.php
From these, it was clear that the problem occurred inside the call to $traverser->traverse in the latter:
$stmts = $traverser->traverse($stmts);
The backtrace command in gdb showed more than 100,000 stack frames of the same few calls repeating, ending in functions of PHP's garbage collector. gc stands for 'garbage collector', so there was obviously a memory problem there. Looking at Segfault in garbage collector brought the breakthrough, namely the solution. :-)
Solution
This is a stack overflow in the garbage collector. The solution is to increase the stack size limit. To see your current limit, type
ulimit -s
I had 8192, obviously totally undersized for a task of this size. Change this to something more appropriate, say
ulimit -s 102400
and retry - the segmentation fault is gone! :-)
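Note that ulimit -s sets the soft limit of the current shell only (values are in KiB) and is inherited by its child processes, so yakpro-po must be started from the same shell. A quick check-and-raise sequence, with an illustrative fallback when the hard limit is lower:

```shell
# Show, raise, and verify the stack size soft limit (values in KiB).
ulimit -s                     # current soft limit, e.g. 8192
ulimit -s 102400 2>/dev/null \
    || echo "hard limit too low; raise it as root or via /etc/security/limits.conf"
ulimit -s                     # verify the new value before the long run
```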