Hi,

I cannot reproduce the issue in S_finalize_op on blead[1]. It is
possible that it has been "fixed" somehow (or we got different
architectures).

On the other hand, I can reproduce the crash caused by:

    perl -e'eval "sub{".q"$a+"x shift . "}"' 500000

I have attached a prototype patch that solves the problem (at least for
me). With the patch applied, I can run it with an argument of up to at
least 2M without it crashing - I have not tested beyond that, because
perl starts to use >= 4GB of RAM around the 2M mark. A more memory
friendly test case is definitely welcome. :)

For the reviewer(s): the patch is much easier to read with whitespace
changes ignored[2]. It re-uses the basic principle of the DEFER macro
used in the peephole optimizer; a standalone sketch of the general idea
is included at the end of this mail. The "major" difference is that it
uses a fixed-size stack rather than a fixed-size queue. The size of the
stack did not seem to matter much (even with MAX_DEFERRED reduced to 2,
the crash in the original test case was avoided).

The test suite showed no regressions; perl was configured with:

    ./Configure -des -Dusedevel

~Niels

[1] I have been using the code suggested by Dave Mitchell (rewritten as
a perl one-liner):

    ./perl -e 'my $code = q[my $i = 0; if ($i) { print } ] . "\n";' \
           -e '$code .= q[ elsif ($i) { print } ] . "\n" for 1..$ARGV[0];' \
           -e 'eval $code' 10000

[2] Compare

    $ git diff --ignore-all-space --stat
     op.c | 37 [...]
     1 file changed, 34 insertions(+), 3 deletions(-)

versus

    $ git diff --stat
     op.c | 141 [...]
     1 file changed, 86 insertions(+), 55 deletions(-)
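
P.S. For anyone who wants to see the shape of the idea without digging
through op.c: below is a small standalone C program illustrating the
general recursion-to-explicit-stack principle. It is only my sketch of
the technique - the names, the simplified two-pointer tree and the
"recurse only when the small array is full" overflow handling are all
illustrative and are not taken from the actual patch.

    /* deferred_walk.c - standalone sketch, NOT the patch itself.
     *
     * Visit every node of a tree linked the way perl op trees are
     * (a "first kid" pointer plus a "next sibling" pointer) without
     * letting the C stack grow with the depth of the tree.  Kids are
     * normally remembered in a small fixed-size array; only if that
     * array is full do we fall back to a recursive call.
     */
    #include <stdio.h>
    #include <stdlib.h>

    #define MAX_DEFERRED 2              /* deliberately tiny */

    struct node {
        struct node *first_kid;         /* roughly op_first   */
        struct node *sibling;           /* roughly op_sibling */
    };

    static long visited;                /* nodes seen by visit_one() */

    static void visit_one(struct node *n)
    {
        (void)n;
        visited++;                      /* stand-in for the per-op work */
    }

    /* Visit n and everything below it. */
    static void visit_subtree(struct node *n)
    {
        struct node *deferred[MAX_DEFERRED];
        int ndeferred = 0;

        while (n) {
            struct node *kid;

            visit_one(n);

            /* Remember the kids for later instead of recursing. */
            for (kid = n->first_kid; kid; kid = kid->sibling) {
                if (ndeferred < MAX_DEFERRED)
                    deferred[ndeferred++] = kid;
                else
                    visit_subtree(kid); /* rare fallback: recursion */
            }

            n = ndeferred ? deferred[--ndeferred] : NULL;
        }
    }

    int main(int argc, char **argv)
    {
        long depth = argc > 1 ? atol(argv[1]) : 1000000;
        struct node *root = NULL;
        long i, total = 0;

        /* Build a left-deep chain resembling the op tree of
         * "$a+$a+...+": every level has a deep first kid and a single
         * leaf sibling.  (Freeing the nodes is omitted for brevity.) */
        for (i = 0; i < depth; i++) {
            struct node *add  = calloc(1, sizeof *add);
            struct node *leaf = calloc(1, sizeof *leaf);
            if (!add || !leaf) {
                perror("calloc");
                return 1;
            }
            if (root) {
                add->first_kid = root;
                root->sibling  = leaf;
            }
            else {
                add->first_kid = leaf;  /* bottom of the chain */
            }
            root = add;
            total += 2;
        }

        visit_subtree(root);
        printf("visited %ld of %ld nodes\n", visited, total);
        return visited == total ? 0 : 1;
    }

For the degenerate chain built in main() the deferred array never holds
more than two entries at a time, so the recursive fallback is never
taken and the C stack stays flat no matter how deep the chain is - which
is consistent with MAX_DEFERRED = 2 being enough for the $a+ test case.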