From: Steffen Mueller
February 28, 2013 17:12
Message ID: 512F8FCA.firstname.lastname@example.org
On 02/27/2013 10:33 PM, Steffen Mueller wrote:
> On 02/27/2013 09:59 PM, chromatic wrote:
>> On Wednesday, February 27, 2013 08:50:59 PM Nicholas Clark wrote:
>>> Yes, I thought this. I certainly tried it for one of the ops a long time
>>> ago. I hit exactly the problem you did - I couldn't measure the difference.
>> The only worthwhile approach I've ever found is to count the instructions
>> executed by callgrind. Even that varies--especially as it's
>> instructions for a
>> fake processor--but dramatic improvements make themselves visible.
> That and running things in a hot loop in a separate process over and
> over again, then looking at a histogram of the results. Very
> unscientific, but again, significant things tend to stand out. Human
> intuition (particularly when trained a bit) is quite good at spotting
> such differences.
> Anyway: Callgrind not (yet) done in this case. But any difference is
> still going to be small. Callgrind probably also doesn't really
> highlight cost of branch (mis-)prediction fairly, does it?
> Actually, on this one, maybe cachegrind is at least as interesting.
I ran callgrind and cachegrind on some very simple bits of code.
Obviously, the effect isn't huge, but callgrind claims 1/3 fewer
instructions executed for each pp_padsv_nolv vs. pp_padsv. Cachegrind
shows no change in the relative number of cache misses. Of course,
there are still padsv calls in the modified perl (all lvalues). The
overall improvement across all padsv/padsv_nolv calls is 20% fewer
instructions executed. Obviously, padsv - despite being hot - is bound
to make only so much of a difference to the overall runtime.
By the way: did you know that pp_add is a pretty heavy OP? That's due
to the PERL_PRESERVE_IVUV logic for switching underlying types in
overflow situations while retaining maximum accuracy.
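The gist of that type-switching logic can be sketched in C: try the integer addition first, and only fall back to floating point when the integer result would overflow. This is a rough illustration of the concept only, not perl's actual pp_add, which also has to juggle IV/UV sign combinations; the struct and function names here are made up.

```c
#include <stdint.h>

/* Toy model of the IV-preservation idea: keep the result as an
 * integer when the addition doesn't overflow, otherwise switch the
 * underlying representation to a double. */
typedef struct {
    int     is_iv;  /* 1 if the result still fits a 64-bit integer */
    int64_t iv;     /* valid only when is_iv is set */
    double  nv;     /* always set */
} num_t;

static num_t add_preserving(int64_t a, int64_t b) {
    num_t r;
    int64_t sum;
    /* gcc/clang builtin: returns nonzero if a + b overflows. */
    if (!__builtin_add_overflow(a, b, &sum)) {
        r.is_iv = 1;
        r.iv = sum;
        r.nv = (double)sum;
    } else {
        r.is_iv = 0;
        r.iv = 0;
        /* Lose low-bit precision but keep the magnitude right. */
        r.nv = (double)a + (double)b;
    }
    return r;
}
```

The branch on the overflow check is exactly the kind of extra work that makes the real pp_add heavier than it looks, and why branch prediction costs came up earlier in the thread.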
I wonder whether there are situations in which we know at compile time
that this logic is not needed. Probably not.