
Re: Optimised lexical assignments / faster macroöps

From: Paul "LeoNerd" Evans
Date: December 2, 2021 15:56
Subject: Re: Optimised lexical assignments / faster macroöps
Message-ID: 20211202155556.10ea1a78@shy.leonerd.org.uk
On Wed, 6 Oct 2021 12:18:09 +0100
"Paul \"LeoNerd\" Evans" <leonerd@leonerd.org.uk> wrote:

> Except, now this leads me onto the larger question. There's nothing
> *particularly* specific to lexical assignments about this. I'm sure
> similar optimisations could be argued about for many other situations
> in core perl.

A few other situations that come to mind:

  * Giving OP_CONST the OA_TARGMY flag; so situations like

       $idx = 1;

    could optimise away the OP_PADSV and OP_SASSIGN into a single
    void-context OP_CONST with a TARG set in it (see the B::Concise
    one-liners after this list for the op sequence involved).

  * Permitting (somehow? - no idea how) OA_TARGMY to also apply in
    LVINTRO situations, so that every case where OA_TARGMY can apply
    would also cover declarations like

       my $idx = 1;

  * Extending the "op_targ" concept into the incoming argument. I
    notice that a few ops currently abuse that field, which is supposed
    to be about the "target" (i.e. output destination) of an op, to
    really encode where its source argument comes from. A few ops have
    the OPf_STACKED flag optionally set, and if absent it means they
    take their incoming argument from the pad in op_targ.

    This would add a new field to every UNOP to store the pad offset
    of its incoming argument, and then optimise every UNOP whose
    op_first is a PADSV into having no KID and putting the pad offset
    in that field instead. I suspect this would apply in many more
    cases in real code than the current OA_TARGMY optimisation does
    now, and would give benefits to optree size and execution time, at
    the small cost of memory for every UNOP. This one would need some
    more in-depth benchmarking to make a case justifying it, but if we
    had some good benchmarking test cases it could show a good win.
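For reference, the op sequences these ideas would collapse are easy to
inspect with B::Concise, which ships with core perl. These one-liners
are purely illustrative; -exec prints the ops in execution order:

  # Plain assignment to an existing lexical: look for the
  # const / padsv / sassign sequence that an OA_TARGMY-style
  # optimisation on OP_CONST would fold into one targeted const
  perl -MO=Concise,-exec -e 'my $idx; $idx = 1'

  # The LVINTRO variant, where the padsv carries the LVINTRO flag
  perl -MO=Concise,-exec -e 'my $idx = 1'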

> I wonder if there is scope for creating a general way to do this sort
> of thing, and a way to measure the performance boost you get from
> doing that in any reasonable workflow, to judge whether it's worth
> doing that.

So yes, I want to reïterate this part of the question here. I can
easily write code that "eh, might be better", but I'd want real-life
test cases to measure with benchmarks, to demonstrate "this change
makes real programs X% faster" for some X, so we can evaluate the
magnitude of the improvement versus the additional code complexity
costs.

I would really like people to supply some interesting test cases. Right
now for my Faster::Maths module, I have a single, small script that
attempts to generate Julia fractals, to demonstrate that the module
makes it run about 30% faster:

  https://metacpan.org/release/PEVANS/Faster-Maths-0.02/source/t/95benchmark.t
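For anyone wanting to contribute such a case, a self-contained script
that times a representative hotspot with the core Benchmark module is
plenty. The sketch below is only illustrative (the workload, iteration
counts and sub names are made up), and it assumes Faster::Maths applies
lexically to code compiled within its scope, as its documentation
describes:

  use strict;
  use warnings;
  use Benchmark qw(cmpthese);

  # The same numeric-heavy inner loop twice: once plain, once compiled
  # under the Faster::Maths pragma, so cmpthese can compare the two.
  sub plain_iter {
      my ( $zr, $zi, $sum ) = ( 0.1, 0.1, 0 );
      for ( 1 .. 1000 ) {
          ( $zr, $zi ) = ( $zr*$zr - $zi*$zi - 0.1, 2*$zr*$zi + 0.15 );
          $sum += $zr + $zi;
      }
      return $sum;
  }

  sub faster_iter {
      use Faster::Maths;   # assumed to optimise the maths ops in this scope
      my ( $zr, $zi, $sum ) = ( 0.1, 0.1, 0 );
      for ( 1 .. 1000 ) {
          ( $zr, $zi ) = ( $zr*$zr - $zi*$zi - 0.1, 2*$zr*$zi + 0.15 );
          $sum += $zr + $zi;
      }
      return $sum;
  }

  # Run each for at least 3 CPU seconds and print a comparison table
  cmpthese( -3, {
      plain  => \&plain_iter,
      faster => \&faster_iter,
  } );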

In summary: 

  If you want perl to run faster, send me examples of your slow
  programs so I can optimise them :)

-- 
Paul "LeoNerd" Evans

leonerd@leonerd.org.uk      |  https://metacpan.org/author/PEVANS
http://www.leonerd.org.uk/  |  https://www.tindie.com/stores/leonerd/


