
Re: Optimised lexical assignments / faster macroöps

From:
Paul "LeoNerd" Evans
Date:
June 6, 2022 10:44
Subject:
Re: Optimised lexical assignments / faster macroöps
Message ID:
20220606114413.31afc2a2@shy.leonerd.org.uk
On Thu, 2 Dec 2021 15:55:56 +0000
"Paul \"LeoNerd\" Evans" <leonerd@leonerd.org.uk> wrote:

> A few other situations that come to mind:
> 
>   * Giving OP_CONST the OA_TARGMY flag; so situations like
> 
>        $idx = 1;
> 
>     Could optimise away the OP_PADSV and OP_SASSIGN into a single
>     void-context OP_CONST with a TARG set in it. 
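
For reference, the optree that this proposal is aiming at looks
roughly like this, assuming $idx is a lexical declared earlier
(B::Concise output paraphrased from memory; the exact rendering
varies between perl versions):

    $ perl -MO=Concise,-exec -e 'my $idx; $idx = 1'
    1  <0> enter
    2  <;> nextstate(main 1 -e:1) v:{
    3  <0> padsv[$idx:1,2] vM/LVINTRO
    4  <;> nextstate(main 2 -e:1) v:{
    5  <$> const[IV 1] s
    6  <0> padsv[$idx:1,2] sRM*
    7  <2> sassign vKS/2
    8  <@> leave[1 ref] vKP/REFC

The hope was that the peephole optimiser could fuse ops 5 to 7 into a
single void-context OP_CONST whose op_targ points at $idx's pad slot.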

Well... This did not go well.

Having made this the subject of my latest TPF grant proposal[1], I
hadn't got very far into implementing it before I hit upon a huge
snag: OP_CONST is an SVOP, and so on threaded builds of perl the
op_targ field is (ab)used to store not the target of the op's result
but the source SV holding the actual constant.

Hrm.
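
Concretely, the constant-fetching macro in op.h already has to paper
over this; it reads roughly as follows (paraphrased from memory, so
see op.h for the exact form):

    #ifdef USE_ITHREADS
    #  define cSVOPx_sv(v)  (cSVOPx(v)->op_sv \
                             ? cSVOPx(v)->op_sv : PAD_SVl((v)->op_targ))
    #else
    #  define cSVOPx_sv(v)  (cSVOPx(v)->op_sv)
    #endif

On a threaded build the constant SV lives in the pad and op_targ
records its offset, so there is no spare field left in which to record
an assignment target as well.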

>   * Permitting (somehow? - no idea how) OA_TARGMY to also apply in
>     LVINTRO situations, meaning that every case where OA_TARGMY might
>     apply would also apply in situations like
> 
>        my $idx = 1;

That clearly makes this a non-starter as well.

>   * Extending the "op_targ" concept into the incoming argument. I
>     notice that a few ops currently abuse that field, which is
>     supposed to be about the "target" (i.e. output destination) of an op,
>     to really encode where its source argument comes from. A few ops have
>     the OPf_STACKED flag optionally set, and if absent it means they
>     take their incoming argument from the pad in op_targ.
> 
>     This would add a new field to every UNOP to store the pad offset
>     of its incoming argument, and then optimise every UNOP whose
>     op_first is a PADSV into having no KID and putting the pad offset
>     in that field instead. I suspect this would apply in many more
>     cases in real code, than the current OA_TARGMY optimisation does
>     now, and would give benefits to optree size and execution time, at
>     the small cost of memory for every UNOP. This one would need some
>     more in-depth benchmarking to make a case justifying it, but if we
>     had some good benchmarking test cases it could show a good win.

I therefore wonder whether this part might need to be done first: a
somewhat more radical overhaul of the op system that tidies up the
meaning of op_targ to really mean the "target" (i.e. output
destination) of every op, and never the source, even in UNOP or SVOP
cases. Having done *that*, an assigning form of OP_CONST could then
use both fields in this case.
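
The existing "abuse" referred to above is the pattern at the top of
ops like pp_trans and pp_subst, which goes roughly like this
(paraphrased, not a verbatim copy of the pp code):

    SV *sv;
    if (PL_op->op_flags & OPf_STACKED)
        sv = POPs;                    /* argument pushed by a kid op */
    else if (PL_op->op_targ)
        sv = PAD_SV(PL_op->op_targ);  /* lexical arg fetched from the pad */
    else
        sv = DEFSV;                   /* default to $_ */

Here op_targ names the op's *source* operand rather than an output
target, which is exactly the overloading such a tidy-up would have to
untangle.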

In addition, I suspect I might have a go at the OP_PAD{S,A,H}V_STORE
ops I previously mentioned in the first mail of this thread, as a more
general way to optimise these assignments.
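
Schematically (using an invented Concise-style rendering, since no
such op exists yet), the const/padsv/sassign triple shown earlier

    5  <$> const[IV 1] s
    6  <0> padsv[$idx:1,2] sRM*
    7  <2> sassign vKS/2

would become something like

    5  <$> const[IV 1] s
    6  <1> padsv_store[$idx:1,2] vKS

which sidesteps the question of where OP_CONST keeps its target,
because the store op carries the pad offset itself.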


> In summary: 
> 
>   If you want perl to run faster, send me examples of your slow
>   programs so I can optimise them :)

Still very much looking for this. So far I've received... er... nothing.
Can I conclude from that that everybody thinks perl is already quite
fast enough, and that not a single person has a single program they'd
ever want to run faster?

For me I know that's not true, so unless I receive any other examples
from anyone I'm going to use my `sdview` program[2], parsing
complicated POD or nroff documents and rendering manpages from them.
This perhaps feels like an odd thing to optimise perl for, but I don't
currently have any other examples ;)

-- 
Paul "LeoNerd" Evans

leonerd@leonerd.org.uk      |  https://metacpan.org/author/PEVANS
http://www.leonerd.org.uk/  |  https://www.tindie.com/stores/leonerd/


