develooper Front page | perl.perl5.porters | Postings from March 2011

Re: [perl #82110] Still "keys" performance regression between 5.10.1 and 5.12.3-RC1

From:
demerphq
Date:
March 12, 2011 13:18
Subject:
Re: [perl #82110] Still "keys" performance regression between 5.10.1 and 5.12.3-RC1
Message ID:
AANLkTim3QHgw-5T-rTqO+G=yGFg4QkZu0BcU9o6NWxYT@mail.gmail.com
On 12 March 2011 21:58, Dave Mitchell <davem@iabyn.com> wrote:
> On Sat, Mar 12, 2011 at 01:51:46PM +0100, demerphq wrote:
>> Am I right in understanding that our position is that a 30% slowdown
>> in "my @array= list " is an acceptable cost
>
> Note it's only these two
>
>    my @array = @$arrayref;
>    my %hash  = %$hashref;
>
> that get the pessimisation, and not the more general versions of
>
>    my @array = something;
>    my %hash  = something;
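The fused versus split forms Dave describes can be compared directly with a small Benchmark sketch (sizes, iteration counts, and labels are mine, purely for local comparison between perl builds):

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

my $arrayref = [ 1 .. 500 ];
my $hashref  = { map { $_ => $_ } 1 .. 500 };

cmpthese( 10_000, {
    # fused declaration-plus-assignment: the two pessimised forms
    fused_array => sub { my @array = @$arrayref; },
    fused_hash  => sub { my %hash  = %$hashref;  },
    # declare first, then assign: the general list assignment, unaffected
    split_array => sub { my @array; @array = @$arrayref; },
    split_hash  => sub { my %hash;  %hash  = %$hashref;  },
} );
```

Running this under 5.10.1 and 5.12.x side by side would show whether only the fused pair slowed down.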

So then this doesn't explain:

my @array= keys %$hash;

being slower? Isn't that what this thread/bug is about?
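The construct in question can be timed the same way (a minimal sketch; the hash size and count are arbitrary placeholders, not taken from the ticket's benchmark):

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

my $hash = { map { $_ => 1 } 1 .. 1000 };

cmpthese( 10_000, {
    # the form from the bug report: my-declaration fed directly from keys
    fused => sub { my @array = keys %$hash; },
    # declaration moved up, then a plain list assignment
    split => sub { my @array; @array = keys %$hash; },
} );
```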

>> I personally do not think that this is so clear cut, given that they
>> can do this:
>>
>>      my $r;
>> again:
>>      my @x;
>>      @x = @$r; # *** common assignment on 2nd attempt!
>>      print "@x\n";
>>      @x = (1,2,3);
>>      $r = \@x;
>>      goto again unless $i++;
>>
>> And have it work correctly?
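For reference, a strict-safe version of that snippet (adding the `my $i` it assumes, guarding the first dereference, and recording what @x holds after each assignment) exercises the same common-assignment path:

```perl
use strict;
use warnings;

my @seen;           # what @x held right after each pass's assignment
my $i = 0;
my $r;
again:
my @x;
@x = @$r if $r;     # 2nd pass: $r refers to @x itself -- a common assignment
push @seen, "@x";
@x = (1, 2, 3);
$r = \@x;
goto again unless $i++;

print "pass 1: '$seen[0]', pass 2: '$seen[1]'\n";
```

On the second pass @x still holds its elements when the `my` re-executes, and `@$r` aliases @x itself, which is exactly why the assignment must be treated as common.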
>
> I'm not sure what point you're trying to make here. In that code, the
> @x = @$r is pessimised, and always has been. The only change made in 5.12.0
> onwards is that formerly, it was assumed that code starting with a my
> declaration, i.e.
>    my @x = ...
> could always be optimised, because it was impossible at that point for @x
> to have any elements. FC's closure example, and then my goto example, both
> demonstrate that it is possible, at the point of executing the PADAV/INTRO,
> for @x to already have gained elements. Thus we now recognise that there
> are some cases where this assumption isn't true.
>
>> I have not fully understood FC's closure case, so maybe this is
>> an easier trap to fall into than I think, but right now I'm thinking
>> this is a pessimisation that is much akin to make $1 "safe" after a //g
>> in scalar context, a case where we decided that simply saying "don't do
>> that" was better than making every //g match slower.
>>
>> IOW, I'm not sure that sacrificing performance on a very common
>> construct to make what seems to me to be a pretty unusual edge case work
>> properly is a good trade-off, especially given that there is an easy
>> workaround.
>
> I don't understand what you mean by a workaround.

By workaround I meant that moving the "my" one line up -- declaring the
variable first and assigning to it in a separate statement -- "makes it
work ok".
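A minimal sketch of that workaround (variable names and data are mine):

```perl
use strict;
use warnings;

my $arrayref = [ 1 .. 5 ];

# fused form: declaration and assignment in one statement (the pessimised case)
my @fused = @$arrayref;

# workaround: move the "my" one line up, then do a plain list assignment
my @split;
@split = @$arrayref;

print "@fused\n";   # 1 2 3 4 5
print "@split\n";   # 1 2 3 4 5
```

Both produce identical copies; only the fused form was subject to the extra check.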

>> While I don't think I'd go so far as to say that choosing speed over
>> correctness is right here, I have to admit I'm tempted, and I do think
>> that at the very least we should keep this bug open.
>
> We're not talking speed over correctness here. We're talking speed over
> panics and crashing.

Yes, the panic is why I was being so mealy-mouthed about things.

Thanks for explaining things.

cheers,
Yves


-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


