
Re: Make built-in list functions continuous

From: wren argetlahm
Date: April 2, 2008 03:02
Subject: Re: Make built-in list functions continuous
Message ID: 47F2FFB0.8040903@cpan.org

Nicholas Clark wrote:
> In the general case, I think that that would be better as
> 
>     foreach (@ys) {
>         {
>           p1($_);
>           p2($_);
>           ...
>         }
>         if (pn($_)) {
>             f1($_);
>             f2($_);
>             ...
>             push @xs, fn($_);
>         }
>     }
> 
> so that the scope of the grep is exited before the scope of the map is
> entered, because that preserves at least some of the code execution ordering
> of the current example.

Sure sure. I was just giving the general idea.

> > Because of this ugliness, the rewrite needs to take place within the 
> > parsing/compilation pass in order to have access to the AST.
> 
> Which gets tricky, because there isn't an AST in the perl 5 interpreter.
> (As I understand it, and this is not my strong point) the tokeniser and
> lexer spit out an optree directly, having already done some transformations.
> Further necessary transformations are performed by the same functions as
> are actually also performing some true optimisations (eg the ck_* functions
> that "check" ops as they are built, and the peephole optimiser)
> Constant folding is performed as the optree is assembled.

That does make things tricky (though an optree *is* an AST, albeit less 
malleable than standard intermediate ASTs). If the tokenizer/lexer is 
already doing some transformations, then it should be possible to add in 
a few more. That is, assuming they have a large enough window to see the 
whole statement before spewing anything out. I've done some XS but I 
haven't really looked into the interpreter itself.

> > Granted, that's something of a contrived example, but you get the idea. 
> > There's also a related optimization which takes the composition of maps 
> > as the map of composition. I.e. these two lines are equivalent, but the 
> > latter is more efficient:
> >
> >     @xs = map {f($_)} map {g($_)} @ys;
> >     @xs = map {$_ = f($_); g($_)} @ys;
> 
> I think you made a typo and meant
> 
>      @xs = map {$_ = g($_); f($_)} @ys;

Indeed.

As you say, the fact that functions return by pushing things onto the 
Perl stack does complicate things greatly. (In Haskell, lists are 
treated as whole objects, more like \@xs, so this issue doesn't arise 
there.) I wrote the example under the assumption that `g` was returning 
a scalar just to make the point.

The general rewrite for when the first function may return a list/array 
for each input is the nested foreach-loops you arrived at. It still 
builds up a temporary list, but it's a shorter one at least (I think I 
talked about this later on in the general discussion of continuous 
functions and stream processing).
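
To make the shape concrete, here's a minimal sketch of that general 
rewrite, with `f` and `g` standing in for arbitrary code (g may return 
a list for each element):

     # original:  @xs = map {f($_)} map {g($_)} @ys;
     my @xs;
     foreach my $y (@ys) {
         # g's results for this one element are the only temporary list
         foreach my $t (g($y)) {
             push @xs, f($t);
         }
     }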

> > threads and pass messages between them; while nicely parallel for 
> > multi-core folks, usually the overhead is too much unless the functions 
> > are doing some heavy computations (using a thread pool can reduce the 
> > overhead somewhat). Yet another approach is to use continuation passing 
> 
> I don't think that this is viable, as Perl allows the functions called to
> manipulate global state. If those functions were in separate threads, then
> that means that global state has to be shared, which ithreads can't do (and
> 5005threads failed), so a pre-requisite for this is solving threading; not
> a trivial problem.

I think this implementation option should only be considered for a 
language that's already doing lots of (micro)threading for parallelism 
(i.e. not Perl5). I only mentioned the threading and lazy list 
implementations to demonstrate that TMTOWTDI. (Continuation passing is 
also mostly TMTOWTDI, though for other compilers/interpreters that 
aren't stack-based it can be a big win.)

> > overhead somewhat). Yet another approach is to use continuation passing 
> > in order to capture the flow of control; how nice this approach is 
> > depends on the implementation of closures. And of course there are other 
> > approaches as well.
> 
> I'm not sure that this is viable, as transparently taking a continuation from
> inside a function means deciding and implementing the semantics of the
> interaction between lexical and dynamic scopes
> 
>     sub foo {
>         local $bar::baz = glurpp();
>         ...
>         foreach (@_) {
>              # presumably we might take a continuation here
>              ...
>         }
>     }
> 
> what happens to the value of $bar::baz whilst we temporarily jump outside the
> function foo()?

I generally avoid dynamic scoping, but how would the following be treated?

     sub bif {
         local $baz = glurpp($baz);   # $baz is a dynamically scoped package variable

         my $f = sub {
             my ($bif_baz) = @_;                # cache the value handed in...
             return sub { ...$bif_baz... };     # ...for the returned closure to use
         };
         return &$f($baz);
     }

What the anonymous $f is doing is "currying". Conceptually, there's 
some function `f` which takes all its arguments at once. The 
function $f takes in only some of those arguments, caches them, and 
returns a new function which will take in the remainder of the arguments 
and return what `f` would have returned if given all those arguments at 
once.
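
To make that concrete, here's a minimal, runnable version of the 
pattern; glurpp() doubling its argument and the arithmetic in the 
closure are arbitrary choices just so there's something to compute:

     use strict;
     use warnings;

     our $baz = 1;
     sub glurpp { 2 * $_[0] }    # stand-in: double the argument

     sub bif {
         local $baz = glurpp($baz);            # dynamically 2 inside bif
         my $f = sub {
             my ($bif_baz) = @_;               # cache the value handed in...
             return sub { $bif_baz + $_[0] };  # ...in the returned closure
         };
         return &$f($baz);
     }

     my $k = bif();
     print &$k(10), "\n";    # prints 12: the closure kept the value it was
                             # given, even after the local $baz was unwound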

Having done a few simple tests, Perl (5.8.8) seems to Do The Right Thing 
regarding the dynamic variable in this example. And from currying it's 
only a small distance to continuation passing. We can have a function 
like `bif` that does all the preamble of `foo` and returns a 
continuation that takes a single argument; and we can have another 
function that does all the postamble once we've built up the answer; 
thus we have three functions which together can reconstruct our original 
`foo`:

     sub foo {
         # Maybe we could play some more tricks to optimize
         # the stack passing here
         my ($foo, @remainder) = foo_BEGIN(@_);  # preamble: continuation + list to loop over

         my @return;
         push @return, &$foo($_) foreach @remainder;

         return foo_END(@return);                # postamble
     }

Of course, the whole point of deconstructing `foo` in the first place 
was not to reconstruct `foo`, but rather to have those three primitive 
functions so we can chain them together in different ways, i.e. to 
combine `foo` with another function that also iterates over @_. So 
assuming the ending functions do nothing interesting, we could derive 
the moral equivalent of the following:

     sub foo_bar {
         # (^_-) since @_ flattens, the three lists would really need to be
         # passed as array refs; this is just to show the shape
         my (@foo_, @bar_, @my_) = @_;
         my $foo = foo_BEGIN(@foo_);
         my $bar = bar_BEGIN(@bar_);

         my @return;
         foreach (@my_) {
             foreach (&$foo($_)) {
                 push @return, &$bar($_);
             }
         }

         foo_END(); # nothing interesting, remember
         bar_END();
         return @return;
     }

All of this is moot, however, since we don't need to take the 
continuation of arbitrary functions. I'm only advocating that we do it 
for the built-in list functions. I'm guessing that the implementations 
of `map`, `grep`, etc. are just thin wrappers around foreach-loops, so 
we can pretty easily construct the component functions; i.e., the 
rewrites using foreach-loops basically capture it.
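
For what it's worth, the correspondence I have in mind is roughly this 
(`f` and `p` are stand-ins):

     # @xs = map {f($_)} @ys;     behaves roughly like:
     my @xs;
     push @xs, f($_) foreach @ys;

     # @kept = grep {p($_)} @ys;  behaves roughly like:
     my @kept;
     foreach (@ys) {
         push @kept, $_ if p($_);
     }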

> > Going all the way and implementing continuous functions would be 
> > awesome, but it'd also be a good deal of work which might be better 
> > spent elsewhere. But even just doing the rewrites can improve 
> > performance for data-heavy programs by 160~200% (== +60~+100%), even 
> > though it doesn't catch everything. For someone who knows how the 
> > parsing/compilation sweep works (or can quickly learn) so they can 
> > manipulate the ASTs/opcode tree, adding the rewrites should be about the 
> > right size for a Google Summer of Code project.
> 
> I suspect that the number of people who have good knowledge of the entire
> parsing/compilation code worldwide is under 10. I suspect that I count as
> "fast learner" for the purposes of this exercise, and it took me an entire
> full time week to figure out how to optimise `reverse sort` and `foreach
> (reverse @zam)` at the optree level, and I think a day for cutting [] and {}
> construction from 3 ops to 1. I don't know how long it took Dave to
> implement in-place sort (`@glurp = sort @glurp`), or Yves to implement a
> faster `if (%hash)` but these are the only optree level optimisations that
> I'm aware of in the past 3 years. I didn't find it easy at all.

Fair enough. I knew the optree stuff was hairy, though I didn't expect 
it to be quite that hairy. If there aren't any hooks for applying 
transformations before the source has been boiled down to opcodes, that 
severely limits the options. For someone familiar with the Filter 
modules, and familiar enough with Perl to reimplement enough of its 
grammar ahead of the lexer/parser, a source filter could be another 
avenue for a proof of concept and for debugging.
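
For example, something in the spirit of the following could serve as a 
throwaway prototype. (Filter::Simple is on CPAN; the module name MyFuse, 
the single toy pattern, and the regex rewrite are purely illustrative 
assumptions, nowhere near a real grammar.)

     package MyFuse;
     use strict;
     use warnings;
     use Filter::Simple;

     # Textually fuse `map BLOCK grep BLOCK` into one map whose block
     # returns the empty list when the predicate fails. Regexes over
     # source text are fragile; this is only for experimenting.
     FILTER_ONLY
         code => sub {
             s/\bmap\s*\{(.*?)\}\s*grep\s*\{(.*?)\}/map { ($2) ? ($1) : () }/gs;
         };

     1;

     # In a test script:
     #     use MyFuse;
     #     my @xs = map {f($_)} grep {p($_)} @ys;
     #     # is rewritten before compilation to (roughly):
     #     # my @xs = map { ( p($_) ) ? ( f($_) ) : () } @ys;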

Live well,
~wren


