
Re: OP_SIGNATURE

From: Zefram
Date: March 4, 2015 20:16
Subject: Re: OP_SIGNATURE
Message ID: 20150304201606.GR8710@fysh.org

Dave Mitchell wrote:
>NB - do you also object to my recent introduction of OP_MULTIDEREF?

I hadn't looked at it until now.  I think a reasonable optimised op type
of this general nature could be devised, but at first glance it looks
like multideref has taken on too much to be comfortable.  The fact that
it only shows up after the peephole optimiser may ameliorate the problems
it causes: it only needs to be understood by code walkers that operate
on ops chained for execution, not by op mungers that operate during the
earlier phases of compilation.  Appearing only in the execution-chain
form of ops makes it rather closer to being the invisible implementation
detail that you've claimed the signature op is.  Provided, that is,
that it remains only generated by the peeper, which isn't a proposition
that I'd be comfortable betting on.

Your multideref and signature ops both have embedded opcode-based
sublanguages, and this gets me a bit concerned.  It seems to be an
inner-platform situation.  It's not totally outrageous to have such a
thing -- after all, regexps already work by such a sublanguage.  But I
think the impetus to create an inner platform should always be suspect.
What's bad about the opcode system we've already got that means it has to
be supplanted?  (In this case, presumably the performance implications
of the runloop.)  Could we fix those things in the existing framework,
without punting to an inner platform?  (Op chaining could probably
be made cheaper for some classes of op.)  If there must be an inner
platform, how well can we shift things between the two platform levels?
(With great difficulty, in these cases.  Utility functions could make
the transformations easier, but it's painfully clear that they're not
designed for this sort of manipulation.)

>The specification of OP_SIGNATURE is 'assign the passed stack items to
>the signature variables with default handling'.

You're glossing over a bunch of visible details there.  It's still way
more complicated than entersub.

>         If a consensus were reached that that constraint is unreasonable,
>then OP_SIGNATURE could be easily tweaked to handle that, at the cost of
>reduced performance.

Checking for magicalness is cheap, and in some cases can probably be
merged with flag tests that you're already performing.  Putting such a
check in front of the optimised code paths, to make the op better behaved
under optree manipulation, should be an easy choice.  I appreciate the
optimisations that you then get in the simple cases.
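
As a side note on why that guard matters: a tied (magical) scalar has
to have its STORE method called on assignment, so a fast path that
skipped the magic check would silently bypass it.  A minimal sketch
(the CountingScalar package is purely illustrative):

    use strict;
    use warnings;
    use Tie::Scalar;    # provides the Tie::StdScalar base class

    # Illustrative tied scalar that counts how often it is stored to.
    package CountingScalar {
        our @ISA = ('Tie::StdScalar');
        our $stores = 0;
        sub STORE { $stores++; $_[0]->SUPER::STORE($_[1]) }
    }

    tie my $x, 'CountingScalar';
    $x = 1;    # an optimised assignment op must still honour set magic
    print "STORE called $CountingScalar::stores time(s)\n";    # prints 1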

Slight tangent: how would you feel about spinning off your optimisations
(independent of what happens around signatures) into op types that aren't
at all aimed at signatures?  For example, you have an optimised version
of assigning an IV to a scalar, which is particularly efficient where
the scalar is SVt_IV and non-magical.  There are a lot of "$x = 1;"
type statements in Perl code, for which those fast-path preconditions
are usually true, so it's potentially a big win to use the optimised
code for them.  Even where the preconditions are false, there's a
side benefit in reducing the three ops (const, padsv, sassign) to one.
I'd think this one is probably worth the API complexification.
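
For concreteness, the three ops in question are easy to see with the
core B::Concise module (a sketch; exact output varies by perl version):

    use B::Concise ();

    sub assign_one { my $x; $x = 1 }

    # Render assign_one's ops in execution order; the "$x = 1" statement
    # shows up as a const, a padsv and an sassign, which the fused op
    # described above would collapse into a single op.
    B::Concise::compile('-exec', 'assign_one')->();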

>So, ignoring specific optimisations which could be easily reverted,
>I don't think OP_SIGNATURE is semantically visible.

It is not those constraints alone that make the op type semantically
visible.  They do add to its visibility, and in particular the way you
applied them makes the op type visible in places that it shouldn't be.
But that's the icing on the cake of its visibility.  Separating out the
two issues that you said I was conflating, my original statement that
"the op type is somewhat semantically visible" was actually expressing
the view that the op type is semantically partially visible per se,
independent of its implementation limits.  I didn't expand on that,
because I thought it obvious enough.  I only expanded on the other
issue, of the specific arity limit, and my issue with the limit is not
(as you suggest) whether it's a good tradeoff; it's that the limit is
being applied in ways that it shouldn't be.  I'll now expand a bit on both
of these.

An op type is, by its nature, a visible part of the Perl API.  There's
quite a bit of CPAN code that pushes ops around during compilation,
particularly of the nature of building up optrees, often with arbitrary
subtrees resulting from subexpression compilation.  Modules should always
treat these subtrees opaquely as far as possible, but there's often some
need to recognise particular semantic operations and pull the ops apart
for rearrangement.  Any new op type that this code will encounter has
implications for this sort of activity.  A new op type to express a new
semantic is always fine, as long as it's well behaved when moved around
as an opaque unit.  The real potential for trouble arises when a new op
type expresses existing semantics in a new way, as is the case with the
signature op, because it means that any code looking for that semantic
operation needs an update to understand both ways of expressing it.
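
A sketch of the kind of code affected (the visitor method here is my
own, not anything from CPAN): a module that walks a sub's optree
looking for particular op types has to learn every new spelling of the
same semantics.

    use B qw(svref_2object walkoptree);

    # walkoptree dispatches by method name, so defining the visitor on
    # B::OP makes it available to every op class.
    my %count;
    sub B::OP::count_op_types { $count{ $_[0]->name }++ }

    sub demo { my %h; $h{foo}[0] }

    walkoptree(svref_2object(\&demo)->ROOT, 'count_op_types');
    printf "%-12s %d\n", $_, $count{$_} for sort keys %count;

On a perl with OP_MULTIDEREF the element-access ops for $h{foo}[0] are
typically folded into a single multideref op, so a walker looking for
helem/aelem ops has to understand both forms; the same applies to any
code looking for the per-parameter ops that a single OP_SIGNATURE
would subsume.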

There is less module code that examines ops in their peepholed form,
as seen in complete subs, and even less that modifies ops in that form.
These are relatively esoteric situations, so a new op that only exists
here (such as multideref is apparently intended to be) is qualitatively
less visible than one that exists during the primary compilation phase.
But even this is a visible part of the API, which should not be changed
without consideration of API stability.

Now for the specific implementation limits.  The reliance on a fresh pad
damages op manipulability, which just makes the op type badly behaved.
I don't have much to say about that.  The interesting one is the arity
limit.  You raise the issue of whether it's a good tradeoff, and from
the point of view of designing optimised op types I think it probably
*is* a good tradeoff.  Specifically, saving a couple of bytes from
every instance of the op type is probably worth more than being able
to apply its optimisation to very long signatures.  The big problem is
that you didn't make a choice on this tradeoff spectrum: instead you
weighed up that couple of bytes per op against being able to compile
very long signatures at all.  The problem is that you tied the parsing of
signatures directly to this optimised op, to the point that you lost the
ability to compile signatures that don't qualify for this optimisation.
At this point the signature op ceases to be an optimisation, and becomes
the definitive implementation of signatures.

Op types with arbitrary limits are OK as optimisations, but it is
wrong to apply those arbitrary limits to the language as a whole.
Like the deparser with :proto syntax, the coupling is wrong here: you
coupled the language feature of signatures too closely to the optimised
implementation.

>This I don't understand. There is nothing to stop a hypothetical plugin
>either emitting a series of "plain" ops instead of an OP_SIGNATURE, or
>later replacing the OP_SIGNATURE (depending on what point in the
>compilation process it is called at).

Once again you're thinking in terms of a plugin replacing the whole of the
signature syntax, which is a trivial and uninteresting case.  My paragraph
to which you're responding here was concerned with plugins replacing a
small part of a signature that is otherwise parsed by the core mechanism.

>If people aren't concerned with performance (due to that wonderful
>ever-cheaper hardware (which has been stuck at 3Ghz for several years
>now)), while wanting maximum flexibility, plugability etc, then again I
>suggest that we point them to perl6. 

So your position is that Perl 5 is, or should be, stable^Wdead?

>Think of it conceptually that Perl_parse_subsignature() produces an
>optree, which under certain (but very common) situations can then be optimised
>into a single OP_SIGNATURE op. It just so happens that the current
>implementation, knowing that at the moment the optree can *always* be
>reduced, just skips the optree generation and creates the OP_SIGNATURE
>directly,

Except that that's not what you've implemented.  Aside from the actual
nature of the optimised op, the first half of this is what I've been
proposing, and the second half wouldn't be terrible.  But what you've
implemented is that the explicit optree *isn't* optimised into the single
signature op.  If the explicit optree were optimised that way, that
would give us a simple route to pluggability.  It would also incidentally
resolve that issue with the arity limit: optrees that exceed the signature
op's limit would merely not get (fully) optimised into a signature op.

For op manipulability and syntax pluggability, it is way more important
to get the optimised ops generated from explicit ops than to get any
optimised direct generation of them in the signature parser.
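
To make "explicit ops" concrete, the plain-Perl equivalent of a small
signature looks roughly like this (the error messages and exact checks
are illustrative, not the core's):

    # sub foo ($x, $y = 5) { ... } in its explicit, unoptimised form:
    sub foo {
        die "Too many arguments for subroutine" if @_ > 2;
        die "Too few arguments for subroutine"  if @_ < 1;
        my $x = $_[0];
        my $y = @_ >= 2 ? $_[1] : 5;
        ...;
    }

A peephole rule could then collapse this pattern into the single
signature op where it qualifies, and simply leave it alone where it
doesn't (too many parameters, a plugin having replaced one default
expression, and so on).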

>Well, the bit that I quoted,
>is an example of an op that *can't* combine multiple argument assignments.

Yes, it's an example of a specific optimisation that happens not to
address the combining of multiple assignments.  You expect too much
of one example.  See next section.

>                                                       for now I will have
>to assume that OP_SIGNATURE is *much* more efficient.

By virtue of having been crafted very precisely for the specific needs
of the signature feature, your signature op is liable to remain at least
slightly more runtime efficient than anything that arises by other routes.
But from the context and the "*much*" you seem to be assuming here
not just some gain from the specificity but also that no other system
of op types would succeed in handling multiple parameters in one op.
That's not justified.

The padrange op stands as an example of the sort of thing that can
be done, and indeed the existing padrange op could have a role in
the optimisation of signature-like code.  We can surely manage to
peephole-optimise "my $x = $_[0]; my $y = $_[1];" into a padrange+aassign.
I wouldn't object to a variant of padrange also taking over the actual
assignment when the RHS is @_ (or perhaps any simple array), in which
case it can copy straight from the array to the pad variables without
putting anything on the stack.  As with the existing padrange op, these
optimisations would help code that's not derived from signatures.
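
The difference is easy to see today by comparing the execution-order
op chains of the two spellings (a sketch; op names vary a little
between perl versions):

    use B qw(svref_2object);

    sub list_form   { my ($x, $y) = @_ }
    sub scalar_form { my $x = $_[0]; my $y = $_[1] }

    # Walk each sub's ops in execution order.  On recent perls
    # list_form already gets the padrange optimisation (one padrange op
    # feeding the aassign instead of separate padsv ops); scalar_form
    # does a separate element fetch, padsv and sassign per parameter,
    # which is what the peephole rule suggested above would collapse.
    for my $pair ([ list_form => \&list_form ],
                  [ scalar_form => \&scalar_form ]) {
        my ($name, $code) = @$pair;
        print "$name:\n";
        my $cv = svref_2object($code);
        for (my $op = $cv->START; $$op; $op = $op->next) {
            print "  ", $op->name, "\n";
        }
    }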

>Also, in terms of the criticism that OP_SIGNATURE isn't general purpose,
>I think you'll find that the "push_arg" op is highly specific too; I think
>it would be fairly rare to find perl code of the form '@_ >= N+1 ? $_[N] :
>...' in the wild

Interesting grep.  There's a bunch of code doing equivalent things that
wouldn't precisely match that pattern; for example I've got some code
that does "push @_, 0 if @_ == 1".  In those kinds of cases, one would
probably choose to write the code in the optimisable form if there were
a specific form that got such optimisation.

>                 outside of code generated by your version of
>parse_subsignature().

This qualification is deeper than it appears.  Quite a bit of what we've
discussed involves sometimes expressing a signature in the explicit form
that is currently its only implementation: the deparser might do it for
a couple of reasons; the compiler might generate it because the signature
doesn't qualify for the optimised case; plugins need it as an intermediate
form that can't always be optimised.  So in any version of this situation
where signature syntax and the signature op aren't a tightly-coupled pair
that might as well have come from Mars, the explicit version of the code
will be flying around.  Optimising it from that form is actually useful,
even if the specific op patterns involved rarely originate anywhere else.

You raised this very issue yourself with respect to the deparser: "the
issue of not losing performance on a round trip".  That's an excellent
reason to derive the optimised op from the explicit ops, rather than to
jump straight to the optimised form and let the explicit form go hang.
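
A sketch of that round trip at the Perl level, using the core
B::Deparse module (whether the recompiled sub regains its optimised
ops is exactly the property at stake):

    use B::Deparse;

    sub f { my ($x, $y) = @_; $x + $y }

    # Deparse f back to source text, then recompile that text.  Losing
    # no performance on the round trip means the peephole optimiser can
    # re-derive the optimised ops (padrange today, a signature op in
    # this discussion) from the explicit form it gets handed back.
    my $body  = B::Deparse->new->coderef2text(\&f);
    my $again = eval "sub $body";
    die $@ unless $again;
    print $again->(2, 3), "\n";    # prints 5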

>hypothetically use the op_aux data to recreate a set of "normal" ops to
>replace or augment the OP_SIGNATURE if required.

An exposed utility function that recreates the normal ops would help.

>i.e. to wrap each sub in an extra scope, so that code (like Data::Dumper)
>that deparses individual subs rather than whole files, will see the right
>environment.
...
>Can anyone think why that wouldn't work?

It's pretty reasonable; just make sure it talks properly to the deparser's
existing tracking of the pragma state in its output.  The pragma state
gets synchronised at each state op, so that the collapsed %^H visible
through caller will be correct.
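
For reference, the mechanism being relied on looks like this from Perl
space (the "myhint" key is illustrative; real pragmas namespace their
%^H keys):

    use strict;
    use warnings;

    sub show_hint {
        # Element 10 of caller()'s list return is the hints hash: the
        # %^H snapshot attached to the calling statement's state op.
        my $hinthash = (caller(0))[10] || {};
        print exists $hinthash->{myhint}
            ? "myhint=$hinthash->{myhint}\n" : "no myhint\n";
    }

    {
        BEGIN { $^H{myhint} = 42 }    # compile time, lexically scoped
        show_hint();                  # prints "myhint=42"
    }
    show_hint();                      # outside the block: "no myhint"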

>* Another specific optimisation restricts the number of params a sub can
>  have to 32767. I can easily increase this limit to 2**31-1 at the
>  expense of requiring an extra U32 of storage in the op_aux of each
>  OP_SIGNATURE on 32-bit platforms. I am minded to do this.

If your objective is to handle any possible arity, then 2^31-1 is
the wrong new limit.  You should base the limit on the STRLEN type or
something else that reflects address space size.  But as I discussed
above, that's only needed if the signature op needs to handle every
possible signature, which it does if there's no other implementation
of the signature syntax.  If, as I would prefer, the signature op is
purely an optimisation of things that are principally expressed in a
more general form, then the 2^15-1 limit is fine.

-zefram
