
Re: newbie question....

From:
Dan Sugalski
Date:
March 12, 2004 09:00
Subject:
Re: newbie question....
Message ID:
a06010202bc77666a3dc0@[172.24.18.98]
At 6:06 PM -0500 3/11/04, Matt Greenwood wrote:
>Hi all,
>	I have a newbie question. If the answer exists in a doc, just
>point the way (I browsed the docs directory). What is the design
>rationale for so many opcodes in parrot? What are the criteria for
>adding/deleting them?

Whether we have a lot or not actually depends on how you count. (Last 
time I checked the x86 still beat us, but that was a while back) In 
absolute, unique op numbers we have more than pretty much any other 
processor, but that is in part because we have *no* runtime op 
variance.

For example, if you look you'll see we have 28 binary "add" ops. 
.NET, on the other hand, has only one, and most hardware CPUs have a 
few (two or three). However... for us each of those add ops has a very
specific, fixed, and invariant parameter list. The .NET version, on 
the other hand, is specified to be fully general, and has to take the 
two parameters off the stack and do whatever the right thing is, 
regardless of whether they're platform ints, floats, objects, or a 
mix of these. With most hardware CPUs you'll find that several bits 
in each parameter are dedicated to identifying the type of the 
parameter (int constant, register number, indirect offset from a 
register). In both cases (.NET and hardware) the engine needs to 
figure out *at runtime* what kind of parameters it's been given. 
Parrot, on the other hand, figures that out *at compile time*.
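Here's a tiny sketch of that difference (illustrative Python, not Parrot source -- all the names are made up). The generic op has to inspect its operands' types on every execution; the specialized variants are picked once, before the program runs:

```python
# Runtime dispatch (.NET/hardware style): one generic op that checks
# its operands' types every single time it executes.
def add_generic(a, b):
    if isinstance(a, int) and isinstance(b, int):
        return a + b                 # integer add
    if isinstance(a, float) or isinstance(b, float):
        return float(a) + float(b)   # float add
    raise TypeError("unsupported operand types")

# Compile-time dispatch (Parrot style): one specialized op body per
# fixed signature; no type checks remain in the op body itself.
def add_i_i_i(a, b):    # int dest, int operands
    return a + b

def add_n_n_n(a, b):    # numeric (float) dest and operands
    return a + b

# The "assembler" resolves the mnemonic to a variant from the operand
# types it sees in the source -- paid once, at assemble time.
def assemble_add(type_sig):
    return {"iii": add_i_i_i, "nnn": add_n_n_n}[type_sig]
```

Once `assemble_add` has picked a variant, running the op is just a direct call with no type questions left to answer.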

Now, for hardware this isn't a huge deal--it's a well-known problem, 
they've a lot of transistors (and massive parallelism) to throw at 
it, and it only takes a single pipeline stage to go from the raw to 
the decoded form. .NET does essentially the same thing, decoding the 
parameter types and getting specific, when it JITs the code. (And it 
runs pretty darned slowly without a JIT, though .NET was designed to 
have one always available)

Parrot doesn't have massive parallelism, nor are we counting on 
having a JIT everywhere or in all circumstances. We could waste a 
bunch of bits encoding type information in the parameters and figure 
it all out at runtime, but... why bother? Since we *know* with 
certainty at compile (or assemble) time what the parameter types are, 
there's no reason to not take advantage of it. So we do.

It's also important to note that there's no more code involved (or, 
for the hardware, no more complexity) doing it our way than the 
decode-at-runtime way--all the code is still there in either case, 
since we all have to do the same things (add a mix of ints, floats, 
and objects, with a variety of ways of finding them)--so there's no 
real penalty to doing it our way. It actually simplifies the JIT some 
(no need to puzzle out the parameter types), so there we get a win 
over other platforms: JIT expenses are paid by the user on every run, 
while our form of decoding's only paid when you compile.

Finally, there's the big "does it matter, and to whom?" question. As 
someone actually writing parrot assembly, it looks like parrot only 
has one "add" op--when emitting pasm or pir you use the "add" 
mnemonic. That it gets qualified and assembles down to one variant or 
another based on the (fixed at assemble time) parameters is just an 
implementation detail. For those of us writing op bodies, it just 
looks like we've got an engine with full signature-based dispatch 
(which, really, we do--it's just a static variant), so rather than 
having a big switch statement or chain of ifs at the beginning of 
the add op, we just write the specific variants identified by 
function prototype and leave it to the engine to choose the right 
one.

Heck, we could, if we chose, switch over to a system with a single 
add op with tagged parameter types and do runtime decoding without 
changing the source for the ops at all--the op preprocessor could 
glom them all together and autogenerate the big switch/if ladder at 
the head of the function. (We're not going to, of course, but we 
could. Heck, it might be worth doing if someone wanted to translate 
parrot's interpreter engine to hardware, though it'd have bytecode 
that wasn't compatible with the software engine)
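A hedged sketch of what that preprocessor trick could look like (illustrative Python, not Parrot's actual ops preprocessor): the variant bodies stay exactly as written, and a generated wrapper supplies the runtime tagging and the if/switch ladder at the head of the fused op.

```python
# Fixed-signature variant bodies -- unchanged from the static scheme.
def add_i_i_i(a, b):
    return a + b

def add_n_n_n(a, b):
    return a + b

# What the op preprocessor knows about the variants it's fusing.
VARIANTS = {("I", "I"): add_i_i_i, ("N", "N"): add_n_n_n}

def make_generic(variants):
    # Autogenerate the "big switch/if ladder": tag each operand at
    # runtime, then forward to the matching variant body.
    def generic(a, b):
        tag = lambda v: "I" if isinstance(v, int) else "N"
        return variants[(tag(a), tag(b))](a, b)
    return generic

# A single runtime-dispatched "add" op, glommed together from the
# untouched variant sources.
add = make_generic(VARIANTS)
```

The point being that the choice between one tagged op and many fixed ops is a packaging decision made outside the op bodies themselves.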

As for what the rationale is... well, it's a combination of whim and 
necessity for adding them, and brutal reality for deleting them.

Our ops fall into two basic categories. The first, like add, are just 
basic operations that any engine has to perform. The second, like 
time, are low-level library functions. (Where the object ops fall is 
a matter of some opinion, though I'd put most of them in the "basic 
operation" category)

For something like hardware, splitting standard library from the CPU 
makes sense--often the library requires resources that the hardware 
doesn't have handy. (I wouldn't, for example, want to contemplate 
implementing time functions with cross-timezone and leap-second 
calculations with a mass 'o transistors. The System/360 architecture 
has a data-formatting instruction that I figure had to tie up a good 
10-15% of the total CPU transistors when it was first introduced) 
Hardware is also often bit-limited--opcodes need to fit in 8 or 9 
bits.

For things like the JVM or .NET, opcodes are also bit-limited (though 
there's much less of a real reason to do so) since they only allocate 
a byte for their opcode number. Whether that's a good idea or not 
depends on the assumptions underlying the design of their engines--a 
lot of very good people at Sun and Microsoft were involved in the 
design and I fully expect the engines met their design goals.

Parrot, on the other hand, *isn't* bit-limited, since our ops are 32 
bits. (That's the more efficient design on RISC systems, where byte 
access is expensive) That opens things up a bunch.
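The arithmetic makes the difference vivid. (This assumes, purely for illustration, that the whole field is usable as an opcode number; it's a back-of-the-envelope comparison, not a statement about Parrot's actual encoding.)

```python
# Opcode-number capacity of a byte-sized field vs a 32-bit word.
BYTE_OPCODE_SPACE = 2 ** 8     # JVM/.NET style: at most 256 distinct ops
WORD_OPCODE_SPACE = 2 ** 32    # Parrot style: ops are full 32-bit words

# With byte opcodes, 28 add variants alone would eat over a tenth of
# the whole opcode space; with 32-bit opcodes they're a rounding error.
byte_fraction = 28 / BYTE_OPCODE_SPACE   # ~0.109
word_fraction = 28 / WORD_OPCODE_SPACE   # ~6.5e-9
```

Which is why a byte-limited design has real pressure to keep its op count small, and a word-sized design simply doesn't.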

If you think about it, the core opcode functions and the core 
low-level libraries are *always* available. Always. The library 
functions also have a very fixed parameter list. Fixed parameter 
list, guaranteed availability... looks like an opcode function to me. 
So they are. We could make them library functions instead, but all 
that'd mean would be that they'd be more expensive to call (our 
sub/method call is a bit heavyweight) and that you'd have to do more 
work to find and call the functions. Seemed silly.
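A rough sketch of the trade-off being described (illustrative Python; the names are hypothetical, not Parrot's): a fixed-signature routine like time reached directly through the op table, versus found by name and called through the (heavier) sub/method convention.

```python
import time

# As an opcode: the interpreter dispatches straight through its op
# table -- no symbol lookup, no call-frame setup.
def op_time(registers, dest):
    registers[dest] = time.time()

OP_TABLE = {"time": op_time}

# As a library function: the program must first locate the routine by
# name, then pay the full calling-convention cost on every call.
LIBRARY = {"time": time.time}

def call_library(name):
    fn = LIBRARY[name]   # lookup cost, paid at call time
    return fn()          # plus full call overhead in a real VM
```

Same code either way; making it an op just moves the finding-and-calling work out of the program's inner loop.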

Or, I suppose, you could think of it as if we had *no* opcodes at all 
other than end and loadoplib. Heck, we've a loadable opcode 
system--it'd not be too much of a stretch to consider all the opcode 
functions other than those two as just functions with a fast-path 
calling system. The fact that a whole bunch of 'em are available when 
you start up's just a convenience for you.

So, there ya go. We've either got two ops, a reasonable number (the 
same as pretty much everyone else), an insane number of them, or the 
question itself is meaningless. Take your pick; they're all true. :)
-- 
                                         Dan

--------------------------------------"it's like this"-------------------
Dan Sugalski                          even samurai
dan@sidhe.org                         have teddy bears and even
                                       teddy bears get drunk
