Revolution compilation

Dar Scott dsc at swcp.com
Mon Jan 3 17:09:29 EST 2005


On Jan 3, 2005, at 8:22 AM, Frank D. Engel, Jr. wrote:

>> Also, last time I looked, RR compiles scripts 'on the fly' like Java. 
>> Didn't know RB was a compiler. Must be tough on 
>> edit/compile/run/debug cycles. Perhaps it's an interpreter like RR 
>> and compiles during runtime? I don't know.
>
> Rev "compiles" scripts into a bytecode format which is later 
> interpreted by the runtime environment.  Faster than a "pure" 
> interpreter, but still slower than compiled code.

I've wondered about this.

> Java code gets run through a compiler which translates it into a Java 
> bytecode.  That bytecode is then interpreted at runtime, much as is 
> Rev code.  However, some Java runtime environments (JREs) will 
> actually "recompile" the bytecode into native code for the platform.  
> This is slower than compiling code directly for the hardware, but adds 
> the write-once-run-anywhere flexibility of Java (and to some degree of 
> Rev), and the result of this (sometimes referred to as "Just-In-Time" 
> compilation) is much closer in speed to native compiled code.

Revolution needs to do something reasonable with stacks made with 
earlier versions and, to some extent, with stacks made with later 
versions.  Also, bug fixes in compiling need to allow a previously 
broken stack to start working.

This probably puts some limitations on what can be done at the point we 
click the "apply" button.  However, since Revolution keeps a copy of 
source, maybe what gets stuffed into the script property can be 
optimized for size and compiling speed.
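
For what it's worth, the script property today holds the plain source 
text, and setting it is what triggers compilation.  A small 
illustration (the button name is made up):

  -- read the source text exactly as typed
  put the script of button "Example" into tSource
  -- setting the script property back recompiles that object's handlers
  set the script of button "Example" to tSource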

In the engine, some early phases might be done that assume some schema 
for the language; this might speed up platform-dependent compiling and 
allow new engines to use the half-processed source.  If the script text 
is also kept, this intermediate form can be more aggressive and can be 
rebuilt when it is not compatible with the engine.  The more aggressive 
intermediate form might do some platform-independent optimization.  
Most of the important optimization that a Rev compiler can do is at a 
higher level than code generation.

The target compilation need not be compact and may be made up of 32-bit 
or even 64-bit atoms.  If compilation happens as needed, then the 
operations need not be fixed across platforms or even across 
compilations; they can be pointers to actual code segments, as is done 
in Forth.

> Real Basic is a true compiler, which translates code into instructions 
> for the actual hardware on which it is run.  This is the fastest 
> solution, since the computer hardware does virtually *all* of the work 
> of figuring out how to execute each instruction, and there is no 
> runtime translation step.  However, this also locks the compiled 
> version to the platform for which it was compiled (similarly to a 
> standalone produced by Rev), and it causes a Compile-Run-Debug cycle 
> to be introduced.  Visual Basic (M$ product) is also a true compiler.
>
> I personally wonder what it would take to create a true compiler for 
> Rev stacks?

Because of the high-level abstractions, much of what would be compiled 
might be mostly calls.  Some optimization might be done by compiling 
some direct instructions, but most optimization might come from 
selecting the right set of calls based on information inferred by the 
compiler.
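
For example (a sketch of my own), in something like

  put 0 into tSum
  repeat with i = 1 to 1000
    put tSum + i into tSum
  end repeat

a compiler that could infer that tSum and i only ever hold numbers here 
could bind the "+" to a direct numeric add instead of the general 
value-coercing routine.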

> Obviously this would introduce some limitations on certain operations, 
> but for stacks which don't use those operations, it could 
> substantially increase performance...

The language should drive the compiling.  The compiling should not 
drive the language.  The platform should not drive the meaning at this 
level.  IMHO, the platform-driven types of C have been a drag on 
programming for a third of a century.


On my OS X system in Revolution, an arithmetic operation takes a 
fraction of a microsecond.  Other things typically take 5 times as 
long.  A call to a handler takes 30 to 50 times as long.  A call to an 
external takes a hundred times as long.  Those are rough figures.
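
Anyone who wants numbers for their own machine can get rough ones with 
simple loop timings; this is the kind of sketch I mean (handler names 
are made up):

  on mouseUp
    put 100000 into tCount
    -- time the arithmetic done inline
    put the milliseconds into tStart
    repeat with i = 1 to tCount
      put i + 1 into tDummy
    end repeat
    put the milliseconds - tStart into tInline
    -- time the same work routed through a handler call
    put the milliseconds into tStart
    repeat with i = 1 to tCount
      put bumpIt(i) into tDummy
    end repeat
    put the milliseconds - tStart into tViaHandler
    put "inline:" && tInline && "ms, via handler:" && tViaHandler && "ms"
  end mouseUp

  function bumpIt pN
    return pN + 1
  end bumpIt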

Macros or private handlers with direct-call optimization would greatly 
improve this, especially if that can apply to any externals of a stack. 
Their use would reduce the number of names in the message path and thus 
increase performance elsewhere.  Another method is dynamically learning 
direct calls; this can greatly increase performance without any 
language changes.  (One approach to that is suggested in bugzilla 1444; 
it might be expanded to include other runtime lookup domains.)

Those and other optimizations might do more for performance than 
compiling would.  Scott Raney reported a 1000-fold performance increase 
for the enhancement in bugzilla 586 (which might have some nice 
features as side effects).

Dar

**********************************************
     DSC (Dar Scott Consulting & Dar's Lab)
     http://www.swcp.com/dsc/
     Programming Services and Software
**********************************************


