old habits are hard to break
Dr. Hawkins
dochawk at gmail.com
Fri Jun 21 11:26:28 EDT 2013
On Fri, Jun 21, 2013 at 2:33 AM, Mark Wilcox <m_p_wilcox at yahoo.co.uk> wrote:
> A good example from this thread is having four different versions of the same function with
> tiny variations at the beginning.
For that matter . . . does anyone really know the timing comparisons
for LiveCode? Say, for parsing a constant "abc", pulling it from a
variable (ltrs), and pulling it from an array element? (That was all I
was optimizing away, really, fixed at the price of a four-way switch
setting the variable.)
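I haven't measured it myself, but a rough harness along these lines
would at least give ballpark numbers (the handler and variable names
here are just placeholders):

-- rough timing sketch; all names are invented
on mouseUp
   constant kLoops = 1000000
   local tLtrs, tAry, tDummy, tStart
   put "abc" into tLtrs
   put "abc" into tAry["why"]["el"]

   put the milliseconds into tStart
   repeat for kLoops times
      put "abc" into tDummy                  -- literal constant
   end repeat
   put "literal:" && (the milliseconds - tStart) & return after msg

   put the milliseconds into tStart
   repeat for kLoops times
      put tLtrs into tDummy                  -- simple variable
   end repeat
   put "variable:" && (the milliseconds - tStart) & return after msg

   put the milliseconds into tStart
   repeat for kLoops times
      put tAry["why"]["el"] into tDummy      -- nested array lookup
   end repeat
   put "array:" && (the milliseconds - tStart) & return after msg
end mouseUp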
Or if I'm doing or not doing something with an array element based on
that element, is
if ary[why][el] > 12 then
   put gizmo(ary[why][el]) into widget
end if
better than
put ary[why][el] into tstVal
if tstVal > 12 then
   put gizmo(tstVal) into widget
end if
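Again, I don't know the answer, but the same sort of loop would show
the difference, if any. Just a sketch (gizmo is left out and every
name is made up):

-- compare repeated ary[why][el] lookups against a local copy
on compareLookups
   local tAry, tVal, tTstVal, tStart
   put 42 into tAry["why"]["el"]

   put the milliseconds into tStart
   repeat for 1000000 times
      if tAry["why"]["el"] > 12 then
         put tAry["why"]["el"] into tVal     -- indexes the array twice
      end if
   end repeat
   put "direct:" && (the milliseconds - tStart) & return after msg

   put the milliseconds into tStart
   repeat for 1000000 times
      put tAry["why"]["el"] into tTstVal     -- one lookup, then reuse
      if tTstVal > 12 then
         put tTstVal into tVal
      end if
   end repeat
   put "copy first:" && (the milliseconds - tStart) & return after msg
end compareLookups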
Some things I just expect to be expensive (function calls, lookup of a
text key in an array), for no other reason than that they were in other
languages I used years ago, or it just "seems so."
I built a wonderful Smalltalk model of a market, with "proper"
messages & so forth.
After looking at it, and being irritated with the runtime, I recoded it
line for line in Fortran. Most of the function calls, and consequent
context switches, became simple array element reads.
The speedup was 45,000 to 1, and I'd done no optimization at all.
> One of the key bottlenecks in a modern device
> (desktop or mobile) is the speed of the RAM vs the speed of the processor; if your whole
> program doesn't fit in the CPU caches then splitting out function variants can actually make
> things slower, due to the need to fetch a different variant from RAM (or even disk or flash
> memory) vs having a single longer version that remains in cache.
We hit this on my dissertation. It was a new dynamic programming
method, over a high-dimensional search space *far* too large to keep
around. Afterwards, I realized that my solution was the same idea as
using a hard disk for virtual memory, or a CPU cache, except that I
simply recalculated values that weren't there.
We found that when we increased the size of my "cache" array from 256M
to 512M (on a machine that was screaming fast for the time, with an
unheard-of 1G of RAM), it actually slowed down, as the extra time to
search the keys outweighed the advantage of a cached value.
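In LiveCode terms the scheme would look something like the sketch
below (every name is invented, and expensiveCalc stands in for the
real computation); the tradeoff is exactly the one above: past some
cache size, checking and storing the keys costs more than just
recomputing.

local sCache                    -- script-local array acting as the "cache"
constant kMaxEntries = 100000   -- arbitrary cap; tune it by timing

function cachedValue pKey
   local tVal
   -- return a stored value if we have one, otherwise recalculate it
   if pKey is among the keys of sCache then
      return sCache[pKey]
   end if
   put expensiveCalc(pKey) into tVal    -- expensiveCalc is hypothetical
   if the number of lines of the keys of sCache < kMaxEntries then
      put tVal into sCache[pKey]        -- only cache while under the cap
   end if
   return tVal
end cachedValue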
We also discovered a bug in the Cray compiler as sold by, uhm, I
forget (Absoft?). As it was actually bit-addressed, not
byte-addressed, a dynamic array was limited to 128M (if memory
serves), and they didn't know this. They'd never had a machine with
as much memory as we had, and never tried one that big . . .
--
Dr. Richard E. Hawkins, Esq.
(702) 508-8462