3.0 for Linux is 8% slower than 2.6.1 ?
Chipp Walters
chipp at chipp.com
Tue Oct 7 01:09:52 EDT 2008
Bernard,
Regarding your test stack (your test script is at the bottom of this message):
I'm not sure what 'send in time' helps test. I think it's better to
just repeat your test as many times as you wish and not use 'send in
time'; it adds another variable to the process.
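For instance, a plain loop (just a sketch of what I mean, using the
field names from your stack) runs the tests back to back without any
scheduling:

on mouseUp
   if fld "repeat measure" is not a number then throw "repeat needs a number"
   -- run the tests one after another, no 'send in time'
   repeat with tCounter = 1 to round(fld "repeat measure")
      doTest
   end repeat
end mouseUp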
I can see how this test could be set up to run for a couple of hours
using 'send in time'. But running the test over a long period will not
take into account any other background processes which may be
running--slowing down the test and producing inaccurate results when
compared to another run, even on the same machine.
It appears the only thing you are testing is the random() function, or
perhaps I am missing something? Wilhelm's stack is a good benchmark
example, as it states and tests a specific function. Unless you are
only trying to test the speed of random(), this test does not provide
an overall speed test for Rev. Also, as a note, I typically try not to
update the screen (writing to fields or the message box) during the
time calculation, as that introduces yet another variable into the
test.
Also, you are using the round function to calculate your times:
put round ((the milliseconds - tStartMS)/1000) into tElapsed
This has the possibility of introducing significant error. For
instance, when I run your default seed of 15 (on my basic-speed
desktop), the test takes 21.48 seconds, but round() turns that into
21--a greater than 2% error. If I run a seed of 11 or below, it
registers as taking 0 seconds, though it actually takes close to 0.4
seconds. That is also a significant error percentage-wise (infinity?).
If I were to run this on a faster machine (like many of the Macs out
there), the potential error would be even greater.
Not to mention this error is compounded by the number of tests in your
'repeat measure' field (the default is 5), so on my machine it is
theoretically possible to see close to +/- 10% difference between
actual and reported times with the default settings. Not very
reliable.
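One simple way around this (a sketch only) is to skip round() entirely
and keep the fractional seconds; the milliseconds difference is
already exact:

put the milliseconds into tStartMS
-- ... the timed work goes here ...
put (the milliseconds - tStartMS) / 1000 into tElapsed
-- tElapsed keeps its decimals, e.g. 21.48 rather than 21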
Your throw commands are a nice trick to exit a handler:
if fld "repeat measure" is not a number then throw "repeat needs a number"
And I suppose you already know this, but for others reading: you may
also wish to provide the errorDialog handler below in your button
script. Of course, it will only display when script debug mode is
turned off.
on errorDialog pExecutionError
   answer pExecutionError
end errorDialog
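If memory serves, you can also catch the thrown string yourself with a
try/catch block, regardless of debug mode--something like:

try
   doTest
catch tError
   -- tError holds the thrown string, e.g. "repeat needs a number"
   answer tError
end try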
I hope this helps you understand why you may be having some problems.
FYI, Richard Gaskin, a very experienced Rev user and benchmarking
guru, has some good benchmarking tools--you might ask him about them,
or search the archives here. I'd suggest using a proven tool to help
you get a grasp on this situation.
--->BERNARD's BENCHMARK CODE -----
on mouseUp
   if fld "repeat measure" is not a number then throw "repeat needs a number"
   if fld "send delay seconds" is not a number then throw "send delay needs a number"
   put round(fld "send delay seconds") into tSendDelaySize
   repeat with tCounter = 1 to round(fld "repeat measure")
      put tSendDelaySize * tCounter into tNextSend
      send "doTest" to me in tNextSend seconds
   end repeat
end mouseUp

on doTest
   put the long time && the pendingMessages
   put the milliseconds into tStartMS
   if fld "seed" is not a number then throw "need a number to kick things off"
   put round(exp(fld "seed")) into tLimit
   repeat with tCounter = 1 to tLimit
      put random(100) & comma & random(100) & comma & tLimit & return after tItemList
   end repeat
   put line 1 to 5 of tItemList into tNewItemList
   put round((the milliseconds - tStartMS) / 1000) into tElapsed
   put tElapsed & return & tNewItemList & return & the long time into fld "result"
   set the itemDel to "."
   if item 1 of version() < 3 then
      put "old" into tVersion
   else
      put "new" into tVersion
   end if
   set the itemDel to comma
   put tElapsed & comma after fld tVersion
   put fld tVersion into tRunningScore
   if char -1 of tRunningScore is comma then put empty into char -1 of tRunningScore
   put (tVersion & "avg") into tVersionAvg
   put avg(tRunningScore) into fld tVersionAvg
end doTest
More information about the use-livecode
mailing list