Deleting Data Woefully Slow

Mark Wieder mwieder at ahsoftware.net
Thu Mar 25 02:18:43 EDT 2010


Kay-

Wednesday, March 24, 2010, 11:01:15 PM, you wrote:

> If the numbers were remotely close, i.e. if I wanted to prove it was faster
> to create 99% than delete 1%, then yes, I totally agree, but the variation
> in my random 10% is insignificant compared to the slowness of using 'delete
> line x', especially on really large data sets.

This is getting a bit off topic, but you can't prove the above. If you
used a counter instead of a random number, I'd say you were accurate.
But you're deliberately calling the random number generator, which has
a small but nonzero chance of returning all ones.

> Yes I could empty these but I don't see they match your statement 'the
> variable will be continuing to increase'? Memory, now that's another issue,

<g> That's because I missed the fact that your revised code uses
different variables for the three tests...

> What I'm after is the fastest way to take a HUGE amount of data and reduce
> it by roughly 5-10%. The 'repeat for each' code I supplied seems to do that;
> if anyone has any other code that is faster, PLEASE provide it. As I said at
> the beginning of this thread, I'm dealing with two nested repeat loops, each
> running 1.4 million cycles!! Having something tenfold slower is NOT what I'm
> after, but that's what I was seeing because I was using delete.
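
For reference, here's the kind of 'repeat for each' rebuild I take you
to mean - an untested sketch, assuming the data sits in a variable
tData and you want to drop roughly 10% of the lines at random:

  repeat for each line tLine in tData
    if random(10) > 1 then -- keep roughly 9 lines out of 10
      put tLine & return after tKept
    end if
  end repeat
  delete the last char of tKept -- strip the trailing return
  put tKept into tData

That stays roughly linear in the size of the data, whereas 'delete
line x of tData' has to close up the gap in the variable on every
pass, which is presumably where the slowdown you were seeing comes
from.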

Is it possible to refactor your code to use something other than
nested repeat loops? My guess from what you've posted so far is that
it isn't, but I'm grasping at straws... I don't know what end result
you're aiming for - could the filter command help out?
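
Something like this, say - again untested, and the wildcard pattern is
only a placeholder since I don't know what your lines look like:

  filter tData without "*,obsolete,*"

filter does its work in a single pass inside the engine, so if the
5-10% you want to discard can be described by a pattern it should
easily beat any script-level loop.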

-- 
-Mark Wieder
 mwieder at ahsoftware.net
