the large file challenge

Richard Gaskin ambassador at fourthworld.com
Fri Nov 8 22:22:01 EST 2002


andu wrote:

>> I think I missed something from the original post....
> 
> No, you got it right.

Thanks, Andu.  I thought I was losin' it.

If we're allowed to read the whole thing into RAM and the goal is to count
the occurrences of the string "mystic_mouse", then to optimize speed we can
just remove the redundant read commands and let offset do the searching for us:


#!/usr/local/bin/mc
on startup
  put "/gig/tmp/log/xaa" into the_file
  -- read the entire file into memory in one shot
  put url ("file:" & the_file) into the_text
  put 0 into the_counter
  put 0 into tCharsToSkip
  --
  -- offset() returns the position of the next match relative to the
  -- skipped characters, or 0 when there are no more matches
  repeat
    get offset("mystic_mouse", the_text, tCharsToSkip)
    if it = 0 then exit repeat
    add 1 to the_counter
    add it to tCharsToSkip
  end repeat
  put the_counter
end startup


This is off the top of my head.  If it runs I'd be interested in how it
compares.
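
For anyone puzzling over the third argument to offset(): it's a count of
characters to skip, and the value returned is relative to that skipped
chunk, which is why adding "it" back onto the skip count walks past each
match.  Here's a minimal sketch on a throwaway string to check the
mechanics (the handler name and sample text are just for illustration):

on testOffsetSkip
  put "foo mystic_mouse bar mystic_mouse baz" into tSample
  put 0 into tCharsToSkip
  put 0 into tCount
  repeat
    -- position of the next match, relative to the skipped characters
    get offset("mystic_mouse", tSample, tCharsToSkip)
    if it = 0 then exit repeat
    add 1 to tCount
    add it to tCharsToSkip
  end repeat
  put tCount  -- should report 2
end testOffsetSkip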

-- 
 Richard Gaskin 
 Fourth World Media Corporation
 Developer of WebMerge 2.0: Publish any database on any site
 ___________________________________________________________
 Ambassador at FourthWorld.com       http://www.FourthWorld.com
 Tel: 323-225-3717                       AIM: FourthWorldInc
