There should be a "unique" option on sort . . .
pete at lcsql.com
Sun Jan 5 12:40:18 EST 2014
Or (assuming a simple line-delimited list):
put theList into theNewList
split theNewList by return and return
put the keys of theNewList into theNewList
Haven't done any performance testing, but previous posts have suggested
that array manipulation is usually lightning fast.
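Wrapped up as a function it might look like this (an untested sketch;
"uniqueLines" is just an illustrative name). One caveat: the keys of an
array come back in no particular order, so re-sort afterwards if order
matters:

function uniqueLines pList
   local tArray
   put pList into tArray
   split tArray by return and return
   -- duplicate lines collapse into a single array key
   return the keys of tArray -- note: key order is arbitrary
end uniqueLines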
lcSQL Software <http://www.lcsql.com>
On Sun, Jan 5, 2014 at 8:19 AM, Dr. Hawkins <dochawk at gmail.com> wrote:
> On Sat, Jan 4, 2014 at 8:54 PM, <dunbarx at aol.com> wrote:
> > And in any case it is simple and fast to delete duplicates, if that is
> > what is desired, in a few lines of code.
> > The idea of singling out one instance seems more like a job for "filter".
> I'd agree with that, too.
> But "a few lines of code" is almost always going to be slower than
> something actually built in.
> What looks fastest to me (not tested), assuming sorted data, is:
> put empty into oldLn
> repeat with i = the number of lines of stuff down to 1
>    put line i of stuff into newLn
>    if newLn = oldLn then delete line i of stuff
>    put newLn into oldLn
> end repeat
> With unsorted data, you could also do:
> repeat for each line theLn in stuff
>    if theLn is not among the lines of newStuff then put cr & theLn after newStuff
> end repeat
> Each of these, though, performs a search on every pass, even though that
> information was already at hand while the intrinsic sort ran.
> It would be nice to have
> repeat for each line theLn in stuff BACKWARDS
> which would allow a single pass through sorted data, appending only the
> lines that differ from their predecessor.
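> In the meantime, a forward "repeat for each" already gives a single pass
> over sorted data: keep the previous line and append only when the line
> changes (untested sketch; assumes no empty lines; newStuff and oldLn are
> placeholder names):
>
> put empty into newStuff
> put empty into oldLn
> repeat for each line theLn in stuff
>    if theLn is not oldLn then put theLn & return after newStuff
>    put theLn into oldLn
> end repeat
> delete the last char of newStuff -- trailing return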
> Dr. Richard E. Hawkins, Esq.
> (702) 508-8462
> use-livecode mailing list
> use-livecode at lists.runrev.com