Finding duplicates in a list
Bill Marriott
wjm at wjm.org
Wed Jan 9 07:38:14 EST 2008
Well if *that's* what you want :) Here's the "ideal" format, where "index1"
is the original index and the subsequent numbers are duplicates.
It's even slightly faster:
on mouseUp
   put url "file:D:/desktop/dupeslist.txt" into tList
   put the milliseconds into tt
   -- first pass: map each value to a comma-separated list of its line numbers
   repeat for each line tCheck in tList
      add 1 to n
      put n & comma after tCheckArray[tCheck]
   end repeat
   -- second pass: keep only values that occurred on more than one line
   repeat for each key theKey in tCheckArray
      if the number of items in tCheckArray[theKey] > 1 then
         put theKey & tab & tCheckArray[theKey] after tListResult
         -- replace the trailing comma with a return
         put return into char -1 of tListResult
      end if
   end repeat
   put the milliseconds - tt & return & "number of files:" && \
         the number of lines in tList & return & return & tListResult
end mouseUp
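For readers outside LiveCode, the same two-pass idea (map each value to the list of line numbers where it occurs, then keep only the entries with more than one occurrence) can be sketched in Python; the function name and sample data here are illustrative, not from the original post:

```python
from collections import defaultdict

def find_duplicates(lines):
    """Map each value to its 1-based line numbers, then keep
    only the values that appear on more than one line."""
    positions = defaultdict(list)
    for index, value in enumerate(lines, start=1):
        positions[value].append(index)
    # keep value -> [index1, index2, ...] for values seen 2+ times
    return {value: idxs for value, idxs in positions.items() if len(idxs) > 1}

# hypothetical sample data standing in for the checksum list
checksums = ["abc", "def", "abc", "ghi", "def", "abc"]
print(find_duplicates(checksums))
# prints {'abc': [1, 3, 6], 'def': [2, 5]}
```

Like the LiveCode handler, this is a single linear scan plus a filter over the resulting table, so it stays O(n) in the number of lines.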
"Ian Wood" <revlist at azurevision.co.uk> wrote
in message news:ED3C057D-7C3A-4541-89BD-C73F42E619E5 at azurevision.co.uk...
> The absolute *ideal* would be:
>
> checksum tab index1,index2,index3 etc.
>
> However, once I have a list of dupes I have to plug it back into the DB
> to get more info anyway, so the list of checksums by itself also works.
More information about the use-livecode mailing list