Speaking of Filter and Match...

J. Landman Gay jacque at hyperactivesw.com
Sun Mar 13 16:05:24 EDT 2022


On 3/12/22 8:54 PM, Roger Guay via use-livecode wrote:
> I have a field with about a thousand lines with many duplicate lines, and I want to delete the duplicates. Seems like this should be simple but I am running around in circles. Can anyone help me with this?

Making the list into an array is the easiest way, but as mentioned, it will destroy the original 
order. If the order is important, you can restore it with a custom sort function. Here are my 
test handlers:


on mouseUp
   put fld 1 into tData -- we keep this as a reference to the original order
   put tData into tTrimmedData -- this one will change
   split tTrimmedData by cr as set -- removes duplicates
   put keys(tTrimmedData) into tTrimmedData -- convert to a text list
   sort tTrimmedData numeric by origOrder(each,tData)
   put tTrimmedData into fld 1
end mouseUp

function origOrder pWord, @pData
   set the wholeMatches to true -- may not matter, depends on the data
   return lineOffset(pWord, pData)
end origOrder
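For comparison, the same order-preserving dedup can be done in a single pass with a "seen" set, which skips the sort step (and the per-line lineOffset scans it triggers) entirely. A sketch in Python rather than LiveCode, since it's the clearest way to show the idea; the function name is mine:

```python
def dedupe_preserve_order(lines):
    """Return lines with duplicates removed, keeping first-occurrence order."""
    seen = set()   # lines we have already emitted
    result = []
    for line in lines:
        if line not in seen:
            seen.add(line)     # remember it so later copies are skipped
            result.append(line)
    return result

# Example: duplicates drop out, original order of first occurrences survives
print(dedupe_preserve_order(["pear", "apple", "pear", "cherry", "apple"]))
```

Each line is kept the first time it appears, so the original order survives without needing a separate sort afterward; for a thousand-line field either approach is plenty fast, but this one stays linear as the data grows.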

Field 1 contains lines in random order with duplicates.

-- 
Jacqueline Landman Gay         |     jacque at hyperactivesw.com
HyperActive Software           |     http://www.hyperactivesw.com
