Memory garbage overrun on Windows server

Sean Cole (Pi) sean at
Thu May 27 13:37:02 EDT 2021

I have an LC app running on a server that collects CSV files from an email
server, analyses the data from each, stores that data to an SQL db, then
creates a PDF which it then uploads to an FTP server.

Each iteration of this is followed by my attempt to purge the data in all
local and global arrays and variables. The stack used to create the PDF is
also closed, destroyed and deleted.
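For reference, the per-iteration cleanup looks roughly like this (a sketch
only; the stack and variable names here are placeholders, not the real ones):

```livecode
-- run after each email has been processed (names are placeholders)
command cleanupIteration
   global gEmailData, gReportData
   -- empty the global arrays
   put empty into gEmailData
   put empty into gReportData
   -- remove a global variable entirely
   delete variable gReportData
   -- close and remove the stack used to build the PDF
   close stack "pdfBuilder"
   delete stack "pdfBuilder"
end cleanupIteration
```

Despite all of that, the engine never seems to hand the memory back to the OS.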

But with each email that gets processed, the memory usage goes up by about
120 MB. That would be fine if the memory were later released, but it only
ever goes up. It only plateaus once it reaches 100%, which for a 2 GB server
instance happens after just 10 emails. After that, Resource Monitor begins to
record spikes in hard faults/sec, and Task Manager shows the memory usage
climb to 14 GB of 14 GB committed memory, at which point the system finally
gives up and kills the app.

Looking through old posts and bug reports, this does seem related to a known
bug, but that report refers to macOS and heap fragmentation. Is this a
problem on Windows as well?

Is there some other way of purging this memory intermittently so that the app
can continue without my having to keep closing down LC or the standalone and
restarting it? It rather defeats the object of having a server app when I
have to babysit it through every run. Even once it has completed, it never
frees the memory it used until it is killed and restarted. I'm going to
increase the memory allocation for the server instance to 4 GB (at
considerable cost), which will alleviate some of the pressure but only delays
the inevitable.
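In the meantime, the only mitigation I can think of is to move the per-email
work into a short-lived worker process, so that Windows reclaims the memory
when that process exits. Something like the following sketch, where
worker.exe is a hypothetical standalone that handles one file and then quits:

```livecode
-- hypothetical supervisor loop: each email is handled by a fresh
-- worker process, so its memory is returned to the OS on exit
repeat for each line tEmailFile in tQueuedFiles
   -- shell() blocks until worker.exe finishes with this file
   get shell("worker.exe" && quote & tEmailFile & quote)
end repeat
```

That feels like a workaround rather than a fix, though.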

It does seem we need some kind of bare knuckle approach to memory purging
in LC somewhere, in my humble opinion. But, in the meantime, if I can get
some advice and background it may help to manage it for now.

Pi Digital

More information about the use-livecode mailing list