Asynchronous Server Design
david at viral.academy
Tue Jul 21 13:23:01 CEST 2015
Thanks Mike / Richard. Brief comments below:
On 20 July 2015 at 17:33, Mike Kerner <MikeKerner at roadrunner.com> wrote:
> The main reasons for doing things asynchronously are
> 1) to improve performance in a high-transaction environment
> 2) to keep collisions to a minimum.
> In your case, it does not appear that either is going to happen, so I
> wouldn't bother.
I'm not so sure. Say I want to serve up impromptu content at a conference
behind a LAN - there may be anywhere from 30 people in a class to hundreds
of people wanting to interact with the content at the same time.
The idea is that the static content will be loaded into custom properties,
and any new authored content will either be stored as custom props or
exported to disk. Putting the stack on a USB stick will give people the
entire site to take away.
I'd like to know more about collisions - do you mean that multiple requests
for the same resource could clobber the innards of the handler routing?
> On Mon, Jul 20, 2015 at 12:03 PM, Richard Gaskin <
> ambassador at fourthworld.com
> > wrote:
> > If static there are many good solutions for that.
1. Server in a file
2. Zero installation - click and run
3. Custom standalone servers if needed
4. Simple, flexible scripting - create your own routes in a LiveCode stack
and drop them in
5. Authoring portability - work in groups on a site, then distribute a
single (stack / array / JSON) file that contains everything you need to
run / view the site at home
6. Clean uninstall - delete file from laptop
7. Cross platform
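To make point 1 concrete, here is a minimal sketch of a "server in a file": a single stack script that accepts connections and answers every request from static content held in a custom property. The port, handler names, and the cSiteContent property are all invented for illustration - this is not a complete HTTP implementation.

```livecode
on startServer
   -- listen for incoming connections; the engine calls clientConnected
   -- for each one, so the handler itself returns immediately
   accept connections on port 8080 with message "clientConnected"
end startServer

on clientConnected pSocket
   -- read the request line, again with a callback rather than blocking
   read from socket pSocket until CRLF with message "requestReceived"
end clientConnected

on requestReceived pSocket, pRequest
   -- serve the page stored in a custom property of this stack
   put the cSiteContent of me into tBody
   put "HTTP/1.1 200 OK" & CRLF & "Content-Length: " & \
         the length of tBody & CRLF & CRLF & tBody into tReply
   write tReply to socket pSocket
   close socket pSocket
end requestReceived
```

Because everything lives in the stack script and its custom properties, the whole "site" travels as one file, which is the portability point in the list above.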
My need came out of wishing to script server access to p2p file systems,
and to distribute this to a bunch of researchers (some non-technical)
across the internet. I also want to get this up and running at forthcoming
conferences on a presenter's laptop (not necessarily my own).
Are there any similar things out there? I would love to take a look - not
quite sure what to search for :)
> This is one of the nice conveniences of CGI: it lets Apache handle most
> of the requests with no extra coding, since any CSS, JS, or image files
> probably aren't changing for each visitor, and Apache is pretty good at
> delivering static content.
Yes - but I'm not going to use Apache for this. It's a big, ugly beast.
> > - so is there a design consideration for any long-running processes
> > here - not just in terms of figuring out a better way to do it on the
> > client, but with regard to the scripting of the server handler
> > above?
> I have a couple of workgroup apps running as CGIs, and they support a few
> features which could take as long as 30 seconds if run as a single
> request. I found this problematic for the user experience because we're
> not giving them any feedback while it's happening (in addition to
> timeouts), so I changed it to support paging.
> Now the CGI includes an argument for the range of records the command will
> operate on. The client first obtains the number of records from the
> server, divides them by a number that means only a hundred or so will be
> processed per batch, and then sends a series of requests to the server in
> which each request includes the range of records to operate on. Now the
> client UI has regular periodic feedback letting the user know what's
> happening, and the socket never times out.
Splitting a long process into pieces or paging is a good strategy.
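Client-side, the paging approach Richard describes might look something like the sketch below: ask the server how many records there are, then work through them in batches of about a hundred, giving the user feedback between batches. The URL, parameter names, and button are all invented for illustration.

```livecode
on processAllRecords
   -- first request: how many records are there in total?
   put url "http://myserver/cgi?action=count" into tTotal
   put 100 into tBatchSize
   repeat with tStart = 1 to tTotal step tBatchSize
      put min(tStart + tBatchSize - 1, tTotal) into tEnd
      -- each request covers only one small range of records,
      -- so no single request runs long enough to time out
      get url ("http://myserver/cgi?action=process&from=" & tStart \
            & "&to=" & tEnd)
      -- update the UI between batches so the user sees progress
      set the label of button "Progress" to tEnd & " of " & tTotal
   end repeat
end processAllRecords
```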
> My understanding is that the dispatch command has no effect on blocking vs
> non-blocking; it's still blocking, akin to calling a handler inline.
Interesting - I remember testing it ages ago and it seemed to work like
send in time?
> The "send" command can give non-blocking behavior when the "in <time>"
> option is used, but for processor-intensive tasks it's of minimal value
> within a single LC instance because once the handler it calls is running
> all other processing stops until that handler completes.
My understanding is that this may still be useful: send-in-time lets the
server write a reply back over the open socket before starting the heavy
work, so the client does not hang there. Subsequent calls may then block
(I'm not sure this is the case), but it is still useful?
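That pattern might look like the sketch below: reply over the socket straight away, then defer the heavy work with "send ... in time" so the request handler returns immediately. As Richard notes, once doHeavyWork starts, the engine is busy until it returns, so other requests arriving meanwhile will wait. Handler names and the script-local are invented.

```livecode
local sPendingRequest

on requestReceived pSocket, pRequest
   put pRequest into sPendingRequest
   -- acknowledge immediately so the client is not left hanging
   write "HTTP/1.1 202 Accepted" & CRLF & CRLF to socket pSocket
   close socket pSocket
   -- defer the heavy work so this handler returns right away
   send "doHeavyWork" to me in 0 milliseconds
end requestReceived

on doHeavyWork
   -- long-running task on sPendingRequest goes here; note that it
   -- still blocks the engine for its whole duration while it runs
end doHeavyWork
```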
> In the absence of multithreading, this can be mitigated through
> multiprocessing. The engine appears to handle socket I/O in a non-blocking
> way when callbacks are used, so handing off CPU-intensive tasks to other
> instances would seem a reasonable way to let the main daemon focus on
> network I/O while leaving the heavy lifting to child processes.
Yes, that would be the next step. Once we can roll our own custom server
apps quickly, we can explore getting them to dispatch stuff to each other
over UDP sockets etc. It's easy now, but becomes useful when we have an
authoring environment that includes a robust, fast, but limited server.
I can then see us testing things out locally, then, when things are
working, scripting the deployment of servers and load balancing, placing
these LiveCode standalone servers behind some more robust front-end server.
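Handing work between local instances over UDP, as suggested above, might be sketched like this - the port number and handler names are invented, and a real design would also need to get results back (e.g. over a second datagram or a TCP socket):

```livecode
-- in the worker instance: listen for tasks on a local UDP port
on startWorker
   accept datagram connections on port 9099 with message "taskReceived"
end startWorker

on taskReceived pSocket, pTaskData
   -- do the heavy lifting here, in a separate process, so the
   -- main daemon stays free for network I/O
end taskReceived

-- in the main daemon: fire and forget a task to the worker
on handOff pTaskData
   open datagram socket to "127.0.0.1:9099"
   write pTaskData to socket "127.0.0.1:9099"
   close socket "127.0.0.1:9099"
end handOff
```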