Asynchronous Server Design

Mike Kerner MikeKerner at roadrunner.com
Mon Jul 20 12:33:05 EDT 2015


The main reasons for doing things asynchronously are
1) to improve performance in a high-transaction environment
2) to keep collisions to a minimum.

In your case, it does not appear that either is going to happen, so I
wouldn't bother.

On Mon, Jul 20, 2015 at 12:03 PM, Richard Gaskin <ambassador at fourthworld.com> wrote:

> David Bovill wrote:
>
> >    - Is it worth designing a LiveCode server to use asynchronous
> > calls to handlers rather than normal synchronous processing?
> >
> > First off - the way this is going to be done will not use any io
> > (file, shell or internet / socket calls) - all the data is going
> > to be in memory / custom properties. This restricts the use case
> > to simple sites for now - but it suits the current purpose.
> ...
> > The question is: is there any point?
>
> Good question, because if there's no file I/O there would be no saving,
> and no saving implies the data is either static or unimportant.
>
> If static there are many good solutions for that.
>
> If unimportant it would seem difficult to justify the R&D time.
>
>
> > Firstly, messages come into the server and are then immediately
> > dispatched by the engine to the event handlers we craft - so is
> > this all the asynchronicity needed?
> >
> > Or are there some other tricks, akin to the things node does, for
> > when there are long-running processes? Say there is a handler:
> >
> > command longRunningFibonacci someInput, someSocket
> >    -- do something that takes a very long time,
> >    -- then hand a result back to the caller
> >    return someResult
> > end longRunningFibonacci
> >
> >
> > A browser fires off around 20 calls to the server to load a complex
> > page, and they all hit at once.
>
> This is one of the nice conveniences of CGI:  it lets Apache handle most
> of the requests with no extra coding, since any CSS, JS, or image files
> probably aren't changing for each visitor, and Apache is pretty good at
> delivering static content.
>
> And for the subset of requests that require special processing that would
> need LiveCode or some other custom programming, each request prompts Apache
> to create a new instance of the CGI app, providing concurrency with no
> additional programming required and only a relatively small additional cost
> in RAM and init time compared to system threads.
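>
> Concretely, each of those instances just runs a plain LiveCode Server
> script top to bottom and exits. A minimal sketch (the output line is
> only illustrative):
>
>   <?lc
>   -- each request spawns a fresh engine that runs this script once and
>   -- exits; nothing here is shared with, or blocked by, other requests
>   put "Handled at" && the seconds && "for" && $_SERVER["REMOTE_ADDR"]
>   ?>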
>
> Long processes are different, though, since they can not only result in
> timeouts but also confuse the user accustomed to quick responses from
> servers:
>
> > - so is there a design consideration for any long-running processes
> > here - not just in terms of figuring out a better way to do it on
> > the client, nor with regard to the scripting of the server handler
> > above?
>
> I have a couple of workgroup apps running as CGIs, and they support a few
> features which could take as long as 30 seconds if run as a single
> request.  I found this problematic for the user experience because we're
> not giving them any feedback while it's happening (in addition to
> timeouts), so I changed it to support paging.
>
> Now the CGI includes an argument for the range of records the command will
> operate on.  The client first obtains the number of records from the
> server, splits that total into batches of a hundred or so records each,
> and then sends a series of requests to the server, each specifying the
> range of records to operate on.  Now the client UI gets regular, periodic
> feedback letting the user know what's happening, and the socket never
> times out.
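>
> In very rough LiveCode the client side of that loop looks something
> like the sketch below - the URL, query parameters, and the
> updateProgress handler are placeholders for illustration, not the
> actual app:
>
>   command processAllRecords
>      local tTotal, tBatchSize, tStart
>      -- ask the server how many records there are
>      put url "http://example.com/cgi-bin/app.lc?action=count" into tTotal
>      put 100 into tBatchSize
>      repeat with tStart = 1 to tTotal step tBatchSize
>         -- each request covers one small range, so no single request
>         -- runs long enough to hit a timeout
>         get url ("http://example.com/cgi-bin/app.lc?action=process" & \
>               "&from=" & tStart & "&to=" & min(tStart + tBatchSize - 1, tTotal))
>         -- stand-in for whatever progress feedback the client shows
>         updateProgress tStart, tTotal
>      end repeat
>   end processAllRecords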
>
>
> > Does the fact that the engine is dispatching calls mean we can forget
> > about this - or, as I think is the case, should I consider dispatching /
> > offloading this long-running task to another process, and returning
> > something to the browser? If so, do I keep the socket open, or ???
>
> My understanding is that the dispatch command has no effect on blocking vs
> non-blocking; it's still blocking, akin to calling a handler inline.
>
> The "send" command can give non-blocking behavior when the "in <time>"
> option is used, but for processor-intensive tasks it's of minimal value
> within a single LC instance because once the handler it calls is running
> all other processing stops until that handler completes.
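>
> To illustrate (the handler names are just placeholders), something
> like this hands control back to the caller right away but buys no
> real concurrency:
>
>   on handleRequest pSocket
>      -- "send ... in <time>" queues the call and returns immediately...
>      send "longRunningFibonacci" to me in 0 milliseconds
>      -- ...but the engine is single-threaded, so once that handler
>      -- starts, nothing else runs in this instance until it finishes
>   end handleRequest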
>
> In the absence of multithreading, this can be mitigated through
> multiprocessing. The engine appears to handle socket I/O in a non-blocking
> way when callbacks are used, so handing off CPU-intensive tasks to other LC
> instances would seem a reasonable way to let the main daemon focus on
> network I/O while leaving the heavy lifting to child processes.
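>
> As a rough sketch of that shape - the port number and handler names
> are invented for the example - the daemon side might look like:
>
>   on startDaemon
>      -- callback-driven accept: the engine keeps servicing other
>      -- messages while it waits for connections
>      accept connections on port 8080 with message "clientConnected"
>   end startDaemon
>
>   on clientConnected pSocket
>      -- likewise a non-blocking read; "gotRequest" fires when data arrives
>      read from socket pSocket until CRLF with message "gotRequest"
>   end clientConnected
>
>   on gotRequest pSocket, pData
>      -- here the daemon would relay pData to a worker LC instance
>      -- (for example over another local socket) instead of crunching
>      -- it itself, then immediately resume servicing other clients
>      read from socket pSocket until CRLF with message "gotRequest"
>   end gotRequest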
>
> --
>  Richard Gaskin
>  Fourth World Systems
>  Software Design and Development for the Desktop, Mobile, and the Web
>  ____________________________________________________________________
>  Ambassador at FourthWorld.com                http://www.FourthWorld.com
>
>
> _______________________________________________
> use-livecode mailing list
> use-livecode at lists.runrev.com
> Please visit this url to subscribe, unsubscribe and manage your
> subscription preferences:
> http://lists.runrev.com/mailman/listinfo/use-livecode
>



-- 
On the first day, God created the heavens and the Earth
On the second day, God created the oceans.
On the third day, God put the animals on hold for a few hours,
   and did a little diving.
And God said, "This is good."


