LiveNode Server

David Bovill david at viral.academy
Wed Apr 1 19:06:17 EDT 2015


On 1 April 2015 at 16:55, Richard Gaskin <ambassador at fourthworld.com> wrote:
>
>
> David Bovill wrote:
>
> > The question is can you create in LiveCode an asynchronous, event-driven
> > architecture?
..
>
> With socket I/O apparently handled asynchronously when the "with
> <message>" option is used, this is a very tempting pursuit.
>
> The challenge arises from the recipient of the message: it will be
> running in the same thread as the socket broker, causing a backlog of
> queued messages; requests are received well enough, but responses
> then have to be processed one at a time.

Ah - OK. So the first response would be fine - but not the second.

>
> The challenge there is that the LC VM is not currently forkable, so we
> can't pass a socket connection from the broker to a child worker process.


I am not quite sure what not being forkable means here - can you explain?
What is special about LC here compared with other VMs?

>
>
> Instead, we have to look at more primitive means, which tend toward two
camps (though I'm sure many others are possible):
>
> a) Consistent Socket Broker
>    The socket broker handles all network I/O with all clients, and
>    feeds instructions for tasks to workers via sockets, stdIn, or
>    even files (/dev/shm is pretty fast even though it uses simple
>    file routines).
>
>    The upside here is that any heavy processing is distributed among
>    multiple workers, but the downside is that all network I/O still
>    goes through one broker process.
>
>
> b) Redirects to Multiple Workers
>    Here the main socket broker listening on the standard port only
>    does one thing: it looks at a list of available workers (whether
>    through simple round-robin, or something smarter like load
>    reporting), each of which is listening on a non-standard port,
>    and sends the client a 302 redirect to the server with that
>    non-standard port, so each worker handles the socket comms
>    directly and only for a subset of clients. If each worker also has
>    its own collection of sub-workers as in option a) above, this
>    could greatly multiply the number of clients served concurrently.
>
>    The upside is that all aspects of load are distributed among
>    multiple processes, even socket I/O, but the downside is the
>    somewhat modest but annoying requirement that each request
>    be submitted twice, once to the main broker and again to the
>    redirected instance assigned to handle it.

OK - so a graph of servers communicating over sockets is better than a
single central hub-and-spoke scenario.

From the FastCGI docs: http://www.fastcgi.com/drupal/node/6?q=node/16

With session affinity you run a pool of application processes and the Web
server routes requests to individual processes based on any information
contained in the request. For instance, the server can route according to
the area of content that's been requested, or according to the user. The
user might be identified by an application-specific session identifier, by
the user ID contained in an Open Market Secure Link ticket, by the Basic
Authentication user name, or whatever. Each process maintains its own
cache, and session affinity ensures that each incoming request has access
to the cache that will speed up processing the most.

>
> I think the sweet spot for an entirely LiveCode application server would
> be those apps where backend processing load exceeds network I/O.

Yes - I see no real use for LiveCode as a standalone server; I'd use Node. I
want to be able to use LiveCode within a mixed coding environment and get
LiveCode to do stuff there - for instance image processing. I want to be
able to deploy it using NPM, so it's easy to set up.

>
> As interesting as these things are, I have to admit I currently have no
> practical need for such a creature, so my brief experiments have been few
> and limited to an occasional Sunday with a free hour on my hands. :)

Hell - I do. I'd be able to write all sorts of stuff for real-world
applications if I could choose to write a routine in LiveCode and switch to
something else down the line if needed. The main use case is to work in
teams with other mainstream devs, and to choose the language that suits
the problem - so polyglot server programming.

>
> If you have such a need it would be interesting to see how these things
> flesh out under real-world load.
>
>
> > Assuming there is an issue with the above, the next question is
> > that given that Node already can be extended with C / C++
> > extensions api - so why not treat Livecode as simply a Node
> > extension and let Node do the async event driven I/O that it is
> > so good at?
>
> I have no direct experience with either Node.js or Nginx, so I'm out of
> my depth here - but that won't stop me from conjecturing <g>:

Addons are dynamically linked shared objects. They can provide glue to C
and C++ libraries (https://nodejs.org/api/addons.html).


>
> My understanding is that LiveCode, being single-threaded today, is
> limited to CGI, while Node.js and Nginx expect FastCGI (forkable) support.

This does not make sense to me - it's not single threading that is the
problem. Node is single-threaded too.

In the Node example I tried, a long Fibonacci sequence blocked the server
to any further incoming calls. To solve this you could pass the request to
another server and return asynchronously. Then the server could return to
other requests. However, more calls to the Fibonacci sequence would queue up
like you say. Is that a problem though?

That's the sort of thing that, yes, you would handle with some sort of broker
and farm of LC server processes - but that is a second-order problem. The
main one is keeping normal file serving fast. The second problem is
making installation on any server easy.

I think the main problems are simply dealt with by having LiveCode available
as a Node Addon - https://nodejs.org/api/addons.html. Or am I missing
anything?

The two aspects of that that seem relevant are:

   1. V8 JavaScript, a C++ library. Used for interfacing with JavaScript:
   creating objects, calling functions, etc.
   2. libuv <https://github.com/joyent/libuv>, a C event loop library.
   Any time one needs to wait for a file descriptor to become readable, wait
   for a timer, or wait for a signal to be received, one will need to
   interface with libuv. That is, if you perform any I/O, libuv will need
   to be used.

The first would seem to be handled by the core LC team as they integrate
JavaScript with LiveCode Builder?

The second, I think, is the work that would need doing? As far as I get it,
LiveCode would need to return control to Node's event loop any time
any I/O was performed. I'm not sure why this is only about I/O and not
other long-running calculations - but it's something in this area - right?
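As a side note on that last point: a long-running calculation can
cooperate with the event loop too, if it is written to yield between
slices of work. A minimal sketch (the chunk size is an arbitrary
assumption):

```javascript
// A CPU-bound loop can also cooperate with the event loop: do a slice of
// work, then yield with setImmediate so queued I/O and timers get a turn.
// The chunk size here is an arbitrary assumption.
function sumTo(n, done) {
  let total = 0;
  let i = 0;
  const chunkSize = 100000; // iterations per slice

  (function step() {
    const end = Math.min(i + chunkSize, n);
    for (; i < end; i++) total += i;
    if (i < n) {
      setImmediate(step); // yield back to the event loop
    } else {
      done(total); // sum of 0..n-1
    }
  })();
}
```

The same idea is presumably what an LC-as-addon integration would need:
hand control back to libuv between slices rather than holding the thread.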

>
> If you can get LiveCode to run well under Node.js I'd be very interested
> to see what you come up with.
>

It's great that since LiveCode is open source we can look to answer these
things ourselves :)



More information about the use-livecode mailing list