LiveNode Server

Richard Gaskin ambassador at fourthworld.com
Wed Apr 1 11:55:34 EDT 2015


GMTA: the folder where I keep my experimental server stacks is named 
"LiveNode". :)

Good stuff here - a very useful and practical pursuit, IMO, in a world 
where one of the largest MMOs is also written in a high-level scripting 
language (EVE Online, in Python), so we know it's more than possible to 
consider a full-stack server solution written entirely in LiveCode:

David Bovill wrote:

 > The question is can you create in Livecode an asynchronous event-driven
 > architecture? Livecode is built after all around an event loop, and
 > through commands like dispatch, send in time, and wait with messages,
 > it is possible to create asynchronous call back mechanisms - so why
 > can we not create a node-like server in Livecode?
 >
 > Perhaps the answer lies in the nature of the asynchronous commands
 > that are available? Still, I don't see why this is an issue. From
 > my experience of coding an HTTP server in Livecode - I cannot
 > understand why it should not be possible to accept a socket
 > connection, dispatch a command, and immediately return a result on
 > the connected socket. The event loop should take over and allow
 > new connections / data on the socket, and when the dispatched
 > command completes it will return a result that can then be sent
 > back down the open socket.

I've been pondering similar questions myself:
<http://lists.runrev.com/pipermail/use-livecode/2015-February/211536.html>
<http://lists.runrev.com/pipermail/use-livecode/2015-March/212281.html>

Pierre's been exploring this even longer:
<http://lists.runrev.com/pipermail/metacard/2002-September/002462.html>

With socket I/O apparently handled asynchronously when the "with 
<message>" option is used, this is a very tempting pursuit.

The challenge arises with the recipient of the message: it runs in the 
same thread as the socket broker, so messages back up in the queue; 
requests are received well enough, but responses must then be processed 
one at a time.

Down the road we may get some form of threading, though that's not 
without programming complications, and threads are somewhat expensive in 
terms of system resources (though there are lighter-weight options like 
the "green threads" at least one Python implementation uses).

Working with what we have, Mark Talluto, Todd Geist, and I (and probably 
others) have been independently exploring concurrency options that use 
multiprocessing in lieu of multithreading, with a single connection 
broker feeding work to any number of worker instances.

The challenge there is that the LC VM is not currently forkable, so we 
can't pass a socket connection from the broker to a child worker process.

Instead, we have to look at more primitive means, which tend toward two 
camps (though I'm sure many others are possible):

a) Consistent Socket Broker
    The socket broker handles all network I/O with all clients, and
    feeds instructions for tasks to workers via sockets, stdIn, or
    even files (/dev/shm is pretty fast even though it uses simple
    file routines).  A rough sketch of this plumbing follows after
    option b) below.

    The upside here is that any heavy processing is distributed among
    multiple workers, but the downside is that all network I/O still
    goes through one broker process.


b) Redirects to Multiple Workers
    Here the main socket broker listening on the standard port only
    does one thing: it looks at a list of available workers (whether
    through simple round-robin, or something smarter like load
    reporting), each of which is listening on a non-standard port,
    and sends the client a 302 redirect to the server at that
    non-standard port, so each worker handles the socket comms
    directly, and for only a subset of clients.  If each worker also
    has its own collection of sub-workers as in option a) above, this
    could greatly multiply the number of clients served concurrently.

    The upside is that all aspects of load are distributed among
    multiple processes, even socket I/O, but the downside is the
    somewhat modest but annoying requirement that each request
    be submitted twice, once to the main broker and again to the
    redirected instance assigned to handle it.
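
For illustration, here's a rough sketch of the option a) plumbing, 
feeding the workers over local sockets.  The worker addresses, the 
tab-delimited task format, and the handler names are all assumptions 
of mine rather than a finished protocol; assume the client-facing 
handlers from the earlier sketch end by calling dispatchTask instead 
of replying directly:

local sWorkers, sNext

on startBroker
   -- workers are assumed to be separate LC processes already
   -- listening on these local ports
   put "localhost:9001,localhost:9002" into sWorkers
   put 1 into sNext
   repeat for each item tWorker in sWorkers
      open socket to tWorker
   end repeat
   accept connections on port 8080 with message "clientConnected"
end startBroker

on dispatchTask pClientSocket, pRequest
   -- simple round-robin; load reporting would be smarter
   put item sNext of sWorkers into tWorker
   put (sNext mod the number of items of sWorkers) + 1 into sNext
   -- tag the task with the client socket so the reply can be
   -- routed back when this worker answers
   write pClientSocket & tab & pRequest & return to socket tWorker
   read from socket tWorker until return with message "workerReplied"
end dispatchTask

on workerReplied pWorkerSocket, pReply
   set the itemDelimiter to tab
   write item 2 to -1 of pReply to socket (item 1 of pReply)
   close socket (item 1 of pReply)
end workerReplied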
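
And an equally rough sketch of the option b) front end, whose only job 
is to answer each request with a redirect to a worker's port.  The 
worker ports, the public host name, and port 8080 standing in for the 
standard port are all placeholders:

local sWorkerPorts, sNext

on startBroker
   put "9001,9002,9003" into sWorkerPorts
   put 1 into sNext
   accept connections on port 8080 with message "clientConnected"
end startBroker

on clientConnected pSocket
   read from socket pSocket until CRLF with message "requestReceived"
end clientConnected

on requestReceived pSocket, pRequestLine
   -- round-robin over the ports the workers listen on
   put item sNext of sWorkerPorts into tPort
   put (sNext mod the number of items of sWorkerPorts) + 1 into sNext
   -- word 2 of "GET /index.html HTTP/1.1" is the requested path;
   -- www.example.com stands in for this server's public name
   put word 2 of pRequestLine into tPath
   put "HTTP/1.1 302 Found" & CRLF into tReply
   put "Location: http://www.example.com:" & tPort & tPath & CRLF & CRLF after tReply
   write tReply to socket pSocket
   close socket pSocket
end requestReceived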


Purpose-built application servers can indeed be made with the LiveCode 
we have today and can handle reasonable amounts of traffic, more than 
one might think for a single-threaded VM.

But all systems have scaling limits, and the limits with LC would be 
encountered sooner than with some other systems built from the ground up 
as high-load servers.

IMO such explorations can be valuable for specific kinds of server apps, 
but as tempting as it is I wouldn't want to build a general purpose Web 
server with LiveCode.  In addition to the scope of the HTTP 1.1 spec 
itself, Web stuff consists of many small transactions, in which a single 
page may require a dozen or more requests for static media like CSS, JS, 
images, etc., and Apache and Nginx are really good solutions that 
handle those needs well.

I think the sweet spot for an entirely LiveCode application server would 
be those apps where backend processing load exceeds network I/O.

As interesting as these things are, I have to admit I currently have no 
practical need for such a creature, so my brief experiments have been 
few and limited to an occasional Sunday with a free hour on my hands. :)

If you have such a need it would be interesting to see how these things 
flesh out under real-world load.


 > Assuming there is an issue with the above, the next question is
 > that given that Node already can be extended with its C / C++
 > extension API - so why not treat Livecode as simply a Node
 > extension and let Node do the async event driven I/O that it is
 > so good at?

I have no direct experience with either Node.js or Nginx, so I'm out of 
my depth here - but that won't stop me from conjecturing <g>:

My understanding is that LiveCode, being single-threaded today, is 
limited to CGI, while Node.js and Nginx expect FastCGI (forkable) support.

If you can get LiveCode to run well under Node.js I'd be very interested 
to see what you come up with.

-- 
  Richard Gaskin
  Fourth World Systems
  Software Design and Development for the Desktop, Mobile, and the Web
  ____________________________________________________________________
  Ambassador at FourthWorld.com                http://www.FourthWorld.com



