Livecode server + NGINX ?
Richard Gaskin
ambassador at fourthworld.com
Wed Jan 21 11:37:57 EST 2015
Peter W A Wood wrote:
> Richard is correct that nginx does not support CGI. It will forward
> requests to an upstream server or a number of upstream servers. I
> support one application that has been running for a couple of years
> that forwards CGI requests to a second server. (The second server is
> Cheyenne from Softinnov with which you may be familiar).
>
> I have also used nginx to distribute requests to a number of upstream
> servers when the upstream server could only process one request at
> a time. In my case, I spread the load over four servers. This load
> balancing feature of nginx uses a simple approach in the free version;
> I believe the “not free” version has more sophisticated load
> balancing.
A few months ago I was experimenting with multi-processing in LiveCode
as an alternative to multi-threading. The literature on each notes the
higher overhead of the former compared to the latter, but also that the
overhead is not as significant as one might think. In many cases
multi-processing allows for less complex code than multi-threading by
virtue of being able to rely on OS partitioning of memory and CPU
resources rather than having to manage all of that internally via threads.
In those early tests I was interested in seeing just how many requests I
could throw at a single non-threaded LC-based daemon listening on port
80. This required a VPS, of course, since shared hosts generally don't
allow always-on processes.
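For what it's worth, the core of such a daemon is small. This isn't the
actual test code, just a rough sketch with made-up handler names and no
error handling, assuming a standalone engine where the "startup" message
fires (the test daemon simply rebroadcast each incoming request to every
connected client, as described below):

local sClients   -- return-delimited list of connected socket IDs

on startup
   accept connections on port 80 with message "clientConnected"
   -- a faceless engine would also need something like
   -- "wait until gQuit with messages" here to stay alive
end startup

on clientConnected pSocket
   put pSocket & return after sClients
   read from socket pSocket until return with message "requestReceived"
end clientConnected

on requestReceived pSocket, pRequest
   -- rebroadcast to every connected client, then queue the next read
   repeat for each line tClient in sClients
      write pRequest to socket tClient
   end repeat
   read from socket pSocket until return with message "requestReceived"
end requestReceived

on socketClosed pSocket
   filter sClients without pSocket   -- drop clients that disconnect
end socketClosed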
I was surprised by the amount of traffic it could handle. I had three
clients hammering it with requests as fast as they could, as little as
5 ms apart (though given the overhead of TCP that was merely a
theoretical limit; I don't think any of my systems could round-trip
requests that fast once latency and the like were factored in). The
server held up admirably, broadcasting each request to all three
clients faster than all but one of the clients could keep up.
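The clients in that test were little more than a loop in a button of a
small test stack. Again only a sketch, with a made-up host name and
counts:

on mouseUp
   put "testserver.example.com:80" into tHost
   open socket to tHost
   read from socket tHost until return with message "gotBroadcast"
   repeat 10000 times
      write "ping" && the long seconds & return to socket tHost
      wait 5 milliseconds with messages   -- the ~5 ms floor noted above
   end repeat
   close socket tHost
end mouseUp

on gotBroadcast pSocket, pData
   -- keep draining whatever the server sends back
   read from socket pSocket until return with message "gotBroadcast"
end gotBroadcast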
This suggested an extension of the experiment I've not had time for, in
which I'd build a modest quad-core system (Intel's J1900 would be an
ideal CPU for this, as would some similarly-priced AMD quad-cores), with
one broker daemon listening on port 80 for incoming requests and routing
them to any of three other daemons listening for the broker on internal
ports. In fact, communication between the broker and the workers need
not even be via sockets; file polling could be quite efficient on a
system with excellent caching like Linux, especially with an SSD,
similar to the approach taken by some distributed file systems.
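Very roughly, and again with error handling and the reply relay left
out, a socket-based broker might look something like this (the worker
ports are just placeholders; the file-polling variant would simply swap
the forwarding step for writing the request into a spool directory the
workers watch):

local sWorkerPorts, sNextWorker   -- hypothetical internal worker ports

on startup
   put "8081,8082,8083" into sWorkerPorts
   put 1 into sNextWorker
   accept connections on port 80 with message "clientConnected"
end startup

on clientConnected pSocket
   -- assumes HTTP-style requests ending with a blank line
   read from socket pSocket until CRLF & CRLF with message "routeRequest"
end clientConnected

on routeRequest pSocket, pRequest
   -- pick the next worker round-robin and forward the raw request
   put "127.0.0.1:" & item sNextWorker of sWorkerPorts into tWorker
   put (sNextWorker mod the number of items in sWorkerPorts) + 1 \
         into sNextWorker
   open socket to tWorker
   write pRequest to socket tWorker
   -- relaying the worker's reply back over pSocket, closing both
   -- sockets, and keeping concurrent connections to one worker
   -- distinct (e.g. "127.0.0.1:8081|someID") are left out here
end routeRequest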
With this setup the broker merely hands requests off to other processes
and never does any of the heavy lifting itself; that work is done by the
worker daemons, which handle only those requests and nothing more, since
the broker takes care of all the client comms.
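A matching worker is even simpler: it listens on one internal port, does
whatever slow work the request calls for, and writes the result back to
the broker. Another sketch, same caveats:

on startup
   accept connections on port 8081 with message "brokerConnected"
end startup

on brokerConnected pSocket
   read from socket pSocket until CRLF & CRLF with message "handleRequest"
end brokerConnected

on handleRequest pSocket, pRequest
   -- the actual heavy lifting would go here
   put "HTTP/1.1 200 OK" & CRLF & CRLF & "done" into tReply
   write tReply to socket pSocket
   close socket pSocket
end handleRequest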
I doubt such a system would completely solve the "C10k Problem" ("How
efficiently can a system handle 10,000 concurrent connections?") as well
as nginx does. But given the ever-increasing role of Python and other
high-level scripting languages in large-scale systems, I think it's
worth exploring further.
The most famous example is EVE Online, a massively multiplayer game
whose server is built with Stackless Python.
Also worth considering are Tahoe-LAFS, a distributed file system written
in Python, and Disco, a MapReduce framework similar to Hadoop that uses
Python for its jobs.
Ousterhout was right: scripting is the 21st-century solution for many
application needs.
Given all this, I believe we may well be able to use LiveCode in systems
of similar scope, with relatively modest enhancements to the engine.
And in the meantime, there are probably ever more clever ways we can
use the existing engine to achieve scaling far beyond current
expectations.
--
Richard Gaskin
Fourth World Systems
Software Design and Development for the Desktop, Mobile, and the Web
____________________________________________________________________
Ambassador at FourthWorld.com http://www.FourthWorld.com