LiveCode server + NGINX?

Richard Gaskin ambassador at fourthworld.com
Wed Jan 21 13:34:58 EST 2015


Robert Brenstein wrote:

> On 21.01.2015 at 8:37 Uhr -0800 Richard Gaskin wrote:
>>
>>A few months ago I was experimenting with multi-processing in
>>LiveCode as an alternative to multi-threading.  The literature on
>>each notes the higher overhead of the former compared to the latter,
>>but also that the overhead is not as significant as one might think.
>>In many cases multi-processing allows for less complex code than
>>multi-threading by virtue of being able to rely on OS partitioning
>>of memory and CPU resources rather than having to manage all of that
>>internally via threads.
>>
>>In those early tests I was interested in seeing just how many
>>requests I could throw at a single non-threaded LC-based daemon
>>listening on port 80.  This required a VPS, of course, since shared
>>hosts generally don't allow always-on processes.
>>
>
> Would you care to provide more technical details on those tests?

I wish it were more interesting, but it was the quickest test I could 
come up with to begin to measure system load on the server, so it's 
kinda rudimentary.

I used the example chat scripts here for both client and server:
<http://lessons.runrev.com/m/4071/l/12924-how-to-communicate-with-other-applications-using-sockets>

Normally all that happens with those scripts is that any string sent to 
the server is broadcast to each connected client, where the echo is 
added to a log field.
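For anyone who hasn't looked at those lesson scripts, the server side boils down to something like this (a from-memory sketch, not the exact lesson code; the handler names and port number are mine):

```livecode
on startChatServer
   -- listen for incoming connections; each new client
   -- triggers the chatConnected callback
   accept connections on port 1987 with message "chatConnected"
end startChatServer

on chatConnected pSocket
   -- queue the first read from the new client
   read from socket pSocket until return with message "chatReceived"
end chatConnected

on chatReceived pSocket, pMsg
   -- echo the received line to every connected client
   repeat for each line tClient in the openSockets
      if tClient contains ":" then  -- skip the listening socket itself
         write pMsg to socket tClient
      end if
   end repeat
   -- queue the next read from this client
   read from socket pSocket until return with message "chatReceived"
end chatReceived
```

Note that the reads are all callback-driven ("with message"), so a single non-threaded engine instance can service many sockets without blocking on any one of them.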

Here I modified the client to add a loop that continually bombards the 
server with messages as fast as it can, with no more than 5 ms between 
them, while also responding to echoes from the server to update the log 
field.  To keep the log field from growing without bound, I truncate it 
to show only the last 1,000 or so messages.
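Roughly, the client-side modification looked like this (again a sketch rather than the code I actually ran; the server address, message names, and field name are illustrative):

```livecode
constant kServer = "example.com:1987"

on startBombardment
   open socket to kServer with message "clientConnected"
end startBombardment

on clientConnected pSocket
   sendLoop
   -- queue the first read of echoes from the server
   read from socket pSocket until return with message "gotEcho"
end clientConnected

on sendLoop
   write "test message" & return to socket kServer
   -- re-queue ourselves; 5 ms here, later raised to ~25 ms
   send "sendLoop" to me in 5 milliseconds
end sendLoop

on gotEcho pSocket, pMsg
   put pMsg after field "Log"
   -- truncate the log to roughly the last 1000 lines
   if the number of lines of field "Log" > 1000 then
      delete line 1 to (the number of lines of field "Log" - 1000) \
            of field "Log"
   end if
   -- queue the next read
   read from socket pSocket until return with message "gotEcho"
end gotEcho
```

Using "send ... in <time>" rather than a blocking repeat loop keeps the client responsive to incoming echoes between sends, and makes the throttle interval a one-line change.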

The three clients used span a broad spectrum of performance, in an 
attempt to surface the slow-client issues well known in the gaming world:

- Slow:   Atom 230, 1.6 GHz, Lubuntu 14.04 LTS
- Medium: Core 2 Duo, 2.26 GHz, Mac OS 10.7
- Fast:   Haswell G3220, 3.0 GHz, Ubuntu 14.10

The slow Atom-based machine also runs an Apache server with OwnCloud and 
some custom LiveCode services for my network, so its throughput is even 
worse than one might imagine, though I did try to keep the network 
fairly quiet during the test.

Each test ran for two minutes, with each client sending and receiving 
messages as fast as it could for the duration of the test.

While the tests ran, I had a terminal logged into the server running 
top, so I could watch a near-real-time profile of the system with an eye 
on the LC daemon.

Most of the time the server never used more than 25% of CPU, with RAM 
usually below 45 MB.

I did sometimes find the server would freak out, with a spike in CPU and 
an apparent hang, but once I took the slow Atom client out of the test, 
performance became reliable again.  Given the synchronous nature of the 
test, server impairment from an unusually slow client is not surprising 
(and workarounds for it are the subject of many articles on game-server 
design).

After throttling all clients to send at intervals no shorter than about 
25 ms, every run of the test completed successfully, with the server 
handling all three clients gracefully on just a slice of CPU time and 
surprisingly little RAM.

I never spent the time to explore ways to mitigate TCP bottlenecks on 
the server side, though I hope to get back to those experiments in the 
spring.

TCP is generally reliable, but that robustness comes at a significant 
cost in throughput relative to UDP.  Many large-scale game servers send 
event frames over UDP for this reason, though at a certain scale they 
need to go much further, employing action prediction and other exotic 
techniques to keep up with the traffic.
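If memory serves, LiveCode exposes UDP through nearly the same socket syntax as TCP, so moving event frames off TCP wouldn't require much new code.  An untested sketch (host and port made up):

```livecode
on startFrames
   -- one datagram socket can carry many frames
   open datagram socket to "example.com:1988"
end startFrames

on sendFrame pFrame
   -- fire-and-forget: no delivery or ordering guarantees,
   -- so the application protocol must tolerate lost frames
   write pFrame to socket "example.com:1988"
end sendFrame
```

The trade-off is exactly the one above: you give up TCP's retransmission and ordering, and the app has to cope with dropped or out-of-order frames itself.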

In this test, the results may seem modest:  only three clients, and 
throttled.

But a steady stream of traffic - far in excess of anything we'd expect 
in any normal chat context - continually handled by a LiveCode daemon 
that never maxes the CPU nor even consumes much RAM seems quite promising.

Employing multiple worker daemons and making better use of asynchronous 
methods would likely yield satisfying performance for a wide range of 
real-time connected apps, all using the humble LiveCode engine we have 
in our hands right now.

-- 
  Richard Gaskin
  Fourth World Systems
  Software Design and Development for the Desktop, Mobile, and the Web
  ____________________________________________________________________
  Ambassador at FourthWorld.com                http://www.FourthWorld.com




More information about the use-livecode mailing list