Another server question (mixing node.js and LC)
Richard Gaskin
ambassador at fourthworld.com
Wed Feb 28 12:22:12 EST 2018
jonathandlynch wrote:
> I have another server question. I really like scripting with LC,
> because I can make improvements very quickly. This is important
> because of my very limited free time.
>
> But, I want to be able to handle many many concurrent server requests,
> the way node.js does.
Good timing. Geoff Canyon and I have been corresponding about a related
matter, comparing performance of LC Server with PHP.
PHP7 is such a radical improvement over PHP5 that it's almost unfair to
compare it to any scripting language now. But it also prompts me to
wonder: is there anything in those PHP speed improvements which could be
applied to LC?
But that's for the future, and for CGI. In the here-and-now, you're
exploring a different but very interesting area:
> Would it work to have node take in a request, launch an LC cgi
> executable to process the request, set an event listener to wait
> for LC to send the results back to Node, then have node return
> the results to the user?
>
> This is not unlike using Apache to launch LC CGI processes, but
> the asynchronous nature of node would, presumably, tie up fewer
> system resources and allow for larger concurrency. This could mean
> having a couple thousand LC processes running at any one time - would
> that be okay as long as the server had enough RAM?
>
> In general, would this work for a system that had to handle, say,
> 10,000 server requests per minute?
A minute's a long time: 10,000 requests per minute is only about 167
connections per second. Even so, that's likely difficult for any CGI,
and certainly for LC (see its general performance relative to PHP, and
the 70+% of LC boot time spent initializing fonts that are almost never
used in CGIs - BZ# 14115).
But there are other ways beyond CGI.
A couple years ago Pierre Sahores and I traded notes here on this list
about tests run with LC socket servers. There's a lot across multiple
threads, but this may be a good starting point:
http://lists.runrev.com/pipermail/use-livecode/2016-March/225068.html
One thing is clear: if high concurrency is a requirement, use something
dedicated to manage comms between connected clients and a pool of workers.
My own tests measured lchttpd against Apache, a different model but
instructive here because it's still about socket comms. What I found was
that an httpd written in LC was outperformed by Apache about two-fold.
But that also means a quickly-thrown-together httpd script in LC was
about half as fast as the world's most popular httpd, written in C by
hundreds of contributors specializing in that task.
So, promising for certain tasks. :)
The key with my modded fork of the old mchttpd stack was rewriting all
socket comms to use callbacks. The original used callbacks only for
incoming POST, but I extended that to include all writes as well.
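To make the difference concrete, here is a minimal sketch (not the
actual mchttpd code) contrasting a blocking write with the callback
form, where "with message" lets the engine return immediately and
notify a handler once the data has been sent:

    -- blocking form: the handler pauses until the data is sent
    write tResponse to socket pSocket

    -- callback form: returns immediately; "responseSent" is sent
    -- to this object when the engine finishes writing
    write tResponse to socket pSocket with message "responseSent"

    on responseSent pSocket
       close socket pSocket
    end responseSent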
Applying this to your scenario:
       client         client         client
      --------       --------       --------
           \             |             /
              ........internet.......
                 \       |       /
       |----------- HTTP SERVER -----------|
       |      /          |          \      |
       |   worker     worker     worker    |
       |-----------------------------------|
While LC could be used in the role of the HTTP SERVER, that would be
wasteful. It's not an interesting job, and dedicated tools like Node.js
and NginX will outperform it many-fold. Let the experts handle the
boring parts. :)
The value LC brings to the table is application-specific. So we let a
dedicated tool broker comms between external clients and a pool of
workers, where the workers could be LC standalones.
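As a rough sketch of what an LC worker's request loop might look like
(the newline-delimited protocol and the handleRequest function here are
just placeholders for whatever your app actually does), the entire
cycle can stay callback-driven:

    on brokerConnected pSocket
       -- wait, without blocking, for the next request from the front end
       read from socket pSocket until linefeed with message "requestReceived"
    end brokerConnected

    on requestReceived pSocket, pRequest
       -- handleRequest stands in for the application-specific work
       put handleRequest(pRequest) into tReply
       -- non-blocking write; "replySent" fires once the data has gone out
       write tReply & linefeed to socket pSocket with message "replySent"
    end requestReceived

    on replySent pSocket
       -- keep the connection open and wait for the next request
       read from socket pSocket until linefeed with message "requestReceived"
    end replySent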
That's where much of Pierre's experimentation has focused, and where the
most interesting and productive use of LC lies when load requirements
exceed the practical limitations of LC as a CGI.
The boost goes beyond the RAM saved by not spinning up a separate LC
instance for each CGI request: as a persistent process, it also avoids
the font-loading and other init that take up so much time in an LC CGI.
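In a standalone worker that might look something like this (the port
number and the initWorker handler are hypothetical stand-ins), the
expensive setup runs once at launch, and from then on the worker just
accepts connections from the HTTP SERVER layer:

    on startup
       -- one-time init, paid once per worker rather than once per request:
       -- open database connections, load lookup data, etc.
       initWorker
       -- listen for connections from the HTTP SERVER / broker
       accept connections on port 9091 with message "brokerConnected"
    end startup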
As with the lchttpd experiments, using callbacks for all socket comms
between the LC-based workers and the HTTP SERVER will be essential for
keeping throughput optimal.
TL;DR: I think you're on the right track for a possible solution that
optimizes your development time without prohibitively impeding scalability.
The suitability of this comes down to: what exactly does each
transaction do?
167 transactions/sec may not be much, or it might be a lot.
If a given transaction is fairly modest, I'd say it's probably worth the
time to put together a test system to try it out.
But if a transaction is CPU intensive, or heavily I/O bound, or
otherwise taking up a lot of time, the radical changes in PHP7 may make
it a better bet, esp. if run as FastCGI.
Can you tell us more about what a given transaction involves?
--
Richard Gaskin
Fourth World Systems
Software Design and Development for the Desktop, Mobile, and the Web
____________________________________________________________________
Ambassador at FourthWorld.com http://www.FourthWorld.com