Another server question (mixing node.js and LC)

jonathandlynch at gmail.com jonathandlynch at gmail.com
Wed Feb 28 14:49:41 EST 2018


I think you might be right, Mike. I have been reading about benchmark tests comparing Node, Apache, and nginx. Node does not seem to live up to the hype at all. 

Sent from my iPhone

> On Feb 28, 2018, at 2:27 PM, Mike Bonner via use-livecode <use-livecode at lists.runrev.com> wrote:
> 
> One thing you might do if you decide to stick with Apache is make sure
> you use either the worker MPM or the event MPM (event sounds like the one
> you'd want for this) to get better performance. You can read more here:
> https://httpd.apache.org/docs/2.4/misc/perf-tuning.html
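> 
> Something along these lines in httpd.conf would switch it on (an untested
> sketch for Apache 2.4 - the module path and the numbers are placeholders
> to tune for your box):
> 
>    # placeholder values - tune for your actual load
>    LoadModule mpm_event_module modules/mod_mpm_event.so
>    <IfModule mpm_event_module>
>        StartServers           2
>        MinSpareThreads       25
>        MaxSpareThreads       75
>        ThreadsPerChild       25
>        MaxRequestWorkers    150
>    </IfModule>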
> 
> Alternatively, as Richard mentioned, there is nginx, which might be just
> what the doctor ordered.  Basically, a request comes in and is handed off
> to your LC script; when the response is ready, nginx sends it back to the
> client, all the while still listening for and accepting new requests. At
> least that's what I gather from my reading, some of it from older
> postings. It sounds much like what you are thinking of doing with node.js.
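> 
> As a sketch of what I mean (untested - the upstream name and ports are
> made up, and it assumes the LC workers speak HTTP on local ports), nginx
> could round-robin incoming requests across a pool of LC processes:
> 
>    # hypothetical pool of LC workers on local ports
>    upstream lc_workers {
>        server 127.0.0.1:8081;
>        server 127.0.0.1:8082;
>    }
>    server {
>        listen 80;
>        location / {
>            proxy_pass http://lc_workers;
>        }
>    }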
> 
> I'm also wondering where a Docker swarm might fit into your needs.
> Multiple containers built from a custom nginx image that can run your
> scripts, with load balancing and automatic failover, could be a great
> thing, and still very lightweight. (The Alpine-based nginx Docker image
> is amazingly tiny.)
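> 
> A swarm stack file might look roughly like this (pure sketch; the image
> name is hypothetical):
> 
>    version: "3"
>    services:
>      web:
>        image: my-nginx-lc:latest    # hypothetical custom nginx+LC image
>        ports:
>          - "80:80"
>        deploy:
>          replicas: 4                # swarm load-balances across replicas
>          restart_policy:
>            condition: on-failure    # automatic failover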
> 
> I've no clue how performance and reliability might compare to node.js for
> this.
> 
> On Wed, Feb 28, 2018 at 11:26 AM, Jonathan Lynch via use-livecode <
> use-livecode at lists.runrev.com> wrote:
> 
>> In reading about FastCGI and LC, it seems rather experimental. I am just
>> wondering if replacing Apache with node.js as the HTTP server would give
>> us the necessary concurrency capacity for using LC Server on a large
>> scale.
>> 
>> Basically, I am soon going to start pitching augmented tours (an idea
>> suggested by guys at a business incubator) to tourism companies, using
>> Augmented Earth, and I don’t want the server to crash if a large number
>> of people are using it all at once.
>> 
>> Sent from my iPhone
>> 
>>> On Feb 28, 2018, at 12:48 PM, jonathandlynch at gmail.com wrote:
>>> 
>>> Thank you, Richard
>>> 
>>> A given transaction involves processing a user request, making two or
>>> three requests to the database, and returning around 500 kB to the user.
>>> 
>>> I certainly don’t need to load fonts in the LC process. Can that be
>>> turned off?
>>> 
>>> I like the idea of maintaining a queue of running LC processes and
>>> growing or shrinking it as needed based on request load.
>>> 
>>> How does the http server know which process to access?
>>> 
>>> I know that node.js has a pretty simple API for launching a CGI process
>>> and listening for the result. I don’t know how it would do that with an
>>> already-running process.
>>> 
>>> Sent from my iPhone
>>> 
>>>> On Feb 28, 2018, at 12:22 PM, Richard Gaskin via use-livecode <
>>>> use-livecode at lists.runrev.com> wrote:
>>>> 
>>>> jonathandlynch wrote:
>>>> 
>>>>> I have another server question. I really like scripting with LC,
>>>>> because I can make improvements very quickly. This is important
>>>>> because of my very limited free time.
>>>>> 
>>>>> But, I want to be able to handle many many concurrent server requests,
>>>>> the way node.js does.
>>>> 
>>>> Good timing.  Geoff Canyon and I have been corresponding about a
>>>> related matter, comparing performance of LC Server with PHP.
>>>> 
>>>> PHP7 is such a radical improvement over PHP5 that it's almost unfair
>>>> to compare it to any scripting language now.  But it also prompts me
>>>> to wonder: is there anything in those PHP speed improvements which
>>>> could be applied to LC?
>>>> 
>>>> 
>>>> But that's for the future, and for CGI.  In the here-and-now, you're
>>>> exploring a different but very interesting area:
>>>> 
>>>>> Would it work to have node take in a request, launch an LC CGI
>>>>> executable to process the request, set an event listener to wait
>>>>> for LC to send the results back to Node, then have node return
>>>>> the results to the user?
>>>>> 
>>>>> This is not unlike using Apache to launch LC CGI processes, but
>>>>> the asynchronous nature of node would, presumably, tie up fewer
>>>>> system resources and allow for larger concurrency. This could mean
>>>>> having a couple thousand LC processes running at any one time - would
>>>>> that be okay as long as the server had enough RAM?
>>>>> 
>>>>> In general, would this work for a system that had to handle, say,
>>>>> 10,000 server requests per minute?
>>>> 
>>>> A minute's a long time.  That's only about 167 connections per second
>>>> (10,000 / 60).
>>>> 
>>>> Likely difficult for any CGI, and certainly for LC (see general
>>>> performance relative to PHP, and the 70+% of LC boot time spent
>>>> initializing fonts that are almost never used in CGIs - BZ# 14115).
>>>> 
>>>> But there are other ways beyond CGI.
>>>> 
>>>> A couple years ago Pierre Sahores and I traded notes here on this list
>>>> about tests run with LC socket servers.  There's a lot across multiple
>>>> threads, but this may be a good starting point:
>>>> http://lists.runrev.com/pipermail/use-livecode/2016-March/225068.html
>>>> 
>>>> One thing is clear:  if high concurrency is a requirement, use
>>>> something dedicated to manage comms between connected clients and a
>>>> pool of workers.
>>>> 
>>>> My own tests were measuring lchttpd against Apache, a different model
>>>> but instructive here because it's still about socket comms.  What I
>>>> found was that an httpd written in LC was outmatched by Apache
>>>> two-fold.  But that also means that a quickly-thrown-together httpd
>>>> script in LC was about half as fast as the world's most popular httpd,
>>>> written in C by hundreds of contributors specializing in that task.
>>>> 
>>>> So, promising for certain tasks. :)
>>>> 
>>>> The key with my modded fork of the old mchttpd stack was rewriting all
>>>> socket comms to use callbacks.  The original used callbacks only for
>>>> incoming POST, but I extended that to include all writes as well.
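>>>> 
>>>> In outline, the pattern looks something like this (a bare sketch,
>>>> untested, and not the actual mchttpd code; buildResponse is a stand-in
>>>> for whatever app-specific work the server does):
>>>> 
>>>>   on startServer
>>>>      -- hand each new connection to a callback instead of blocking
>>>>      accept connections on port 8081 with message "clientConnected"
>>>>   end startServer
>>>> 
>>>>   on clientConnected pSock
>>>>      -- non-blocking read; "gotRequest" fires when the headers end
>>>>      read from socket pSock until CRLF & CRLF with message "gotRequest"
>>>>   end clientConnected
>>>> 
>>>>   on gotRequest pSock, pData
>>>>      -- buildResponse is a hypothetical app-specific handler; the
>>>>      -- non-blocking write means a slow client never stalls the engine
>>>>      write buildResponse(pData) to socket pSock with message "replySent"
>>>>   end gotRequest
>>>> 
>>>>   on replySent pSock
>>>>      close socket pSock
>>>>   end replySent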
>>>> 
>>>> Applying this to your scenario:
>>>> 
>>>>  client      client      client
>>>> --------    --------    --------
>>>>    \           |          /
>>>>     ........internet.......
>>>>      \         |       /
>>>> |----------- HTTP SERVER -----------|
>>>> |     /           |          \      |
>>>> |  worker       worker      worker  |
>>>> |-----------------------------------|
>>>> 
>>>> 
>>>> While LC could be used in the role of the HTTP SERVER, that would be
>>>> wasteful.  It's not an interesting job, and dedicated tools like
>>>> Node.js and NginX will outperform it many-fold.  Let the experts
>>>> handle the boring parts. :)
>>>> 
>>>> The value LC brings to the table is application-specific.  So we let a
>>>> dedicated tool broker comms between external clients and a pool of
>>>> workers, where the workers could be LC standalones.
>>>> 
>>>> That's where much of Pierre's experimentation has focused, and where
>>>> the most interesting and productive use of LC lies in a scenario where
>>>> load requirements exceed the practical limitations of LC as a CGI.
>>>> 
>>>> The boost goes beyond the RAM saved by not launching a separate LC
>>>> instance for each CGI request:  as a persistent process, a worker also
>>>> skips the font-loading and other init that take up so much time in an
>>>> LC CGI.
>>>> 
>>>> As with the lchttpd experiments, using callbacks for all socket comms
>>>> between the LC-based workers and the HTTP SERVER will be essential for
>>>> keeping throughput optimal.
>>>> 
>>>> 
>>>> TL;DR: I think you're on the right track for a possible solution that
>>>> optimizes your development time without prohibitively impeding
>>>> scalability.
>>>> 
>>>> 
>>>> The suitability of this comes down to:  what exactly does each
>>>> transaction do?
>>>> 
>>>> 167 transactions/sec may not be much, or it might be a lot.
>>>> 
>>>> If a given transaction is fairly modest, I'd say it's probably worth
>>>> the time to put together a test system to try it out.
>>>> 
>>>> But if a transaction is CPU intensive, or heavily I/O bound, or
>>>> otherwise taking up a lot of time, the radical changes in PHP7 may
>>>> make it a better bet, esp. if run as FastCGI.
>>>> 
>>>> Can you tell us more about what a given transaction involves?
>>>> 
>>>> --
>>>> Richard Gaskin
>>>> Fourth World Systems
>>>> Software Design and Development for the Desktop, Mobile, and the Web
>>>> ____________________________________________________________________
>>>> Ambassador at FourthWorld.com                http://www.FourthWorld.com
>>>> 
> _______________________________________________
> use-livecode mailing list
> use-livecode at lists.runrev.com
> Please visit this url to subscribe, unsubscribe and manage your subscription preferences:
> http://lists.runrev.com/mailman/listinfo/use-livecode