monte at appisle.net
Thu Jan 7 20:02:08 EST 2016
> My understanding is that spawn-fcgi uses fork, no?
Yes. Are you looking to implement your own process manager though?
> > We mainly need two things for FastCGI:
> > - an engine with the FastCGI accept loop as the main loop (LC Server
> > just starts up and quits at the end of the code and standalones just
> > keep looping until you quit).
> I believe that has more to do with the nature of CGI than with LC per se. That is, as a CGI any engine (Perl, Python, Ruby, LiveCode) will be born, live, and die during the request.
> But with FastCGI the engine is only loaded once, and instances forked with requests as needed, and using fork they get the socket and other data needed for the child process to handle the task.
> Some engines may use multithreading rather than multiprocessing, but the difference is less of a concern on Linux than on Windows since Linux spawns processes much more efficiently.
> If multithreading were pursued as an alternative to multiprocessing via fork, I fear a threading subsystem would be much more work to implement, no?
spawn-fcgi and mod_fcgid do essentially what you are proposing: they spawn long-running processes when told to, and balance incoming requests across them when there is more than one process.
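To make the pre-fork model concrete, here is a minimal sketch in Python of what managers like spawn-fcgi do: the parent binds a listening socket, forks N workers, and each worker accepts on the shared socket (the kernel balances connections across them). The names (handle_request, N_WORKERS) and the echo handler are illustrative, not anyone's actual implementation.

```python
# Pre-fork sketch: parent binds, children inherit the socket and accept.
import os
import socket

N_WORKERS = 4

def handle_request(conn):
    # A real worker would speak FastCGI here; this one just echoes.
    data = conn.recv(4096)
    conn.sendall(data)

def serve(sock):
    # Worker loop: accept, handle, repeat. The kernel hands each new
    # connection to whichever idle worker wins the accept().
    while True:
        conn, _ = sock.accept()
        with conn:
            handle_request(conn)

def main():
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("127.0.0.1", 9000))
    sock.listen(64)
    pids = []
    for _ in range(N_WORKERS):
        pid = os.fork()
        if pid == 0:            # child: inherits the listening socket
            serve(sock)
            os._exit(0)
        pids.append(pid)
    for pid in pids:            # parent: supervise the workers
        os.waitpid(pid, 0)
```

The point is that the fork happens outside the engine; each child is a long-lived process that handles many requests, which is exactly what letting spawn-fcgi do the forking buys you.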
> > - to decide on how to handle things like global variable scope etc
> > because you’re going to end up with multiple requests to the same
> > environment.
> How is that handled in the FastCGI version of PHP?
PHP tears down everything, although you can maintain persistent db connections. There are a few different ways to do FastCGI for PHP, as it has its own process manager.
> I would imagine it would be no more onerous than with threading, arguably simpler since so much of the action takes place in a separate process.
I actually haven’t mentioned threading at all...
> I wouldn't expect to be able to use FastCGI without modifying some of my scripting habits; as with any new feature, just a few new things to learn and keep track of. Indeed, I would welcome the opportunity for it to become possible to learn those things.
> In that outline would "acceptRequest" be a request from Apache, or are you proposing a system that replaces Apache to accept requests directly from the client?
acceptRequest would be called in response to the FastCGI main loop, which processes a request and then waits for the next one to come in. The request can come from anything that implements the FastCGI protocol, but yes, it is an HTTP request if that’s what you are asking.
> Once we have forking we could completely replace Apache (or NGineX or Node.js) with a fully-functioning server for specific applications where the efficiencies of a purpose-built system would be helpful.
Ah, OK, so you want MCHTTPd with child processes, and maybe FastCGI, or maybe just some custom protocol between them?
> But even when running under Apache with FastCGI, fork would seem a very useful thing. It's how PHP and other engines are able to scale, and indeed not having it prevents LC from being used in traffic-heavy scenarios.
I’m not saying it’s not useful, just suggesting letting something else do the forking might be a good idea.
> > Of course you could have a FastCGI engine that cleared all the
> > globals and stacks from memory between requests and loaded any
> > script only stack file but it’s not quite as much fun, you lose
> > the advantage of keeping resources in memory and as far as I can
> > tell it’s a bit more work to do that ;-)
> As with other persistent systems like LC on the desktop, we should maintain control over which data is purged and which data is shared. We have globals and script-locals, depending on the context we need them, and in a multiprocessing environment we should have the same flexibility.
> For example, one of the strong advantages of FastCGI or other persistent implementation is that we don't have to create and destroy database connections with every request. That sort of information (along with config data and other such things) we'd want to remain globally available to child processes. Request-specific data could be handled in script-locals, where they can be managed and cleared as needed within the worker process itself, without affecting truly global data managed by the parent.
I think what I was proposing covers that.
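A small sketch of that scoping split, with sqlite3 standing in for any database: state you want to persist (the connection, config) lives at module level for the life of the worker, while request-specific data lives in locals and is discarded when the handler returns. The names are illustrative.

```python
# Persistent vs per-request state in a long-lived worker.
import sqlite3

# Opened once when the worker starts; reused by every request,
# like a persistent db connection or loaded config.
DB = sqlite3.connect(":memory:")
DB.execute("CREATE TABLE hits (path TEXT)")

def handle_request(path):
    # Request-specific data is local: gone when the function returns,
    # without touching the shared connection above.
    cur = DB.cursor()
    cur.execute("INSERT INTO hits (path) VALUES (?)", (path,))
    DB.commit()
    cur.execute("SELECT COUNT(*) FROM hits")
    (total,) = cur.fetchone()
    return total
```

The count climbing across calls is the persistence paying off: no connect/teardown per request, which is exactly what FastCGI buys over plain CGI.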