Multi stacks = multi processes?

Mike McManus mcmanusm at kramergraphics.com
Tue Jul 30 07:37:01 EDT 2002


So much information! Well, some of the commands will take a long time to 
process. Reading in a large file, doing some S&R on it, and writing the 
result out to a new file takes a while. Currently, in a standalone on 
OS 9, that takes up to around 10 minutes. But that is often with a file 
in the 60 MB category, which then gets written to an even larger file. 
That is why the move to a server type of application. While it is doing 
that, I do want to be able to check whether new files have arrived and 
send them off to whatever free processes may be available. This is 
really a large workflow automation application, and it will interact 
with at least one other server application as part of the process.
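
For illustration, a minimal Transcript sketch of that kind of S&R pass; 
the paths and the search and replacement strings are hypothetical:

  -- read a file, replace text, and write the result to a new file
  on processOneFile pSourcePath, pDestPath
    put URL ("binfile:" & pSourcePath) into tData
    replace "oldText" with "newText" in tData
    put tData into URL ("binfile:" & pDestPath)
  end processOneFile

Note that the URL read and the replace command are atomic, so a 60 MB 
file will block everything else in that standalone while this runs.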

As far as feedback, since it is a server app, the base feedback will be 
a log screen of some sort, which is easy to send lines to as processes 
finish. It is possible I may create something to send feedback to the 
local workstations to indicate progress, which wouldn't be much more 
than "processing file x", "completed file x", "error A in file x". I 
could do that using some sort of basic chat tool... using only one-way 
capabilities.
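
That kind of one-way status line could be little more than a socket 
write. A rough sketch, where the host, port, and handler name are all 
hypothetical:

  -- fire-and-forget status line to a workstation
  on sendStatus pHost, pMessage
    open socket to (pHost & ":9000")
    write pMessage & return to socket (pHost & ":9000")
    close socket (pHost & ":9000")
  end sendStatus

Real code would also handle socketError rather than assume the 
workstation is listening.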

Would I be correct that what I am hearing is that some processes would 
be easiest using a send/event list internal to the stack, and that 
other, longer-running ones are best sent off to other stacks rather 
than substacks? Kind of like spawning a process in Unix? Or is there a 
way to do just that, so that multiple instances (stacks) of the same 
process would be running? It will be on OS X, after all.
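
If it does come to separate standalones, Revolution's open process 
command can launch a helper application, much like spawning in Unix. A 
sketch, where the worker name is hypothetical and each worker finds its 
own work in a shared drop folder, so nothing needs to be passed at 
launch:

  -- launch one more copy of a hypothetical worker standalone
  on spawnWorker
    open process "FileWorker" for neither
  end spawnWorker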

This would be so much easier if I had started it as a server project 
and not a local standalone...


On Sunday, July 28, 2002, at 03:05  PM, Dar Scott wrote:

>
> On Sunday, July 28, 2002, at 09:48 AM, Rob Cozens wrote (quoting Mike):
>
>>> What I want is to be copying at the same time a file is being 
>>> read/written, and still keep an eye on the directories, each 
>>> process working with different files of course.  Normally Rev would 
>>> not do that.  But if I put the different processes in different 
>>> STACKS or SUBSTACKS, would I be able to get this?
> ...
>> I think you will need to build separate standalones to attain 
>> multiprocessing capabilities in Revolution.
>
> At times this is the right thing, especially when atomic Rev commands 
> are too big.
>
> However, often one can limit use of commands to only those that take 
> a short time.  The meaning of short depends on your application.  In 
> this case each "process" can be envisioned as state changes made by 
> simple handlers that finish quickly.  These can be integrated into 
> simple domains of send-in-time cycles; use both your sends and 
> Revolution callbacks.  You can use variables, properties, and even 
> the message list to communicate among these; no handler is going to 
> be unexpectedly interrupted by another, so you can use several.  You 
> can make these domains or "processes" modular by making a callback 
> scheme similar to that used by Revolution.
>
> (If you have trouble with this style, maybe you can pick one 
> "process" to run all the time and sprinkle it with "wait ... with 
> messages".  Choose the one hardest to fit to this style.  I would 
> give the send a try for all, though.)
>
> In the case of handling drop-box files, if the operation on any one 
> file does not take so long as to affect feedback to a user, I would 
> consider doing this as a single "process".
>
> Dar Scott
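
As an illustration of the send-in-time "process" cycle Dar describes 
above, a rough sketch; the folder path, interval, and handler names are 
all hypothetical:

  -- one self-rescheduling "process" that watches a drop folder
  on checkDropFolder
    set the defaultFolder to "/Server/DropBox"
    put the files into tNewFiles
    if tNewFiles is not empty then
      -- hand the work to another quick handler
      send "processNextFile" to me in 0 seconds
    end if
    -- re-arm the cycle; this handler itself finishes quickly
    send "checkDropFolder" to me in 5 seconds
  end checkDropFolder

Each handler stays short, so several such cycles can interleave in one 
stack without blocking one another.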