Multi stacks = multi processes?

Mike McManus mcmanusm at
Thu Aug 1 07:49:01 EDT 2002

Thanks Jan. I think that is what makes sense. Though most of the work 
will be done with freestanding Revolution stacks, not all of it will. So 
that may make life easier. Plus, god only knows what someone will want 
to tie into this application at some point. Gives me the freedom to base 
it on IAC for all activities.

You don't have to tell me about limiting the specs before I start 
programming. I know that, you know that...but business owners just never 
seem to grasp it.

On Wednesday, July 31, 2002, at 03:48 AM, Jan Schenkel wrote:

> Hi Mike,
> So far nearly all the suggestions others have made are
> for building your own "send" cycles to keep everything
> running within a single Revolution process.
> Might I suggest a different approach? It's actually
> pretty easy to have communication between several
> Rev-apps under MacOS, using the IAC (Inter-Application
> Communication) calls provided within Revolution.
> Specifically I'm talking about
>     send to program
>     request
>     reply
> First you would have to build one separate app per
> type of process, then whenever you have a long job
> ahead of you, your main program could spawn off a new
> child-process and interact with it using these
> commands.
> If you set up "send" cycles in the child-process, you
> can then see how far along it is, or if
> you don't need that detailed information you can
> simply let it get back to you once it's finished.
> The main thing you'd have to do is maintain jobIDs
> for easy communication, and report to the user.
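> To sketch what such a "send" cycle in the child could
> look like (the script-local variables and the 2-second
> interval are just an example):
>     local sJobID, sJobStatus
>     on reportProgress
>       -- pass our current status back to the main app
>       send "updateStatus" && sJobID & "," & sJobStatus \
>           to program <mainProcess>
>       -- re-arm the cycle until the job is finished
>       if sJobStatus is not "finished" then
>         send "reportProgress" to me in 2 seconds
>       end if
>     end reportProgress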
> So if we put this theory in practice, we get the
> following cycle:
> 1) spawn a new child-process for a file to process
>     launch fileToProcess with <childProcess>
> The childProcess will get an 'odoc' appleEvent which
> tells it what file to open, so add a handler for this
> event to your stack script
>     on appleEvent pClass, pID, pSender
>       switch (pClass&pID)
>       case "aevtodoc"
>         request appleEvent data
>         put it into theFile
>         break
>       default
>         break
>       end switch
>     end appleEvent
> 2) set up communication and a jobID
> Then the childProcess tells the mainProcess that it's
> ready to proceed and asks for a jobID
>     request "jobID("&theFile&")" from program \
>     <mainProcess>
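> If I remember correctly, "request ... from program"
> simply has the other program evaluate the expression,
> so the mainProcess could answer with a function along
> these lines (the names and bookkeeping are just a
> sketch):
>     local sLastJobID, sJobs
>     function jobID pFile
>       -- start counting from 1 on the first request
>       if sLastJobID is empty then put 0 into sLastJobID
>       add 1 to sLastJobID
>       -- remember which file belongs to this job
>       put pFile into sJobs[sLastJobID]
>       return sLastJobID
>     end jobID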
> 3) communication between mainProcess and childProcess
> Once that has been established, we can either set up
> our send cycle within the childProcess and let the
> mainProcess poll the status with
>     request "jobStatus" from program <childProcess>
> or forgo these hassles and keep the mainProcess
> up to date with our progress at regular intervals
> within the process, or simply at the end with
>     send "updateStatus" && jobID & "," & jobStatus to \
>     program <mainProcess>
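> On the receiving end, the mainProcess just needs a
> matching handler; something like this (the field name
> is only an example):
>     on updateStatus pJobID, pStatus
>       -- one line per job keeps the display simple
>       put pStatus into line pJobID of field "jobStatus"
>     end updateStatus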
> 4) cleaning up at the end
> The childProcess terminates and the mainProcess can
> update its screen to inform the user. In the meantime,
> it could have spawned off a few other processes, and
> all it would have had to do itself is minor tasks and
> maintain communication with the child-processes.
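> The hand-off at the end could be as simple as this
> sketch (the "jobDone" message is a name of our own
> choosing, and sJobID is the jobID obtained in step 2):
>     on finishJob
>       -- tell the main app we're done, then exit
>       send "jobDone" && sJobID to program <mainProcess>
>       quit
>     end finishJob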
> Hope this helped a bit. Admittedly, I've never done
> this myself, though I have in the past successfully
> used the IAC-capabilities of MacOS+HyperCard to build
> a client-server architecture without a full-fledged
> database server at the back-end.
> Jan Schenkel.
> "As we grow older, we grow both wiser and more foolish
> at the same time."  (La Rochefoucauld)
> --- Mike McManus <mcmanusm at> wrote:
>> I am sure this has come up before, but I can't find
>> it.
>> I have a stack with substacks that will be handling
>> a number of things: watching folders, copying files,
>> and processing very large files (up to about 300 MB).
>> None of the processes is a killer under Mac OS X;
>> OS 9 is slow. But now I want to move this whole
>> thing to a more hot-folder-based system, meaning
>> multiple users will be putting multiple files into
>> folders that my app will then check, move, read and
>> write.  I figure I can deal with handing off the
>> process to substacks or stacks as required, but I
>> want them to happen simultaneously.
>> What I want is to be copying one file at the same
>> time another is being read/written, and still keep
>> an eye on the directories, each process working with
>> different files of course.  Normally Rev would not
>> do that. But if I put the different processes in
>> different STACKS or SUBSTACKS, would I be able to
>> get this?
>> _______________________________________________
>> use-revolution mailing list
>> use-revolution at
