Some more thoughts on multithreading

Jan Schenkel janschenkel at yahoo.com
Wed Feb 9 16:40:02 EST 2011


[Sorry for the late reply - I've been very busy and only now found the time to collect my thoughts on this interesting topic]

This thread has brought up various reasons for some form of multithreading and several interesting ideas as to how this might be implemented. In the heat of the discussion, quite a few items were piled on top of one another, so I figured a recap-with-comments would be of use.

Our processors aren't getting much faster for sequential execution of instructions. Instead, we're getting more cores and we can even tap into the raw power of graphics processors for some tasks. Parallel/concurrent computing is clearly our future.

1) Graphics performance
While not so important for number crunching, and more of interest to game developers, this is still related: if the engine could offload computationally intensive graphics tasks to the GPU, we would all benefit from a more responsive user interface.
The challenge for the RunRev team would be cross-platform implementation, especially across different operating system versions. But they could start slowly, adopting the most recent operating-system-level enhancements and falling back to the current software-based rendering pipeline in 'lesser' environments.

2) Non-graphics performance
And while we're talking about leveraging the hardware, with such technologies as OpenCL, more mundane computational tasks could also be rewritten to profit from the available CPU cores and GPU cycles.
Think of the tasks that can be split up into smaller chunks of work that can be handled independently: determining the min or max number from a list, sorting the lines of a variable, etc.
Even if our code still ran sequentially, we could see speed-ups if the engine forked off threads to do parts of the work and then joined the results when all parts are done.
If you have two cores, and each core can handle half of the work, you potentially have double the performance (minus the time to coordinate the threads, of course - and some parts of the job may depend on the outcome of other parts: just because one man can do a particular job in twenty days doesn't mean twenty men can do the same job in one day).
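
To make that concrete, here's a quick sketch in today's LiveCode script (the maxOfLines function is just something I made up for illustration): finding the maximum splits naturally into two independent halves whose partial results are trivially joined. Right now the two recursive calls run one after the other; a parallel engine could hand each half to a separate core, and only the final comparison would have to wait for both.

  -- illustration only: the two half-calls are independent of each other,
  -- so an engine could run them on separate cores and join the results
  function maxOfLines pLines
     put the number of lines of pLines into tCount
     if tCount = 1 then return line 1 of pLines
     put maxOfLines(line 1 to (tCount div 2) of pLines) into tLeftMax
     put maxOfLines(line (tCount div 2 + 1) to tCount of pLines) into tRightMax
     return max(tLeftMax, tRightMax)
  end maxOfLines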

3) Callback messages
LiveCode's socket communication, with its callback message model, is an example that deserves to be extended into other input/output areas: process communication, reading and writing files, and database queries come to mind.
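
For reference, the existing socket pattern looks roughly like this (the handler and field names are mine; the syntax is the shipping one). The script issues the request and returns immediately, so the user interface stays responsive until the reply arrives:

  on startTalking
     open socket to "example.com:8080" with message "socketOpened"
  end startTalking

  on socketOpened pSocket
     write "HELLO" & CRLF to socket pSocket
     read from socket pSocket until CRLF with message "gotReply"
  end socketOpened

  on gotReply pSocket, pReply
     put pReply into field "Result"
     close socket pSocket
  end gotReply
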
How I would love to say: load these 100K records from 16 joined tables in the database, and get back to me when you're done; or even send them to me in batches of 20 records. The user interface would stay responsive and could be updated to display an indeterminate progress bar; and if the RunRev team plays its cards right, we might even get a means to abort a pending query if the user decides it's better to refine the query criteria.
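
In purely imaginary syntax - none of this exists today - such a database callback could take a shape like this:

  -- imaginary command and callback, for illustration only: the query runs
  -- off the main thread and the results arrive in batches as they are ready
  on runBigQuery
     query database gConnectionID with field "SQL" \
           in batches of 20 with message "gotBatch"
  end runBigQuery

  on gotBatch pRecords, pIsLastBatch
     put pRecords after field "Results"
     if pIsLastBatch then hide scrollbar "Progress"
  end gotBatch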

4) Externals and callback messages
At the RunRevLive'09 conference in Edinburgh, Mark Waddingham demonstrated experimental engine extensions to allow better interaction with externals, including a way for multithreaded externals to send callback messages to LiveCode controls, where scripts could handle them on the usual 'main' user interface thread.
The worker thread could also wait for a return value and then be on its merry way again for more data crunching. Or scan the network for more Zeroconf-discoverable processes. Or do any of the other things you'd want to hand off to a separate thread and aren't afraid to program in C.
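
On the script side, handling such a callback from an external would presumably look just like any other message handler - something along these lines (the zeroconfServiceFound message and its parameters are invented):

  -- hypothetical: a multithreaded Zeroconf external posts this message to
  -- the main thread each time its worker thread discovers a new service
  on zeroconfServiceFound pServiceName, pHostName, pPort
     put pServiceName & tab & pHostName & ":" & pPort & return \
           after field "Discovered Services"
  end zeroconfServiceFound
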
Unfortunately, this didn't make it into version 4.0 but here's hoping it will find its way into the engine soon. It still wouldn't be script-level multithreading but would fit in well with the callback message paradigm of socket communication.

Most of what I've written here is about sending some thread off to do some work and having it get back to me with the result when it's done. The callback message mechanism gels well with this event-driven programming style.
The problems start when two threads are accessing the same resource. Race conditions, deadlocks, etc. are the bane of concurrent programming. Having rewritten and still maintaining a pervasively multithreaded Java application, I know the headaches and the unfavourable odds of bugs creeping into the design of such a monster.

That said, what could the engine offer us in terms of multithreading that would be straightforward to use and yet protect us from the many pitfalls at the same time?
My first idea would be to extend the callback mechanism to scripts:
  schedule "<messageName>" on stack "<stackName>" with <paramList>
One worker thread per stack would pick these messages from the stack's schedule queue and handle the execution of the message - but such threaded script messages would be prevented from escaping the bounds of their execution context. 
They would have their parameters to do some data crunching, but wouldn't be able to set any control or global properties, or change global variables - maybe they could show their progress in a separate 'status area' of the stack window, but that would be about it as far as the user experience goes. And when they're finished with part or all of their work, they would use a regular 'send' to call back into the 'main' UI thread.
After the 'scheduled' task is finished, the thread would take the next scheduled message off the queue and execute that.
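
Put together, using such a 'scheduled' task and its callback might look something like this (the schedule command, the stack names and the handler names are of course just my proposal, nothing more):

  -- main UI thread: hand the heavy lifting to the stack's worker thread
  on mouseUp
     schedule "crunchNumbers" on stack "Worker" with field "Raw Data"
  end mouseUp

  -- runs on the worker thread: no access to controls, globals or properties
  on crunchNumbers pRawData
     put 0 into tTotal
     repeat for each line tLine in pRawData
        add tLine to tTotal
     end repeat
     -- hand the result back to the 'main' UI thread the usual way
     send "crunchingDone" && tTotal to stack "MainUI"
  end crunchNumbers

  -- back on the main UI thread: safe to update the user interface again
  on crunchingDone pTotal
     put pTotal into field "Total"
  end crunchingDone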

It wouldn't be a perfect solution, but it would be fairly easy to comprehend and use for us mere scripting mortals - much easier than having to worry about synchronized access to mutable state, managing locks, etc.

And now I'll leave the floor open to your thoughts again :-)

Jan Schenkel.
=====
Quartam Reports & PDF Library for LiveCode
www.quartam.com

=====
"As we grow older, we grow both wiser and more foolish at the same time."  (La Rochefoucauld)


      




More information about the use-livecode mailing list