Oh! So completely OT ... (All about CPU allocation!)

Francis Nugent Dixon effendi at wanadoo.fr
Sat Jul 18 19:47:49 CEST 2015

Hi from Beautiful Brittany,

I was brought up in the era of IBM 360/370 mainframe computers,
where the operating systems used dynamic CPU allocation based
upon CPU requirements. For the novice of that era, it seemed
strange to allocate CPU time in an “apparent” reverse order of logic,
i.e. the input/output-bound programs had a higher priority than CPU-bound
programs. This worked well for several decades, based upon
the simple precept that the highest-priority program would go rapidly
into a wait state while the I/O device did its transfer relatively slowly,
thus allowing the lower-priority programs to sponge up the maximum
CPU time for their nitty-gritty CPU-bound activities. It worked well!
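That precept can be sketched in a few lines. This is a toy simulation of my own devising (not any real OS scheduler): an I/O-bound job gets strict priority, but it blocks almost immediately on its device, so the lower-priority CPU-bound job still soaks up most of the processor. The burst and wait lengths are illustrative assumptions.

```python
def simulate(total_ticks=30, io_burst=1, io_wait=5):
    """Return which job ('io' or 'cpu') ran at each tick."""
    schedule = []
    io_remaining_burst = io_burst   # CPU ticks before the I/O job blocks
    io_wait_left = 0                # ticks until the pending I/O completes
    for _ in range(total_ticks):
        if io_wait_left > 0:
            # I/O job is blocked waiting on its (slow) device,
            # so the lower-priority CPU-bound job runs.
            schedule.append("cpu")
            io_wait_left -= 1
            if io_wait_left == 0:
                io_remaining_burst = io_burst   # I/O done, job is ready again
        elif io_remaining_burst > 0:
            # I/O job is ready: it has higher priority, so it runs first.
            schedule.append("io")
            io_remaining_burst -= 1
            if io_remaining_burst == 0:
                io_wait_left = io_wait          # job issues its next I/O and blocks
        else:
            schedule.append("cpu")
    return schedule

sched = simulate()
print("".join("I" if s == "io" else "." for s in sched))  # I.....I.....I.....
print("CPU-bound share:", sched.count("cpu") / len(sched))
```

With a 1-tick burst and a 5-tick device wait, the CPU-bound job gets roughly five-sixths of the machine, which is exactly the old mainframe bargain: the problem below begins when the device wait shrinks toward zero.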

Now we come to today's problem. Today, on my latest Mac computers, I
launch long-duration data transfers (of many GB, no longer I/O bound
because of the fantastic data-transfer speeds) and find that my request to do
a simple Google lookup now takes eons to execute, because of the
extremely high data-transfer rate (and low wait time) of the transfer.

I've lost touch with operating systems over the years, but I wonder if
anybody out there has any knowledge of CPU allocation techniques
on the latest micro-computers with their gigabit data-transfer rates. It
may be “round-robin” techniques, as with the old IBM MFT systems, but it
is giving me pains (like now, I never risk launching a long-running
disk copy if I want to get back into my computer quickly) …..
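For what it's worth, on Unix-derived systems (which includes OS X) the `nice` command can launch such a copy at a reduced scheduling priority. A small sketch, with the caveat that `nice` only lowers CPU priority, so it may not fully cure contention that is really happening in the disk or the I/O subsystem; the copy paths in the comment are hypothetical:

```shell
# Run a command at the lowest CPU priority (nice value 20 = lowest).
# A real long-running copy would look like:
#   nice -n 20 cp -R /path/to/source /path/to/destination &
# Here a stand-in command demonstrates the invocation:
nice -n 20 echo "copy would run here"
```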

Just a tad perplexed ……


“Nothing should ever be done for the first time !”

More information about the use-livecode mailing list