pixelbird at interisland.net
Fri Feb 13 13:17:39 CST 2004
> Date: Thu, 12 Feb 2004 19:05:54 -0500
> From: Thomas McGrath III <3mcgrath at adelphia.net>
> Subject: Re: Exocet dreams; not the missile but...
> Ken, You know I applaud your efforts already with working with the
> disabled on 'your little island'. I too find it to be the most
> rewarding thing that I do, aside from maybe my art work, which I use to
> open everyone's eyes to the beauty and magnificence around us.
Good for you!
> This is a very interesting idea. I am curious, what types of objects
> might these be? Just a few more examples, please?
> Do you have a list started with 'personalized software tool' ideas?
> Do you have a list started with 'their needs' ideas?
Well, when I refer to objects in the immediate context, I mean a system of
screen objects and modules which a person can use to design for themselves a
personalized, extensible control system, plus an embedded AI bot to help them
make decisions, test, monitor progress, and offer suggestions.
Let's pretend your best, and perhaps only, motor control is in your right
thigh. You can bang your knee into a switch, but that's all you can do with
it.
Let's take something common to start with: a hovering object with an
automatic selection timer. I'll call it a HoverSelector.
This is an object that moves over other screen objects, pacing itself from a
preset timer as it goes from object to object. This technique is often
referred to as "scanning."
The HoverSelector also has a second preset timer, so that it hovers over each
screen object long enough to let the user make a decision, and then moves
on to the next.
If the user hits the switch while the HoverSelector is over an object, the
associated code is executed. The HoverSelector then waits again, to allow
for the possibility of choosing the same object a second time, unless the
selected object takes them to another screen full of objects (often the case).
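The scanning loop described above can be sketched in a few lines. This is only an illustration of the logic, not Rev/SuperCard code; the class and method names (HoverSelector, tick, press) are my own, and real timing would come from the host environment's timer messages rather than abstract "ticks."

```python
class HoverSelector:
    """Single-switch scanning selector: a highlight cycles over a list of
    screen objects on a fixed dwell interval; a switch press activates
    whatever the selector is currently hovering over."""

    def __init__(self, items, dwell_ticks=3):
        self.items = list(items)        # the scannable screen objects
        self.dwell_ticks = dwell_ticks  # how long the highlight rests on each one
        self.index = 0                  # item currently highlighted
        self._elapsed = 0               # ticks spent on the current item

    def tick(self):
        """Advance the scan clock one step; move on when the dwell expires.
        Returns the item now highlighted."""
        self._elapsed += 1
        if self._elapsed >= self.dwell_ticks:
            self._elapsed = 0
            self.index = (self.index + 1) % len(self.items)
        return self.items[self.index]

    def press(self):
        """Switch hit: activate the hovered item. The dwell clock restarts,
        so the same item can be chosen again on the next pass."""
        self._elapsed = 0
        return self.items[self.index]
```

For example, with a dwell of two ticks, the selector rests on the first object for two ticks, then advances; a press at any point returns the currently highlighted object.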
Here's the thing:
You or I can create a project that does these kinds of things, right? But
what about the user? What if the HoverSelector and/or objects aren't large
enough? What if the user also has poor vision (often the case) and needs
special coloring, sizing, or animation, plus audio feedback augmentation?
What kind should it be? Can they adjust timing for themselves?
What if the user had a complete set of tools and a bot that helps test
their skills and helps them select tools with which they can effectively
design a system that molds itself to their individual needs, and
continuously follows the user's progress, helping them to make subtle
adjustments along the way?
What I'm thinking about is placing a whole set of adjustable tool elements
at the disposal of the user, i.e. a set of modular building blocks of user
selectable, user-definable, and user-adjustable elements, like timers and
speed controls, variable button animation styles, color selection, text
styles and sizes, screen layouts, audio feedback sounds and voices, choices
of speech replacement voices, etc.
The list can be fairly long.
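One way to picture that set of building blocks is a single adjustable profile object. The sketch below is a minimal Python illustration of the idea (all field names and ranges are my own assumptions, not part of any existing system); the point is that every element is user-adjustable but clamped to usable ranges, so a stray adjustment can't render the interface unusable.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    """User-adjustable interface settings (names and defaults hypothetical)."""
    dwell_ms: int = 1200           # hover time per screen object
    button_scale: float = 1.0      # size multiplier for screen objects
    high_contrast: bool = False    # special coloring for low vision
    font_pt: int = 14              # text size
    audio_feedback: str = "click"  # e.g. "click", "voice", "none"

    def adjust(self, **changes):
        """Apply user changes, then clamp everything to usable ranges."""
        for key, value in changes.items():
            if not hasattr(self, key):
                raise KeyError(f"unknown setting: {key}")
            setattr(self, key, value)
        self.dwell_ms = max(300, min(10_000, self.dwell_ms))
        self.button_scale = max(0.5, min(4.0, self.button_scale))
        self.font_pt = max(8, min(72, self.font_pt))
        return self
```

A request for a 100 ms dwell, for instance, would be clamped up to the 300 ms floor rather than producing a scan too fast for anyone to follow.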
And then write a system bot, an AI tool that helps the user to discover and
build and set up their system, using those tools.
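The bot's simplest job might look like the toy adviser below: watch selection outcomes and nudge the dwell time toward the user's demonstrated speed. This is purely a sketch of the idea, assuming a "miss" means the scan passed the intended object without a press; the thresholds and names are invented for illustration.

```python
class SetupBot:
    """Toy adviser: tracks hits and misses and suggests a dwell time."""

    def __init__(self, dwell_ms=1200):
        self.dwell_ms = dwell_ms
        self.hits = 0
        self.misses = 0  # scan passed the target with no press

    def record(self, hit):
        """Log one selection attempt."""
        if hit:
            self.hits += 1
        else:
            self.misses += 1

    def suggest(self):
        """Suggest a new dwell time once there is enough evidence."""
        total = self.hits + self.misses
        if total < 10:
            return self.dwell_ms          # too little data: no change yet
        miss_rate = self.misses / total
        if miss_rate > 0.3:               # user often too slow: slow the scan
            return int(self.dwell_ms * 1.25)
        if miss_rate < 0.05:              # comfortably fast: speed up a bit
            return int(self.dwell_ms * 0.9)
        return self.dwell_ms
```

A real bot would of course weigh far more signals (fatigue over a session, time of day, which screens cause trouble), but the monitor-and-suggest loop is the same shape.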
Also, what about controllers? In my Mac, I have a great generic USB driver
called USB Overdrive (I just call it USBO). I can tell a stick which buttons
do what, a few different controller modes, and speed. HOWEVER, the USBO
folks didn't supply an AS dictionary for it (and have never replied to my
requests).
So AFAIK, there is currently no way to set up USB controllers from within
Rev or SuperCard. I've been begging for this feature for a year.
Now, with such a system, the user can also move on to build screen objects
to control external devices, controls for home systems, lights,
entertainment, communications, environmental controls, various kinds of
robotic systems, etc.
All from a single switch. Or two switches, or a touch controller like this:
The difficulties for me will be four-fold:
1) Designing a truly extensible, expandable modular system which, at the
same time, offers connectivity between appropriate modules. The fewer
limits, the better.
2) Designing and maintaining the AI bot. It may involve some fairly
convoluted multi-dimensional arrays, where I tend to get lost.
3) Gearing it to be able to _adapt_ to what is now high-end equipment
(64-bit processors, etc.) and parallel processing algorithms when the price
of that stuff starts coming down.
4) Handling driver control for lots of different controllers.
More information about the use-livecode mailing list