Distinguishing simple and double clicks
Richard Gaskin
ambassador at fourthworld.com
Tue Sep 23 18:21:10 EDT 2008
Eric Chatonet wrote:
>> Eric Chatonet wrote:
>>> I just want to make something that was easy with HC: e.g. allow the
>>> user to use simple and double clicks on the same button but with
>>> different actions.
>>> MouseUp is always sent by the engine first then mouseDoubleUp is sent
>>> if appropriate.
>>
>> Is that not how HyperCard works?
>
> Yes, for sure, but as mouseDoubleUp did not exist, it made things
> simpler in the end ;-)
I think I'm not understanding something: if HC didn't have mouseDoubleUp
messages, how was this easier to handle there?
>> As for the task itself, your handlers look good for what you want
>> to do so I'd go with those, but the question has me curious: What
>> is it you're working on?
>
> I have filmstrips of web pages: a simple click opens a viewer related
> to a database, while a double click goes to the 'real' page in your
> current browser.
> Of course I could replace the double click with a shift-click but,
> from an ergonomic point of view, it's not satisfying: two hands
> instead of one...
> All that, knowing that hovering over a thumbnail is enough to select it.
This sounds similar in some respects to the Finder's column view, in
which a single click opens a new pane in the right-most column
displaying info about the file, while a double-click triggers the most
common verb, launching the file. The first click is always handled
consistently, with the same behavior whether it occurs by itself or as
part of a double-click.
With the Finder, the verb-noun model is closely followed. The initial
action of showing the info pane isn't really a verb per se, but merely a
selection of the noun: the only action is showing info about the
selection without modifying it. It's more akin to updating an inspector
based on the current selection; the controls in an inspector are verbs,
but updating the inspector as the selection changes is not.
Your case may be unique in this regard:
In most galleries the thumbnails are nouns, and can be selected to allow
the user to choose actions to be performed on them. The developer can
add new actions at any time by just adding new command buttons, while
the basic interaction with the gallery remains essentially the same -
select noun, then select verb to apply to the noun.
Your gallery implements thumbnails as verb objects of a sort: they
don't have a selection mode per se, since clicking on them invokes an
immediate action, with the action differing based on the type of click.
I know you are deeply and earnestly studied in UI principles, so my
question wasn't about your judgment but merely the particulars of your
app that give rise to this uncommon model.
Given your extensive background in usability I'm confident your design
provides a good solution for what it aims to do, but it's such a rare
occurrence to have content displayed as verb-only controls that I'm not
sure this would be widely used as an engine enhancement, esp. given that
your excellent solution to the messaging conundrum was done in just a
few lines of code. Sometimes custom solutions require custom code. :)
FWIW, I'd sooner see mouseStillDown updated to no longer rely on polling
(see <http://quality.runrev.com/qacenter/show_bug.cgi?id=1832>).
Dragging gestures are very common, and currently require four handlers
to implement with any flexibility (mouseDown, mouseMove, mouseUp,
mouseRelease).
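To make the point concrete, here's a rough sketch of that four-handler
pattern for a simple drag (untested, and assuming the control just
follows the pointer; a real drag would typically track a click offset):

   local sDragging

   on mouseDown
      put true into sDragging
   end mouseDown

   on mouseMove pMouseH, pMouseV
      -- follow the pointer while the button is down
      if sDragging is true then
         set the loc of me to pMouseH, pMouseV
      end if
   end mouseMove

   on mouseUp
      put false into sDragging
   end mouseUp

   on mouseRelease
      -- mouseUp isn't sent if the button is released outside the control
      put false into sDragging
   end mouseRelease
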
That said, it may not be a bad thing for RunRev to implement a way of
suppressing one type of message when another is triggered, and I'd be
interested to see the UIs that make use of it.
>> If so, what is it about the interaction that requires that?
>>
>> Or to word it in a more solutions-oriented way, might a change to
>> the UI make for an interaction model which maintains the
>> traditional noun-verb expectation?
>
> In a certain sense, yes ;-)
> Like you, I have deeply studied all the guidelines (especially Apple's,
> which have always been better thought out :-) but times have changed:
> look at iTunes or iPhoto, and other software that doesn't even use any
> menus (except contextual ones).
I reviewed both of these after reading your post, and I can't find
exceptions to the noun-verb model. They also both have complete menu
bars, and as far as I can tell have double-click actions also available
as menu items.
But like I said, I've had some really good coffee this morning, so it
may be that I've overlooked something. If you see a verb-only content
object, help me see it through my caffeinated haze. :)
> People needed a strong 'layout syntax' ten or twenty years ago; now
> it's a bit different: intuitive understanding, simplicity, and several
> levels that make an app limpid for beginners yet able to provide
> sophisticated features to advanced users without overloading its
> interface - that is the new canon :-)
We're moving into a realm perhaps more abstract than most readers will
want to follow, but we're here now so let's have some fun and hope
they'll indulge - it's not too far off topic, perhaps even relevant:
While interface conventions definitely evolve, and arguably should over
time as the audience becomes increasingly sophisticated, the underlying
principles remain consistent, driven as they are not by technological
advancement or even user familiarity, but by the limits of cognitive
psychology, which evolves very slowly.
Apple's UI has come a long way since Mac OS 1.0, but the opening section
of their HIG on "Human Interface Design Principles" remains intact
almost verbatim from the first edition of Inside Macintosh.
One of the sub-sections there discusses the verb-noun model (though
without using those terms directly):
    Explicit and Implied Actions

    Each Mac OS X operation involves the manipulation of an object
    using an action. In the first step of this manipulation, the
    user sees the desired object onscreen. In the second step, the
    user selects or designates that object. In the final step, the
    user performs an action, either using a menu command or by
    direct manipulation of the object with the mouse or other device.
    This leads to two paradigms for manipulating objects: explicit
    and implied actions.
<http://developer.apple.com/documentation/UserExperience/Conceptual/AppleHIGuidelines/XHIGHIDesign/chapter_5_section_2.html#//apple_ref/doc/uid/TP30000353-TPXREF130>
It goes on to describe the differences between explicit and implied
actions, but all the while maintains the distinction between actions and
the objects those actions are performed on.
Even digging into the ancient history that is the Motif Style Guide, we
find the same core principle described:
    The direct manipulation model is an object-action model. That
    is, you first select an object or group of objects, then you
    perform an action on the selected objects. An object-action
    model allows the user to see what elements will be acted on
    before performing an action. It also allows multiple actions
    to be performed successively on the selected elements.
<http://www.s-and-b.ru/syshlp/motif_guide/MotifStyleGuide/Use_Real-World_Metaphors.html>
In the case of an object that supports multiple actions, I would
personally allow a selection mode, so the object can be selected and the
user can choose which action to perform on it. I might also support
double-click to trigger the most common action, but double-clicking is
physically difficult for some people, so I would use it as a shortcut
rather than as the only way to trigger a command on the object.
But a lot of these types of decisions come down to the audience: if
your audience is known to be arthritis-free, and if there is some visual
guidance to let them know that different gestures will perform different
actions, it may not matter much if double-click is the only way to
perform an action.
>> While not every difference between HC and Rev favors Rev, this is
>> one where perhaps Dr. Raney's doctorate in cognitive psychology may
>> have shown itself well: while HyperCard eats clicks on double-
>> click (and presumably introduces a delay to make that possible),
>> Rev gives the developer the freedom to handle them both, confident
>> that in most cases it'll be fine since it conforms to the most
>> common interaction model.
>
> The problem is that a simple click is nothing other than the
> first click of a double click ;-)
Precisely. Different messages, each with its own meaning. If a
single-click does one thing, why shouldn't it continue to do that thing
even when followed by a second click within the doubleClickInterval?
Speaking of, that reminds me of a tip I forgot to include in my first post:
Since the interval distinguishing two single-clicks from a double-click
can be modified by the user, Rev provides a doubleClickInterval global
property, which reports the number of milliseconds the current system
uses as the threshold between the two.
While the 500 ms you have in your code should be more than generous for
most cases, more challenged users may have it set to an unusually high
number. On OS X 10.4 the slowest doubleClickInterval settable in System
Preferences is 5000.
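One way to generalize the 500 ms approach (just a sketch, with
hypothetical handler names) is to defer the single-click action by the
doubleClickInterval using "send in time", then cancel it if a
mouseDoubleUp arrives first:

   local sPendingMsgID

   on mouseUp
      -- wait one doubleClickInterval before committing to the single-click
      send "doSingleClick" to me in (the doubleClickInterval) milliseconds
      put the result into sPendingMsgID
   end mouseUp

   on mouseDoubleUp
      -- a second click arrived in time: drop the pending single-click action
      if sPendingMsgID is not empty then cancel sPendingMsgID
      put empty into sPendingMsgID
      doDoubleClick
   end mouseDoubleUp

   on doSingleClick
      put empty into sPendingMsgID
      -- hypothetical: open the viewer for this thumbnail
   end doSingleClick

   on doDoubleClick
      -- hypothetical: go to the 'real' page in the current browser
   end doDoubleClick

The trade-off is that the single-click action always waits out a full
doubleClickInterval before firing.
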
Side note: while testing this I discovered that the doubleClickInterval
is apparently initialized on startup rather than requested dynamically
in real time, so if you're testing you'll have to quit Rev between
changes to your OS settings to see those changes reflected in that property.
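A quick way to verify that is to type this into the message box before
and after relaunching Rev:

   put the doubleClickInterval
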
>> Am I wearing rose-colored glasses on this? Are there common
>> examples of verb objects handling double-clicks that I've
>> overlooked? (Sometimes good coffee breeds unwarranted optimism <g>.)
>
> Actually, I agree with you. More: models built twenty years ago have
> to be known precisely in order to be surpassed :-)
For UI conventions, definitely. For cognitive psychology, a little
patience is still required. :)
With gestures, perhaps the most popular new convention is the
multi-point input we see on the iPhone and the EeePC. By processing
multiple touch points simultaneously, and differently from a single
point, it opens up a lot of very intuitive behaviors.
But even with this new set of gestures, the basic noun-verb model is
maintained: the gestures apply actions to objects, and all gestures
invoke consistent behavior whether or not they're followed by another
gesture.
As long as we're this far off into the weeds, any opinions on Office
12's Ribbons?
<http://blogs.msdn.com/jensenh/>
Of all the UI conventions since the Xerox Star, Ribbons is probably the
most dramatic departure. For decades all GUI OSes have followed the WIMP
model (Windows, Icons, Menus, Pointer), but Ribbons omits menus altogether.
Oddly, even after all the research MS did supporting its rollout in
Office, apparently they lacked the confidence to use it in the OS
itself. Curious choice; I'd love to hear the reasoning behind the mixed
metaphors.
I like many things about Ribbons, but here I'm building only what I
would call "Dynamic Toolbars", so we can make use of the strong benefits
a Ribbon provides while still keeping the menu bar for quick review of
command options.
Have Ribbons influenced any of your designs?
--
Richard Gaskin
Managing Editor, revJournal
_______________________________________________________
Rev tips, tutorials and more: http://www.revJournal.com