A question about using metacard/runtime to measure response time

Wilhelm Sanke sanke at hrz.uni-kassel.de
Wed Nov 26 23:36:09 EST 2003


On Tue, 25 Nov 2003, Robert Brenstein <rjb at rz.uni-potsdam.de>  wrote:

> Aimee Skye <skyeal at mcmaster.ca> wrote on Fri, 21 Nov 2003:
>
> >Hi,
> >
> >I downloaded the Runtime Revolution (2.1.2) evaluation edition, and I am
> >interested in programming experiments that measure people's response
> >times/latencies to particular events or stimuli. What I need to know is:
> >suppose a person initiates some event (e.g. a keypress) at time T. What
> >is the degree of, or maximum amount of, error in the RT that will be
> >recorded by the program? As a specific example, suppose I show a word on
> >the screen at time T1 (and ask the program to get the milliseconds when
> >this event occurs) and wait for the user to press a key on the keyboard,
> >which is time T2, and then subtract the two to get a response time - what
> >sort of error can I expect in these estimates (on the order of
> >milliseconds)? Thanks for any help you can provide.
> >
> >Aimee
>
> There is no simple answer to this question, because it depends on the
> hardware, OS version, and computer configuration in varying degrees.
> For example, the transfer of the signal identifying a keypress from the
> keyboard to the program can vary considerably, depending on how the
> desktop bus works.
>
> You should search for reports on this topic that have been published
> over the last several years. One of the journals that has carried a
> number of these is Behavior Research Methods, Instruments, & Computers.
>
> Robert


I have an application, built with MetaCard, that measures "short 
term memory span". Increasingly long sequences of digits or letters are 
tachistoscopically displayed, for up to 1 second, and the response times 
are listed.

So I had a similar problem: determining the accuracy of measurement with 
the milliseconds function. My experience is that there is a deviation of 
about 5% from the average, meaning you have to allow for a maximum 
variation of about 10% in measurement accuracy.

You can easily test that for yourself.

Create a nested repeat loop. The outer loop measures the difference 
between start and end times. In the inner loop, put some activity that 
is identical on each pass (such as doing some math 50 times).
Display the differences between start and end times, measured in 
milliseconds, and you get an idea of the accuracy.
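A minimal sketch of such a test in MetaCard script (the handler, the
dummy calculation, and the field name "Results" are illustrative
assumptions, not part of the original post):

```
on mouseUp
  repeat 10 times -- run several trials to see the spread
    put the milliseconds into tStart
    repeat 50 times -- identical workload on each pass
      put sqrt(2) * 12345.6789 into tDummy
    end repeat
    put the milliseconds into tEnd
    -- each line shows one trial's elapsed time in milliseconds
    put (tEnd - tStart) & return after field "Results"
  end repeat
end mouseUp
```

Comparing the listed values against their average gives a rough idea of
the timing variation on your particular machine.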

Regards,

Wilhelm Sanke


