Educational uses for Rev (was Re: Plea to sell Dan's book widely)

Mark Swindell mdswindell at
Wed Aug 11 13:14:07 EDT 2004

It would seem courseware in this context implies primarily evaluation, 
not teaching/learning.  Students would need to hold the reins in 
Revolution themselves to create content that shows learning has 
occurred.  But then you most likely end up with the PowerPoint 
multimedia slide-show model as a result.

But on the testing end of things, perhaps the models provided by the 
AICC are the best readily available ones.  Expository writing and 
interviewing are the only real ways I know of to test the depth of a 
student's retention and comprehension of what has been learned.

Perhaps an answer is to create tools with which the student must 
create the test themselves, rather than take it, using the models you 
cite.  Then you can be assured they have at least known the material 
long enough to build questions from it, and the wrong answers they 
supply as distractors provide a context that implies some real 
comprehension.  How to evaluate this would pose another problem.


On Aug 11, 2004, at 9:53 AM, Richard Gaskin wrote:

> Marian Petrides wrote:
>> Not only in teaching programming but in designing custom educational 
>> courseware. Who wants the student to have ONLY simple multiple-guess 
>> questions to work with?
>> Life doesn't come with a series of four exclusive-or questions 
>> tattooed across it, so why give students this unrealistic view of the 
>> real world, when a little work in Rev will permit far more 
>> challenging interactivity?
> Agreed wholeheartedly.  Education-related work was the largest single 
> set of tasks folks did with HyperCard, and for all the tools that have 
> come out since, there remains an unaddressed gap which may be an ideal 
> focus for DreamCard.
>
> But moving beyond simple question models like multiple choice is 
> difficult.  The AICC courseware interoperability standard describes 
> almost a dozen question models, but most are variants of "choose one", 
> "choose many", "closest match", etc., sometimes enlivened by using 
> drag-and-drop as the mechanism for applying the answer, but not 
> substantially different from simple multiple choice in terms of truer 
> assessment of what's been learned.
>
> The challenge is to find more open-ended question models which can 
> still be assessed by the computer.  For example, the most open-ended 
> question is an essay, but I sure don't want to write the routine that 
> scores essays. :)
>
> What sorts of enhanced question models do you think would be ideal for 
> computer-based learning?
> -- 
>  Richard Gaskin
>  Fourth World Media Corporation
>  ___________________________________________________________
>  Ambassador at
> _______________________________________________
> use-revolution mailing list
> use-revolution at
