Synchronisation of sound and vision
livfoss at mac.com
Wed Feb 12 07:28:37 EST 2020
Thanks, that’s a start - I will look at the dictionary. I suppose the callbacks rely on analysing how long the performer takes to say each line or word. It’s a lot of work, but there’s no way around it, since potentially every line takes a different length of time to recite. If it’s too much work, I guess I can just display the whole text and have one callback at the end of each recording. Maybe that really is the practical solution for a large body of work (say all the Shakespeare sonnets, for example).
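For the record, a rough (untested) sketch of what the callbacks approach might look like - the player name, field name, interval values and handler names here are all placeholders, and the intervals are in the player’s timeScale units rather than seconds, so you’d multiply your measured times by the timeScale of the player:

```livecode
-- untested sketch: "Recital" and "Poem" are placeholder object names
on setupCues
   -- one cue per line of the poem; each callback is
   -- "interval,messageName", one per line of the list
   local tCues
   put "0,hiliteLine1" & return & \
         "1800,hiliteLine2" & return & \
         "3900,hiliteLine3" into tCues
   set the callbacks of player "Recital" to tCues
   start player "Recital"
end setupCues

-- each message fires when playback passes its interval
on hiliteLine2
   set the backgroundColor of line 1 of field "Poem" to empty
   set the backgroundColor of line 2 of field "Poem" to "255,255,0"
end hiliteLine2
```

One handler per cue is crude; with a naming convention like the above you could instead generate the list in a repeat loop from a table of per-line timings.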
Anyway thanks for the hint.
> On 12 Feb 2020, at 12:16, Tore Nilsen via use-livecode <use-livecode at lists.runrev.com> wrote:
> You will have to use the callbacks property of the player to do what you want to do. The callbacks list would be your cues. From the dictionary:
> The callbacks of a player is a list of callbacks, one per line. Each callback consists of an interval number, a comma, and a message name.
> Tore Nilsen
>> 12. feb. 2020 kl. 11:25 skrev Graham Samuel via use-livecode <use-livecode at lists.runrev.com>:
>> Folks, forgive my ignorance, but it’s a long time since I considered the following and wondered what pitfalls there are.
>> I have in mind a project where a recording of someone reading a poetry text (“old fashioned” poetry in metrical lines) needs to be synchronised with the display of the text itself on the screen - ideally so that a cursor or highlight would move from word to word with the speaker, although that would almost certainly involve too much work for the developer (me), or at least so that lines are highlighted as they are being spoken. I can see that one would inevitably have to add cues to the spoken text file to fire off the highlighting, which is an unavoidable amount of work, but can it be done at all in LC? For example, what form would the cues take?