Play MP3 - What format do you use for mobile?

Sannyasin Brahmanathaswami brahma at hindu.org
Mon Aug 8 11:52:56 EDT 2016


Mark Waddingham" wrote:

    command playSound pSoundFile
       if the environment is "mobile" then
          -- the mobile engines can play a sound file directly
          play pSoundFile
       else
          -- on desktop, route the file through a hidden player control
          set the filename of player "myHiddenPlayer" to pSoundFile
          start player "myHiddenPlayer"
       end if
    end playSound
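
For anyone following along, a hypothetical call site might look like this. The "sounds" folder and file name are my own example, not from Mark's post, and it assumes a hidden player control named "myHiddenPlayer" exists on the card for the desktop branch:

    on mouseUp
       -- play a voice clip bundled in the standalone's resources folder
       playSound specialFolderPath("resources") & "/sounds/instructions.mp3"
    end mouseUp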

-----
Mark: Welcome back to "After The Conference Land"; I wish I could have been there.

play fork code: Thanks, perfect… we are already using this model for actual exposed players with controls for the user, since we have to fork to a native mobile player on iOS/Android and the LC player control on desktop anyway, so this follows the same paradigm, only even easier. In fact we might adopt this instead of an exposed player and only give the user the option to stop or start, even for a long recording.

I'm not close to user expectations for control over audio playback. The younger set here run all day with earbuds in… I don't. Even in iTunes all I ever do is stop or start what I'm listening to. I don't think there is a strong use case for scrubbing forward or back, but I could be wrong.
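
If we do go stop/start-only, a minimal sketch of a toggle built on the same fork might look like this. The state variable, the "play empty" trick for stopping the current mobile sound, and pausing/rewinding the hidden desktop player are my assumptions, not something tested here:

    local sIsPlaying

    command toggleSound pSoundFile
       if sIsPlaying then
          if the environment is "mobile" then
             -- assumption: passing empty stops whatever sound is currently playing
             play empty
          else
             -- pause the hidden player and rewind it for the next start
             set the paused of player "myHiddenPlayer" to true
             set the currentTime of player "myHiddenPlayer" to 0
          end if
          put false into sIsPlaying
       else
          playSound pSoundFile
          put true into sIsPlaying
       end if
    end toggleSound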

The dictionary lists these as related properties in the entry on "play": looping, dontRefresh, playRate, showSelection, frameCount, playLoudness, callbacks, currentTime, playDestination.
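
Most of those look like player-level properties. A quick sketch of applying a few of them to the hidden desktop player; the values are arbitrary, purely for illustration:

    -- configure the hidden player before starting it (illustrative values only)
    set the looping of player "myHiddenPlayer" to false
    set the playLoudness of player "myHiddenPlayer" to 75 -- range 0 to 100
    set the showSelection of player "myHiddenPlayer" to false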

Can we extract the currentTime from just a "play sound" in progress, or one that was stopped? I suspect that level of control is only available for the player.
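
For the desktop branch at least, here is a hedged sketch of polling the hidden player. currentTime and timeScale are documented player properties; whether anything comparable exists for a bare mobile "play" is exactly the open question above:

    command reportPlaybackPosition
       if the environment is not "mobile" then
          -- currentTime is expressed in the player's timeScale intervals per second
          put the currentTime of player "myHiddenPlayer" into tIntervals
          put the timeScale of player "myHiddenPlayer" into tScale
          if tScale > 0 then
             answer format("%.1f seconds in", tIntervals / tScale)
          end if
       end if
    end reportPlaybackPosition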

SIZE ISSUES:

I just saved the same 49 KB MP3 file as a WAV with the same sample rate and bit depth settings. The difference in size was even more dramatic than I expected: the WAV was 678 KB! We are not talking here about short beeps, quick "bird tweets" or MIDI-type loops, but relatively long (for apps) 10-30 second voice instructions.

So if one were to add, say, 100 fifteen-second sound files to an app package, that would be:

~5 MB of MP3s

vs

67.8 MB of WAVs!

OT -- RECORDING: I'm a newbie on this mobile delivery platform. We do have our high-end Sennheiser → Edirol recording system for the important work that goes to the web, where I run files carefully through a dynamics processor and tube equalizer, scrub the high range, normalize and save with dither… etc., but here I'm looking at a quick-and-dirty production process that still meets the requirements. Any advice appreciated.

Input settings in Adobe Audition: 16,000 Hz sample rate, mono, 24-bit depth; the clip is 14 seconds long. It was made with an inexpensive Plantronics USB headset/mic, the same one I use for Skype. Anything lower than this starts to sound terrible.

The only sound processing was to sample the hiss on the noise floor, remove it with noise reduction, and export with the same settings as the input.

The WAV has minutely better quality than the MP3, which one would expect, but for voice I'm not sure the difference will be perceived by the user. You have to listen to them side by side to "get critical"… For those interested, check out these two sound files, a 14-second "instructions" test:

http://wiki.hindu.org/outgoing/instructions.mp3
http://wiki.hindu.org/outgoing/instructions.wav
