OT?: AI, learning networks and pattern recognition (was: Apple's actual response to the Flash issue)

Matthias Rebbe runrev260805 at m-r-d.de
Mon May 3 02:08:19 EDT 2010


Dear all,

I think it has all been said. Please stop this annoying discussion.

This list is called "use-revolution", so maybe we can get back to that.

Thank you!

Matthias
On 03.05.2010 at 07:47, Randall Lee Reetz wrote:

> Why don't you ask the guys at Adobe if their content is really aware?
> 
> -----Original Message-----
> From: Ian Wood <revlist at azurevision.co.uk>
> Sent: Sunday, May 02, 2010 9:27 PM
> To: How to use Revolution <use-revolution at lists.runrev.com>
> Subject: OT?: AI, learning networks and pattern recognition (was: Apple's actual response to the Flash issue)
> 
> Now we're getting somewhere that actually has some vague relevance to  
> the list.
> 
> 
> On 2 May 2010, at 22:39, Randall Reetz wrote:
> 
>> I had assumed your questions were rhetorical.
> 
> If I ask the same questions multiple times you can be sure that  
> they're not rhetorical.
> 
>> When I say that software hasn't changed, I mean that it hasn't
>> jumped qualitative categories.  We still live in a world where
>> computing means pre-written, compiled software blindly executed by
>> machines on top of stacked foundational code that has no idea what
>> it is processing: it can only process linearly, all semantics have
>> been stripped away, and it doesn't learn from experience or react
>> to context unless that too has been pre-codified and frozen in
>> binary or byte code.  Hardware has been souped up, so our little
>> wrote tricks can be made more elaborate within the substantial
>> confines mentioned.  These same in-paradigm restrictions apply both
>> to the software users slog through and to the software we use to
>> write software.
>> 
>> As a result, these very plastic machines with mercurial potential
>> are reduced to simple players that react to user interrupts.  They
>> are sequencing systems, not unlike the lead-type racks of
>> Gutenberg-era printing presses.  Sure, we have taught them some
>> interesting-seeming tricks, but whatever we represent as digital
>> media – sound, video, multi-dimensional graph space, markup – our
>> sequencer doesn't know enough to care.
> 
> So for you, for something to be 'revolutionary' it has to involve a  
> full paradigm shift? That's a more extreme definition than most people  
> use.
> 
>> Current processors are capable of 6.5 million instructions per
>> second, yet standard users running standard software use less than
>> a billionth of the available cycles.
> 
> From a pedantic, technical point of view, these days if the processor  
> is being used that little then it will ramp down the clock speed,  
> which has some environmental and practical benefits in itself. ;-)
> 
>> As regards photo-editing software, anyone aware of the history of
>> image processing will recognize that most of what you see in
>> Photoshop and other programs was proposed and executed on other
>> systems long before some guys in France democratized these
>> algorithms for consumer use and had their code acquired by Adobe.
>> It used to be called array arithmetic, and it applied cleanly to
>> images divided up into a grid of pixels.  None of these systems
>> "see" an image for its content; they see an array of numbers that
>> can be crunched sequentially like a spreadsheet.
>> 
>> It was only when object-recognition concepts were applied to
>> photos that any kind of compositional grammar could be extracted
>> from an image and compared, part by part, with other images
>> similarly decomposed.  This is a form of semantic processing, and
>> it has parallels in other media: text parsers, sound-analysis
>> software, and so on.
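> 
> Your 'array arithmetic' description is fair as far as it goes - in
> modern terms it really is whole-grid maths on pixels. A minimal
> sketch of that 'spreadsheet' view, assuming Python with NumPy (my
> toy example, nobody's shipping code):
> 
>     import numpy as np
> 
>     # An image, on this view, is just a grid of numbers.
>     img = np.random.randint(0, 256, size=(4, 4)).astype(float)
> 
>     brighter = np.clip(img * 1.2 + 10, 0, 255)  # multiply, add, clamp
>     negative = 255 - img                         # invert: subtraction
> 
>     # Even a box blur is only arithmetic: shift the grid and average.
>     blur = (img
>             + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
>             + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1)) / 5.0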
> 
> You haven't looked up what content-aware fill *is*, have you? It's
> based on the same basic concepts of pattern matching and feature
> detection that facial recognition software uses, but with a
> different emphasis.
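> 
> At its crudest, example-based fill is just patch matching: scan the
> image for the small window that best matches the surroundings of
> the hole and copy it in. A toy sketch in Python/NumPy - emphatically
> *not* Adobe's actual algorithm, just the underlying idea:
> 
>     import numpy as np
> 
>     def best_patch(img, target, size=3):
>         # Score every size x size window by summed squared
>         # difference against the target; keep the closest match.
>         best, best_score = None, np.inf
>         h, w = img.shape
>         for y in range(h - size + 1):
>             for x in range(w - size + 1):
>                 cand = img[y:y + size, x:x + size]
>                 score = np.sum((cand - target) ** 2)
>                 if score < best_score:
>                     best, best_score = cand, score
>         return best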
> 
> To paraphrase: it's not facial recognition that you think is the
> only revolutionary feature in photography in twenty years, it's
> pattern matching/detection/eigenvectors. A lot of time and
> frustration would have been saved if you'd said that in the first
> place.
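> 
> And 'eigenvectors' sounds grander than it is. Eigenface-style
> recognition reduces to a few lines of linear algebra - a sketch,
> again assuming NumPy, with random numbers standing in for real
> face data:
> 
>     import numpy as np
> 
>     faces = np.random.rand(20, 64)        # 20 flattened 8x8 'faces'
>     mean = faces.mean(axis=0)
>     centred = faces - mean
> 
>     cov = np.cov(centred, rowvar=False)   # 64 x 64 pixel covariance
>     vals, vecs = np.linalg.eigh(cov)      # eigenvalues ascending
>     eigenfaces = vecs[:, -8:]             # the 8 strongest components
> 
>     # Recognition: compare coordinates in eigenface space.
>     coords = np.dot(faces[0] - mean, eigenfaces)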
> 
>> Semantics opens the door to building systems that "understand" the
>> content they process.  That is the promised second revolution in
>> computation, and it really hasn't seen any practical light of day
>> as yet.
> 
> You're jumping too many steps here - object recognition concepts are  
> in *widespread* use in consumer software and devices, whether it's the  
> aforementioned 'focus-on-a-face' digital cameras, healing brushes in  
> many different pieces of software, feature recognition in panoramic  
> stitching software or even live stitching in some of the new Sony  
> cameras.
> 
> Semantic processing of content doesn't magically enable a computer to  
> initiate action.
> 
>> Data mining really isn't semantically mindful; it simply uses
>> statistical reduction mechanisms to guess at the location of
>> pattern (a good first step, but one missing the grammatical
>> hierarchy necessary to work towards a self-optimized and
>> domain-independent ability to detect and represent salience in the
>> stacked grammar that makes up any complex system).
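> 
> You're half right about that much - plain statistical clustering
> really is semantics-free. A bare-bones k-means (a sketch, assuming
> NumPy; toy code that assumes no cluster ever empties out) 'guesses
> at the location of pattern' and does nothing more:
> 
>     import numpy as np
> 
>     def kmeans(points, k, iters=20):
>         # Pick k points as initial centres, then alternate:
>         # assign each point to its nearest centre, move each
>         # centre to the mean of its points. Statistics, no grammar.
>         np.random.seed(0)
>         centres = points[np.random.permutation(len(points))[:k]]
>         for _ in range(iters):
>             d = np.linalg.norm(points[:, None, :] - centres[None, :, :],
>                                axis=2)
>             labels = d.argmin(axis=1)
>             centres = np.array([points[labels == j].mean(axis=0)
>                                 for j in range(k)])
>         return labels, centres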
> 
> Combining pattern-matching with adaptive systems, whether they be
> neural networks or something else, is another matter - but it's
> been a long, hard slog to find out that this is what you're talking
> about.
> 
> Adaptive systems themselves are also quite widespread by now, from
> TiVos learning what programmes you watch to predictive text on an
> iPhone, from iTunes 'Genius' playlists & recommendations through to
> Siri (just bought by Apple, as it happens).
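> 
> Predictive text is a usefully humble example: at heart it can be as
> small as a bigram table that re-weights itself with every word you
> type. A toy sketch (mine, certainly not Apple's implementation):
> 
>     from collections import defaultdict, Counter
> 
>     class Predictor:
>         def __init__(self):
>             self.following = defaultdict(Counter)  # word -> followers
> 
>         def learn(self, text):
>             # Adapt: every observed word pair updates the counts.
>             words = text.lower().split()
>             for prev, nxt in zip(words, words[1:]):
>                 self.following[prev][nxt] += 1
> 
>         def suggest(self, word):
>             seen = self.following[word.lower()]
>             return seen.most_common(1)[0][0] if seen else None
> 
>     p = Predictor()
>     p.learn("the cat sat on the mat")
>     print(p.suggest("on"))   # -> 'the'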
> 
>> Such systems will need to work all of the time.  ALL OF THE TIME!
>> They will pause only momentarily to pay attention to our
>> interactions as needed.  Once they are running, these systems will
>> subsume all of the manual activity we have been made to perform to
>> this day.  Think "fly by wire" for processing.
> 
> That's a really REALLY bad analogy. FBW is a pilot-initiated
> control system. It's smaller and lighter (the initial reason for
> its use) and it reacts to changes faster than the pilot can, to
> stop stalls etc., in a similar way to ABS in a car reducing the
> chances of a skid. It doesn't *initiate* anything in itself; it's
> 'just' a moderated control/feedback system.
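> 
> To make the distinction concrete: the moderation in an FBW or ABS
> loop is conceptually no more than this (illustrative numbers, not
> real avionics). It decides how *hard* to correct, never *where* to
> go:
> 
>     def moderate(commanded, actual, gain=0.5, limit=5.0):
>         # Follow the pilot's command, but clamp the correction so
>         # the airframe is never pushed past its safe envelope.
>         # Nothing here initiates a manoeuvre on its own.
>         correction = gain * (commanded - actual)
>         return max(-limit, min(limit, correction))
> 
>     print(moderate(commanded=20.0, actual=3.0))   # -> 5.0, clamped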
> 
>> Gone is the need to discretely encode every single bit in exactly
>> the only possible sequence.
> 
> This sentence makes no sense. Did you mean 'process' rather than  
> 'encode'?
> 
>> What it means is the difference between writing a letter and our
>> computer interceding by understanding the meta-intent of the wrote
>> and inefficient processes we engage in today – what are letters
>> for?  What resources is this user or entity after, and why?  Who
>> has those resources?  Which of those who have the desired
>> resources need something we might have in exchange?  How are the
>> vectors of intent among all entities entangled and grouped, and
>> how can our systems work towards optimizing this global intent
>> matrix?
> 
> I like William Gibson or Stewart & Cohen as much as the next SF
> fan, but again you're taking too many steps at once.
> 
> Emergent behaviour from a complex system (such as bypassing letter  
> writing by finding another way of communicating or reason for doing  
> things) is *emergent behaviour* - by definition you can't predict what  
> form it will take and you can't *plan* for it. You can't even plan  
> that it will *happen*.
> 
> In the same way, setting up a protocol for a network doesn't let you  
> predict that most internet traffic some years later will be via  
> Facebook or MMORPGs.
> 
>> So, when I use the word "revisionist" I am calling attention to the  
>> old sheep dressed up in new clothing but still being sheep.
> 
> Having now looked in a number of dictionaries on and offline, I stand  
> with Richmond's response. In common usage it's a word with very  
> specific connotations and they aren't ones that people associate with  
> software. With Steve Jobs, perhaps, but not with software. ;-)
> 
>> Software feature creep is not really evolving software.
> 
> That's a matter of definition. Within the photography field, apps
> like Aperture and Lightroom have had huge impacts on people's ways
> of working (often making whole suites of other apps redundant in
> one go); looking at a wider field, the explosion of geolocation
> features and services has revolutionised mobile devices and our
> interactions with them, and multi-touch devices are giving us new
> ways to physically interact with computing systems.
> 
>> That the jump is so long in coming is understandable.  It is easy to  
>> send a punch card through a machine and have it react accordingly  
>> every time.  The jump from wrote execution of static code to self  
>> aware semantically self optimized pattern engines is a big big big  
>> jump.  But it isn't as big as it might at first seem.  It is  
>> happening.  It will happen.  And computing will finally result in  
>> the kind of substantial increase in productivity that its expense  
>> requires.
> 
> 1. Much of what you're talking about as a final aim is emergent  
> behaviour - you *can't* predict what will or won't happen, or when.
> 
> 2. We've already been through the substantial increases in  
> productivity that the expense of computing requires. Increased  
> productivity isn't the problem - *expected* productivity is the  
> problem because it automatically increases as productivity increases.
> 
> 3. Adaptive systems don't just happen. They need to be trained, and  
> for the level of abstraction you're talking about they have to be  
> trained *a lot*. From a pragmatic point of view, much of that  
> increased productivity will be swallowed up by learning to be a good  
> system trainer, in the same way that certain types of information  
> research have been vastly increased via the net, only to be swallowed  
> up in learning to use search engines efficiently and learning how to  
> winnow out all the chaff.
> 
> 4. The level of independent action you appear to be happy with in a  
> computer gives most people the screaming heebie-jeebies and flashbacks  
> to "I'm sorry Dave, I can't do that".
> 
> 5. Most importantly, your entire body of communications on the list  
> appears to miss out a vital step - how do adaptive systems magically  
> morph into systems capable of initiating actions without user or  
> programmer input? I'm uncomfortably reminded of S. Harris's 'then a  
> miracle occurs' cartoon. :-(
> http://www.sciencecartoonsplus.com/pages/gallery.php
> 
> 6. On a slightly more tongue-in-cheek note, enjoy the minutes between  
> the first ubiquitous 'self aware semantically self optimized pattern  
> engine' and dying/transcending/experiencing "It's life Jim, but not as  
> we know it" in the ensuing technological singularity. ;-)
> 
> 
> Ian
> 
> 
> P.S. It's 'rote', not 'wrote'. I know it's just a typo, but it's one  
> that drastically alters some of your sentences.
> 
> _______________________________________________
> use-revolution mailing list
> use-revolution at lists.runrev.com
> Please visit this url to subscribe, unsubscribe and manage your subscription preferences:
> http://lists.runrev.com/mailman/listinfo/use-revolution



