bobsneidar at iotecdigital.com
Fri Mar 30 18:42:40 CEST 2018
I think we are not seeing the elephant in the room here. Programming languages work because a great deal of effort has been exerted defining what we MEAN when we SAY something to the computer. In fact, the whole process of writing software is precisely that of removing all ambiguity. It's true that on the surface it appears that we can tell the computer what we want and it can interpret what we mean, but only because under the hood someone wrote code that says, in effect: given all these inputs, produce this output. That process is, after all, at its heart a binary one.
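To make the point concrete: here is a minimal sketch (in Python, with invented names) of why an "English-like" command only works when a developer has already enumerated, ahead of time, exactly which phrases map to which unambiguous operations. Nothing is "understood"; inputs either match a pre-written rule or are rejected.

```python
def interpret(command: str) -> str:
    """Map a fixed set of pre-defined English-like phrases to operations."""
    words = command.lower().split()
    # "add X and Y" is accepted only because this exact pattern was coded.
    if len(words) == 4 and words[0] == "add" and words[2] == "and":
        return str(int(words[1]) + int(words[3]))
    # Anything outside the enumerated patterns is rejected, not "understood".
    raise ValueError(f"no rule for: {command!r}")

print(interpret("add 2 and 3"))  # prints "5"
```

The apparent comprehension lives entirely in the branch the developer wrote, which is the "smoke and mirrors" being described.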
The fact that we are not constantly aware of this is why some men are able to believe that "artificial" intelligence means "actual" intelligence. The only intelligence a computer can possess is that of the developer and the end user. Anything else is an illusion. A grand one, I grant, but still it's only the old smoke and mirrors of the ancient sorcerers. The "magic" is making sure no one sees what the sorcerer is actually doing while his subjects are distracted by something else.
Let the flaming begin! ;-)
> On Mar 30, 2018, at 08:56 , Mark Waddingham via use-livecode <use-livecode at lists.runrev.com> wrote:
> An important question to ask here is 'what do we mean by English-like'?
> I'd suggest that the language doesn't matter - so 'natural language like' would perhaps be a better term but even then is that really what we mean?
> There's no inherent difference (formally at least) between a programming language and a natural language - at least from the way they are written (letters, punctuation, grammar, vocabulary) and perhaps not even in terms of interpretation (what a phrase in a language means - they are either declarations/definition of things, providing context or instructing actions).
> The difference comes at the point we consider the 'machine' which the language is instructing - human or computer.
> From this (very narrow) point of view, human machines (the brain) are perhaps just a great deal better 'engineered' to process language quickly and have a much greater capacity for storing, recalling and processing contextual information which means ambiguities can be resolved with a much greater degree of precision and fault tolerance.
> So we are perhaps talking about constructing language(s) which allows a computer to be instructed more like we would a human - i.e. not having to define every single thing in mind numbing detail, knowing that the receiver has enough competence and knowledge to infer and fill in the gaps correctly and then carrying out those actions with a high degree of accuracy (although computers are probably already better for accuracy in many domains - they just need their hand held throughout!) or at least have the ability to shout when things really don't 'compute'. In this vein I'm not sure syntax is so important.
> I don't think the experiment as you put it has yet ended - computers and their software development have just not caught up yet which is, in part, probably at least related to performance of computer machines for these kinds of tasks.
> Warmest Regards,