Training the AI to write better LiveCode

Geoff Canyon gcanyon at gmail.com
Tue Jan 24 20:20:51 EST 2023


On Tue, Jan 24, 2023 at 8:10 AM Bob Sneidar via use-livecode <
use-livecode at lists.runrev.com> wrote:

> I don't think it needs to store ALL the permutations, only the viable
> ones, the ones that lead to success. That has to be a much smaller number.


There are only three outcomes: win, lose, draw. Even if the breakdown were
0.1% wins, 0.1% losses, and 99.8% draws, the decisive positions alone would
still be far more than could be held using all the storage on Earth, a
billion times over.
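
As a rough back-of-envelope check (my figures, not Bob's: roughly 4.8 x 10^44
legal chess positions per John Tromp's estimate, and something on the order of
10^23 bytes of storage worldwide), here's the arithmetic in Python:

    # Back-of-envelope; both figures below are assumptions, not from the thread:
    #   ~4.8e44 legal chess positions (John Tromp's estimate)
    #   ~1e23 bytes of total storage on Earth (order of a hundred zettabytes)
    legal_positions = 4.8e44
    decisive_fraction = 0.002        # 0.1% wins + 0.1% losses
    bytes_per_position = 1           # absurdly generous: one byte per position

    needed = legal_positions * decisive_fraction * bytes_per_position
    available = 1e23
    print(f"bytes needed:      {needed:.1e}")              # ~9.6e41
    print(f"shortfall factor:  {needed / available:.1e}")  # ~1e19

Even with those generous assumptions, the "successful" positions alone would
need about ten billion billion times more storage than exists.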

> But I was using that as an example of the mathematical nature of Chess. I
> think what we must mean by AI is that through recursion, a computer can
> retain successful paths to success (success being that which we define as
> success in the process.) I don't think we will ever see the day where a
> computer, lacking experience and all the data for a problem, can "reason"
> its way to success.
>

That's almost exactly what AlphaZero did: it was given the rules for legal
moves and a definition of the win conditions, and then played against itself.
It wasn't given any information about existing openings or endgames. It was entirely
self-taught, in 9 hours. I think the only reason to say that it didn't
reason about the game is that we *do* understand how it works at a low
level, and at an abstract level, but we *don't* understand the specifics
about how it works at a high level. It's the same way I might understand
what a chess master means when they say a move is better because it's more
active; I understand what "active" means in general, but I would likely not
be able to say why that move was more active than several other moves.
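
If it helps to make the self-play idea concrete, here's a tiny sketch in
Python -- not AlphaZero, and nothing like its scale, just the same basic idea
applied to single-pile Nim. The program is told only the legal moves and the
win condition and learns by playing itself; all the names and constants here
are mine, invented for the example:

    import random

    # Toy self-play learner for single-pile Nim: take 1, 2, or 3 stones,
    # and whoever takes the last stone wins. No opening book, no stored
    # solutions -- only a value table learned from its own games.
    PILE, MOVES, EPSILON, ALPHA = 15, (1, 2, 3), 0.1, 0.5
    value = {0: 0.0}   # an empty pile means the player to move has already lost

    def legal(pile):
        return [m for m in MOVES if m <= pile]

    def choose(pile):
        if random.random() < EPSILON:                  # occasionally explore
            return random.choice(legal(pile))
        # otherwise leave the opponent the position with the lowest estimated value
        return min(legal(pile), key=lambda m: value.get(pile - m, 0.5))

    def self_play_game():
        pile, visited = PILE, []
        while pile > 0:
            visited.append(pile)
            pile -= choose(pile)
        # the player who just moved took the last stone and won; walk back
        # through the visited positions, alternating the result for each side
        result = 1.0
        for p in reversed(visited):
            old = value.get(p, 0.5)
            value[p] = old + ALPHA * (result - old)
            result = 1.0 - result

    for _ in range(20000):
        self_play_game()

    # The learned values should match the known theory: piles that are
    # multiples of 4 are bad for the player to move, so their values
    # sink well below the others.
    print({p: round(v, 2) for p, v in sorted(value.items())})

After a few thousand games it "discovers" the multiples-of-4 strategy without
ever being told it, which is the flavor of what AlphaZero did with chess at an
enormously larger scale.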

gc

