Interacting locally with LC and AI models

jbv at souslelogo.com
Mon Feb 3 04:11:30 EST 2025


Hi list,

Someone asked me privately, so I thought others might be
interested in a simple way to make LiveCode interact with
AI models running locally on their machine.

The method uses Ollama, which is available for Mac, Windows and
Linux: https://ollama.com

1- Download and install Ollama (plenty of tutorials on YouTube)

2- start the Ollama server via Terminal (on Mac):
   ollama serve

3- pull and run a model via Terminal (the model is downloaded
   from the Ollama library on first run; GGUF models from
   Hugging Face can also be used):
   ollama run llama3.2:1b

4- in a LiveCode stack, create a field with the following content :
   curl -X POST http://localhost:11434/api/generate \
   -H "Content-Type: application/json" \
   -d '{
     "model": "llama3.2:1b",
     "prompt": "What is the capital of France?",
     "stream": false
   }'

5- create a button with the following script :
   on mouseUp
     put field 1 into tCurl
     put shell(tCurl)
   end mouseUp

6- the model's answer displays in the message box, in JSON format.
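If you'd rather skip shelling out to curl, the same request can be
made directly from a script. Here is a minimal Python sketch of the
steps above (the helper names build_payload and extract_answer are
my own; the endpoint URL and the "response" field follow Ollama's
/api/generate API):

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint

def build_payload(model, prompt):
    """Build the same JSON body as the curl example (streaming off)."""
    body = {"model": model, "prompt": prompt, "stream": False}
    return json.dumps(body).encode("utf-8")

def extract_answer(reply_json):
    """Pull the generated text out of Ollama's JSON reply.

    With "stream": false, /api/generate returns a single JSON object
    whose "response" field holds the model's answer.
    """
    return json.loads(reply_json)["response"]

def ask(model, prompt):
    """POST the prompt to a locally running Ollama server."""
    req = request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return extract_answer(resp.read())

# Example (requires `ollama serve` to be running):
# print(ask("llama3.2:1b", "What is the capital of France?"))
```

The same extraction could of course be done in LiveCode on the JSON
string returned by shell(), so the message box shows only the answer
text instead of the full JSON.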

This works great on machines with a GPU and at least 16 GB of RAM.
The more RAM, the bigger the models that can be used.

Enjoy,
jbv



More information about the use-livecode mailing list