AI assistant and Ollama

Hello.
Ollama is installed on my computer, on Ubuntu Linux.
I'm running Grist with this command:

docker run -p 8484:8484  \
--network host \
-e ASSISTANT_CHAT_COMPLETION_ENDPOINT=http://localhost:11434/api/generate   \
-e ASSISTANT_MODEL=llama2   \
-e ASSISTANT_API_KEY=   \
-v $PWD/GRIST_persist:/persist   -it gristlabs/grist

But, unfortunately, the AI Formula Assistant doesn't seem to work correctly. :sleepy:
This is the result in the log:

 warn: Waiting and then retrying after error: TypeError: Cannot read properties of undefined (reading '0')
2025-04-10 09:09:51.022 - warn: Error during api call to /docs/nJ4nxPEEeKUYn1fX8dyfUN/assistant: Sorry, the assistant is unavailable right now. Try again in a few minutes. 
(TypeError: Cannot read properties of undefined (reading '0')) path=/docs/nJ4nxPEEeKUYn1fX8dyfUN/assistant, userId=5, altSessionId=nGrRun1rrHUXP6f9kj6azt, conversationId=f37cae7b-b5c1-472d-962a-3e79ac5baad9, type=formula, tableId=Table1, colId=A, text=Concaténer ID2 et nom_demarche,

Has anyone run into the same problem?
Has anyone got this config working?
Could you help me?

Thanks.

Hi @Luc_Leger, does the model have /api/v1/chat/completions available? That’s the kind of OpenAI-compatible conversational endpoint Grist needs.

Yes! It runs nicely! :slightly_smiling_face: :+1:
Big thanks to you @paul-grist

So, the definitive command is:

docker run -p 8484:8484 \
  --network host \
  -e ASSISTANT_CHAT_COMPLETION_ENDPOINT=http://localhost:11434/v1/chat/completions \
  -e ASSISTANT_MODEL=llama2 \
  -e ASSISTANT_API_KEY= \
  -v $PWD/GRIST_persist:/persist -it gristlabs/grist
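For anyone else hitting the same error: the original command pointed Grist at Ollama's native /api/generate endpoint, which uses a different request format than the OpenAI-style chat endpoint Grist expects. A quick sanity check from the host, assuming Ollama is running locally on its default port 11434 with the llama2 model pulled (adjust the model name to whatever you have installed):

```shell
# Ollama's native endpoint -- takes a "prompt" field; NOT what Grist expects:
curl -s http://localhost:11434/api/generate \
  -d '{"model": "llama2", "prompt": "Say hello", "stream": false}'

# The OpenAI-compatible endpoint Grist needs -- takes a "messages" array:
curl -s http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama2", "messages": [{"role": "user", "content": "Say hello"}]}'
```

If the second command returns a JSON response with a `choices` array, the endpoint is ready for Grist's ASSISTANT_CHAT_COMPLETION_ENDPOINT setting.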

It also runs with the Mistral LLM :wink: :fr: