⚠️ Local support hasn't been tested. Please report bugs through GitHub Issues or Discord.

Running Locally

Running locally can be done through LocalAI by changing the `apiPath` and `model` options.

```yaml
# Configuration options to run through LocalAI
apiPath: https://api.openai.com/v1/chat/completions
# LLM model should support function calling
model: gpt-3.5-turbo
```
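When running against a self-hosted LocalAI server, the same two options would typically point at the local endpoint instead. The host, port, and model placeholder below are assumptions based on LocalAI's defaults, not values from this project's documentation:

```yaml
# Hypothetical example: LocalAI listening on its default port (8080).
# Adjust the host and port to wherever your LocalAI server runs.
apiPath: http://localhost:8080/v1/chat/completions
# Replace with a model loaded in your LocalAI instance that supports function calling
model: <your-localai-model-name>
```

LocalAI exposes an OpenAI-compatible API, which is why only the endpoint URL and model name need to change.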