Instinct is a 7 billion parameter model, so expect slow responses if you are
running it on a laptop. To learn how to run inference with Instinct on a GPU,
see our HuggingFace model card.

1. Install Ollama
If you haven’t already installed Ollama, see our guide here.

2. Download Instinct
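Pull the model into your local Ollama model store. The tag below is a placeholder; check our HuggingFace model card for the exact name under which Instinct is published.

```bash
# Download Instinct locally via Ollama.
# "instinct" is a placeholder tag; substitute the exact name from the model card.
ollama pull instinct
```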
3. Update your config.yaml
Open your config.yaml and add Instinct to the models section:
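A minimal sketch of the entry, assuming Ollama as the provider and a locally pulled model; the model value is a placeholder and should match the tag you pulled in the previous step:

```yaml
models:
  - name: Instinct
    provider: ollama
    model: instinct # placeholder; use the exact tag you pulled with Ollama
```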