19 February 2025

Setting up sgpt on macOS to run with Ollama locally

(A command-line productivity tool powered by AI large language models.)


> pip install "shell-gpt[litellm]"
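If you prefer to keep it out of your system Python, pipx (assuming it is installed) handles the extras syntax the same way:

> pipx install "shell-gpt[litellm]"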


Create the config file ~/.config/shell_gpt/.sgptrc:
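On a fresh install the directory may not exist yet; creating it first (together with the roles directory referenced in the config) avoids a save error in the editor:

> mkdir -p ~/.config/shell_gpt/roles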


> vi ~/.config/shell_gpt/.sgptrc


Add the following:

CHAT_CACHE_PATH=/tmp/chat_cache
CACHE_PATH=/tmp/cache
CHAT_CACHE_LENGTH=100
CACHE_LENGTH=100
REQUEST_TIMEOUT=60
DEFAULT_MODEL=ollama/mistral:7b-instruct
DEFAULT_COLOR=magenta
ROLE_STORAGE_PATH=~/.config/shell_gpt/roles
DEFAULT_EXECUTE_SHELL_CMD=false
DISABLE_STREAMING=false
CODE_THEME=dracula
OPENAI_FUNCTIONS_PATH=~/.config/shell_gpt/functions
OPENAI_USE_FUNCTIONS=false
SHOW_FUNCTIONS_OUTPUT=false
API_BASE_URL=http://127.0.0.1:11434
PRETTIFY_MARKDOWN=true
USE_LITELLM=true
SHELL_INTERACTION=true
OS_NAME=auto
SHELL_NAME=auto


Download a model with Ollama, e.g. mistral:7b-instruct, which matches the DEFAULT_MODEL in the config file above. The ollama/ prefix in DEFAULT_MODEL tells LiteLLM to route requests to the local Ollama server, so the part after the slash must be a model you have pulled.

> ollama pull mistral:7b-instruct
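You can sanity-check the model directly in Ollama before wiring it into sgpt:

> ollama run mistral:7b-instruct "Reply with OK"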


# list downloaded ollama models

> ollama list


Example output:


NAME                         ID              SIZE      MODIFIED
mistral:7b-instruct          f974a74358d6    4.1 GB    About an hour ago
llama3.1:8b-instruct-q8_0    b158ded76fa0    8.5 GB    10 days ago
deepseek-r1:14b              ea35dfe18182    9.0 GB    4 weeks ago
llama3.2-vision:latest       085a1fdae525    7.9 GB    4 weeks ago
llama3.2:latest              a80c4f17acd5    2.0 GB    4 weeks ago
phi4:latest                  ac896e5b8b34    9.1 GB    4 weeks ago
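Any of these can be inspected in more detail (parameters, context length, prompt template) with ollama show:

> ollama show phi4:latest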


# Start the Ollama server (not needed if the macOS Ollama app is already running)

> ollama serve
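A quick way to confirm the server is up and listening on the API_BASE_URL from the config is to list the installed models over Ollama's HTTP API:

> curl http://127.0.0.1:11434/api/tags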


# Change the model name to one of the names above, e.g. phi4:latest, if you don't want to use the DEFAULT_MODEL (ollama/mistral:7b-instruct). Keep the ollama/ prefix.

> sgpt --model ollama/phi4:latest "What is the Fibonacci sequence?"
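sgpt's other modes work the same way against the local model (DEFAULT_MODEL is used when --model is omitted): --shell generates a shell command for your OS, and --code returns code only.

> sgpt --shell "find all log files larger than 100 MB"

> sgpt --code "python function that returns the nth Fibonacci number"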


More info and docs:   https://github.com/TheR1D/shell_gpt 
