# Using a local LLM as a personal engineering sidekick
## what & why

Lately (over the last year) I've been using OpenAI's gpt-3.5-turbo in my terminal, as a kind of personal rubber-duck/sidekick engineer. In my fish config I had the following function (from my dotfiles):

```fish
# needs set -U OPENAI_KEY <KEY>
if command -q https; and command -q yq
    alias h 'hey_gpt'

    function hey_gpt --description "talk to gpt"
        set prompt (echo $argv | string join ' ')
        set gpt (https -b post api.openai.com/v1/chat/completions \
            "Authorization: Bearer $OPENAI_KEY" \
            model=gpt-3.5-turbo \
            temperature:=0.25 \
            stream:=true \
            messages:='[{"role": "user", "content": "'$prompt'"}]')
        for chunk in $gpt
            if test $chunk = 'data: [DONE]'
                break
            else if string match -q --regex "content" $chunk
                yq -0 '.choices[0].delta.content' < (echo -n $chunk | string replace 'data: ' '' | psub)
            end
        end
    end
end
```

This allowed me to do things like this right in my terminal: ...
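For clarity on what that loop is doing: the streaming endpoint returns Server-Sent Events, where each chunk is a line of the form `data: {...}`, and the function strips the `data: ` prefix, stops at `data: [DONE]`, and pulls `.choices[0].delta.content` out of each JSON payload. A minimal Python sketch of the same parsing logic (the sample chunks below are illustrative, not captured from a real response):

```python
import json

# Simulated SSE lines, shaped like the streaming chat completions output
# (contents are made up for illustration).
chunks = [
    'data: {"choices": [{"delta": {"content": "Hello"}}]}',
    'data: {"choices": [{"delta": {"content": ", world"}}]}',
    'data: [DONE]',
]

def stream_text(lines):
    """Mirror the fish loop: stop at [DONE], strip the 'data: ' prefix,
    and yield each delta's content field if present."""
    for line in lines:
        if line == 'data: [DONE]':
            break
        payload = json.loads(line.removeprefix('data: '))
        delta = payload['choices'][0]['delta']
        if 'content' in delta:
            yield delta['content']

print(''.join(stream_text(chunks)))  # Hello, world
```

The fish version leans on `yq` for the JSON step and `psub` to feed each chunk to it as a file, but the control flow is the same.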