
A dream, a noob, a Raspberry Pi 5, and the new AI Kit.

Working AI Chat:


And a few sprinkles of code from the local_llm_assistant project (cloned below).

Cliff notes:

RUN:

sudo apt update
sudo apt install ffmpeg
sudo apt install espeak
sudo apt install python3-pip
sudo apt install python3-pyaudio
pip3 install openai openai-whisper RPi.GPIO pyaudio

git clone https://github.com/nickbild/local_llm_assistant
cd local_llm_assistant

wget https://huggingface.co/jartine/TinyLlama-1.1B-Chat-v1.0-GGUF/resolve/main/TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile?download=true
mv TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile?download=true TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile
chmod 755 TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile
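(Side note: the trailing "?download=true" is just part of the download URL, which is why the file lands with that odd name. If you'd rather skip the mv rename, wget's -O flag lets you pick the output filename up front, e.g. wget -O TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile "<that same URL>". The commands above work fine as-is, though.)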

Note: I had to change the script for the GPIO stuff. Go into chatbot.py (I use nano). I commented out the old RPi.GPIO stuff and put in the new gpiod stuff.

cd local_llm_assistant
sudo nano chatbot.py

Edit the script by commenting out the old RPi.GPIO lines with a # as shown:

#import RPi.GPIO as GPIO
#BUTTON = 8  …(thru)
#GPIO.setup(BUTTON, …

Then add this:

import gpiod
import time

BUTTON_PIN = 8

# On the Pi 5 used here, the 40-pin header shows up as gpiochip4
chip = gpiod.Chip('gpiochip4')
button_line = chip.get_line(BUTTON_PIN)
button_line.request(consumer="Button", type=gpiod.LINE_REQ_DIR_IN)

try:
    while True:
        button_state = button_line.get_value()
        if button_state == 1:
            main()
        time.sleep(0.1)  # brief pause so the polling loop doesn't peg the CPU
finally:
    button_line.release()


Then save chatbot.py
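Before running the whole assistant, it can be worth checking the button wiring on its own. Here is a minimal throwaway sketch (same chip name and pin as above assumed) that just prints the line value so you can watch it change when you press the button:

import gpiod
import time

# Same chip and pin as in chatbot.py above
chip = gpiod.Chip('gpiochip4')
line = chip.get_line(8)
line.request(consumer="ButtonTest", type=gpiod.LINE_REQ_DIR_IN)

try:
    while True:
        # Should print 1 while the button is pressed, 0 otherwise
        print("button state:", line.get_value())
        time.sleep(0.5)
finally:
    line.release()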


Make sure you are in the project directory:

cd local_llm_assistant



Start up the LLM with:

./TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile
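If you want to confirm the model is actually serving before launching the voice script, you can poke it from another terminal. This is just a sanity-check sketch using the openai package installed above; it assumes the llamafile is exposing its usual OpenAI-compatible API on port 8080 (check the llamafile's startup output for the actual address):

from openai import OpenAI

# Point the client at the local llamafile server instead of api.openai.com.
# The key is not checked locally, so any placeholder works.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="local")

reply = client.chat.completions.create(
    model="TinyLlama-1.1B-Chat-v1.0",  # the local server generally doesn't care what name you put here
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(reply.choices[0].message.content)

If that prints a sentence back, the LLM side is working.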

Then, in a different window, start the voice assistant software:

python3 chatbot.py

Wait a few seconds until you see the "Ready..." message, then press the button when you want to talk. When you see the "recording" message, speak your request. After the LLM completes its work, the response will be spoken through the speaker. (Since I'm a noob, I didn't do that part; I just watched the script output. Also, if you don't attach a button, the mic loop just stays open for about 3 seconds, and if you talk fast you can get your question in, lol.)