
@gavrilov
Last active November 8, 2025 02:47
Obsidian voice recognition with local Whisper model


Install the Whisper plugin for Obsidian

Plugin settings:

Create the folders rec and rec_notes in your Obsidian vault (for recordings and transcribed notes, respectively).
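As a rough sketch (the setting names are assumptions and may differ between plugin versions), the settings that matter look like:

```
API URL:               http://127.0.0.1:8000/inference
API key:               (leave empty; the local server needs none)
Language:              auto
Recordings folder:     rec
Transcriptions folder: rec_notes
```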


Install Whisper.cpp

https://github.com/ggerganov/whisper.cpp

All binaries for v1.8.1: https://github.com/ggml-org/whisper.cpp/releases/tag/v1.8.1

Direct link to the Windows CUDA binaries (whisper-cublas, release v1.8.1, x64, CUDA 12.4.0): https://github.com/ggml-org/whisper.cpp/releases/download/v1.8.1/whisper-cublas-12.4.0-bin-x64.zip


Download whisper.cpp model

from https://huggingface.co/ggerganov/whisper.cpp/tree/main and put it in the models folder.
In my case: ggml-large-v3-q5_0.bin
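If you prefer scripting the download, the model files resolve to direct URLs on Hugging Face. A minimal stdlib-only Python sketch (the file name is whichever model you chose; here the one used above):

```python
import urllib.request
from pathlib import Path

# Sketch under assumptions: model files live in the ggerganov/whisper.cpp
# repository on Hugging Face and resolve to direct-download URLs.
REPO = "https://huggingface.co/ggerganov/whisper.cpp/resolve/main"

def model_url(name: str) -> str:
    """Build the direct-download URL for a model file in the repo."""
    return f"{REPO}/{name}"

def download_model(name: str, models_dir: str = "models") -> Path:
    """Fetch the model into whisper.cpp's models/ folder (files are large)."""
    dest = Path(models_dir) / name
    dest.parent.mkdir(parents=True, exist_ok=True)
    urllib.request.urlretrieve(model_url(name), dest)
    return dest
```

For example, `download_model("ggml-large-v3-q5_0.bin")` pulls roughly a gigabyte into `models/`.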


Create the file start_whisper_server.bat in the whisper.cpp folder and start the server:


Tested on Windows 11 with GPU.
Updated Oct 14, 2025.

@echo on
cd %~dp0
.\whisper-server.exe --convert -pp -debug -l auto -m .\models\ggml-large-v3-q5_0.bin --port 8000
pause
:: use --convert if the input file is not already WAV (16 kHz mono PCM: -ar 16000 -ac 1 -c:a pcm_s16le)
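To check the server outside Obsidian, you can POST an audio file to its /inference endpoint (the same URL the plugin uses). A stdlib-only Python sketch; the multipart field name `file` and the JSON `text` field in the response are assumptions based on whisper.cpp's server example:

```python
import json
import urllib.request
import uuid

def encode_multipart(field: str, filename: str, data: bytes,
                     content_type: str = "audio/wav") -> tuple[bytes, str]:
    """Encode one file as a multipart/form-data body; return (body, boundary)."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        f"Content-Type: {content_type}\r\n\r\n"
    ).encode() + data + f"\r\n--{boundary}--\r\n".encode()
    return body, boundary

def transcribe(path: str, url: str = "http://127.0.0.1:8000/inference") -> str:
    """POST an audio file to the local whisper-server and return the transcript."""
    with open(path, "rb") as f:
        body, boundary = encode_multipart("file", path, f.read())
    req = urllib.request.Request(
        url, data=body,
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["text"]
```

With the server started as above, `transcribe("test.wav")` should print the recognized text; the --convert flag on the server side handles non-WAV inputs.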
@Skylar1146

Skylar1146 commented Jul 5, 2025

Thanks works great!

The API URL had to be "http://127.0.0.1:8000/inference" instead of just "127.0.0.1:8000/inference" for me to get this running.

I also had to edit the batch file to be

.\whisper-server.exe --convert -pp -debug -l auto -m .\models\ggml-large-v3-turbo-q5_0.bin --port 8000

(The binary was renamed from server.exe to whisper-server.exe in newer versions.)
