slim bazarr subgen uses local Whisper AI models to automatically create subtitles for your media files via Bazarr.
It is simply a slimmed-down version of McCloudS’ subgen (thank you for making your code available!), which never worked for me without ugly hacks.
I did this because I wanted subtitles for a TV show this evening and McCloudS/subgen (again) would suddenly not start for whatever reason, and I didn’t care anymore to find out why.
> [!NOTE]
> slim bazarr subgen only includes the functionality required for Bazarr subtitle generation.
The idea is to have a reliable subtitle service on a much smaller code base which makes it easier to maintain, extend and use.
Optional, if you want to enhance accuracy:

- Enable *Pass video filename to Whisper* in Settings > Providers > Whisper
- Add a TMDB API key to `.env`
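For the second point, a `.env` file could look like the sketch below. The exact variable name the service reads is not documented here, so `TMDB_API_KEY` is an assumption — check docker-compose.yml for the real name:

```shell
# .env — TMDB_API_KEY is an assumed variable name; verify against docker-compose.yml
TMDB_API_KEY=your_tmdb_api_key_here
```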
Python package caches and models go to `/tmp` (RAM). There is no need to wear out your drive with write cycles or use up storage when there is a dedicated temporary file system for temporary files.
Software needs to STFU. Don’t just abort and complain when there is a way to continue: e.g., if GPU generation fails with an out-of-memory error, automatically fall back to CPU generation and finish the job.
I’m looking at you, browser companies, that wait for user interaction to retry a failed download… morons.
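The fallback principle above can be sketched as a tiny helper. This is an illustration, not code from the project; `with_cpu_fallback` is a hypothetical name:

```python
def with_cpu_fallback(run, gpu_error=RuntimeError):
    """Try the GPU path first; if it fails (e.g. CUDA out of memory),
    retry the same work on the CPU instead of aborting.

    `run` is any callable taking a device string ("cuda" or "cpu")
    and returning the finished result.
    """
    try:
        return run("cuda")
    except gpu_error:
        # GPU failed — finish the job on CPU rather than complaining.
        return run("cpu")
```

In practice `run` would wrap the Whisper transcription call, so one out-of-memory error never costs you your subtitles.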
- you use Docker
- you have an Nvidia GPU and `nvidia-container-toolkit` (but works with CPU as well)
- you have a proper amount of RAM reserved for `tmpfs` in `/tmp`
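Reserving RAM for `/tmp` on the host can be done with an fstab entry like the one below; the size is an example and should match the models you plan to load:

```
# /etc/fstab — example entry; adjust size to your RAM and model choice
tmpfs  /tmp  tmpfs  defaults,size=16G  0  0
```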
You might like those better:
Look at docker-compose.yml for limited configuration options.
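As a rough orientation, a compose file for this setup could look like the sketch below. The image and service names are placeholders, not the repository's actual values — the real docker-compose.yml is authoritative:

```yaml
# Hedged sketch — names are placeholders; see the repository's docker-compose.yml
services:
  slim-bazarr-subgen:
    image: slim-bazarr-subgen        # placeholder image name
    ports:
      - "8090:8090"                  # default port per this README
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia         # requires nvidia-container-toolkit
              count: all
              capabilities: [gpu]
    volumes:
      - type: tmpfs                  # RAM-backed /tmp for caches and models
        target: /tmp
        tmpfs:
          size: 8589934592           # bytes (8 GiB); adjust to taste
```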
Look here for available models: https://github.com/openai/whisper/blob/main/README.md#available-models-and-languages
How to configure Bazarr to use Whisper AI subtitle creation: https://wiki.bazarr.media/Additional-Configuration/Whisper-Provider/
> [!IMPORTANT]
> The default port is 8090.