24 changes: 6 additions & 18 deletions gallery/index.yaml
@@ -478,34 +478,22 @@
model: nvidia/parakeet-tdt-0.6b-v3
- name: voxtral-mini-4b-realtime
license: apache-2.0
url: "github:mudler/LocalAI/gallery/virtual.yaml@master"
url: "github:mudler/LocalAI/gallery/voxtral-mini-4b-realtime.yaml@master"
description: |
Voxtral Mini 4B Realtime is a speech-to-text model from Mistral AI. It is a 4B parameter model optimized for fast, accurate audio transcription with low latency, making it ideal for real-time applications. The model uses the Voxtral architecture for efficient audio processing.
Voxtral Mini 4B Realtime is a multilingual, realtime speech-transcription model from Mistral AI.
It achieves accuracy comparable to offline systems with a delay of <500ms and supports 13 languages.
This model is designed for real-time automatic speech recognition (ASR) with streaming capabilities
and benefits from vLLM's Realtime API for low-latency transcription workflows.
urls:
- https://huggingface.co/mistralai/Voxtral-Mini-4B-Realtime-2602
- https://github.com/antirez/voxtral.c
tags:
- stt
- speech-to-text
- audio-transcription
- vllm
- cpu
- metal
- mistral
overrides:
backend: voxtral
known_usecases:
- transcript
parameters:
model: voxtral-model
files:
- filename: voxtral-model/consolidated.safetensors
uri: https://huggingface.co/mistralai/Voxtral-Mini-4B-Realtime-2602/resolve/main/consolidated.safetensors
sha256: 263f178fe752c90a2ae58f037a95ed092db8b14768b0978b8c48f66979c8345d
- filename: voxtral-model/params.json
uri: https://huggingface.co/mistralai/Voxtral-Mini-4B-Realtime-2602/resolve/main/params.json
- filename: voxtral-model/tekken.json
uri: https://huggingface.co/mistralai/Voxtral-Mini-4B-Realtime-2602/resolve/main/tekken.json
sha256: 8434af1d39eba99f0ef46cf1450bf1a63fa941a26933a1ef5dbbf4adf0d00e44
- name: moonshine-tiny
license: apache-2.0
size: "108MB"
@@ -723,7 +711,7 @@
- "offload_to_cpu:false"
- "offload_dit_to_cpu:false"
- "init_lm:true"
- "lm_model_path:acestep-5Hz-lm-0.6B" # or acestep-5Hz-lm-4B

Check warning on line 714 in gallery/index.yaml

View workflow job for this annotation

GitHub Actions / Yamllint

714:45 [comments] too few spaces before comment: expected 2
- "lm_backend:pt"
- "temperature:0.85"
- "top_p:0.9"
@@ -956,7 +944,7 @@
known_usecases:
- tts
tts:
voice: Aiden # Available speakers: Vivian, Serena, Uncle_Fu, Dylan, Eric, Ryan, Aiden, Ono_Anna, Sohee
parameters:
model: Qwen/Qwen3-TTS-12Hz-1.7B-CustomVoice
- !!merge <<: *qwen-tts
@@ -968,7 +956,7 @@
known_usecases:
- tts
tts:
voice: Aiden # Available speakers: Vivian, Serena, Uncle_Fu, Dylan, Eric, Ryan, Aiden, Ono_Anna, Sohee
parameters:
model: Qwen/Qwen3-TTS-12Hz-0.6B-CustomVoice
- &qwen-asr
@@ -5013,7 +5001,7 @@
- gemma3
- gemma-3
overrides:
#mmproj: gemma-3-27b-it-mmproj-f16.gguf
parameters:
model: gemma-3-27b-it-Q4_K_M.gguf
files:
@@ -5031,7 +5019,7 @@
description: |
google/gemma-3-12b-it is an open-source, state-of-the-art, lightweight, multimodal model built from the same research and technology used to create the Gemini models. It is capable of handling text and image input and generating text output. It has a large context window of 128K tokens and supports over 140 languages. The 12B variant has been fine-tuned using the instruction-tuning approach. Gemma 3 models are suitable for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning. Their relatively small size makes them deployable in environments with limited resources such as laptops, desktops, or your own cloud infrastructure.
overrides:
#mmproj: gemma-3-12b-it-mmproj-f16.gguf
parameters:
model: gemma-3-12b-it-Q4_K_M.gguf
files:
@@ -5049,7 +5037,7 @@
description: |
Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. Gemma 3 models are multimodal, handling text and image input and generating text output, with open weights for both pre-trained variants and instruction-tuned variants. Gemma 3 has a large, 128K context window, multilingual support in over 140 languages, and is available in more sizes than previous versions. Gemma 3 models are well-suited for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as laptops, desktops or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. Gemma-3-4b-it is a 4 billion parameter model.
overrides:
#mmproj: gemma-3-4b-it-mmproj-f16.gguf
parameters:
model: gemma-3-4b-it-Q4_K_M.gguf
files:
@@ -8267,7 +8255,7 @@
sha256: 2756551de7d8ff7093c2c5eec1cd00f1868bc128433af53f5a8d434091d4eb5a
uri: huggingface://Triangle104/Nano_Imp_1B-Q8_0-GGUF/nano_imp_1b-q8_0.gguf
- &smollm
url: "github:mudler/LocalAI/gallery/chatml.yaml@master" ## SmolLM

Check warning on line 8258 in gallery/index.yaml

View workflow job for this annotation

GitHub Actions / Yamllint

8258:59 [comments] too few spaces before comment: expected 2
name: "smollm-1.7b-instruct"
icon: https://huggingface.co/datasets/HuggingFaceTB/images/resolve/main/banner_smol.png
tags:
@@ -8311,7 +8299,7 @@
sha256: decd2598bc2c8ed08c19adc3c8fdd461ee19ed5708679d1c54ef54a5a30d4f33
uri: huggingface://HuggingFaceTB/SmolLM2-1.7B-Instruct-GGUF/smollm2-1.7b-instruct-q4_k_m.gguf
- &llama31
url: "github:mudler/LocalAI/gallery/llama3.1-instruct.yaml@master" ## LLama3.1

Check warning on line 8302 in gallery/index.yaml

View workflow job for this annotation

GitHub Actions / Yamllint

8302:70 [comments] too few spaces before comment: expected 2
icon: https://avatars.githubusercontent.com/u/153379578
name: "meta-llama-3.1-8b-instruct"
license: llama3.1
27 changes: 27 additions & 0 deletions gallery/voxtral-mini-4b-realtime.yaml
@@ -0,0 +1,27 @@
---
name: "voxtral-mini-4b-realtime"

description: |
  Voxtral Mini 4B Realtime is a multilingual, realtime speech-transcription model from Mistral AI.
  It achieves accuracy comparable to offline systems with a delay of <500ms and supports 13 languages.
  This model is designed for real-time automatic speech recognition (ASR) with streaming capabilities
  and benefits from vLLM's Realtime API for low-latency transcription workflows.

config_file: |
  name: voxtral-mini-4b-realtime
  description: Voxtral Mini 4B Realtime - Real-time ASR model via vLLM
  backend: vllm
  parameters:
    model: mistralai/Voxtral-Mini-4B-Realtime-2602
  known_usecases:
    - transcript
  template:
    use_tokenizer_template: true
  prediction:
    max_tokens: 45000
  backend_options:
    vllm:
      # Recommended settings for Voxtral Realtime
      # --max-model-len: 131072 (default, supports ~3h of transcription)
      # Temperature should be set to 0.0 for ASR
      compilation_config: '{"cudagraph_mode": "PIECEWISE"}'
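
For context, a minimal client-side sketch (not part of this PR): once the gallery entry above is installed, the model should be reachable through LocalAI's OpenAI-compatible audio transcription endpoint. The base URL, port, API key value, and audio file name below are illustrative assumptions.

# Hypothetical usage sketch with the openai Python client pointed at a local
# LocalAI instance; request shape follows the OpenAI audio transcription API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")  # local server; key value is unused

with open("sample.wav", "rb") as audio:  # placeholder audio file
    transcript = client.audio.transcriptions.create(
        model="voxtral-mini-4b-realtime",  # gallery model name added in this PR
        file=audio,
    )

print(transcript.text)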