When running tests, it's easy to get failures simply because the required models aren't installed, and the errors can be buried in a large output log.
We should look at tagging tests with the models they require and reporting missing ones at collection time, or via another validation mechanism.
Ollama will be the most common backend, but we should also consider vLLM and others.
A lighter option is to document the requirements, but I think a programmatic check is more robust and easier to understand.
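As a rough sketch of what the programmatic check could look like, assuming Ollama's `/api/tags` endpoint (the helper names, default URL, and marker name below are hypothetical, for illustration only):

```python
# Sketch: discover which models Ollama has installed locally and
# compute what a given test is missing. The helper names and the
# default base URL are assumptions for illustration.
import json
import urllib.request


def installed_models(base_url="http://localhost:11434"):
    """Return the set of model names Ollama reports as installed."""
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
        data = json.load(resp)
    # Ollama reports names like "llama3:latest"; strip the tag suffix
    # so tests can require a model without pinning a specific tag.
    return {m["name"].split(":")[0] for m in data.get("models", [])}


def missing_models(required, available):
    """Return the sorted list of required models not in the available set."""
    return sorted(set(required) - set(available))
```

In a `conftest.py`, a `pytest_collection_modifyitems` hook could call `installed_models()` once and skip (or fail) any test carrying a hypothetical `requires_model` marker whose models show up in `missing_models(...)`. Supporting vLLM or another backend would just mean swapping in a different discovery function.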