The step where the VIA server is asked for models says that it will return the LLMs for summarizing. This is incorrect: they are the available VLM models for data ingestion.
"The models endpoint will return the LLM available to use for summarization requests. This is based on the the startup configuration for VSS. This LLM could be configured to point to any OpenAI compatible LLM."
notebook: [Intro_To_VSS.ipynb](https://github.com/brevdev/workshop-vss/blob/main/code/Intro_To_VSS.ipynb)
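To make the corrected behavior concrete, the notebook step could show the models endpoint response being treated as the list of VLMs available for ingestion. The sketch below parses a hypothetical response (the JSON shape and the model id `vila-1.5` are assumptions for illustration, not the actual VSS output):

```python
import json

# Hypothetical /models response from the VIA server.
# Field names and the model id are assumed for illustration;
# a real VSS deployment may differ.
sample_response = json.loads("""
{
  "object": "list",
  "data": [
    {"id": "vila-1.5", "object": "model"}
  ]
}
""")

# These are the VLMs available for video data ingestion,
# not the LLM used to answer summarization requests.
vlm_ids = [model["id"] for model in sample_response["data"]]
print(vlm_ids)
```

The key point for the notebook text: the summarization LLM is configured separately at VSS startup, while this endpoint reports ingestion VLMs.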