
Requesting a sample local Ollama configuration! #180

@wolfBi

Description

Which configuration should I use?

chat_module=ollma_api
ollma_ip=localhost

awen:latest, llama:latest, deepseek-r1

ollma_model=deepseek-r1:1.5b

Or this one:

gpt_base_url=http://localhost:11434/v1
# gpt model engine, e.g. glm4, deepseek, qwen3-4b
gpt_model_engine=deepseek-r1:1.5b

Is http://localhost:11434/v1 the correct way to write it?
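For reference, Ollama serves its native API on http://localhost:11434 and an OpenAI-compatible endpoint under http://localhost:11434/v1, so the /v1 form is valid for any client that speaks the OpenAI protocol. Which key names this project actually reads depends on its config loader; assuming the gpt_* style quoted above is the OpenAI-compatible path, a minimal sketch might look like:

```
# Sketch only: key names follow the gpt_* style quoted above; verify against the project docs.
gpt_base_url=http://localhost:11434/v1
# The model must already be pulled locally, e.g.: ollama pull deepseek-r1:1.5b
gpt_model_engine=deepseek-r1:1.5b
```

A quick sanity check is whether `ollama list` shows deepseek-r1:1.5b; if the tag there does not match gpt_model_engine exactly, requests will fail with a model-not-found error.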
