This guide helps you upgrade from SingularityLLM v0.x to v1.0.0.
SingularityLLM v1.0.0 is a major release that introduces significant architectural improvements while maintaining full backward compatibility. The main changes focus on modularization, improved provider support, and enhanced developer experience.
The monolithic SingularityLLM module has been split into focused, single-responsibility modules:
- `SingularityLLM.Embeddings` - Vector operations and similarity search
- `SingularityLLM.Assistants` - OpenAI Assistants API support
- `SingularityLLM.KnowledgeBase` - Semantic search and document management
- `SingularityLLM.Builder` - Fluent chat interface for building requests
- `SingularityLLM.Session` - Conversation state management
A new centralized provider delegation system improves consistency and reduces code duplication:
- `SingularityLLM.API.Delegator` - Central routing for all provider calls
- `SingularityLLM.API.Capabilities` - Provider feature registry
- `SingularityLLM.API.Transformers` - Argument normalization
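To illustrate the idea, here is a conceptual sketch of the delegation pattern, not SingularityLLM's actual internals: a delegator resolves the provider atom to a module and forwards the call, so every provider request flows through one code path. All module and function names below are hypothetical.

```elixir
# Hypothetical illustration of central provider delegation.
# MyApp.OpenAI and MyApp.Anthropic stand in for real provider modules.
defmodule MyApp.Delegator do
  @providers %{openai: MyApp.OpenAI, anthropic: MyApp.Anthropic}

  # Look up the provider module and forward the call, returning a
  # standardized error tuple for unknown providers.
  def call(provider, function, args) do
    case Map.fetch(@providers, provider) do
      {:ok, module} -> apply(module, function, args)
      :error -> {:error, {:unknown_provider, provider}}
    end
  end
end
```

Routing every call through one place is what makes consistent argument normalization and error shapes cheap to enforce.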
- Unified API - Consistent interface across all providers
- Improved Error Handling - Standardized error responses
- Better Type Safety - Enhanced typespecs and Dialyzer support
- Test Infrastructure - Comprehensive test coverage with smart caching
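In practice, the unified API means switching providers is just a different atom in the same call, and failures come back in one standardized `{:error, reason}` shape. This sketch uses the `embeddings/2` function shown later in this guide; the `:gemini` call is an assumption based on the providers this guide mentions.

```elixir
# Same function, different provider atom
{:ok, openai_resp} = SingularityLLM.embeddings(:openai, "Hello world")
{:ok, gemini_resp} = SingularityLLM.embeddings(:gemini, "Hello world")

# Standardized error handling, regardless of provider
case SingularityLLM.embeddings(:openai, "Hello world") do
  {:ok, response} -> response
  {:error, reason} -> IO.inspect(reason, label: "provider error")
end
```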
Good news! There are NO breaking changes in v1.0.0. All existing code will continue to work without modification.
```elixir
# In mix.exs
def deps do
  [
    {:singularity_llm, "~> 1.0.0"}
  ]
end
```

Then fetch and recompile the updated dependency:

```shell
mix deps.update singularity_llm
mix deps.compile
```

While not required, you can take advantage of the new modular structure for cleaner code:
```elixir
# All functions through the main module
{:ok, response} = SingularityLLM.embeddings(:openai, "Hello world")
results = SingularityLLM.find_similar(query_embedding, items)
{:ok, assistant} = SingularityLLM.create_assistant(:openai, name: "Helper")
```

```elixir
# Use specialized modules for clearer intent
{:ok, response} = SingularityLLM.Embeddings.generate(:openai, "Hello world")
results = SingularityLLM.Embeddings.find_similar(query_embedding, items)
{:ok, assistant} = SingularityLLM.Assistants.create_assistant(:openai, name: "Helper")
```

Both styles work - choose based on your preference!
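If you adopt the modular style, a standard Elixir `alias` keeps call sites short. This sketch reuses the `Embeddings` functions from the example above:

```elixir
# Alias the specialized module once, then call it by its short name
alias SingularityLLM.Embeddings

{:ok, response} = Embeddings.generate(:openai, "Hello world")
results = Embeddings.find_similar(query_embedding, items)
```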
Enhanced vector operations with dedicated module:
# Generate embeddings
{:ok, response} = SingularityLLM.Embeddings.generate(:openai, "Hello world")
# Find similar items with multiple metrics
results = SingularityLLM.Embeddings.find_similar(query_embedding, items,
top_k: 5,
metric: :cosine,
threshold: 0.8
)
# Create searchable index
{:ok, index} = SingularityLLM.Embeddings.create_index(:openai, documents)
{:ok, results} = SingularityLLM.Embeddings.search_index(index, "query")New Gemini-powered semantic search capabilities:
```elixir
# Create a knowledge base
{:ok, kb} = SingularityLLM.KnowledgeBase.create_knowledge_base(:gemini, "my_kb",
  display_name: "Product Documentation"
)

# Add documents
{:ok, doc} = SingularityLLM.KnowledgeBase.add_document(:gemini, "my_kb", %{
  display_name: "User Guide",
  text: "Content here..."
})

# Semantic search
{:ok, results} = SingularityLLM.KnowledgeBase.semantic_search(:gemini, "my_kb",
  "How do I reset my password?"
)
```

Chain configuration calls for readable code:
```elixir
{:ok, response} =
  SingularityLLM.Builder.build(:openai, messages)
  |> SingularityLLM.Builder.with_model("gpt-4")
  |> SingularityLLM.Builder.with_temperature(0.7)
  |> SingularityLLM.Builder.with_max_tokens(1000)
  |> SingularityLLM.Builder.execute()
```

Improved conversation tracking:
```elixir
# Create and manage sessions
session = SingularityLLM.Session.new_session(:openai, model: "gpt-4")

# Chat with automatic context management
{:ok, response, updated_session} =
  SingularityLLM.Session.chat_session(session, "Tell me a joke")

# Persist sessions
SingularityLLM.Session.save_session(updated_session, "conversation.json")
restored = SingularityLLM.Session.load_session("conversation.json")
```

No changes required. All existing environment variables continue to work:
- `OPENAI_API_KEY`
- `ANTHROPIC_API_KEY`
- `GEMINI_API_KEY`
- etc.
No changes required. Existing configuration continues to work:
```elixir
# This still works
config :singularity_llm,
  default_provider: :openai,
  openai: [
    api_key: System.get_env("OPENAI_API_KEY"),
    model: "gpt-4"
  ]
```

- 42% reduction in main module size for faster compilation
- Lazy loading of provider-specific code
- Improved error handling with standardized patterns
- Smart test caching for 25x faster test runs
No functions are deprecated in v1.0.0. All existing APIs continue to work.
After upgrading, run your test suite to ensure everything works:
```shell
# Run all tests
mix test

# Run specific provider tests
mix test --only provider:openai
mix test --only provider:anthropic

# Run with live API calls (if needed)
mix test --include live_api
```

If you encounter any issues:
- Check the CHANGELOG for detailed changes
- Review the README for updated examples
- Open an issue on GitHub with:
  - Your SingularityLLM version (`mix deps | grep singularity_llm`)
  - Error messages or unexpected behavior
  - A minimal code example reproducing the issue
While v1.0.0 maintains full backward compatibility, consider gradually adopting the new modular structure for:
- Better code organization
- Clearer intent in your code
- Easier testing of specific functionality
- Improved compile-time optimizations
SingularityLLM v1.0.0 is a seamless upgrade that brings architectural improvements without breaking existing code. The new modular structure and enhanced features provide a solid foundation for building LLM-powered applications while maintaining the simplicity that makes SingularityLLM easy to use.
Happy coding! 🚀