Caution
This package is in pre-release and not subject to backwards compatibility guarantees. The API may change based on feedback.
Pin to a specific minor version and review the changelog before upgrading.
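For example, a compatible-release pin keeps you on a single minor version (the version number below is illustrative, not necessarily the current release):

```shell
# Pin to a compatible minor release; '0.1.0' is illustrative, check PyPI for the current version
pip install 'launchdarkly-server-sdk-ai-openai~=0.1.0'
```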
This package provides an OpenAI integration for the LaunchDarkly AI SDK.
```bash
pip install launchdarkly-server-sdk-ai-openai
```

```python
import asyncio

from ldclient import LDClient, Config, Context
from ldai import init
from ldai.models import AICompletionConfigDefault, ModelConfig, ProviderConfig

# Initialize LaunchDarkly client
ld_client = LDClient(Config("your-sdk-key"))
ai_client = init(ld_client)

context = Context.builder("user-123").build()


async def main():
    # Create a ManagedModel backed by the OpenAI provider
    model = await ai_client.create_model(
        "ai-config-key",
        context,
        AICompletionConfigDefault(
            enabled=True,
            model=ModelConfig("gpt-4"),
            provider=ProviderConfig("openai"),
        ),
    )
    if model:
        result = await model.run("Hello, how are you?")
        print(result.content)


asyncio.run(main())
```

The recommended entry point is `LDAIClient.create_model`, which evaluates a LaunchDarkly AI config flag, selects the OpenAI runner automatically, and returns a `ManagedModel` that wraps the runner:
```python
model = await ai_client.create_model("ai-config-key", context)
if model:
    result = await model.run("What is feature flagging?")
    print(result.content)
```

If you need to construct a runner manually (e.g. for testing), you can use `OpenAIRunnerFactory` from the `ldai_openai` package:
```python
from ldai_openai import OpenAIRunnerFactory

factory = OpenAIRunnerFactory()  # uses OPENAI_API_KEY from the environment
runner = factory.create_model(ai_config)
result = await runner.run("Hello!")
print(result.content)
```

Pass a JSON schema dict as `output_type` to request structured output:
```python
response_structure = {
    "type": "object",
    "properties": {
        "sentiment": {"type": "string", "enum": ["positive", "negative", "neutral"]},
        "confidence": {"type": "number"},
    },
    "required": ["sentiment", "confidence"],
}

result = await runner.run(messages, output_type=response_structure)
print(result.parsed)  # {"sentiment": "positive", "confidence": 0.95}
```

`ManagedModel.run()` automatically tracks metrics via the associated `LDAIConfigTracker` (the tracker can also be used directly for manual tracking):

```python
model = await ai_client.create_model("ai-config-key", context)
if model:
    result = await model.run("Explain feature flags.")
    # Metrics are tracked automatically; access them via result.metrics
    print(result.metrics.usage)
```

The `ldai_openai` helper module provides several utility functions:
```python
from ldai.models import LDMessage
from ldai_openai import convert_messages_to_openai

messages = [
    LDMessage(role="system", content="You are helpful."),
    LDMessage(role="user", content="Hello!"),
]

openai_messages = convert_messages_to_openai(messages)
```

```python
from ldai_openai import get_ai_metrics_from_response

# After getting a response from OpenAI
metrics = get_ai_metrics_from_response(response)
print(f"Success: {metrics.success}")
print(f"Tokens used: {metrics.usage.total if metrics.usage else 'N/A'}")
```

For full documentation, please refer to the LaunchDarkly AI SDK documentation.
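As a rough mental model for `convert_messages_to_openai`, mapping chat messages to OpenAI's wire format amounts to building `{"role": ..., "content": ...}` dictionaries. The self-contained sketch below illustrates that shape; the `Msg` dataclass is a stand-in, not the SDK's `LDMessage`, and the mapping is an assumption about the helper's intent, not its actual implementation:

```python
from dataclasses import dataclass


@dataclass
class Msg:
    # Stand-in for ldai.models.LDMessage; the real class may differ.
    role: str
    content: str


def to_openai_messages(messages):
    # OpenAI's chat completions API accepts a list of
    # {"role": ..., "content": ...} dictionaries.
    return [{"role": m.role, "content": m.content} for m in messages]


msgs = [Msg("system", "You are helpful."), Msg("user", "Hello!")]
print(to_openai_messages(msgs))
```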
See CONTRIBUTING.md in the repository root.
Apache-2.0