---
title: Using your first provider
description: Learn how to connect your provider to an LLM and build a chat-enabled app
---

Now that you've deployed your first provider and confirmed it's working, you can connect it to an LLM like ChatGPT.

In this guide, you'll learn how to build a chat-enabled app that automatically handles tool calls from your Metorial providers.

**What you'll learn:**

- How to use a Metorial provider
- How to use the Metorial SDKs

Before you start, install the Metorial and OpenAI SDKs for your language of choice:
<CodeGroup>

```bash TypeScript
npm install metorial @metorial/openai openai
```

```bash Python
pip install metorial openai
```

</CodeGroup>
Instantiate both clients with your API keys and your provider deployment ID.
<CodeGroup>

```typescript TypeScript
import { Metorial } from 'metorial';
import { metorialOpenAI } from '@metorial/openai';
import OpenAI from 'openai';

let metorial = new Metorial({
  apiKey: '$$SECRET_TOKEN$$'
});

let openai = new OpenAI({
  apiKey: '...your-openai-api-key...'
});
```

```python Python
from metorial import Metorial
from openai import OpenAI

metorial = Metorial(api_key="$$SECRET_TOKEN$$")
openai = OpenAI(api_key="...your-openai-api-key...")
```

</CodeGroup>
Create a session that exposes your deployed provider's tools.
<CodeGroup>

```typescript TypeScript
let session = await metorial.connect({
  adapter: metorialOpenAI.chatCompletions(),
  providers: [
    { providerDeploymentId: '...your-provider-deployment-id...' }
  ]
});

let tools = session.tools();
```

```python Python
async with metorial.provider_session(
    provider="openai",
    providers=[{"provider_deployment_id": "...your-provider-deployment-id..."}],
) as session:
    tools = session.tools
```

</CodeGroup>
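Because the session is created with the `chatCompletions()` adapter, the tools it exposes can be passed straight to OpenAI, so they follow OpenAI's function-calling tool format. As a rough illustration only (the actual tool names and parameter schemas come from your deployed provider; `get_repository_readme` here is a hypothetical example), a single tool definition looks like:

```python
import json

# Illustrative shape only; real entries come from session.tools and will
# have the names and JSON Schemas of your deployed provider's tools.
example_tool = {
    "type": "function",
    "function": {
        "name": "get_repository_readme",  # hypothetical tool name
        "description": "Fetch the README of a GitHub repository.",
        "parameters": {
            "type": "object",
            "properties": {
                "repository": {"type": "string", "description": "owner/name"},
            },
            "required": ["repository"],
        },
    },
}

# Tool lists are plain JSON-serializable data, so they can be logged or inspected.
print(json.dumps([example_tool], indent=2))
```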
Kick off the conversation by sending an initial user message.
<CodeGroup>

```typescript TypeScript
let messages = [
  { role: "user", content: "Summarize the README.md file of the metorial/websocket-explorer repository on GitHub." }
];
```

```python Python
messages = [
  {"role": "user", "content": "Summarize the README.md file of the metorial/websocket-explorer repository on GitHub."}
]
```

</CodeGroup>
Run the tool-calling loop:

1. Send `messages` to OpenAI, passing the tools.
2. If the assistant's response contains `tool_calls`, invoke them:
<CodeGroup>

```typescript TypeScript
let response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages,
  tools
});
let choice = response.choices[0]!;
let toolCalls = choice.message.tool_calls;
let toolResults = await session.callTools(toolCalls);
```

```python Python
response = openai.chat.completions.create(
  model="gpt-4o",
  messages=messages,
  tools=tools
)
choice = response.choices[0]
tool_calls = choice.message.tool_calls
tool_results = await session.call_tools(tool_calls)
```

</CodeGroup>

3. Append both the tool call requests and their results to `messages`.
4. Repeat until the assistant's response has no more `tool_calls`.
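The four steps above can be sketched end to end with stubbed stand-ins for the model and the tool runner (`fake_model` and `fake_call_tools` below are hypothetical, not part of the Metorial or OpenAI SDKs), which shows how `messages` accumulates without needing network access:

```python
import json

def fake_model(messages):
    """Stand-in for the OpenAI call: one tool call, then a final answer."""
    if not any(m.get("role") == "tool" for m in messages):
        return {
            "role": "assistant",
            "content": None,
            "tool_calls": [{
                "id": "call_1",
                "type": "function",
                "function": {"name": "read_file", "arguments": json.dumps({"path": "README.md"})},
            }],
        }
    return {"role": "assistant", "content": "The README describes a websocket explorer."}

def fake_call_tools(tool_calls):
    """Stand-in for the session's tool runner: one role="tool" message per call."""
    return [
        {"role": "tool", "tool_call_id": tc["id"], "content": "# websocket-explorer\nA demo repo."}
        for tc in tool_calls
    ]

messages = [{"role": "user", "content": "Summarize the README."}]

while True:
    reply = fake_model(messages)      # step 1: send messages (+ tools) to the model
    messages.append(reply)            # step 3: append the assistant turn
    tool_calls = reply.get("tool_calls")
    if not tool_calls:                # step 4: stop when there are no tool calls
        break
    messages.extend(fake_call_tools(tool_calls))  # steps 2 + 3: run tools, append results

print(messages[-1]["content"])  # → "The README describes a websocket explorer."
```

Note how each tool result is appended as a `role: "tool"` message whose `tool_call_id` matches the assistant's request; that pairing is what lets the model use the results on the next turn.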
Once there are no more tool calls, your assistant's final reply is in:
<CodeGroup>

```typescript TypeScript
console.log(choice.message.content);
```

```python Python
print(choice.message.content)
```

</CodeGroup>

What's Next?

You now have a production-ready provider to use in your AI apps. Next, you'll learn about the observability tooling available.

Learn how to use the observability & logging features.