---
title: Using your first provider
description: Learn how to connect your provider to an LLM and build a chat-enabled app
---
Now that you've deployed your first provider and confirmed it's working, you can connect it to an LLM such as OpenAI's GPT-4o.
In this guide, you'll learn how to build a chat-enabled app that automatically handles tool calls from your Metorial providers.
**What you'll learn:**

- How to use a Metorial provider
- How to use the Metorial SDKs
**Before you start:**
- Complete the Introduction guide
- Complete the Deploying your first provider guide
- Complete the Testing your first provider guide
First, install the Metorial and OpenAI SDKs:
<CodeGroup>
```bash TypeScript
npm install metorial @metorial/openai openai
```
```bash Python
pip install metorial openai
```
</CodeGroup>
Next, create the Metorial and OpenAI clients:
<CodeGroup>
```typescript TypeScript
import { Metorial } from 'metorial';
import { metorialOpenAI } from '@metorial/openai';
import OpenAI from 'openai';
let metorial = new Metorial({
  apiKey: '$$SECRET_TOKEN$$'
});

let openai = new OpenAI({
  apiKey: '...your-openai-api-key...'
});
```
```python Python
from metorial import Metorial
from openai import OpenAI
metorial = Metorial(api_key="$$SECRET_TOKEN$$")
openai = OpenAI(api_key="...your-openai-api-key...")
```
</CodeGroup>
Then create a session that connects to your provider deployment and exposes its tools:
<CodeGroup>
```typescript TypeScript
let session = await metorial.connect({
  adapter: metorialOpenAI.chatCompletions(),
  providers: [
    { providerDeploymentId: '...your-provider-deployment-id...' }
  ]
});

let tools = session.tools();
```
```python Python
async with metorial.provider_session(
    provider="openai",
    providers=[{"provider_deployment_id": "...your-provider-deployment-id..."}],
) as session:
    tools = session.tools
```
</CodeGroup>
Define the conversation you want the model to handle:
<CodeGroup>
```typescript TypeScript
let messages = [
  { role: "user", content: "Summarize the README.md file of the metorial/websocket-explorer repository on GitHub." }
];
```
```python Python
messages = [
    {"role": "user", "content": "Summarize the README.md file of the metorial/websocket-explorer repository on GitHub."}
]
```
</CodeGroup>
Send the messages and tools to the model, then pass any tool calls it makes to your Metorial session for execution:
<CodeGroup>
```typescript TypeScript
let response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages,
  tools
});

let choice = response.choices[0]!;
let toolCalls = choice.message.tool_calls;

// Execute any requested tool calls through the Metorial session
let toolResults = await session.callTools(toolCalls);
```
```python Python
response = openai.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=tools
)

choice = response.choices[0]
tool_calls = choice.message.tool_calls

# Execute any requested tool calls through the Metorial session
tool_results = await session.call_tools(tool_calls)
```
</CodeGroup>
Append both the tool call requests and their results to `messages`, then send the updated conversation back to the model. Repeat until the assistant's response has no more `tool_calls`. Once the loop finishes, the assistant's message content holds the final answer.
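In the OpenAI chat format, the assistant's tool-call request is appended as an `assistant` message, and each result is appended as a `tool` message referencing its `tool_call_id`. Here is a minimal, self-contained sketch of that append step with simplified message types (the `appendToolResults` helper and its type shapes are illustrative, not part of the Metorial SDK):

```typescript
// Simplified versions of the OpenAI chat completion message shapes.
type ToolCall = {
  id: string;
  type: "function";
  function: { name: string; arguments: string };
};

type Message =
  | { role: "user" | "assistant"; content: string | null; tool_calls?: ToolCall[] }
  | { role: "tool"; tool_call_id: string; content: string };

// Append the assistant's tool-call request and each tool's result to the
// conversation, so the next completion call sees the full exchange.
function appendToolResults(
  messages: Message[],
  assistant: { content: string | null; tool_calls?: ToolCall[] },
  results: { tool_call_id: string; content: string }[]
): Message[] {
  return [
    ...messages,
    { role: "assistant", content: assistant.content, tool_calls: assistant.tool_calls },
    ...results.map(r => ({
      role: "tool" as const,
      tool_call_id: r.tool_call_id,
      content: r.content
    }))
  ];
}
```

Depending on the SDK version, your session's tool results may already come back in a message-friendly shape; this sketch just makes the bookkeeping explicit.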
<CodeGroup>
```typescript TypeScript
console.log(choice.message.content);
```
```python Python
print(choice.message.content)
```
</CodeGroup>
You now have a production-ready provider to use in your AI apps. Next, learn how to use Metorial's observability & logging features.