---
title: Humanloop
description: Monitor and trace your AI SDK application with Humanloop, the LLM evals platform for enterprises.
---

# Humanloop Observability

[Humanloop](https://humanloop.com/) is the LLM evals platform for enterprises, giving you the tools that top teams use to ship and scale AI with confidence.

The AI SDK can log to Humanloop via OpenTelemetry. The integration provides trace visualization; cost, latency, and error monitoring; and evaluation by code, LLM, or human judges.

## Reference

### Telemetry Configuration

The AI SDK supports tracing through the `experimental_telemetry` parameter, which can be set on each request.

```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

const result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Write a short story about a cat.',
  experimental_telemetry: { isEnabled: true },
});
```

### Metadata Parameters

The Humanloop OpenTelemetry receiver accepts the following metadata parameters:

| Parameter             | Required | Description                                                                    |
| --------------------- | -------- | ------------------------------------------------------------------------------ |
| `humanloopPromptPath` | Yes      | Path to the Prompt on Humanloop. Generation spans create Logs for this Prompt. |
| `humanloopFlowPath`   | No       | Path to the Flow on Humanloop. Groups steps into a single Flow Log.            |
| `humanloopFlowId`     | No       | ID of a Flow Log on Humanloop. Groups multiple calls into a single Flow Log.   |

## Setup

### Prerequisites

- A Humanloop account and API key.
  - [Sign up](https://app.humanloop.com/signup) or [log in](https://app.humanloop.com/login) to Humanloop.
  - Create an API key in [Organization Settings](https://app.humanloop.com/account/api-keys).
- A Vercel AI SDK application.

### Telemetry Configuration

To send traces to Humanloop, add these parameters to the telemetry object:

```ts
experimental_telemetry: {
  isEnabled: true,
  functionId: 'unique-function-id', // Optional identifier for the function
  metadata: {
    humanloopPromptPath: 'Path/To/Prompt',
    humanloopFlowPath: 'Path/To/Flow', // Optional
    humanloopFlowId: 'flow-log-id', // Optional
  },
}
```

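If you set these fields in more than one place, a small helper keeps them consistent. The `humanloopTelemetry` function below is a hypothetical sketch, not part of the AI SDK or Humanloop APIs:

```ts
// Hypothetical helper (not part of the AI SDK or Humanloop APIs):
// builds the experimental_telemetry object from Humanloop-specific fields,
// omitting the optional keys when they are not provided.
function humanloopTelemetry(
  promptPath: string,
  options: { flowPath?: string; flowId?: string; functionId?: string } = {},
) {
  return {
    isEnabled: true,
    ...(options.functionId ? { functionId: options.functionId } : {}),
    metadata: {
      humanloopPromptPath: promptPath,
      ...(options.flowPath ? { humanloopFlowPath: options.flowPath } : {}),
      ...(options.flowId ? { humanloopFlowId: options.flowId } : {}),
    },
  };
}
```

You would then pass `experimental_telemetry: humanloopTelemetry('Path/To/Prompt')` to `generateText`.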
### Environment Variables

When using OpenTelemetry with Humanloop, the following environment variables configure the OTLP exporter:

```bash
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.humanloop.com/v5/import/otel
OTEL_EXPORTER_OTLP_PROTOCOL=http/json
OTEL_EXPORTER_OTLP_HEADERS="X-API-KEY=xxxxxx" # Humanloop API key
```

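A misconfigured exporter drops spans silently, so it can help to check these variables at startup. This is an optional sketch; the variable names are the ones from the block above:

```ts
// Optional startup check (sketch): list any unset OTLP exporter variables so
// a misconfigured exporter fails loudly instead of silently dropping spans.
function missingOtelEnv(
  env: Record<string, string | undefined> = process.env,
): string[] {
  const required = [
    'OTEL_EXPORTER_OTLP_ENDPOINT',
    'OTEL_EXPORTER_OTLP_PROTOCOL',
    'OTEL_EXPORTER_OTLP_HEADERS',
  ];
  return required.filter(name => !env[name]);
}

const missing = missingOtelEnv();
if (missing.length > 0) {
  console.warn(`OTLP exporter variables not set: ${missing.join(', ')}`);
}
```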
## Framework Implementation

<Tabs items={['Next.js', 'Node.js']}>
  <Tab>
    Next.js supports OpenTelemetry instrumentation at the framework level. Learn more in the [Next.js OpenTelemetry guide](https://nextjs.org/docs/app/building-your-application/optimizing/open-telemetry).

    Required dependencies:

    <Tabs items={['pnpm', 'npm', 'yarn']}>
      <Tab>
        <Snippet
          text="pnpm add @vercel/otel @opentelemetry/sdk-logs @opentelemetry/api-logs @opentelemetry/instrumentation"
          dark
        />
      </Tab>
      <Tab>
        <Snippet
          text="npm install @vercel/otel @opentelemetry/sdk-logs @opentelemetry/api-logs @opentelemetry/instrumentation"
          dark
        />
      </Tab>
      <Tab>
        <Snippet
          text="yarn add @vercel/otel @opentelemetry/sdk-logs @opentelemetry/api-logs @opentelemetry/instrumentation"
          dark
        />
      </Tab>
    </Tabs>

    Update your `.env.local` file to configure the OTLP exporter:

    ```bash filename=".env.local"
    OTEL_EXPORTER_OTLP_ENDPOINT=https://api.humanloop.com/v5/import/otel
    OTEL_EXPORTER_OTLP_PROTOCOL=http/json
    OTEL_EXPORTER_OTLP_HEADERS="X-API-KEY=xxxxxx" # Humanloop API key
    ```

    Register the OpenTelemetry SDK in an `instrumentation.ts` file (in the project root or `src/` directory):

    ```ts filename="instrumentation.ts"
    import { registerOTel } from '@vercel/otel';

    export function register() {
      registerOTel({
        serviceName: 'humanloop-vercel-ai-nextjs',
      });
    }
    ```

    Your calls to the AI SDK should now be logged to Humanloop.

  </Tab>
  <Tab>

  ### Node.js Implementation

  OpenTelemetry provides a package to auto-instrument Node.js applications. Learn more in the [OpenTelemetry Node.js guide](https://opentelemetry.io/docs/languages/js/getting-started/nodejs/).

  Required dependencies:

  <Tabs items={['pnpm', 'npm', 'yarn']}>
    <Tab>
      <Snippet
        text="pnpm add @opentelemetry/sdk-node @opentelemetry/auto-instrumentations-node @opentelemetry/exporter-trace-otlp-http"
        dark
      />
    </Tab>
    <Tab>
      <Snippet
        text="npm install @opentelemetry/sdk-node @opentelemetry/auto-instrumentations-node @opentelemetry/exporter-trace-otlp-http"
        dark
      />
    </Tab>
    <Tab>
      <Snippet
        text="yarn add @opentelemetry/sdk-node @opentelemetry/auto-instrumentations-node @opentelemetry/exporter-trace-otlp-http"
        dark
      />
    </Tab>
  </Tabs>

  Update your `.env` file to configure the OTLP exporter:

  ```bash filename=".env"
  OTEL_EXPORTER_OTLP_ENDPOINT=https://api.humanloop.com/v5/import/otel
  OTEL_EXPORTER_OTLP_PROTOCOL=http/json
  OTEL_EXPORTER_OTLP_HEADERS="X-API-KEY=xxxxxx" # Humanloop API key
  ```

  Register the OpenTelemetry SDK and add Humanloop metadata to the spans. The `humanloopPromptPath` metadata specifies the [Prompt File](https://humanloop.com/docs/v5/explanation/prompts) in Humanloop to which the spans will be logged.

  ```ts highlight="3-13,19"
  import { openai } from '@ai-sdk/openai';
  import { generateText } from 'ai';
  import { NodeSDK } from '@opentelemetry/sdk-node';
  import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';
  import dotenv from 'dotenv';

  dotenv.config();

  const sdk = new NodeSDK({
    instrumentations: [getNodeAutoInstrumentations()],
  });

  sdk.start();

  async function main() {
    // ... Vercel AI SDK calls ...

    // Must call shutdown to flush traces
    await sdk.shutdown();
  }

  main().catch(console.error);
  ```

  Your calls to the AI SDK should now be logged to Humanloop.

  </Tab>
</Tabs>

## Trace Grouping

To group multiple AI SDK calls into a single Flow Log, create a Flow Log and pass its ID in the telemetry metadata of each AI SDK call:

1. Create a Flow Log in Humanloop.
2. Pass the Flow Log ID to each AI SDK call.
3. Update the Flow Log when all executions are complete.

The Flow Log serves as a parent container for all related Prompt Logs in Humanloop.

```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { NodeSDK } from '@opentelemetry/sdk-node';
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';
import { HumanloopClient } from 'humanloop';

const humanloop = new HumanloopClient();

const sdk = new NodeSDK({
  instrumentations: [getNodeAutoInstrumentations()],
});
sdk.start();

async function main() {
  const flow = await humanloop.flows.upsert({
    path: 'Plethora of Poetry',
    attributes: {},
  });
  const flowLog = await humanloop.flows.log({
    id: flow.id,
  });

  const outputs = [];

  for (const poetName of ['Edgar Allan Poe', 'Mary Shelley', 'Lord Byron']) {
    const result = await generateText({
      model: openai('gpt-3.5-turbo'),
      maxTokens: 50,
      prompt: `Write me a poem in the style of ${poetName}.`,
      experimental_telemetry: {
        isEnabled: true,
        // replaceAll: String.replace with a string pattern only replaces the first space
        functionId: `poet-${poetName.toLowerCase().replaceAll(' ', '-')}`,
        metadata: {
          humanloopFlowId: flowLog.id,
          humanloopPromptPath: `Poets/${poetName}`,
        },
      },
    });

    outputs.push(result.text);
  }

  await humanloop.flows.updateLog(flowLog.id, {
    traceStatus: 'complete',
    output: outputs.join('\n\n'),
  });

  // Must call shutdown to flush traces
  await sdk.shutdown();
}

main().catch(console.error);
```

## Debugging

If you are using a Next.js version earlier than 15, you also need to enable the experimental instrumentation hook (available in 13.4+):

```javascript filename="next.config.js"
module.exports = {
  experimental: {
    instrumentationHook: true,
  },
};
```

## Resources

For a full example of instrumenting your application, check out the Humanloop [AI SDK guides](https://humanloop.com/docs/v5/vercel-ai-sdk).

After instrumenting your AI SDK application with Humanloop, you can:

- Experiment with different [versions of Prompts](https://humanloop.com/docs/v5/guides/evals/comparing-prompts) and try them out in the Editor
- Create [custom Evaluators](https://humanloop.com/docs/v5/explanation/evaluators) -- Human, Code, or LLM -- to monitor and benchmark your AI application
- Set up [live monitoring](https://humanloop.com/docs/v5/guides/observability/monitoring) of your logs to continuously track your application's performance