official javascript/typescript sdk for inference.sh — the ai agent runtime for serverless ai inference.
run ai models, build ai agents, and deploy generative ai applications with a simple api. access 250+ models including flux, stable diffusion, llms (claude, gpt, gemini), video generation (veo, seedance), and more.
## installation

```bash
npm install @inferencesh/sdk
# or
yarn add @inferencesh/sdk
# or
pnpm add @inferencesh/sdk
```

Get your API key from the inference.sh dashboard.
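Rather than hardcoding the key, you may want to read it from an environment variable. A minimal sketch — the variable name `INFERENCE_API_KEY` and the `requireApiKey` helper are conventions of this example, not something the SDK requires:

```typescript
// Resolve the API key from an environment-like object, failing fast if it is missing.
function requireApiKey(env: Record<string, string | undefined>): string {
  const key = env.INFERENCE_API_KEY;
  if (!key) {
    throw new Error('INFERENCE_API_KEY is not set');
  }
  return key;
}

// Hypothetical usage:
// const client = inference({ apiKey: requireApiKey(process.env) });
```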
## quick start

```typescript
import { inference, TaskStatusCompleted } from '@inferencesh/sdk';

const client = inference({ apiKey: 'your-api-key' });

// Run a task and wait for the result
const result = await client.tasks.run({
  app: 'your-app',
  input: {
    prompt: 'Hello, world!'
  }
});

if (result.status === TaskStatusCompleted) {
  console.log(result.output);
}
```

## running tasks

```typescript
import { inference, TaskStatusCompleted } from '@inferencesh/sdk';

const client = inference({ apiKey: 'your-api-key' });

// Wait for result (default behavior)
const result = await client.tasks.run({
  app: 'my-app',
  input: { prompt: 'Generate something amazing' }
});

if (result.status === TaskStatusCompleted) {
  console.log('Output:', result.output);
}
```

Setup parameters configure the app instance (e.g., model selection). Workers with matching setup are "warm" and skip setup:
```typescript
const result = await client.tasks.run({
  app: 'my-app',
  setup: { model: 'schnell' }, // Setup parameters
  input: { prompt: 'hello' }
});
```

Pass `wait: false` to get the task info immediately instead of waiting for completion:

```typescript
// Get task info immediately without waiting
const task = await client.tasks.run(
  { app: 'my-app', input: { prompt: 'hello' } },
  { wait: false }
);

console.log('Task ID:', task.id);
console.log('Status:', task.status);
```

Provide an `onUpdate` callback to receive status updates while the task runs:

```typescript
const result = await client.tasks.run(
  { app: 'my-app', input: { prompt: 'hello' } },
  {
    onUpdate: (update) => {
      console.log('Status:', update.status);
      console.log('Progress:', update.logs);
    }
  }
);
```

Tasks can also be run sequentially, for example to process a batch of images:

```typescript
async function processImages(images: string[]) {
  const results = [];
  for (const image of images) {
    const result = await client.tasks.run({
      app: 'image-processor',
      input: { image }
    }, {
      onUpdate: (update) => console.log(`Processing: ${update.status}`)
    });
    results.push(result);
  }
  return results;
}
```

Files can be uploaded and then referenced in task inputs:

```typescript
// Upload from base64
const file = await client.files.upload('data:image/png;base64,...', {
  filename: 'image.png',
  contentType: 'image/png'
});

// Use the uploaded file in a task
const result = await client.tasks.run({
  app: 'image-app',
  input: { image: file.uri }
});
```

Running tasks can be cancelled:

```typescript
const task = await client.tasks.run(
  { app: 'long-running-app', input: {} },
  { wait: false }
);

// Cancel if needed
await client.tasks.cancel(task.id);
```

## sessions

Sessions allow you to maintain state across multiple task invocations. The worker stays warm between calls, preserving loaded models and in-memory state.
```typescript
// Start a new session
const result = await client.tasks.run({
  app: 'my-stateful-app',
  input: { prompt: 'hello' },
  session: 'new'
});

const sessionId = result.session_id;
console.log('Session ID:', sessionId);

// Continue the session with another call
const result2 = await client.tasks.run({
  app: 'my-stateful-app',
  input: { prompt: 'remember what I said?' },
  session: sessionId
});
```

By default, sessions expire after 60 seconds of inactivity. You can customize this with `session_timeout` (1-3600 seconds):
```typescript
// Create a session with 5-minute idle timeout
const result = await client.tasks.run({
  app: 'my-stateful-app',
  input: { prompt: 'hello' },
  session: 'new',
  session_timeout: 300 // 5 minutes
});
// Session stays alive for 5 minutes after each call
```

Notes:

- `session_timeout` is only valid with `session: 'new'`
- Minimum timeout: 1 second
- Maximum timeout: 3600 seconds (1 hour)
- Each successful call resets the idle timer
For complete session documentation including error handling, best practices, and advanced patterns, see the Sessions Developer Guide.
## agents

Chat with AI agents using `client.agents.create()`.

Use an existing agent from your workspace by its `namespace/name@shortid`:
```typescript
import { inference } from '@inferencesh/sdk';

const client = inference({ apiKey: 'your-api-key' });

// Create agent from template
const agent = client.agents.create('my-org/assistant@abc123');

// Send a message with streaming
await agent.sendMessage('Hello!', {
  onMessage: (msg) => {
    if (msg.content) {
      for (const c of msg.content) {
        if (c.type === 'text' && c.text) {
          process.stdout.write(c.text);
        }
      }
    }
  }
});

// Clean up
agent.disconnect();
```

Create agents on-the-fly without saving to your workspace:
```typescript
import { inference, tool, string } from '@inferencesh/sdk';

const client = inference({ apiKey: 'your-api-key' });

// Create ad-hoc agent
const agent = client.agents.create({
  coreApp: 'infsh/claude-sonnet-4@abc123', // LLM to use
  systemPrompt: 'You are a helpful assistant.',
  tools: [
    tool('get_weather')
      .description('Get current weather')
      .params({ city: string('City name') })
      .handler(async (args) => {
        // Your tool logic here
        return JSON.stringify({ temp: 72, conditions: 'sunny' });
      })
      .build()
  ]
});

await agent.sendMessage('What is the weather in Paris?', {
  onMessage: (msg) => console.log(msg),
  onToolCall: async (call) => {
    // Tool handlers are auto-executed if defined
  }
});
```

Use `output_schema` to get structured JSON responses:
```typescript
const agent = client.agents.create({
  core_app: { ref: 'infsh/claude-sonnet-4@latest' },
  output_schema: {
    type: 'object',
    properties: {
      summary: { type: 'string' },
      sentiment: { type: 'string', enum: ['positive', 'negative', 'neutral'] },
      confidence: { type: 'number' },
    },
    required: ['summary', 'sentiment', 'confidence'],
  },
  internal_tools: { finish: true },
});

const response = await agent.sendMessage('Analyze: Great product!');
```

Agent methods:

| Method | Description |
|---|---|
| `sendMessage(text, options?)` | Send a message to the agent |
| `getChat(chatId?)` | Get chat history |
| `stopChat(chatId?)` | Stop current generation |
| `submitToolResult(toolId, resultOrAction)` | Submit result for a client tool (string or `{action, form_data}`) |
| `streamMessages(chatId?, options?)` | Stream message updates |
| `streamChat(chatId?, options?)` | Stream chat updates |
| `disconnect()` | Clean up streams |
| `reset()` | Start a new conversation |
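Because `output_schema` constrains the model rather than your code, it can still be worth validating the JSON you get back at runtime. A hand-rolled guard for the analysis schema above — a sketch only; in practice a validator such as ajv or zod is more robust, and how the agent surfaces the raw JSON string is left as an assumption here:

```typescript
interface Analysis {
  summary: string;
  sentiment: 'positive' | 'negative' | 'neutral';
  confidence: number;
}

// Runtime check mirroring the output_schema from the example above.
function isAnalysis(value: unknown): value is Analysis {
  if (typeof value !== 'object' || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.summary === 'string' &&
    typeof v.confidence === 'number' &&
    (v.sentiment === 'positive' || v.sentiment === 'negative' || v.sentiment === 'neutral')
  );
}

// Parse a raw JSON string and reject anything that drifts from the schema.
function parseAnalysis(raw: string): Analysis {
  const parsed: unknown = JSON.parse(raw);
  if (!isAnalysis(parsed)) throw new Error('response does not match output_schema');
  return parsed;
}
```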
## api reference

### `inference(config)`

Creates a new inference client.

| Parameter | Type | Required | Description |
|---|---|---|---|
| `config.apiKey` | `string` | Yes | Your inference.sh API key |
| `config.baseUrl` | `string` | No | Custom API URL (default: `https://api.inference.sh`) |
### `client.tasks.run(params, options?)`

Runs a task on inference.sh.

Parameters:

| Parameter | Type | Required | Description |
|---|---|---|---|
| `params.app` | `string` | Yes | App identifier (e.g., `'username/app-name'`) |
| `params.input` | `object` | Yes | Input parameters for the app |
| `params.setup` | `object` | No | Setup parameters (affects worker warmth/scheduling) |
| `params.infra` | `string` | No | Infrastructure: `'cloud'` or `'private'` |
| `params.variant` | `string` | No | App variant to use |
| `params.session` | `string` | No | Session ID, or `'new'` to start a new session |
| `params.session_timeout` | `number` | No | Session timeout in seconds (1-3600, only with `session: 'new'`) |
Options:

| Option | Type | Default | Description |
|---|---|---|---|
| `wait` | `boolean` | `true` | Wait for task completion |
| `onUpdate` | `function` | - | Callback for status updates |
| `autoReconnect` | `boolean` | `true` | Auto-reconnect on connection loss |
| `maxReconnects` | `number` | `5` | Max reconnection attempts |
| `reconnectDelayMs` | `number` | `1000` | Delay between reconnects (ms) |
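With `wait: false` you get a task back immediately and can poll for completion yourself. A generic sketch of such a loop — `getTask` stands in for however you fetch the current task state (e.g. a task-lookup call on the client), and the lowercase status strings here are assumptions of this example; real code should compare against the exported `TaskStatus*` constants:

```typescript
type PolledTask = { status: string };

// Statuses treated as terminal for this sketch.
const TERMINAL = new Set(['completed', 'failed', 'cancelled']);

// Poll until the task reaches a terminal status or we give up.
async function pollUntilDone(
  getTask: () => Promise<PolledTask>,
  { intervalMs = 1000, maxAttempts = 60 } = {}
): Promise<PolledTask> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const task = await getTask();
    if (TERMINAL.has(task.status)) return task;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('timed out waiting for task');
}
```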
### `client.tasks.get(taskId)`

Gets a task by ID.
### `client.tasks.cancel(taskId)`

Cancels a running task.
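A common use of cancellation is a client-side deadline: start a task with `wait: false`, then cancel it if it is still pending after a timeout. A sketch under the `run`/`cancel` shapes shown in this README — `runWithDeadline` is a hypothetical local helper, not an SDK method:

```typescript
// Minimal shape of the client methods this sketch relies on.
type CancellableClient = {
  tasks: {
    run: (params: object, options?: object) => Promise<{ id: string }>;
    cancel: (taskId: string) => Promise<void>;
  };
};

// Start a task without waiting, and cancel it if the deadline elapses first.
async function runWithDeadline(client: CancellableClient, params: object, deadlineMs: number) {
  const task = await client.tasks.run(params, { wait: false });
  const timer = setTimeout(() => {
    client.tasks.cancel(task.id).catch(() => { /* task may have already finished */ });
  }, deadlineMs);
  // Call clearDeadline once you observe the task completing.
  return { task, clearDeadline: () => clearTimeout(timer) };
}
```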
### `client.files.upload(data, options?)`

Uploads a file to inference.sh.

Parameters:

| Parameter | Type | Description |
|---|---|---|
| `data` | `string \| Blob` | Base64 string, data URI, or Blob |
| `options.filename` | `string` | Filename |
| `options.contentType` | `string` | MIME type |
| `options.public` | `boolean` | Make file publicly accessible |
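If your file starts as raw bytes (for example a Node `Buffer`), you can build the data-URI form yourself before uploading. `toDataUri` is a local helper of this example, not part of the SDK:

```typescript
// Encode raw bytes as a data URI suitable for the `data` parameter above.
function toDataUri(data: Buffer, contentType: string): string {
  return `data:${contentType};base64,${data.toString('base64')}`;
}

// Hypothetical usage:
// const file = await client.files.upload(toDataUri(pngBytes, 'image/png'), {
//   filename: 'image.png',
//   contentType: 'image/png'
// });
```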
### `client.agents.create(templateOrConfig)`

Creates an agent instance from a template or ad-hoc configuration.

Template mode:

```typescript
const agent = client.agents.create('namespace/name@version');
```

Ad-hoc mode:

```typescript
const agent = client.agents.create({
  coreApp: 'infsh/claude-sonnet-4@abc123',
  systemPrompt: 'You are helpful.',
  tools: [...]
});
```

## task statuses

```typescript
import {
  TaskStatusQueued,
  TaskStatusRunning,
  TaskStatusCompleted,
  TaskStatusFailed,
  TaskStatusCancelled
} from '@inferencesh/sdk';

if (task.status === TaskStatusCompleted) {
  console.log('Done!');
}
```

This SDK is written in TypeScript and includes full type definitions. All types are exported:
```typescript
import type { Task, ApiTaskRequest, RunOptions } from '@inferencesh/sdk';
```

## requirements

- Node.js 18.0.0 or higher
- Modern browsers with `fetch` support
- documentation — getting started guides and api reference
- blog — tutorials on ai agents, image generation, and more
- app store — browse 250+ ai models
- discord — community support
- github — open source projects
MIT © inference.sh