
Commit 17e78f7

committed
docs: add copilot-instructions.md for AI coding assistants
Adds a .github/copilot-instructions.md that provides AI coding assistants (Copilot, Cursor, etc.) with context about Foundry Local's SDK APIs:

- JavaScript and C# quick reference for both chat and audio transcription
- Key rules: dynamic ports, model aliases, single manager pattern
- Model alias table for chat and audio tasks

This helps AI agents generate correct Foundry Local code on the first try, particularly for audio transcription, which is commonly missed without guidance.
1 parent 08f9174 commit 17e78f7

1 file changed

Lines changed: 69 additions & 0 deletions

File tree

.github/copilot-instructions.md

# Foundry Local — AI Coding Assistant Context

Foundry Local is an on-device AI inference runtime. It provides:

- **Chat completions** (text generation) via native SDK or OpenAI-compatible REST API
- **Audio transcription** (speech-to-text via Whisper) via native SDK or REST API
- **Automatic hardware acceleration** — NPU > GPU > CPU, zero detection code needed

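Because the REST surface is OpenAI-compatible, a chat request is a standard chat-completions payload POSTed to the local service. A minimal sketch, with two assumptions not verified against this repo: the service exposes the conventional `/v1/chat/completions` route, and `baseUrl` is obtained from the SDK at runtime rather than hardcoded:

```js
// Sketch: calling the OpenAI-compatible REST API directly.
// Assumptions (not verified here): the conventional /v1/chat/completions
// route exists, and `baseUrl` comes from the SDK at runtime.

// Build a standard chat-completions payload; the model field takes an alias.
function buildChatRequest(model, userText) {
  return {
    model,
    messages: [{ role: 'user', content: userText }],
  };
}

async function chatViaRest(baseUrl, model, userText) {
  const res = await fetch(`${baseUrl}/v1/chat/completions`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildChatRequest(model, userText)),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```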
## SDK Quick Reference

### JavaScript (`foundry-local-sdk` on npm)

```js
import { FoundryLocalManager } from 'foundry-local-sdk';
const manager = FoundryLocalManager.create({ appName: 'foundry_local_samples' });

// Chat
const chatModel = await manager.catalog.getModel('qwen2.5-0.5b');
await chatModel.download();
await chatModel.load();
const chatClient = chatModel.createChatClient();
const response = await chatClient.completeChat([
  { role: 'user', content: 'Hello' }
]);

// Audio transcription
const whisperModel = await manager.catalog.getModel('whisper-tiny');
await whisperModel.download();
await whisperModel.load();
const audioClient = whisperModel.createAudioClient();
const result = await audioClient.transcribe('recording.wav');
```

### C# (.NET — `Microsoft.AI.Foundry.Local` NuGet)

```csharp
using Microsoft.AI.Foundry.Local;

var config = new Configuration { AppName = "foundry_local_samples" };
await FoundryLocalManager.CreateAsync(config);
var mgr = FoundryLocalManager.Instance;
var catalog = await mgr.GetCatalogAsync();

// Chat
var chatModel = await catalog.GetModelAsync("qwen2.5-0.5b");
await chatModel.DownloadAsync();
await chatModel.LoadAsync();
var chatClient = await chatModel.GetChatClientAsync();

// Audio transcription
var whisperModel = await catalog.GetModelAsync("whisper-tiny");
await whisperModel.DownloadAsync();
await whisperModel.LoadAsync();
var audioClient = await whisperModel.GetAudioClientAsync();
```

## Key Rules

- **Never hardcode ports.** The service port is dynamic. Use `manager.endpoint` (JS/Python) or `config.Web.Urls` (C#).
- **Use model aliases**, not full model IDs. Aliases like `qwen2.5-0.5b` and `whisper-tiny` auto-select the best variant for the user's hardware.
- **One manager handles everything.** Don't create separate runtimes for chat and audio.
- **Do NOT use `whisper.cpp`, `llama.cpp`, `@huggingface/transformers`, or `ollama`** alongside Foundry Local — it handles all of these use cases.
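The dynamic-port rule can be made concrete with a small helper: read the endpoint from the manager at runtime and derive every REST URL from it, never from a literal like `http://localhost:5273`. A sketch, assuming `manager.endpoint` yields a base URL string (the slash-joining logic is illustrative, not SDK behavior):

```js
// Sketch: derive REST URLs from the runtime endpoint instead of a fixed port.
// Assumption: `endpoint` is a base URL string such as
// 'http://localhost:<dynamic-port>', as returned by manager.endpoint.
function restUrl(endpoint, path) {
  // Normalize so exactly one slash separates base and path.
  const base = endpoint.endsWith('/') ? endpoint.slice(0, -1) : endpoint;
  return `${base}${path.startsWith('/') ? path : `/${path}`}`;
}

// Usage (port shown only to illustrate that it is dynamic):
// restUrl(manager.endpoint, '/v1/chat/completions')
```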

## Model Aliases

| Task | Aliases |
|------|---------|
| Chat | `phi-3.5-mini`, `phi-4-mini`, `qwen2.5-0.5b`, `qwen2.5-coder-0.5b` |
| Audio Transcription | `whisper-tiny`, `whisper-base`, `whisper-small` |
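An assistant can mirror the table above as a small lookup when choosing a default alias per task. The map and function below are illustrative only, not part of the SDK; the alias strings come directly from the table:

```js
// Illustrative helper (not part of the SDK): smallest default alias per
// task, taken from the Model Aliases table.
const DEFAULT_ALIASES = {
  chat: 'qwen2.5-0.5b',
  transcription: 'whisper-tiny',
};

function defaultAlias(task) {
  const alias = DEFAULT_ALIASES[task];
  if (!alias) throw new Error(`Unknown task: ${task}`);
  return alias;
}
```

Passing such an alias to `manager.catalog.getModel(...)` lets Foundry Local select the hardware-appropriate variant, per the Key Rules above.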
