Commit eae608b (parent 0fbd1b5)

feat: simplify models endpoint, add CI/CD and docs

- `/models` now returns a plain array of model names
- Add GitHub Actions workflow for packaging the `.vsix`
- Add `test_model_selection.sh` for testing model selection
- Update README with complete API documentation
- Add `pnpm run package` script

5 files changed: 190 additions, 17 deletions


`.github/workflows/package.yml` (new file, 49 additions)

```yaml
name: Package Extension

on:
  push:
    tags:
      - 'v*'
  workflow_dispatch:

jobs:
  package:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - uses: pnpm/action-setup@v4
        with:
          version: 8

      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'pnpm'

      - name: Install dependencies
        run: pnpm install

      - name: Compile
        run: pnpm run compile

      - name: Install vsce
        run: pnpm add -g @vscode/vsce

      - name: Package extension
        run: pnpm run package

      - name: Upload artifact
        uses: actions/upload-artifact@v4
        with:
          name: windsurf-api-vsix
          path: '*.vsix'

      - name: Create release
        if: startsWith(github.ref, 'refs/tags/')
        uses: softprops/action-gh-release@v2
        with:
          files: '*.vsix'
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

`README.md` (97 additions, 6 deletions)

````diff
@@ -51,13 +51,19 @@ This allows both our extension and Windsurf to work.
 
 Windsurf uses gRPC for communication with its server, and it's not a public API. We need to get the protos. Thankfully JS is fucking stupid and straightforward - having a compiled gRPC client, we can do some magic (described in [./decompile/DECOMPILE.md](./decompile/DECOMPILE.md)) and get the source protos back, just to compile them back to TypeScript. Fucking cycle of nonsense.
 
+**Update:** Improved the decompiler to recover full gRPC service definitions with proper streaming annotations. This makes adding new API features way easier - just look at the proto, call the RPC method through the generated client.
+
 ## Problem 3: Understanding the protocol
 
 With ports and protos, we can finally send messages to the server. Go to the network tab, observe what chat.js sends, decode protobufs to get an idea of what to send and where. Write a client and that's it.
 
-Wrap that shit in a REST server, and now we can start a new cascade and send messages from any app. The only thing the user needs to do is manually open a newly created chat. I didn't implement continuing an existing chat (though I found how - look at `GetAllCascadeTrajectoriesRequest` if you need this).
+Wrap that shit in a REST server with proper queueing (Windsurf does this UI-side instead of at gRPC level), and now you can:
+- Start new conversations
+- Continue existing conversations by cascadeId
+- List available models and select which one to use
+- Send messages without blocking - queue handles cascade status automatically
 
-Also, there's a model selector that uses numbers as model IDs. No idea how to map them to readable names, so I hardcoded whatever I had selected while reversing this. You can easily change it - just use `./scripts/decode_request.js` to decode the body of `SendUserCascadeMessage` captured by sending any message to any chat.
+The queue system is per-cascade, so multiple conversations can process concurrently. If a cascade is idle, messages send immediately. If busy, they queue and wait.
 
 # Usage
 
@@ -66,26 +72,111 @@ Also, there's a model selector that uses numbers as model IDs. No idea how to ma
 - `windsurfapi.port` - HTTP server port (default: 47923)
 - `windsurfapi.autoStart` - Auto-start server on Windsurf init (default: false)
 3. Start server manually via command palette: `Windsurf API: Start Server` (or enable autoStart)
-4. Send requests to `http://localhost:47923/prompt`:
+
+## API Endpoints
+
+### POST /prompt
+Send a message to Windsurf. Returns immediately with status and cascadeId.
 
 ```bash
+# New conversation
 curl -X POST http://localhost:47923/prompt \
   -H "Content-Type: application/json" \
   -d '{"text": "Hello from API"}'
-```
 
-With images:
+# Continue existing conversation
+curl -X POST http://localhost:47923/prompt \
+  -H "Content-Type: application/json" \
+  -d '{"text": "Follow-up question", "cascadeId": "cascade-id-here"}'
 
-```bash
+# With images
 curl -X POST http://localhost:47923/prompt \
   -H "Content-Type: application/json" \
   -d '{
     "text": "What is in this image?",
     "images": [{"base64": "iVBORw0KGgo...", "mime": "image/png"}]
   }'
+
+# With model selection
+curl -X POST http://localhost:47923/prompt \
+  -H "Content-Type: application/json" \
+  -d '{"text": "Use GPT-5", "model": "GPT-5 (low reasoning)"}'
+```
+
+Response:
+```json
+{
+  "status": "sent",        // or "queued" if cascade is busy
+  "messageId": "message-uuid",
+  "cascadeId": "cascade-uuid",
+  "queuePosition": 1       // only present if queued
+}
+```
+
+### GET /models
+Get list of available models.
+
+```bash
+curl http://localhost:47923/models
 ```
 
+Returns: `["Claude Sonnet 4.5 (promo)", "SWE-1", "GPT-5 (low reasoning)", ...]`
+
+### GET /trajectories
+List all conversations.
+
+```bash
+curl http://localhost:47923/trajectories
+```
+
+Returns array of conversations with cascadeId, name, status, timestamps, etc.
+
+### GET /queue
+View queued messages. Optional `?cascadeId=xxx` to filter by cascade.
+
+```bash
+curl http://localhost:47923/queue
+curl http://localhost:47923/queue?cascadeId=cascade-id
+```
+
+### GET /queue/:messageId
+Check status of a specific message.
+
+```bash
+curl http://localhost:47923/queue/message-id
+```
+
+### GET /status?cascadeId=xxx
+Check if a cascade is idle or busy.
+
+```bash
+curl http://localhost:47923/status?cascadeId=cascade-id
+```
+
+## Test Scripts
+
+Located in `tests/`:
+- `continue.sh` - Interactive script to list and continue conversations
+- `queue_test.sh` - Send 5 messages, monitor queue until empty
+- `test_text.sh` - Basic text message test
+- `test_image.sh` - Image message test
+- `test_models.sh` - List available models
+- `test_model_selection.sh` - Test sending messages with different models
+- `test_trajectories.sh` - List all conversations
+
 # Commands
 
 - `Windsurf API: Start Server` - Start HTTP server
 - `Windsurf API: Stop Server` - Stop HTTP server
+
+# Development
+
+## Packaging
+
+To build a `.vsix` file:
+
+```bash
+pnpm run package
+```
+
+The GitHub Actions workflow automatically builds and releases the extension on git tags (e.g., `v0.0.2`).
````
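The per-cascade queueing described in the README (idle cascade sends immediately, busy cascade queues, cascades are independent) can be sketched in TypeScript. This is an illustrative model only, not the extension's actual implementation; the `CascadeQueues` name and its methods are invented here for the sketch.

```typescript
// Sketch of per-cascade FIFO queueing (hypothetical, not the extension's code).
type Message = { messageId: string; text: string };
type EnqueueResult = { status: "sent" | "queued"; queuePosition?: number };

class CascadeQueues {
  private queues = new Map<string, Message[]>();
  private busy = new Set<string>();

  // An idle cascade sends immediately and becomes busy; a busy one queues.
  enqueue(cascadeId: string, msg: Message): EnqueueResult {
    if (!this.busy.has(cascadeId)) {
      this.busy.add(cascadeId);
      return { status: "sent" };
    }
    const q = this.queues.get(cascadeId) ?? [];
    q.push(msg);
    this.queues.set(cascadeId, q);
    return { status: "queued", queuePosition: q.length };
  }

  // Called when a cascade finishes: dispatch the next queued message,
  // or mark the cascade idle if its queue is empty.
  complete(cascadeId: string): Message | undefined {
    const next = this.queues.get(cascadeId)?.shift();
    if (!next) this.busy.delete(cascadeId);
    return next;
  }
}
```

Because each cascadeId owns its own FIFO and busy flag, two conversations never block each other, matching the "multiple conversations can process concurrently" behaviour in the README.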

`package.json` (2 additions, 1 deletion)

```diff
@@ -46,7 +46,8 @@
     "watch": "tsc -watch -p ./",
     "pretest": "pnpm run compile && pnpm run lint",
     "lint": "eslint src",
-    "test": "vscode-test"
+    "test": "vscode-test",
+    "package": "vsce package"
   },
   "devDependencies": {
     "@bufbuild/buf": "^1.55.1",
```

`src/http_server.ts` (3 additions, 10 deletions)

```diff
@@ -43,16 +43,9 @@ export class HttpServer {
     this.server.get("/models", async (request, reply) => {
       try {
         const models = await this.client.getModels();
-        const modelMap = models.reduce((acc, model) => {
-          if (model.modelOrAlias) {
-            acc[model.label] = {
-              model: model.modelOrAlias.model,
-              alias: model.modelOrAlias.alias,
-            };
-          }
-          return acc;
-        }, {} as Record<string, { model: number; alias: number }>);
-        return modelMap;
+        return models
+          .filter((m) => m.modelOrAlias)
+          .map((m) => m.label);
       } catch (error) {
         return reply.status(500).send({
           error: error instanceof Error ? error.message : String(error),
```
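The effect of this simplification is easy to see on sample data. The entries below are hypothetical, shaped the way the diff implies (a `label` plus an optional `modelOrAlias` with numeric IDs): the old `reduce` built a label-to-IDs map, while the new code returns just the labels.

```typescript
// Hypothetical model entries matching the shape used in the diff above.
type ModelEntry = { label: string; modelOrAlias?: { model: number; alias: number } };

const models: ModelEntry[] = [
  { label: "SWE-1", modelOrAlias: { model: 1, alias: 10 } },
  { label: "Unmapped entry" }, // no modelOrAlias, so it is filtered out
  { label: "GPT-5 (low reasoning)", modelOrAlias: { model: 2, alias: 20 } },
];

// Same transformation as the new /models handler body.
const labels = models.filter((m) => m.modelOrAlias).map((m) => m.label);
// labels is ["SWE-1", "GPT-5 (low reasoning)"]
```

The filter step is kept from the old code so that entries without a `modelOrAlias` mapping still never reach the response.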

`tests/test_model_selection.sh` (new file, 39 additions)

```bash
#!/bin/bash

echo "=== Testing Model Selection ==="
echo ""

# First, get list of available models
echo "Available models:"
MODELS=$(curl -s http://localhost:47923/models)
echo "$MODELS" | jq -r '.[]' | head -10
echo ""

# Test with Claude Sonnet 4.5
echo "Sending message with Claude Sonnet 4.5 (promo)..."
RESPONSE=$(curl -s -X POST http://localhost:47923/prompt \
  -H "Content-Type: application/json" \
  -d '{"text": "Hello, using Claude Sonnet 4.5", "model": "Claude Sonnet 4.5 (promo)"}')

echo "Response:"
echo "$RESPONSE" | jq '.'
echo ""

# Test with GPT-5
echo "Sending message with GPT-5 (low reasoning)..."
RESPONSE=$(curl -s -X POST http://localhost:47923/prompt \
  -H "Content-Type: application/json" \
  -d '{"text": "Hello, using GPT-5", "model": "GPT-5 (low reasoning)"}')

echo "Response:"
echo "$RESPONSE" | jq '.'
echo ""

# Test with SWE-1
echo "Sending message with SWE-1..."
RESPONSE=$(curl -s -X POST http://localhost:47923/prompt \
  -H "Content-Type: application/json" \
  -d '{"text": "Hello, using SWE-1", "model": "SWE-1"}')

echo "Response:"
echo "$RESPONSE" | jq '.'
```
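The script above hand-writes its JSON bodies inline. A caller in TypeScript could assemble the same `/prompt` bodies with a small helper; `buildPromptBody` is a hypothetical name for this sketch, not part of the extension.

```typescript
// Hypothetical helper that builds the JSON body for POST /prompt,
// including only the optional fields that were actually provided.
type PromptBody = { text: string; model?: string; cascadeId?: string };

function buildPromptBody(
  text: string,
  opts: { model?: string; cascadeId?: string } = {}
): string {
  const body: PromptBody = { text };
  if (opts.model) body.model = opts.model;
  if (opts.cascadeId) body.cascadeId = opts.cascadeId;
  return JSON.stringify(body);
}
```

For example, `buildPromptBody("Hello, using SWE-1", { model: "SWE-1" })` produces the same body as the script's last curl call.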
