feat: simplify models endpoint, add CI/CD and docs
- /models now returns plain array of model names
- Add GitHub Actions workflow for packaging .vsix
- Add test_model_selection.sh for testing model selection
- Update README with complete API documentation
- Add pnpm run package script
README.md (97 additions, 6 deletions)
This allows both our extension and Windsurf to work.
Windsurf uses gRPC to communicate with its server, and it's not a public API, so we need to get the protos. Thankfully, JS is fucking stupid and straightforward: since we already have the compiled gRPC client, we can do some magic (described in [./decompile/DECOMPILE.md](./decompile/DECOMPILE.md)) to recover the source protos, only to compile them right back to TypeScript. Fucking cycle of nonsense.
**Update:** Improved the decompiler to recover full gRPC service definitions with proper streaming annotations. This makes adding new API features way easier - just look at the proto and call the RPC method through the generated client.
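For illustration, here's roughly what that looks like with a client generated in the grpc-tools / google-protobuf style - the import paths, client class name, method casing, and port below are placeholders, not the actual names from the recovered protos:

```ts
import * as grpc from "@grpc/grpc-js";
// Hypothetical paths - wherever the recovered protos were compiled to TypeScript.
import { CascadeServiceClient } from "./generated/cascade_grpc_pb";
import { GetAllCascadeTrajectoriesRequest } from "./generated/cascade_pb";

// Talk to the local Windsurf language server (port is a placeholder).
const client = new CascadeServiceClient(
  "localhost:42100",
  grpc.credentials.createInsecure()
);

// The generated client exposes one method per RPC in the proto.
client.getAllCascadeTrajectories(
  new GetAllCascadeTrajectoriesRequest(),
  (err, response) => {
    if (err) throw err;
    console.log(response.toObject());
  }
);
```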
## Problem 3: Understanding the protocol
With ports and protos, we can finally send messages to the server. Go to the network tab, observe what chat.js sends, and decode the protobufs to get an idea of what to send and where. Write a client and that's it.
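For example, decoding a captured body with the regenerated message types might look like this (the import path and the exact request type name are assumptions):

```ts
import { readFileSync } from "fs";
// Hypothetical import - message classes compiled from the recovered protos.
import { SendUserCascadeMessageRequest } from "./generated/cascade_pb";

// Raw request body captured from the network tab and saved to disk.
const raw = readFileSync("captured_request.bin");

// gRPC frames start with a 5-byte header (1 compression flag byte + 4 length
// bytes); strip it if the capture includes the framing.
const payload = raw.subarray(5);

const msg = SendUserCascadeMessageRequest.deserializeBinary(payload);
console.log(JSON.stringify(msg.toObject(), null, 2));
```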
Wrap that shit in a REST server, and now we can start a new cascade and send messages from any app. The only thing the user needs to do is manually open a newly created chat. I didn't implement continuing an existing chat (though I found how - look at `GetAllCascadeTrajectoriesRequest` if you need this).
Wrap that shit in a REST server with proper queueing (Windsurf does this UI-side instead of at the gRPC level), and now you can:
- Start new conversations
- Continue existing conversations by cascadeId
- List available models and select which one to use
- Send messages without blocking - the queue handles cascade status automatically
Also, there's a model selector that uses numbers as model IDs. No idea how to map them to readable names, so I hardcoded whatever I had selected while reversing this. You can easily change it - just use `./scripts/decode_request.js` to decode the body of `SendUserCascadeMessage` captured by sending any message to any chat.
The queue system is per-cascade, so multiple conversations can process concurrently. If a cascade is idle, messages send immediately. If busy, they queue and wait.
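Not the actual implementation, but the idea is roughly this - one promise chain per cascadeId, so messages within a cascade go out strictly in order while separate cascades run in parallel:

```ts
// Sketch of a per-cascade queue. Names are illustrative, not the real API.
type SendFn = (cascadeId: string, message: string) => Promise<void>;

class CascadeQueue {
  // Tail of each cascade's promise chain; an already-resolved tail means the
  // cascade is idle, so the next message sends immediately.
  private tails = new Map<string, Promise<void>>();

  constructor(private send: SendFn) {}

  enqueue(cascadeId: string, message: string): Promise<void> {
    const tail = this.tails.get(cascadeId) ?? Promise.resolve();
    // Chain onto the previous message; swallow its error so one failed send
    // doesn't wedge the rest of the queue.
    const next = tail.catch(() => {}).then(() => this.send(cascadeId, message));
    this.tails.set(cascadeId, next);
    return next;
  }
}
```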
# Usage
- `windsurfapi.port` - HTTP server port (default: 47923)
- `windsurfapi.autoStart` - Auto-start server on Windsurf init (default: false)
3. Start server manually via command palette: `Windsurf API: Start Server` (or enable autoStart)
4. Send requests to `http://localhost:47923/prompt`:
## API Endpoints
### POST /prompt
Send a message to Windsurf. Returns immediately with status and cascadeId.
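A minimal sketch of the round trip from any Node/TypeScript app - the request body fields (`message`, `model`, `cascadeId`) are assumptions, while the `status`/`cascadeId` response fields and the plain-array `/models` response come from the text above:

```ts
const base = "http://localhost:47923";

// /models returns a plain array of model names.
const models: string[] = await (await fetch(`${base}/models`)).json();

// POST /prompt returns immediately; the per-cascade queue handles the rest.
const res = await fetch(`${base}/prompt`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    message: "Summarize the open file",
    model: models[0],
    // cascadeId: "existing-id", // include this to continue a conversation
  }),
});

const { status, cascadeId } = await res.json();
console.log(status, cascadeId);
```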