
Commit 8be756d
committed: new post
1 parent eb4028d

17 files changed
Lines changed: 444 additions & 94 deletions

File tree

content/posts/images/clawd.jpeg

400 KB
Lines changed: 126 additions & 0 deletions
@@ -0,0 +1,126 @@
---
title: "How Clawd Found Me A Girlfriend"
date: 2026-01-26T02:41:23-03:00
description: "why is the shiny new thing kind of a big deal"
draft: false
cover: "posts/images/clawd.jpeg"
toc: false
images:
  - post-cover.png
tags:
---

okay I’ll admit that was clickbait. an LLM did not find me a gf or bf at all.

so clawd is the shiny new thing this week on twitter. so what? why is that interesting?

I think this new wave of agent adoption will drive some very interesting changes in the AI bubble.

so let’s discuss that instead of why I don’t have a partner ◝(ᵔᗜᵔ)◜

# what's so interesting about clawd adoption?

we've seen a massive wave of adoption of this new harness for LLMs.

so what makes this new thingy so interesting? why shouldn't we just dismiss it as the new shiny thing on X?

freedom to choose.

unlike most (mainstream) LLM-powered applications, it lets the user swap to any given model and leverage agent capabilities that have been possible for a while, but weren't felt by the majority of AI power users. even the ones using claude code.

the thing with clawd is that you _can_ own your data, even if you don't right now. or at least you have the possibility.

why do i say this? clawd lets you use any LLM provider and holds your data on whatever device you're running it on. it also holds the credentials to all the services you connect it to.

so even if openai, anthropic, or whoever has the hottest SOTA this week goes down, you can just swap to another model. and if you’re filthy rich, you could host some open-source SOTA yourself and keep it running even if we nuke each other out for some time.

this way of using LLMs by owning the harness rather than borrowing it will drive some changes.
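to make "swap to another model" concrete, here's a minimal sketch of the idea. most providers expose an OpenAI-compatible HTTP API, so from a harness's point of view a provider is basically a base URL plus a model name. everything below (the names, URLs, and helper) is illustrative, not clawd's real config:

```python
# hypothetical provider table for a harness you own: the loop and your data
# stay local, only the model endpoint changes. names/URLs are made up.
PROVIDERS = {
    "cloud-a": {"base_url": "https://api.provider-a.example/v1", "model": "sota-model"},
    "cloud-b": {"base_url": "https://api.provider-b.example/v1", "model": "other-sota"},
    "local":   {"base_url": "http://localhost:8080/v1", "model": "open-weights-model"},
}

def pick_provider(name: str) -> dict:
    """Return connection settings for a provider; the rest of the harness is unchanged."""
    return PROVIDERS[name]

# if your usual provider goes down (or you finally get that home GPU),
# switching is a one-line change:
cfg = pick_provider("local")
print(cfg["base_url"])  # http://localhost:8080/v1
```

the point of the sketch is that the expensive, lock-in-prone part (the model) sits behind one small indirection, which is exactly what "owning the harness" buys you.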

TLDR:

1. Providers will price this in.
2. Models will become more reliable.

# providers will price this in

if you didn't know it already, let me break the news to you: the AI industry is heavily subsidized by almost anyone it can borrow money from, sold on the promise of a path to profitability in the future.

that's why Microsoft burns cash like you could just print money… oh wait. sorry, I meant that they burn money like Microsoft _owned_ the federal reserve.

it's a long-term bet. it's an arms race that requires patience and a huge pile of cash, or rather a pile of heads willing to lend you that cash.

so how do they even convince those heads to lend money for a completely uncertain endeavor?

they promise green pastures far far away...

and what lies in these promised lands?

hordes of users locked into your ecosystem, using the harness you tailor-made for your model.

oh right, your model: that massive pile of floats (weights) that can be held on an SD card and costs tens of billions, or even hundreds of billions, to train.

so we don't really know how much cash they're burning vs revenue.

but using an h100 for a whole day for $200 a month? that's a steal, right?

bc it kinda is. there's no way they're making money right now. everyone knows this.

everyone on twitter and their mom is tweeting about how to use the OAuth credentials of low-price/high-quota plans to avoid paying hundreds or thousands in api credits.

all of that cash they're burning? it's so you use their harness, buy into their ecosystem, and let them collect data about how you use it for fine-tuning and improving their models. that’s their moat.

*_or at least that must be what they tell their investors_*

how do you think they're gonna justify to their investors subsidizing *_you_* if they don't own the harness you use and you only give them scattered usage data?

I mean, you use their platform, so they _do_ have your data when you generate tokens, but you can just switch to another provider and another SOTA model without a massive impact on your day-to-day use.

this is something that has been said for a while, but now more than ever it's going to be felt in inference providers' pricing.

that $200 subscription for unlimited model use? who knows how long it's going to last.

that awesome chinese model that offers 20M tokens per day for $20 a month? gone. _(eventually, depending on adoption speed)_

some providers subsidize API prices more than others, but if the tens of billions are to keep rolling for model training, new promises will have to be made, and some hard numbers might be needed to convince _the heads_.

so we will probably see unlimited or high-usage flat tiers getting pricier or disappearing, and API prices going up. my bet is before 2027.

# all models will be more reliable in high-value tasks for real users

yes, we do have some standards at this point for how a model should talk to a tool or API (e.g. MCP, the openai tool-calling format), but the agentic flow and the way models perform tasks are not the same across most providers. and the range of tasks each model can do differs.
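to show what one of those standards actually standardizes, here's a tool definition in the OpenAI function-calling shape (the tool itself, `get_weather`, is a made-up example):

```python
# a tool described as a JSON schema in the OpenAI function-calling shape;
# the tool (get_weather) is a made-up example, not a real API.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# any provider that accepts this shape can be handed the same tool list.
# that's the standardized part; *when and how* the model decides to call
# the tool is the part that still differs between models and harnesses.
```

so the schema is portable, but the agentic behavior around it isn't, which is exactly the gap the next paragraphs are about.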

some models perform agentic tasks better than others based solely on the harness. for example, codex performs way better on the codex-cli harness than on, say, claude code or opencode.

the cool thing with apps like clawd is that they will generate new benchmarks of real, high-value end-user tasks that all providers will try to optimize for.

if it ends up being something people keep using, companies are gonna end up training on that data and optimizing for those use cases.

so we will end up with a very robust ecosystem where agents can reliably perform a lot of the tasks users really want their agents to do daily.

it will enforce a soft-default standard on how agents should act, and what minimum range of tasks a model should bring to the table to even be considered SOTA-ish.

# the best thing about owning the shoggoth harness :D

I've wanted a personal assistant for gamifying my goals for some time now.

I never took it to a place where it was a real project or product at all.

but the idea was: eventually I want a personal coach with a personality sitting in my home. not in some data center where all my personal data is exposed and can be taken away. *_i want it to be mine_*

but then, reality check: any open-source SOTA model needs a big GPU, and that means investing a lot upfront for a model that might be mediocre, maybe even unusable (at the time, llama 3 or equivalents were the only options).

so the question is: how do I make this today while preparing for when I _can_ have this at home?

make the whole car and borrow the motor.

make your own harness. you own* the data, you can choose the provider dynamically, and you choose what goes into the model vs what's just done with scripts to save tokens.
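a toy sketch of that "model vs scripts" split, under my own hypothetical naming (nothing here is clawd's actual code): route the cheap, deterministic stuff to plain scripts and only send the rest to whatever model the harness currently points at:

```python
# toy harness routing: deterministic tasks are handled by local scripts
# (no tokens spent); everything else goes to the currently selected model.
# function names are hypothetical, not clawd's actual API.

def cheap_script_can_handle(task: str) -> bool:
    # things that don't need a model at all: reminders, renames, scheduling
    return task.startswith(("remind", "rename", "schedule"))

def run_task(task: str, call_model) -> str:
    if cheap_script_can_handle(task):
        return f"handled locally: {task}"  # tokens saved
    return call_model(task)  # swap call_model to change provider

# usage with a stand-in "model":
print(run_task("remind me to stretch at 18:00", lambda t: "(model reply)"))
```

the routing function is where you save tokens, and `call_model` is where the "borrowed motor" plugs in, so swapping providers (or going local) never touches the rest of the car.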

so I tried building this idea a couple of times, but it never came out very useful, until model gains and the standardization of agent tools made my harness kinda useful with most models. it's still not as good as the clawd harness.

so really, clawd is what I've been looking for for some time, made real at this specific point in time.

and guess what: when hardware gains reach the level of running a Claude Opus 4.5 equivalent in your home for the price of a gaming desktop, you'll just be able to switch providers to localhost and be done borrowing your car's motor and sending your data to multiple billion-dollar companies *_yay ^ ^_*

so even though _it is_ the flashy new thing, it's a good thing!

and this new massive wave of adoption is a really good first step toward real-world agent reliability and data ownership.

public/404.html

Lines changed: 4 additions & 4 deletions
@@ -4,7 +4,7 @@
 <title>
   404 Page not found ::
-    Samsyntaxerror blog — samsyntaxerror personal blog
+    Samsyntaxerror blog
 </title>

 <meta http-equiv="content-type" content="text/html; charset=utf-8" />
@@ -15,7 +15,7 @@
 />
 <meta
   name="keywords"
-  content="blog, samsyntaxerror, developer, web, ai, programming, software, vr"
+  content=""
 />
 <meta name="robots" content="noodp" />
 <link rel="canonical" href="//localhost:1313/404.html" />
@@ -52,7 +52,7 @@
 <meta property="og:url" content="//localhost:1313/404.html">
 <meta property="og:site_name" content="Samsyntaxerror blog">
 <meta property="og:title" content="404 Page not found">
-<meta property="og:locale" content="en">
+<meta property="og:locale" content="en_us">
 <meta property="og:type" content="website">


@@ -196,7 +196,7 @@ <h1 class="post-title">
 </a>

 <div class="copyright">
-  <span> 2024</span>
+  <span> 2026</span>
 </div>

 </div>

public/about/index.html

Lines changed: 7 additions & 7 deletions
@@ -4,7 +4,7 @@
 <title>
   About me!! :) ::
-    Samsyntaxerror blog — samsyntaxerror personal blog
+    Samsyntaxerror blog
 </title>

 <meta http-equiv="content-type" content="text/html; charset=utf-8" />
@@ -19,7 +19,7 @@
 />
 <meta
   name="keywords"
-  content="blog, samsyntaxerror, developer, web, ai, programming, software, vr"
+  content=""
 />
 <meta name="robots" content="noodp" />
 <link rel="canonical" href="//localhost:1313/about/" />
@@ -65,10 +65,10 @@
 I love automation, AI, and open-source ~i use arch btw. Right now, I’m in my third year of mechatronics engineering, but don’t let that fool you - I can be extremely dumb.
 Warning: Excessive goofiness and occasional cringe attacks may occur. Proceed with caution.
 I’m going to be writing here about the things I learn, my thoughts, and maybe some fiction!">
-<meta property="og:locale" content="en">
+<meta property="og:locale" content="en_us">
 <meta property="og:type" content="article">
-<meta property="article:published_time" content="2023-04-20T21:03:12-03:00">
-<meta property="article:modified_time" content="2023-04-20T21:03:12-03:00">
+<meta property="article:published_time" content="2026-01-23T18:27:22-03:00">
+<meta property="article:modified_time" content="2026-01-23T18:27:22-03:00">
 <meta property="og:image" content="//localhost:1313/post-cover.png">


@@ -183,7 +183,7 @@ <h1 class="post-title">About me!! :)</h1>
 <div class="post-meta">

 <time class="post-date">
-  2023-04-20
+  2026-01-23
 </time>


@@ -270,7 +270,7 @@ <h3 id="other-sections">
 </a>

 <div class="copyright">
-  <span> 2024</span>
+  <span> 2026</span>
 </div>

 </div>
