
Commit 0275263: first commit

1 parent d0b6f37

29 files changed

Lines changed: 1751 additions & 123 deletions

.claude/settings.local.json

Lines changed: 16 additions & 0 deletions
@@ -0,0 +1,16 @@
+{
+  "permissions": {
+    "allow": [
+      "Bash(pnpm dev)",
+      "WebSearch",
+      "WebFetch(domain:docs.x.ai)",
+      "Bash(npx tsc:*)",
+      "Bash(grep:*)",
+      "Bash(tree:*)",
+      "Bash(pnpm build:*)",
+      "Bash(psql:*)"
+    ],
+    "deny": [],
+    "ask": []
+  }
+}

.env.example

Lines changed: 23 additions & 16 deletions
@@ -1,23 +1,30 @@
-# Generate a random secret: https://generate-secret.vercel.app/32 or `openssl rand -base64 32`
-AUTH_SECRET=****
+# ===========================================
+# REQUIRED VARIABLES
+# ===========================================

-# The following keys below are automatically created and
-# added to your environment when you deploy on Vercel
+# Generate a random secret: https://generate-secret.vercel.app/32
+# Or run: openssl rand -base64 32
+AUTH_SECRET=

-# Instructions to create an AI Gateway API key here: https://vercel.com/ai-gateway
-# API key required for non-Vercel deployments
-# For Vercel deployments, OIDC tokens are used automatically
-# https://vercel.com/ai-gateway
-AI_GATEWAY_API_KEY=****
+# xAI API key (for direct Grok access)
+# Get it from: https://console.x.ai
+XAI_API_KEY=

+# PostgreSQL Database URL
+# For Docker setup (recommended): postgresql://chatbot:chatbot@localhost:5432/chatbot
+# For Vercel/Neon: Get from https://vercel.com/docs/postgres
+POSTGRES_URL=

-# Instructions to create a Vercel Blob Store here: https://vercel.com/docs/vercel-blob
-BLOB_READ_WRITE_TOKEN=****
+# ===========================================
+# OPTIONAL VARIABLES (not needed for dev)
+# ===========================================

-# Instructions to create a PostgreSQL database here: https://vercel.com/docs/postgres
-POSTGRES_URL=****
+# Vercel AI Gateway API key (only if you prefer routing through the gateway)
+# Get it from: https://vercel.com/ai-gateway
+AI_GATEWAY_API_KEY=

+# Vercel Blob storage (for file uploads)
+# BLOB_READ_WRITE_TOKEN=

-# Instructions to create a Redis store here:
-# https://vercel.com/docs/redis
-REDIS_URL=****
+# Redis (for resumable streams)
+# REDIS_URL=
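The rewritten `.env.example` splits the variables into a required trio (`AUTH_SECRET`, `XAI_API_KEY`, `POSTGRES_URL`) and optional extras, with the `****` placeholders replaced by empty values. A minimal sketch of how an app might fail fast on the required set at startup (this helper is hypothetical, not part of the commit):

```typescript
// Hypothetical startup check, not part of this commit: report which of the
// required variables from .env.example are missing or blank.
const REQUIRED_VARS = ["AUTH_SECRET", "XAI_API_KEY", "POSTGRES_URL"];

function missingEnv(env: Record<string, string | undefined>): string[] {
  // A variable counts as missing when it is absent or only whitespace.
  return REQUIRED_VARS.filter((name) => !env[name]?.trim());
}

// With only POSTGRES_URL set, the other two required variables are flagged.
const missing = missingEnv({
  POSTGRES_URL: "postgresql://chatbot:chatbot@localhost:5432/chatbot",
});
console.log(missing); // [ 'AUTH_SECRET', 'XAI_API_KEY' ]
```

In a real setup this would run against `process.env` before the server boots, so a forgotten key surfaces as one clear error instead of a failed API call later.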

README.md

Lines changed: 4 additions & 6 deletions
@@ -36,15 +36,13 @@

 ## Model Providers

-This template uses the [Vercel AI Gateway](https://vercel.com/docs/ai-gateway) to access multiple AI models through a unified interface. The default configuration includes [xAI](https://x.ai) models (`grok-2-vision-1212`, `grok-3-mini`) routed through the gateway.
+The chatbot uses [xAI](https://x.ai) Grok models directly through the AI SDK (`grok-2-vision-1212`, `grok-3-mini`). Set `XAI_API_KEY` in `.env.local` to authenticate requests.

-### AI Gateway Authentication
+### Optional: Vercel AI Gateway

-**For Vercel deployments**: Authentication is handled automatically via OIDC tokens.
+If you prefer routing traffic through the [Vercel AI Gateway](https://vercel.com/docs/ai-gateway) for logging or usage limits, provide `AI_GATEWAY_API_KEY`. Vercel deployments automatically handle authentication via OIDC tokens; self-hosted deployments must set the key manually.

-**For non-Vercel deployments**: You need to provide an AI Gateway API key by setting the `AI_GATEWAY_API_KEY` environment variable in your `.env.local` file.
-
-With the [AI SDK](https://ai-sdk.dev/docs/introduction), you can also switch to direct LLM providers like [OpenAI](https://openai.com), [Anthropic](https://anthropic.com), [Cohere](https://cohere.com/), and [many more](https://ai-sdk.dev/providers/ai-sdk-providers) with just a few lines of code.
+With the [AI SDK](https://ai-sdk.dev/docs/introduction), you can also switch to other providers like [OpenAI](https://openai.com), [Anthropic](https://anthropic.com), [Cohere](https://cohere.com/), and [many more](https://ai-sdk.dev/providers/ai-sdk-providers) with just a few lines of code.

 ## Deploy Your Own

SETUP.md

Lines changed: 88 additions & 0 deletions
@@ -0,0 +1,88 @@
+# Local Development Setup
+
+Quick setup guide for Windows and Mac.
+
+## Prerequisites
+
+- [Docker Desktop](https://www.docker.com/products/docker-desktop/) installed and running
+- [Node.js 18+](https://nodejs.org/)
+- [pnpm](https://pnpm.io/) (`npm install -g pnpm`)
+
+## Quick Start
+
+### 1. Start the Database
+
+```bash
+# Start PostgreSQL in Docker
+docker-compose up -d
+
+# Verify it's running
+docker-compose ps
+```
+
+### 2. Configure Environment
+
+```bash
+# Copy example env file
+cp .env.example .env.local
+
+# Edit .env.local and fill in:
+# - AUTH_SECRET (generate at https://generate-secret.vercel.app/32)
+# - XAI_API_KEY (from https://console.x.ai)
+# - POSTGRES_URL=postgresql://chatbot:chatbot@localhost:5432/chatbot
+# (Optional) AI_GATEWAY_API_KEY if you want to proxy through Vercel AI Gateway
+```
+
+### 3. Install Dependencies & Run Migrations
+
+```bash
+# Install packages
+pnpm install
+
+# Run database migrations
+pnpm db:migrate
+```
+
+### 4. Start Development Server
+
+```bash
+pnpm dev
+```
+
+Open [http://localhost:3000](http://localhost:3000)
+
+## Common Commands
+
+```bash
+# Stop database
+docker-compose down
+
+# Stop and remove all data
+docker-compose down -v
+
+# View database logs
+docker-compose logs -f postgres
+
+# Open Drizzle Studio (database UI)
+pnpm db:studio
+```
+
+## Troubleshooting
+
+### Port 5432 already in use
+
+Stop any existing PostgreSQL or change the port in docker-compose.yml:
+
+```yaml
+ports:
+  - "5433:5432" # Use 5433 instead
+```
+
+Then update POSTGRES_URL to use port 5433.
+
+### Database connection refused
+
+Make sure Docker Desktop is running and the container is up:
+
+```bash
+docker-compose ps
+docker-compose up -d
+```
+
+### Migration fails
+
+Check your POSTGRES_URL is correct and the database is running.
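The port workaround in SETUP.md has to be applied in two places (the compose mapping and `POSTGRES_URL`), which is easy to get out of sync. A small sketch of a check that parses `POSTGRES_URL` so the port can be compared against the left-hand side of the docker-compose mapping (the helper is hypothetical, not shipped in this commit):

```typescript
// Hypothetical consistency check, not part of this commit: pull the endpoint
// out of POSTGRES_URL so it can be compared with the compose port mapping.
// Node's WHATWG URL class parses the authority of postgresql:// URLs too.
function dbEndpoint(postgresUrl: string): { host: string; port: number } {
  const u = new URL(postgresUrl);
  // PostgreSQL defaults to 5432 when the URL omits an explicit port.
  return { host: u.hostname, port: Number(u.port || "5432") };
}

const ep = dbEndpoint("postgresql://chatbot:chatbot@localhost:5433/chatbot");
console.log(ep); // { host: 'localhost', port: 5433 }
```

If the parsed port does not match the host side of the `"5433:5432"` mapping, the "connection refused" symptom from the troubleshooting section is the likely result.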

app/(chat)/api/chat/route.ts

Lines changed: 36 additions & 40 deletions
@@ -24,6 +24,7 @@ import { type RequestHints, systemPrompt } from "@/lib/ai/prompts";
 import { myProvider } from "@/lib/ai/providers";
 import { createDocument } from "@/lib/ai/tools/create-document";
 import { getWeather } from "@/lib/ai/tools/get-weather";
+import { analyzeMarketingMoment } from "@/lib/ai/tools/marketing-moment-analyzer";
 import { requestSuggestions } from "@/lib/ai/tools/request-suggestions";
 import { updateDocument } from "@/lib/ai/tools/update-document";
 import { isProductionEnvironment } from "@/lib/constants";

@@ -49,22 +50,6 @@ export const maxDuration = 60;

 let globalStreamContext: ResumableStreamContext | null = null;

-const getTokenlensCatalog = cache(
-  async (): Promise<ModelCatalog | undefined> => {
-    try {
-      return await fetchModels();
-    } catch (err) {
-      console.warn(
-        "TokenLens: catalog fetch failed, using default catalog",
-        err
-      );
-      return; // tokenlens helpers will fall back to defaultCatalog
-    }
-  },
-  ["tokenlens-catalog"],
-  { revalidate: 24 * 60 * 60 } // 24 hours
-);
-
 export function getStreamContext() {
   if (!globalStreamContext) {
     try {

@@ -73,18 +58,31 @@ export function getStreamContext() {
       });
     } catch (error: any) {
       if (error.message.includes("REDIS_URL")) {
-        console.log(
-          " > Resumable streams are disabled due to missing REDIS_URL"
-        );
+        console.log(" > Resumable streams disabled (missing REDIS_URL)");
       } else {
         console.error(error);
       }
     }
   }
-
   return globalStreamContext;
 }

+const getTokenlensCatalog = cache(
+  async (): Promise<ModelCatalog | undefined> => {
+    try {
+      return await fetchModels();
+    } catch (err) {
+      console.warn(
+        "TokenLens: catalog fetch failed, using default catalog",
+        err
+      );
+      return; // tokenlens helpers will fall back to defaultCatalog
+    }
+  },
+  ["tokenlens-catalog"],
+  { revalidate: 24 * 60 * 60 } // 24 hours
+);
+
 export async function POST(request: Request) {
   let requestBody: PostRequestBody;

@@ -179,20 +177,27 @@ export async function POST(request: Request) {

   const stream = createUIMessageStream({
     execute: ({ writer: dataStream }) => {
+      const activeTools:
+        | ("getWeather" | "createDocument" | "updateDocument" | "requestSuggestions" | "analyzeMarketingMoment")[]
+        | undefined =
+        selectedChatModel === "chat-model-reasoning"
+          ? undefined
+          : [
+              "getWeather",
+              "createDocument",
+              "updateDocument",
+              "requestSuggestions",
+              "analyzeMarketingMoment",
+            ];
+
+      const systemPromptContent = systemPrompt({ selectedChatModel, requestHints });
+
       const result = streamText({
         model: myProvider.languageModel(selectedChatModel),
-        system: systemPrompt({ selectedChatModel, requestHints }),
+        system: systemPromptContent,
         messages: convertToModelMessages(uiMessages),
         stopWhen: stepCountIs(5),
-        experimental_activeTools:
-          selectedChatModel === "chat-model-reasoning"
-            ? []
-            : [
-                "getWeather",
-                "createDocument",
-                "updateDocument",
-                "requestSuggestions",
-              ],
+        experimental_activeTools: activeTools,
         experimental_transform: smoothStream({ chunking: "word" }),
         tools: {
           getWeather,

@@ -202,6 +207,7 @@
             session,
             dataStream,
           }),
+          analyzeMarketingMoment: analyzeMarketingMoment({ dataStream }),
         },
         experimental_telemetry: {
           isEnabled: isProductionEnvironment,

@@ -278,16 +284,6 @@ export async function POST(request: Request) {
       },
     });

-    // const streamContext = getStreamContext();
-
-    // if (streamContext) {
-    //   return new Response(
-    //     await streamContext.resumableStream(streamId, () =>
-    //       stream.pipeThrough(new JsonToSseTransformStream())
-    //     )
-    //   );
-    // }
-
     return new Response(stream.pipeThrough(new JsonToSseTransformStream()));
   } catch (error) {
     const vercelId = request.headers.get("x-vercel-id");
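The route.ts refactor moves the inline tool list into a typed `activeTools` variable and changes the reasoning-model case from `[]` to `undefined` (no explicit restriction, rather than an explicitly empty list). The gating logic in isolation, as a sketch that mirrors the diff with the union type collapsed into a `const` array:

```typescript
// Sketch of the gating from this commit: the reasoning model gets no
// explicitly-activated tools (undefined), every other model gets all five,
// including the new analyzeMarketingMoment tool.
const ALL_TOOLS = [
  "getWeather",
  "createDocument",
  "updateDocument",
  "requestSuggestions",
  "analyzeMarketingMoment",
] as const;

type ToolName = (typeof ALL_TOOLS)[number];

function activeToolsFor(selectedChatModel: string): ToolName[] | undefined {
  return selectedChatModel === "chat-model-reasoning"
    ? undefined
    : [...ALL_TOOLS];
}

console.log(activeToolsFor("chat-model-reasoning")); // undefined
console.log(activeToolsFor("chat-model-default")?.length); // 5
```

Deriving the name union from `ALL_TOOLS` keeps the type and the runtime list in sync, so adding a sixth tool cannot silently drift from the `experimental_activeTools` type.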

app/(chat)/layout.tsx

Lines changed: 7 additions & 4 deletions
@@ -3,6 +3,7 @@ import Script from "next/script";
 import { AppSidebar } from "@/components/app-sidebar";
 import { DataStreamProvider } from "@/components/data-stream-provider";
 import { SidebarInset, SidebarProvider } from "@/components/ui/sidebar";
+import { MarketingProgressProvider } from "@/hooks/use-marketing-progress";
 import { auth } from "../(auth)/auth";

 export const experimental_ppr = true;

@@ -22,10 +23,12 @@ export default async function Layout({
         strategy="beforeInteractive"
       />
       <DataStreamProvider>
-        <SidebarProvider defaultOpen={!isCollapsed}>
-          <AppSidebar user={session?.user} />
-          <SidebarInset>{children}</SidebarInset>
-        </SidebarProvider>
+        <MarketingProgressProvider>
+          <SidebarProvider defaultOpen={!isCollapsed}>
+            <AppSidebar user={session?.user} />
+            <SidebarInset>{children}</SidebarInset>
+          </SidebarProvider>
+        </MarketingProgressProvider>
       </DataStreamProvider>
     </>
   );

app/test-marketing/page.tsx

Lines changed: 70 additions & 0 deletions
@@ -0,0 +1,70 @@
+"use client";
+
+import { useState } from "react";
+
+export default function TestMarketingPage() {
+  const [status, setStatus] = useState("Ready to test");
+  const [response, setResponse] = useState<any>(null);
+
+  const testChat = async () => {
+    try {
+      setStatus("Testing...");
+      const res = await fetch("/api/chat", {
+        method: "POST",
+        headers: {
+          "Content-Type": "application/json",
+        },
+        body: JSON.stringify({
+          id: "test-" + Date.now(),
+          message: {
+            id: "msg-" + Date.now(),
+            role: "user",
+            parts: [
+              {
+                type: "text",
+                text: "Is it a good time to launch a coffee shop in Seattle?",
+              },
+            ],
+          },
+          selectedChatModel: "chat-model-default",
+          selectedVisibilityType: "private",
+        }),
+      });
+
+      if (!res.ok) {
+        const errorText = await res.text();
+        setStatus(`Error: ${res.status} - ${errorText}`);
+        setResponse({ error: errorText });
+        return;
+      }
+
+      setStatus("Success!");
+      setResponse({ ok: true });
+    } catch (error: any) {
+      setStatus(`Exception: ${error.message}`);
+      setResponse({ error: error.message });
+    }
+  };
+
+  return (
+    <div className="p-8">
+      <h1 className="text-2xl font-bold mb-4">Marketing Tool Test</h1>
+      <div className="space-y-4">
+        <button
+          onClick={testChat}
+          className="px-4 py-2 bg-blue-500 text-white rounded hover:bg-blue-600"
+        >
+          Test Marketing Query
+        </button>
+        <div>
+          <strong>Status:</strong> {status}
+        </div>
+        {response && (
+          <pre className="bg-gray-100 p-4 rounded overflow-auto">
+            {JSON.stringify(response, null, 2)}
+          </pre>
+        )}
+      </div>
+    </div>
+  );
+}
