4 changes: 4 additions & 0 deletions kits/assistant/medical-assistant/.env.example
@@ -0,0 +1,4 @@
MEDICAL_ASSISTANT_CHAT="MEDICAL_ASSISTANT_CHAT Flow ID"
LAMATIC_API_URL="LAMATIC_API_URL"
LAMATIC_PROJECT_ID="LAMATIC_PROJECT_ID"
LAMATIC_API_KEY="LAMATIC_API_KEY"
28 changes: 28 additions & 0 deletions kits/assistant/medical-assistant/.gitignore
@@ -0,0 +1,28 @@
# See https://help.github.com/articles/ignoring-files/ for more about ignoring files.

# dependencies
/node_modules

# next.js
/.next/
/out/

# production
/build

# debug
npm-debug.log*
yarn-debug.log*
yarn-error.log*
.pnpm-debug.log*

# env files
.env
.env*.local

# vercel
.vercel

# typescript
*.tsbuildinfo
next-env.d.ts
32 changes: 32 additions & 0 deletions kits/assistant/medical-assistant/.stylelintrc.json
@@ -0,0 +1,32 @@
{
"rules": {
"at-rule-no-unknown": [
true,
{
"ignoreAtRules": [
"apply",
"custom-variant",
"import",
"layer",
"plugin",
"tailwind",
"theme"
]
}
],
"scss/at-rule-no-unknown": [
true,
{
"ignoreAtRules": [
"apply",
"custom-variant",
"import",
"layer",
"plugin",
"tailwind",
"theme"
]
}
]
}
}
129 changes: 129 additions & 0 deletions kits/assistant/medical-assistant/README.md
Contributor

@d-pamneja d-pamneja Mar 30, 2026


Could you add a live demo link to this file? It would let us view the functionality better and allow users to see the kit before forking it. Otherwise, LGTM. Thanks!

Contributor


Also, could you add an entry for this kit to the main README.md as well?

Author


Hi, I’ve added the live demo link to the main README as well. Please let me know if anything else is needed.

@@ -0,0 +1,129 @@
# Medical Assistant by Lamatic.ai

<p align="center">
<a href="https://medical-assistant-mu.vercel.app/" target="_blank">
<img src="https://img.shields.io/badge/Live%20Demo-black?style=for-the-badge" alt="Live Demo" />
</a>
</p>

**Medical Assistant** is an AI-powered chatbot built with [Lamatic.ai](https://lamatic.ai) that provides general medical information, symptom checks, and health guidance through a conversational interface. It uses intelligent workflows to process medical queries and return evidence-based information with markdown rendering.

> ⚠️ **Disclaimer:** This tool provides general medical information only. It is NOT a substitute for professional medical advice, diagnosis, or treatment. Always seek the advice of a qualified healthcare provider.

---

## Lamatic Setup (Pre and Post)

Before running this project, you must build and deploy the flow in Lamatic, then wire its config into this codebase.

### Pre: Build in Lamatic

1. Sign in or sign up at https://lamatic.ai
2. Create a project (if you don't have one yet)
3. Click "+ New Flow" and build a medical assistant flow:
- Add an **API Trigger** with a `query` input (string)
- Add an **LLM Node** with a medical-aware system prompt
- Configure the LLM to never diagnose and to always recommend professional consultation
4. Deploy the flow in Lamatic and obtain your .env keys
5. Copy the Flow ID and API credentials from your studio
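The system prompt in step 3 is the main safety control in the flow. The wording below is purely illustrative (an assumption, not the kit's shipped prompt); adapt it inside the Lamatic studio:

```typescript
// Illustrative system prompt for the LLM Node. This exact wording is an
// assumption to adapt in the Lamatic studio, not the kit's actual prompt.
const MEDICAL_SYSTEM_PROMPT = [
  "You are a medical information assistant that provides general,",
  "evidence-based health information only.",
  "Never provide a diagnosis or prescribe treatment.",
  "Always recommend consulting a qualified healthcare provider.",
  "If the user describes a possible emergency, advise contacting",
  "local emergency services immediately.",
].join("\n")

console.log(MEDICAL_SYSTEM_PROMPT.split("\n").length) // 6
```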

### Post: Wire into this repo

1. Create a `.env` file and set the keys
2. Install and run locally:
- `npm install`
- `npm run dev`
3. Deploy (Vercel recommended):
- Import your repo, set the project's Root Directory to `kits/assistant/medical-assistant`
- Add env vars in Vercel (same as your `.env`)
- Deploy and test your live URL

---

## 🔑 Setup

### Required Keys and Config

You'll need these to run this project locally:

| Item | Purpose | Where to Get It |
| ----------- | ---------------------------------------------------- | -------------------------------- |
| `.env` Keys | Authentication for Lamatic AI APIs and Orchestration | [lamatic.ai](https://lamatic.ai) |

### 1. Environment Variables

Create `.env` with:

```bash
# Lamatic
MEDICAL_ASSISTANT_CHAT=your_flow_id_here
LAMATIC_API_URL=your_lamatic_api_url_here
LAMATIC_PROJECT_ID=your_project_id_here
LAMATIC_API_KEY=your_api_key_here
```
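The server action only throws at request time when a key is missing; an optional fail-fast check can surface misconfiguration at startup instead. A sketch (the helper name is hypothetical; the variable names come from the table above):

```typescript
// Optional fail-fast helper (hypothetical, not part of the kit): returns the
// names of any required Lamatic variables missing from the given environment.
const REQUIRED_ENV = [
  "MEDICAL_ASSISTANT_CHAT",
  "LAMATIC_API_URL",
  "LAMATIC_PROJECT_ID",
  "LAMATIC_API_KEY",
] as const

function missingEnv(env: Record<string, string | undefined> = process.env): string[] {
  return REQUIRED_ENV.filter((key) => !env[key])
}

// Example: call once on the server at startup and throw if anything is missing.
console.log(missingEnv({ LAMATIC_API_KEY: "demo" })) // the three other keys
```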

### 2. Install & Run

```bash
npm install
npm run dev
# Open http://localhost:3000
```

---

## 📂 Repo Structure

```text
/actions
└── orchestrate.ts # Lamatic workflow orchestration for medical queries
/app
├── globals.css # Teal-themed design system
├── layout.tsx # Root layout with SEO metadata
└── page.tsx # Chat-style medical assistant UI
/components
├── header.tsx # Header with medical branding
└── disclaimer.tsx # Medical disclaimer components
/lib
├── lamatic-client.ts # Lamatic SDK client
└── utils.ts # Tailwind class merge utility
/flows
└── medical-assistant-chat/
├── config.json # Flow configuration
├── inputs.json # Input schema
├── meta.json # Flow metadata
└── README.md # Flow documentation
```

---

## 🏥 Features

- **Conversational Interface** — Chat-style Q&A with message history
- **Symptom Guidance** — Describe symptoms to get relevant medical information
- **Suggested Prompts** — Quick-start chips for common health questions
- **Markdown Rendering** — Rich formatted responses with headers, lists, and emphasis
- **Medical Disclaimers** — Persistent disclaimer banner + per-response reminders
- **Copy to Clipboard** — Easy sharing of responses
- **Emergency Detection** — Advises calling emergency services when appropriate

---

## 🔒 Privacy & Data Handling

- No user data is stored persistently — chat history exists only in the browser session
- Queries are sent to the Lamatic flow for processing and are not logged client-side
- No personal health information (PHI) is collected or stored
- Review your Lamatic project's data handling policies for server-side processing details

---

## 🤝 Contributing

We welcome contributions! Please see [CONTRIBUTING.md](../../../CONTRIBUTING.md) for guidelines.

---

## 📜 License

MIT License — see [LICENSE](../../../LICENSE).
161 changes: 161 additions & 0 deletions kits/assistant/medical-assistant/actions/orchestrate.ts
@@ -0,0 +1,161 @@
"use server"

import { lamaticClient } from "@/lib/lamatic-client"
import { config } from "../orchestrate"

function summarizeMessage(message: unknown): string | undefined {
if (typeof message !== "string") {
return undefined
}

const trimmed = message.replace(/\s+/g, " ").trim()
if (!trimmed) {
return undefined
}

return trimmed.length > 160 ? `${trimmed.slice(0, 157)}...` : trimmed
}

function normalizePayload(payload: unknown): string {
if (payload === null) {
return "null"
}
if (payload === undefined) {
return ""
}
if (typeof payload === "string") {
return payload
}

return JSON.stringify(payload)
}

export async function sendMedicalQuery(
query: string,
): Promise<{
success: boolean
data?: string
error?: string
}> {
let correlationId: string | undefined
try {
console.log("[medical-assistant] Processing query, length:", query.length)

if (!process.env.MEDICAL_ASSISTANT_CHAT) {
throw new Error(
"MEDICAL_ASSISTANT_CHAT environment variable is not set. Please add it to your .env.local file."
)
}

// Get the first workflow from the config
const flows = config.flows
const firstFlowKey = Object.keys(flows)[0]

if (!firstFlowKey) {
throw new Error("No workflows found in configuration")
}

const flow = flows[firstFlowKey as keyof typeof flows]
console.log("[medical-assistant] Using workflow:", flow.name, flow.workflowId)

// Prepare inputs based on the flow's input schema
const inputs: Record<string, any> = {
query,
}

// Map to schema if needed
for (const inputKey of Object.keys(flow.inputSchema || {})) {
if (inputKey === "query" || inputKey === "question" || inputKey === "message") {
inputs[inputKey] = query
}
}

console.log("[medical-assistant] Sending inputs for workflow:", flow.workflowId)

if (!flow.workflowId) {
throw new Error("Workflow not found in config.")
}
let resData = await lamaticClient.executeFlow(flow.workflowId, inputs)
console.log("[medical-assistant] Response received, status:", resData?.status)

// Check for API-level errors first
if (resData?.status === "error") {
const apiError = resData?.message || "Unknown workflow error"
throw new Error(`Lamatic workflow error: ${apiError}. Please check your workflow configuration on the Lamatic dashboard.`)
}

// Handle async response - if we get a requestId, poll for the result
if (resData?.result?.requestId && !resData?.result?.answer) {
const requestId = resData.result.requestId
correlationId = requestId
console.log("[medical-assistant] Async response, polling with requestId:", requestId)

const pollStartedAt = Date.now()
const asyncResult = await lamaticClient.checkStatus(requestId, 2, 60)
const durationMs = Date.now() - pollStartedAt
console.log("[medical-assistant] Async poll metadata:", {
requestId,
status: asyncResult?.status,
statusCode: (asyncResult as any)?.statusCode ?? (asyncResult as any)?.code,
durationMs,
message: summarizeMessage(asyncResult?.message),
})

Comment on lines +91 to +103
Contributor


⚠️ Potential issue | 🟠 Major

Avoid logging raw workflow payloads/errors in a medical context.

Line [59] logs full asyncResult and Line [93] logs raw error. These can include user medical content or provider internals and should be sanitized.

🔒 Suggested logging hardening
-      console.log("[medical-assistant] Async poll result:", asyncResult)
+      console.log("[medical-assistant] Async poll status:", asyncResult?.status)

@@
-    console.error("[medical-assistant] Query error:", error)
+    const safeError =
+      error instanceof Error ? { name: error.name, message: error.message } : { message: "Unknown error" }
+    console.error("[medical-assistant] Query error:", safeError)

Also applies to: 93-93

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@kits/assistant/medical-assistant/actions/orchestrate.ts` around lines 56 -
60, Replace raw payload logging in orchestrate.ts by removing verbose prints of
asyncResult and error; instead log only non-sensitive metadata (e.g., requestId,
status code, duration, and a redacted/summary message). Specifically update the
console.log calls around lamaticClient.checkStatus (where requestId and
asyncResult are logged) and the catch/log that prints error to redact or
stringify only safe fields (avoid user medical content or provider internals)
and add a correlation id if available; keep the detailed payloads available only
for secure debug endpoints or structured logs behind a privacy guard.

if (asyncResult?.status === "error") {
throw new Error(`Workflow execution failed: ${asyncResult?.message || "Unknown error"}`)
}

resData = asyncResult
}

// Parse the answer - handle multiple response structures
const rawAnswer = resData?.result?.answer
|| (resData as any)?.data?.output?.result?.answer
|| resData?.result?.output?.answer
|| (typeof resData?.result === "string" ? resData.result : null)

// If the answer is an object (LLM output), extract the generatedResponse text
let answer: string | null = null
if (typeof rawAnswer === "object" && rawAnswer !== null) {
answer = rawAnswer.generatedResponse || rawAnswer.text || rawAnswer.content || JSON.stringify(rawAnswer)
} else if (typeof rawAnswer === "string" && rawAnswer.length > 0) {
answer = rawAnswer
}

const normalizedAnswer = normalizePayload(answer ?? rawAnswer)
console.log("[medical-assistant] Parsed answer:", normalizedAnswer ? `[${normalizedAnswer.length} chars]` : "null")

if (!normalizedAnswer) {
throw new Error("No answer found in response. Check workflow output configuration.")
}

return {
success: true,
data: normalizedAnswer,
}
} catch (error) {
const errorMeta = {
correlationId: (error as any)?.requestId ?? correlationId,
name: (error as any)?.name,
statusCode: (error as any)?.statusCode ?? (error as any)?.code,
message: summarizeMessage((error as any)?.message),
}
console.error("[medical-assistant] Query error metadata:", errorMeta)

let errorMessage = "Unknown error occurred"
if (error instanceof Error) {
errorMessage = error.message
if (error.message.includes("fetch failed")) {
errorMessage =
"Network error: Unable to connect to the service. Please check your internet connection and try again."
} else if (error.message.includes("API key")) {
errorMessage = "Authentication error: Please check your API configuration."
}
}

return {
success: false,
error: errorMessage,
}
}
}
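`summarizeMessage` and `normalizePayload` above are pure helpers, so their truncation and normalization behavior can be checked in isolation. A standalone sketch (helpers copied with condensed braces so the snippet runs without the Lamatic imports):

```typescript
// Standalone copies of the helpers from actions/orchestrate.ts, so their
// behavior can be exercised without the Lamatic SDK or a server runtime.
function summarizeMessage(message: unknown): string | undefined {
  if (typeof message !== "string") return undefined
  const trimmed = message.replace(/\s+/g, " ").trim()
  if (!trimmed) return undefined
  // Truncate to 160 chars total: 157 kept plus a 3-char ellipsis.
  return trimmed.length > 160 ? `${trimmed.slice(0, 157)}...` : trimmed
}

function normalizePayload(payload: unknown): string {
  if (payload === null) return "null"
  if (payload === undefined) return ""
  if (typeof payload === "string") return payload
  return JSON.stringify(payload)
}

console.log(summarizeMessage("  chest   pain  ")) // "chest pain"
console.log(summarizeMessage("x".repeat(200))!.length) // 160
console.log(normalizePayload({ answer: "ok" })) // {"answer":"ok"}
```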