This is a self-contained prototype web app that simulates an AI mental-health consultant with:
- Empathetic chatbot backed by 200+ mock data points (yoga, breathing, consolation).
- Face emotion recognition (using face-api.js in-browser).
- Voice emotion heuristics (Web Audio API + optional SpeechRecognition for transcripts).
- Session aggregation and an end-session report (negative or normal depending on sadness signals).
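The end-session classification could be sketched as follows. This is an illustrative sketch only; the function name `summarizeSession` and the `SADNESS_THRESHOLD` cutoff are assumptions, not the actual identifiers in `static/client.js`.

```javascript
// Assumed cutoff above which the session report is marked "negative".
const SADNESS_THRESHOLD = 0.4;

function summarizeSession(samples) {
  // samples: [{ sadness: 0..1 }, ...] collected from the face/voice detectors.
  if (samples.length === 0) return { verdict: "normal", avgSadness: 0 };
  const avgSadness =
    samples.reduce((sum, s) => sum + s.sadness, 0) / samples.length;
  return {
    verdict: avgSadness >= SADNESS_THRESHOLD ? "negative" : "normal",
    avgSadness,
  };
}
```

Averaging over the whole session (rather than reacting to single frames) smooths out transient misreads from the detectors.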
Files added:
- `index.html` — Front-end UI
- `static/style.css` — Styles
- `static/client.js` — App logic (chat, emotion detection, report)
- `scripts/generate_data.js` — Generates `data/mock_data.json` with 200 entries
- `data/mock_data.json` — (generated by script)
- `server.js` — Lightweight Node static server
- `package.json` — dev dependencies and start script
Quick start (Windows PowerShell):
- Install Node.js (if not installed).
- Generate mock data:

  ```powershell
  cd d:/AI_CONSULTANT
  node scripts/generate_data.js
  ```

- Start the server:

  ```powershell
  node server.js
  ```

- Open `http://localhost:3000` in a Chromium-based browser (Chrome/Edge) to access camera/mic.
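The generator step above could be implemented roughly as below. This is a hedged sketch: the category list and entry field names (`prompt`, `response`, etc.) are assumptions based on the README's description, not the real `scripts/generate_data.js`.

```javascript
// Categories named in the README's feature list.
const CATEGORIES = ["yoga", "breathing", "consoling"];

function generateMockData(count) {
  const entries = [];
  for (let i = 0; i < count; i++) {
    const category = CATEGORIES[i % CATEGORIES.length];
    entries.push({
      id: i + 1,
      category,
      prompt: `mock user message ${i + 1}`,
      response: `supportive ${category} suggestion ${i + 1}`,
    });
  }
  return entries;
}

// The script would then write the 200 entries to disk, e.g.:
// require("fs").writeFileSync(
//   "data/mock_data.json",
//   JSON.stringify(generateMockData(200), null, 2)
// );
```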
Notes:
- Face emotion detection uses models loaded from CDN — an internet connection is required to download the weights on first use.
- SpeechRecognition (voice-to-text) works best in Chrome. If unavailable, use typed input.
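The typed-input fallback mentioned above can be driven by simple feature detection. The helpers below (`pickRecognizer`, `startTranscription`) are hypothetical names for illustration, not functions from `client.js`.

```javascript
// Chrome exposes the API as webkitSpeechRecognition; the unprefixed
// name is less widely available.
function pickRecognizer(win) {
  return win.SpeechRecognition || win.webkitSpeechRecognition || null;
}

function startTranscription(win, onText) {
  const Recognizer = pickRecognizer(win);
  if (!Recognizer) return false; // caller should show the typed-input box
  const rec = new Recognizer();
  rec.continuous = true;
  rec.interimResults = false;
  rec.onresult = (e) => {
    // Forward the transcript of the most recent final result.
    const last = e.results[e.results.length - 1];
    onText(last[0].transcript);
  };
  rec.start();
  return true;
}
```

Returning `false` instead of throwing lets the UI switch to typed input without a try/catch at every call site.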
This project supports proxying chat requests to an external LLM (e.g., Gemini). To enable it:
- Copy `.env.example` to `.env` and set `GEMINI_API_URL` and either `GEMINI_API_KEY` or `GEMINI_BEARER_TOKEN`.
- Restart the server so `server.js` can pick up the environment variables.
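A plausible `.env.example` for the variables named above might look like this; the URL and key values are placeholders, not real endpoints.

```shell
# .env.example — copy to .env and fill in real values
GEMINI_API_URL=https://example.invalid/v1/chat

# Set exactly ONE of the following:
GEMINI_API_KEY=your-api-key-here
GEMINI_BEARER_TOKEN=
```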
NOTE: Do NOT commit your real .env into version control. Store keys securely (environment variables, secrets manager, etc.).