Structured interview engine for Senior Frontend Engineers (React, TypeScript, JavaScript, System Design)
Live Demo: https://volkov85.github.io/frontend-meta-prompts/
This project helps you run a repeatable interview practice loop:
- Generate a structured interview prompt
- Run the interview in an external LLM/chat
- Save the score and notes back to local session history
Interview prep is usually random and hard to track. This project makes it structured:
- Template-driven interview topics
- Shared prompt composition core for CLI and Web UI
- Session history persistence
- External evaluation recording (LLM/interviewer score + notes)
```
data/
  interviews.json             # Interview templates and defaults
  sessions.json               # Runtime session history for CLI mode
core/
  composeInterviewPrompt.ts   # Shared prompt builder (single source of truth)
  types.ts                    # Shared interview domain types
engine/
  composeInterviewPrompt.ts   # Re-export of shared prompt builder for CLI imports
  sessionRunner.ts            # Session create/update and persistence
  scorer.ts                   # Optional local scorer (not required by CLI flow)
index.ts                      # CLI entry point
web-ui/
  src/lib/composePrompt.ts    # Re-export of shared prompt builder for browser mode
  src/lib/types.ts            # Re-export of shared core types for the UI layer
  src/lib/localSessions.ts    # localStorage session persistence
  src/App.tsx                 # React UI
```
Separation of concerns:
- Data layer: declarative interview templates
- Core layer: shared prompt generation and domain types
- Engine layer: CLI runtime and filesystem persistence
- Web layer: browser UI and localStorage persistence
- Runtime layer: generated sessions and evaluation results
- Structured interview templates (junior/middle/senior)
- Prompt generation with mode overrides from a shared core module
- Bilingual prompt generation (`en`/`ru`) in the Web UI
- Template-level prompt overrides (`promptOverrides`) for per-template tuning
- CLI to list templates and generate interviews
- CLI mode to record external LLM evaluation
- Session persistence in JSON (CLI) and `localStorage` (Web UI)
- Interview progress chart in the Web UI with recent scores, averages, and coverage
The project uses a shared `core/` module so the CLI and the Web UI do not duplicate prompt-building logic or domain types.
What lives in `core/`:

- `core/composeInterviewPrompt.ts` - canonical prompt builder
- `core/types.ts` - canonical interview config and session types
Benefits:
- One source of truth for prompt generation behavior
- Consistent output across CLI and browser flows
- Lower maintenance cost when adding new template options or modes
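The single-source-of-truth idea can be sketched as one function imported by both entry points. The config fields below are assumptions for illustration, not the project's real API; the actual signature lives in `core/composeInterviewPrompt.ts`:

```typescript
// Sketch of the shared-builder idea: one function, consumed by both the CLI
// and the Web UI. Field names here are assumptions, not the real API.
interface InterviewConfig {
  templateId: string;
  level: "junior" | "middle" | "senior";
  stack?: string[];
  followUps?: number;
}

function composeInterviewPrompt(config: InterviewConfig): string {
  const lines = [
    `Interview template: ${config.templateId} (${config.level})`,
    config.stack?.length ? `Stack: ${config.stack.join(", ")}` : "",
    `Ask ${config.followUps ?? 3} follow-up questions.`,
  ];
  return lines.filter(Boolean).join("\n");
}

// Both entry points call the same function, so output cannot drift:
const cliPrompt = composeInterviewPrompt({ templateId: "js-deep-dive-core", level: "senior" });
const webPrompt = composeInterviewPrompt({ templateId: "js-deep-dive-core", level: "senior" });
```

Because both flows go through one builder, a new template option only needs to be implemented once.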
Templates in `data/interviews.json` can define `promptOverrides`:

- `followUps` - overrides the default follow-up count
- `include` - overrides the default output sections
- `plainLanguage` - forces simpler wording and shorter phrasing
- `goodAnswerCriteria` - adds an explicit "GOOD ANSWER CRITERIA" block to the output
Example:

```json
{
  "id": "junior-javascript-fundamentals",
  "promptOverrides": {
    "followUps": 2,
    "plainLanguage": true,
    "include": ["idealAnswer", "commonMistakes", "edgeCases", "scoringRubric"],
    "goodAnswerCriteria": ["Explains solution in simple, correct steps"]
  }
}
```

Junior templates (`junior-*`) now use softer defaults:

- fewer follow-ups (`2`)
- plain language enabled
- explicit good answer criteria
- a reduced set of include sections (without senior-focused blocks)
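Conceptually, resolving `promptOverrides` is a shallow merge of template-level values over defaults. A minimal sketch of that idea (the default values below are assumptions, not the project's actual defaults):

```typescript
// Sketch of override resolution: template-level promptOverrides win, and
// anything left unset falls back to defaults. Default values are assumptions.
interface PromptOverrides {
  followUps?: number;
  include?: string[];
  plainLanguage?: boolean;
  goodAnswerCriteria?: string[];
}

const DEFAULTS: Required<PromptOverrides> = {
  followUps: 4,
  include: ["idealAnswer", "commonMistakes", "edgeCases", "scoringRubric"],
  plainLanguage: false,
  goodAnswerCriteria: [],
};

function resolvePromptOptions(overrides: PromptOverrides = {}): Required<PromptOverrides> {
  return { ...DEFAULTS, ...overrides };
}
```

With this shape, the junior template example above only has to state what differs from the defaults.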
Install:

```
npm install
```

Run CLI help:

```
npm run interview -- --help
```

Run the Web UI in dev mode:

```
npm run web
```

Then open http://localhost:5173.
Format code:

```
npm run format
```

Run the full local quality gate:

```
npm run check
```

Generating an interview creates a new session (unless `--no-session`) and prints the prompt:

```
npm run interview -- --template js-deep-dive-core --level senior
```

Optional flags:

- `--stack react,typescript,javascript`
- `--focus event-loop,closures`
- `--extra "Your company/project context"`
- `--simulation true|false`
- `--timebox 30`
- `--english`
- `--no-session`
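Flags like `--stack` and `--focus` take comma-separated values. A hypothetical helper for splitting them could look like this (`parseListFlag` is not part of the project; it just illustrates the expected input shape):

```typescript
// Hypothetical helper for comma-separated flag values such as
// "--stack react,typescript,javascript". Not the project's actual parser.
function parseListFlag(value: string | undefined): string[] {
  if (!value) return [];
  return value
    .split(",")
    .map((part) => part.trim())
    .filter((part) => part.length > 0);
}

const stack = parseListFlag("react, typescript,javascript");
// → ["react", "typescript", "javascript"]
```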
Use this after the interview is completed in an external LLM/chat:

```
npm run interview -- --record-eval --session-id <id> --score 8.5 --notes "Strong trade-offs, missed edge cases"
```

Rules:

- `--record-eval` requires `--session-id`
- `--record-eval` requires `--score` (0..10)
- `--notes` is optional and expected to come from the external LLM/interviewer
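The 0..10 rule for `--score` amounts to a small numeric guard. A sketch of that check (the real CLI's validation may be implemented differently):

```typescript
// Sketch of the documented rule: --score must be a number in 0..10.
// The project's actual validation code may differ.
function validateScore(raw: string): number {
  const score = Number(raw);
  if (!Number.isFinite(score) || score < 0 || score > 10) {
    throw new Error(`--score must be between 0 and 10, got "${raw}"`);
  }
  return score;
}
```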
```
npm run interview -- --list-templates
```

The project includes a browser interface powered by React + TypeScript + MUI, built with Vite. This mode is fully static and GitHub Pages compatible.
Capabilities:
- Select template and level
- Configure stack, focus, context, timebox, simulation mode
- Switch the interface language (`EN`/`RU`) in the top bar
- Generate the interview prompt in the browser in the selected language
- Auto-create a session id in the browser
- Save score + notes into `localStorage`
- Track recent interview momentum with a score trend chart and summary stats
- View the latest local sessions
- Persist the selected UI language in `localStorage` between reloads
Implementation:
- `web-ui/vite.config.ts` - Vite config
- `web-ui/src/App.tsx` - main UI
- `web-ui/src/components/ProgressChartCard.tsx` - scored session trend chart and KPI cards
- `web-ui/src/main.tsx` - frontend entry
- `web-ui/src/theme.ts` - MUI theme
- `web-ui/src/styles.css` - visual theme and layout styles
- `web-ui/src/lib/composePrompt.ts` - web-facing re-export of shared prompt composition logic
- `web-ui/src/lib/types.ts` - web-facing re-export of shared interview types
- `web-ui/src/lib/localSessions.ts` - `localStorage` persistence
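The `localStorage` persistence can be pictured as a read-modify-write cycle around one JSON-encoded key. A sketch of the idea (the key name and session fields are assumptions, and a `Map` stands in for `window.localStorage` so the snippet also runs outside a browser; the real code lives in `web-ui/src/lib/localSessions.ts`):

```typescript
// Sketch of localStorage session persistence: load the array, append,
// write it back. Key name and fields are assumptions. In the browser the
// storage object below would simply be window.localStorage.
interface StoredSession {
  id: string;
  date: string;
  templateId: string;
  score?: number;
  notes?: string;
}

const SESSIONS_KEY = "interview-sessions"; // assumed storage key

const memoryFallback = new Map<string, string>();
const storage = {
  getItem: (key: string): string | null => memoryFallback.get(key) ?? null,
  setItem: (key: string, value: string): void => {
    memoryFallback.set(key, value);
  },
};

function loadSessions(): StoredSession[] {
  const raw = storage.getItem(SESSIONS_KEY);
  return raw ? (JSON.parse(raw) as StoredSession[]) : [];
}

function saveSession(session: StoredSession): void {
  const sessions = loadSessions();
  sessions.push(session);
  storage.setItem(SESSIONS_KEY, JSON.stringify(sessions));
}
```

Keeping everything under one key makes the session list trivial to render and keeps the mode fully static, at the cost of rewriting the whole array on each save.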
Production build:
```
npm run web:build
npm run web:preview
```

Then open the preview URL printed in the terminal.

GitHub Pages build (repo path base):

```
VITE_BASE_PATH=/YOUR_REPO_NAME/ npm run web:build
```

The project now includes:
- Unit tests for shared prompt composition logic
- Unit tests for browser session persistence (`localStorage`)
- Integration tests for core React UI flows
- End-to-end tests in a real browser with Playwright
Run test suites:
```
npm run test
npm run test:watch
npm run test:ui
npm run test:coverage
```

Run e2e:

```
npx playwright install chromium
npm run e2e
npm run e2e:ui
```

Local checks:

- `npm run typecheck`
- `npm run lint`
- `npm run format:check`
- `npm run test`
- `npm run check` (runs all of the above in sequence)
Git hooks:
- `pre-commit` -> `lint-staged` (ESLint + Prettier on staged files)
- `commit-msg` -> `commitlint` with Conventional Commits rules
Workflows:
- `CI Quality Gate` (`.github/workflows/ci.yml`)
- `Deploy to GitHub Pages` (`.github/workflows/deploy-pages.yml`)
Pipeline logic:
- The quality gate runs on push and pull request for `main`
- Deploy runs automatically only after a successful `CI Quality Gate` on `main`
- Deploy can also be started manually with `workflow_dispatch`
Test files:
- `web-ui/src/lib/composePrompt.test.ts`
- `web-ui/src/lib/localSessions.test.ts`
- `web-ui/src/App.test.tsx`
- `e2e/app.spec.ts`
- Generate a prompt and create a session:

  ```
  npm run interview -- --template react-performance-profiling --level senior
  ```

- Copy the prompt into your interview chat with the LLM.
- After the interview, take the LLM's final score/notes.
- Save the result:

  ```
  npm run interview -- --record-eval --session-id <id> --score 8 --notes "Good depth, improve rollout strategy"
  ```

A recorded session looks like:

```json
{
  "id": "uuid",
  "date": "2026-02-26T12:20:55.985Z",
  "templateId": "react-performance-profiling",
  "level": "senior",
  "score": 8.5,
  "notes": "Strong architecture trade-offs"
}
```

Tech stack:

- TypeScript
- Node.js
- ts-node
- Vite
- Vitest
- React Testing Library
- Playwright
- Prettier
- JSON-driven configuration
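The session record shown earlier maps naturally onto a TypeScript type. A sketch (the canonical definitions live in `core/types.ts`; the optionality of `score` and `notes` is an assumption based on the CLI flow, where evaluation is recorded after the session is created):

```typescript
// Sketch of a session record type matching the JSON example in this README.
// The canonical types live in core/types.ts; field optionality is assumed.
interface EvaluatedSession {
  id: string; // UUID created when the session starts
  date: string; // ISO 8601 timestamp
  templateId: string;
  level: "junior" | "middle" | "senior";
  score?: number; // 0..10, filled in by --record-eval
  notes?: string; // free-form evaluator feedback
}

const example: EvaluatedSession = {
  id: "uuid",
  date: "2026-02-26T12:20:55.985Z",
  templateId: "react-performance-profiling",
  level: "senior",
  score: 8.5,
  notes: "Strong architecture trade-offs",
};
```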
