Welcome to the Chatfield project! This guide will help you get started with development, testing, and contributing to both the Python and TypeScript/JavaScript implementations.
- Project Overview
- Prerequisites
- Setting Up Development Environment
- API Keys and Environment Variables
- Running Test Suites
- Development Workflow
- Code Quality Tools
- Project Structure
- Debugging Tips
- Contributing Guidelines
## Project Overview

Chatfield is a dual-implementation library that transforms data collection from rigid forms into natural conversations powered by LLMs. It provides both Python (v1.0.0a2) and TypeScript/JavaScript (v1.0.0a2) implementations, with feature parity as the goal.
Core Features:
- LLM-powered conversational data collection
- Smart validation and transformation of responses
- LangGraph-based conversation orchestration
- Fluent builder pattern API
- Full TypeScript type safety
- React and CopilotKit integrations
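To give a feel for what a fluent builder pattern looks like, here is a generic, self-contained sketch. This is illustrative Python only — `FieldBuilder`, `desc`, and `must` are hypothetical names, not Chatfield's actual API; see the `examples/` directories for real usage.

```python
# Illustrative sketch of a fluent builder (hypothetical API, not Chatfield's).

class FieldBuilder:
    """Accumulates a field definition via chained calls."""

    def __init__(self, name: str):
        self._spec = {"name": name, "rules": []}

    def desc(self, text: str) -> "FieldBuilder":
        self._spec["description"] = text
        return self  # returning self is what enables chaining

    def must(self, rule: str) -> "FieldBuilder":
        self._spec["rules"].append(rule)
        return self

    def build(self) -> dict:
        return self._spec


field = (
    FieldBuilder("email")
    .desc("Your work email address")
    .must("be a valid email")
    .build()
)
print(field["name"], field["rules"])
```

Each chained call mutates the in-progress spec and returns the builder, so field definitions read top-to-bottom like a sentence.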
## Prerequisites

- Git
- Node.js 20.0.0+ (for TypeScript implementation)
- Python 3.8+ (for Python implementation)
- OpenAI API key (or other supported LLM provider)
- VSCode with Python and TypeScript extensions
- Docker (optional, for containerized development)
- Make (for Python development shortcuts)
## Setting Up Development Environment

### Python Setup

- Clone the repository:

```bash
git clone https://github.com/yourusername/chatfield.git
cd chatfield/Python
```

- Create and activate a virtual environment:

```bash
# Create virtual environment
python -m venv .venv

# Activate virtual environment
# On Linux/Mac:
source .venv/bin/activate
# On Windows:
.venv\Scripts\activate
```

- Install the package with development dependencies:

```bash
# Install in editable mode with all dev dependencies
pip install -e ".[dev]"

# Or use the Makefile shortcut:
make install-dev
```

- Set up pre-commit hooks (optional but recommended):

```bash
pre-commit install
```

### TypeScript Setup

- Navigate to the TypeScript directory:

```bash
cd chatfield/TypeScript
```

- Install dependencies:

```bash
npm install
```

- Build the project:

```bash
npm run build
```

- Set up watch mode for development:

```bash
npm run dev  # Watches for changes and rebuilds automatically
```

## API Keys and Environment Variables

Chatfield requires an OpenAI API key (or other supported LLM provider keys). You have three options for configuration:

### Option 1: Export an Environment Variable

```bash
export OPENAI_API_KEY="your-api-key-here"
```

### Option 2: Use a `.env` File

Create a `.env` file in the project root:
```bash
# /home/dev/src/Chatfield/.env
OPENAI_API_KEY=your-api-key-here

# Optional: LangSmith tracing configuration
LANGSMITH_TRACING=true
LANGSMITH_ENDPOINT=https://api.smith.langchain.com
LANGSMITH_PROJECT=chatfield
LANGSMITH_API_KEY=your-langsmith-api-key  # If you have a LangSmith account
```

### Option 3: Pass the Key Directly in Code

```python
# Python
from chatfield import Interviewer

interviewer = Interviewer(interview, api_key="your-api-key")
```

```typescript
// TypeScript
import { Interviewer } from '@chatfield/core'

const interviewer = new Interviewer(interview, { apiKey: "your-api-key" })
```

### Security Best Practices

- Add `.env` to `.gitignore` (should already be configured)
- Use `.env.example` for documenting required variables:

```bash
# Create .env.example with dummy values
OPENAI_API_KEY=your-api-key-here
LANGSMITH_API_KEY=your-langsmith-key-here
```

- For production, use secure secret management:
  - Environment variables from CI/CD
  - Secret management services (AWS Secrets Manager, Azure Key Vault, etc.)
  - Kubernetes secrets
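Whichever option you use, application code typically resolves the key from the environment and fails loudly when it is missing. A minimal stdlib-only sketch (the `resolve_api_key` helper is illustrative, not part of Chatfield's API):

```python
import os


def resolve_api_key(env_var: str = "OPENAI_API_KEY") -> str:
    """Return the API key from the environment, failing loudly if unset."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(
            f"{env_var} is not set; export it or add it to your .env file"
        )
    return key
```

Failing at startup with a clear message beats a cryptic authentication error deep inside an LLM call.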
## Running Test Suites

### Python Tests

The Python test suite uses pytest with pytest-describe for BDD-style test organization.
```bash
# Using pytest directly
python -m pytest

# Using Makefile
make test

# With coverage report
make test-cov
# HTML coverage report will be in htmlcov/index.html
```

```bash
# Run specific test file
python -m pytest tests/test_interview.py

# Run specific test by name pattern
python -m pytest -k "test_field_validation"

# Run tests in a describe block
python -m pytest tests/test_builder.py::describe_builder
```

```bash
# Skip tests that require API keys
python -m pytest -m "not requires_api_key"

# Skip slow tests
python -m pytest -m "not slow"

# Run only unit tests
python -m pytest -m "unit"

# Run only integration tests
python -m pytest -m "integration"
```
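Markers such as `slow` and `requires_api_key` are ordinary pytest marks applied as decorators (custom marker names are usually registered in the pytest configuration to avoid warnings). A sketch of how a test might be marked — the marker name follows the commands above, but check this project's pytest configuration for the authoritative list:

```python
import pytest


# `-m "not slow"` deselects tests carrying this mark at run time.
@pytest.mark.slow
def test_long_conversation():
    assert 1 + 1 == 2
```

Keeping expensive or API-dependent tests behind marks lets the default test run stay fast and offline.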
```bash
# Generate HTML coverage report
python -m pytest --cov=chatfield --cov-report=html

# Open coverage report
# Linux/Mac:
open htmlcov/index.html
# Windows:
start htmlcov/index.html
```

### TypeScript Tests

The TypeScript test suite uses Jest with a structure that mirrors the Python tests.
```bash
# Run all tests
npm test

# Run tests in watch mode (re-runs on file changes)
npm run test:watch

# Run with coverage
npm test -- --coverage
# Coverage report will be in coverage/lcov-report/index.html
```

```bash
# Run specific test file
npm test interview.test.ts

# Run tests matching a pattern
npm test -- --testNamePattern="field validation"

# Run integration tests only
npm test -- integration/
```

```bash
# Run tests with verbose output
npm test -- --verbose

# Debug with Node inspector
node --inspect-brk node_modules/.bin/jest --runInBand

# Run single test file in band (sequential)
npm test -- --runInBand interview.test.ts
```

## Development Workflow

### Python Commands

```bash
# Format code with Black and isort
make format

# Run linting checks
make lint

# Run type checking with mypy
make typecheck

# Run all checks (format, lint, typecheck, test)
make dev

# Build distribution packages
make build

# Clean build artifacts
make clean
```

### TypeScript Commands
```bash
# Build TypeScript to JavaScript
npm run build

# Watch mode (rebuilds on changes)
npm run dev

# Run linting
npm run lint

# Clean build directory
npm run clean

# Run minimal OpenAI test
npm run min
```

### Running Examples

```bash
# Python examples
cd Python/examples
python job_interview.py
python restaurant_order.py
python tech_request.py
python favorite_number.py
```

```bash
# TypeScript examples
cd TypeScript
npx tsx examples/basic-usage.ts
npx tsx examples/job-interview.ts
npx tsx examples/restaurant-order.ts
npx tsx examples/schema-based.ts
```

## Code Quality Tools

### Python Tools

- Black: Code formatter (line length: 100)
- isort: Import sorter
- flake8: Linting tool
- mypy: Static type checker
- pytest: Testing framework
- pytest-cov: Coverage reporting
Configuration files:

- `pyproject.toml`: Main Python project configuration
- `Makefile`: Development command shortcuts
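Formatter settings like Black's 100-character line length would typically live in `pyproject.toml` along these lines — a sketch only; the exact options in this repository may differ:

```toml
[tool.black]
line-length = 100

[tool.isort]
profile = "black"
line_length = 100
```

Giving isort the `black` profile keeps the two tools from fighting over import formatting.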
### TypeScript Tools

- ESLint: Linting and code quality
- TypeScript: Strict mode with all checks enabled
- Jest: Testing framework
- ts-jest: TypeScript support for Jest
Configuration files:

- `package.json`: Node.js project configuration
- `tsconfig.json`: TypeScript compiler configuration
- `jest.config.js`: Jest testing configuration
## Project Structure

```
Chatfield/
├── Documentation/              # Project-wide documentation
│   └── TEST_HARMONIZATION.md   # Test synchronization guide
├── Python/                     # Python implementation (v1.0.0a2)
│   ├── chatfield/              # Core package
│   ├── tests/                  # Test suite (pytest-describe)
│   ├── examples/               # Usage examples
│   ├── evals/                  # Security evaluation suite
│   ├── Makefile                # Development shortcuts
│   └── pyproject.toml          # Package configuration
└── TypeScript/                 # TypeScript implementation (v1.0.0a2)
    ├── src/                    # Source code
    ├── tests/                  # Test suite (Jest)
    ├── examples/               # Usage examples
    ├── package.json            # Package configuration
    └── tsconfig.json           # TypeScript configuration
```
## Debugging Tips

### Python Debugging

```python
# Enable debug logging
import logging
logging.basicConfig(level=logging.DEBUG)

# Inspect Interview structure
print(interview._chatfield)

# Check LangGraph state
print(interviewer.graph.get_state())
```

### TypeScript Debugging

```typescript
// Enable LangSmith tracing
process.env.LANGCHAIN_TRACING_V2 = "true"

// Inspect Interview state
console.log(interview._chatfield)

// Debug in VSCode
// Add breakpoints and use Debug > Start Debugging
```

### Common Issues

Python:

- ImportError: Ensure the package is installed: `pip install -e .`
- Missing API key: Set the `OPENAI_API_KEY` environment variable
- Slow tests: Mark with `@pytest.mark.slow` and skip with `-m "not slow"`

TypeScript:

- Module not found: Run `npm install` and `npm run build`
- Type errors: Check `tsconfig.json` and ensure strict mode settings
- Test timeout: Increase the timeout with `jest.setTimeout(10000)`
## Contributing Guidelines

- Check existing issues on GitHub
- Read the CLAUDE.md files for implementation details
- Ensure tests pass before making changes
- Maintain feature parity between Python and TypeScript
- Follow existing patterns and conventions
- Write tests for new features using BDD style
- Update documentation when adding features
- Use descriptive commit messages
### Testing Standards

- Test names must match between Python and TypeScript implementations
- Use BDD-style organization (describe/it blocks)
- Write both unit and integration tests
- Mock external dependencies for unit tests
- Mark API-dependent tests appropriately
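Mocking at the LLM boundary is what keeps unit tests deterministic and offline. A stdlib-only sketch using `unittest.mock` — the `collect_name` function and its `complete` method are hypothetical stand-ins for illustration, not Chatfield internals:

```python
from unittest.mock import Mock


def collect_name(llm) -> str:
    """Toy unit under test: ask the LLM a question and return its reply."""
    return llm.complete("What is your name?")


def test_collect_name_uses_llm():
    # The fake LLM returns a canned reply, so no API key or network is needed.
    fake_llm = Mock()
    fake_llm.complete.return_value = "Ada"

    assert collect_name(fake_llm) == "Ada"
    fake_llm.complete.assert_called_once_with("What is your name?")


test_collect_name_uses_llm()
```

The real client is only exercised in integration tests marked appropriately, as described above.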
### Python Code Style

- Use Black formatter (100 char line limit)
- Follow PEP 8 with Black's modifications
- Use type hints where appropriate
- Write docstrings for public methods
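A short example of the expected style — type hints plus a docstring on a public function (illustrative code, not taken from the codebase):

```python
def normalize_answer(raw: str, *, strip_punctuation: bool = False) -> str:
    """Normalize a user's raw answer for validation.

    Args:
        raw: The text as typed by the user.
        strip_punctuation: Drop trailing punctuation when True.

    Returns:
        The cleaned answer string.
    """
    # Collapse runs of whitespace and trim the ends.
    cleaned = " ".join(raw.split())
    if strip_punctuation:
        cleaned = cleaned.rstrip(".!?")
    return cleaned
```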
### TypeScript Code Style

- Follow ESLint configuration
- Use strict TypeScript settings
- Provide full type coverage
- Document with JSDoc comments
### Pull Request Process

- Fork the repository
- Create a feature branch: `git checkout -b feature/your-feature`
- Make your changes with appropriate tests
- Run all checks:
  - Python: `make dev`
  - TypeScript: `npm run lint && npm test`
- Commit with clear messages: `git commit -m "Add feature: description"`
- Push to your fork: `git push origin feature/your-feature`
- Create a Pull Request with:
  - Clear description of changes
  - Link to related issues
  - Test results/coverage
### Cross-Implementation Changes

When making changes that affect the API:

- Update both Python and TypeScript implementations
- Ensure test parity (see `Documentation/TEST_HARMONIZATION.md`)
- Update version numbers in both `pyproject.toml` and `package.json`
## Documentation

- Main `CLAUDE.md`: Project overview and architecture
- `Python/CLAUDE.md`: Python-specific implementation details
- `TypeScript/CLAUDE.md`: TypeScript-specific implementation details
- `Documentation/TEST_HARMONIZATION.md`: Test synchronization guide
## External Resources

- LangGraph Documentation
- LangChain Documentation
- OpenAI API Documentation
- Jest Testing Documentation
- Pytest Documentation
## Community

- GitHub Issues: Report bugs or request features
- Discussions: Ask questions and share ideas
- YouTube Streams: Watch development in progress
## Quick Reference

```bash
# Python
cd Python
source .venv/bin/activate    # Activate virtual environment
pip install -e ".[dev]"      # Install with dev dependencies
make dev                     # Run all checks
python -m pytest             # Run tests
make test-cov                # Run tests with coverage

# TypeScript
cd TypeScript
npm install                  # Install dependencies
npm run dev                  # Watch mode
npm test                     # Run tests
npm test -- --coverage       # Run with coverage
npm run build                # Build project

# Both
export OPENAI_API_KEY="..."  # Set API key
```

Happy coding! 🚀