Copilot-for-Consensus

License: MIT | Python 3.11+ | Docker | Code style: black

An open-source AI assistant that ingests mailing list discussions, summarizes threads, and surfaces consensus for technical working groups.

📚 Documentation | 🚀 Quick Start | 🏗️ Architecture | 🤝 Contributing | 📋 Governance



Overview

Copilot-for-Consensus is designed to scale institutional memory and accelerate decision-making in technical communities like IETF working groups. It uses LLM-powered summarization and insight extraction to help participants keep up with mailing list traffic, track draft evolution, and identify consensus or dissent.

This project aims to be:

  • Containerized for easy deployment
  • Microservice-based for modularity and scalability
  • Deployable locally using lightweight open-source LLMs or in Azure Cloud for enterprise-scale workloads
  • Built primarily in Python for accessibility and community contribution
  • Production-ready with comprehensive observability, error handling, and testing

Key Features

Core Capabilities

  • Mailing List Ingestion: Fetch archives via rsync, IMAP, or HTTP from multiple sources
  • Parsing & Normalization: Extract structured data from .mbox files with thread detection and RFC/draft mention tracking
  • Semantic Chunking: Token-aware splitting with semantic coherence for optimal embedding (see the sketch after this list)
  • Vector Search: Fast similarity search using Qdrant with configurable backends (FAISS, in-memory)
  • LLM-Powered Summarization: Extractive + abstractive summaries with configurable backends:
    • Local: Ollama (Mistral, Llama 2, etc.) for fully offline operation
    • Cloud: Azure OpenAI, OpenAI API for production scale
    • Alternative: llama.cpp with AMD GPU support
  • Consensus Detection: Identify agreement/dissent signals in threads (in development)
  • Draft Tracking: Monitor mentions and evolution of RFC drafts (in development)
  • Transparency: Inline citations linking summaries to original messages
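
To make the chunking idea concrete, here is a minimal sketch of token-aware splitting with overlap. It is illustrative only: it uses whitespace tokens in place of a real tokenizer and fixed overlap in place of semantic boundary detection, and `chunk_text` is not the repository's API.

```python
# Illustrative token-aware chunking with overlap; not the repository's
# chunking service (which uses a real tokenizer and semantic boundaries).
def chunk_text(text: str, max_tokens: int = 256, overlap: int = 32) -> list[str]:
    tokens = text.split()  # stand-in for a model tokenizer
    chunks: list[str] = []
    step = max_tokens - overlap
    for start in range(0, len(tokens), step):
        window = tokens[start:start + max_tokens]
        if not window:
            break
        chunks.append(" ".join(window))
        if start + max_tokens >= len(tokens):
            break
    return chunks

if __name__ == "__main__":
    sample = "token " * 600
    print([len(c.split()) for c in chunk_text(sample)])  # [256, 256, 152]
```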

Production Features

  • Event-Driven Architecture: Asynchronous message bus (RabbitMQ) for loose coupling (see the publishing sketch after this list)
  • Observability Stack: Prometheus metrics, Grafana dashboards, Loki logging, and Promtail log aggregation
  • Error Handling: Retry policies, failed queue management, and centralized error reporting
  • Idempotency: All operations are idempotent with deduplication support
  • GPU Acceleration: Optional NVIDIA (Ollama) or AMD (llama.cpp) GPU support for 10-100x faster inference
  • Schema Validation: JSON schema validation for all messages and events
  • Health Checks: Comprehensive health checks for all services
  • TLS/HTTPS Support: API Gateway supports TLS with configurable certificates for secure communication
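
A minimal sketch of what an idempotent, schema-validated publish can look like with pika and jsonschema. The queue name, schema, and event fields below are assumptions for illustration; the project's actual contracts live in copilot_message_bus and copilot_schema_validation.

```python
# Sketch: validate an event against a JSON schema, then publish it with a
# stable message_id so consumers can deduplicate. Queue name, schema, and
# fields are illustrative, not the project's real contracts.
import json
import uuid

import jsonschema
import pika

EVENT_SCHEMA = {
    "type": "object",
    "required": ["event_id", "type", "archive_path"],
    "properties": {
        "event_id": {"type": "string"},
        "type": {"type": "string"},
        "archive_path": {"type": "string"},
    },
}

event = {
    "event_id": str(uuid.uuid4()),  # reused on retries, so consumers dedupe
    "type": "archive.fetched",
    "archive_path": "/data/archives/example.mbox",
}
jsonschema.validate(instance=event, schema=EVENT_SCHEMA)

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="parsing.requests", durable=True)
channel.basic_publish(
    exchange="",
    routing_key="parsing.requests",
    body=json.dumps(event),
    properties=pika.BasicProperties(
        message_id=event["event_id"],
        content_type="application/json",
        delivery_mode=2,  # persistent
    ),
)
connection.close()
```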

Long-Term Vision

Beyond summarization, Copilot-for-Consensus will evolve into an interactive subject matter expert that:

  • Understands RFCs and mailing list history for deep contextual answers
  • Provides semantic search and Q&A across technical archives
  • Supports multi-modal knowledge (text, diagrams, code snippets)
  • Offers real-time collaboration tools for chairs and contributors
  • Integrates with standards governance workflows for better decision tracking

Architecture

The system follows a microservice-based, event-driven architecture where services communicate asynchronously through a message bus (RabbitMQ) and store data in MongoDB and Qdrant. This design ensures loose coupling, scalability, and resilience.

For detailed architecture documentation, design patterns, and service interactions, see docs/architecture/overview.md.

Cloud-Agnostic API Gateway

The API Gateway provides a unified entry point for all services with multi-cloud portability:

  • Local Development: NGINX-based gateway for single-machine deployments (default)
  • Cloud Deployments: Native gateway support for Azure (APIM), AWS (API Gateway), and GCP (Cloud Endpoints)
  • Single Source of Truth: OpenAPI 3.0 specification drives all gateway configurations
  • Automated Generation: CLI tool transforms OpenAPI spec into provider-specific configs

See Gateway Documentation for architecture, deployment guides, and how to extend to new providers.
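
To illustrate the single-source-of-truth idea (this is not the repository's actual CLI), a few lines of Python can walk an OpenAPI document and emit provider-specific stanzas; here, NGINX location blocks, assuming the first path segment names the backing service.

```python
# Toy generator: OpenAPI paths -> NGINX location blocks. Illustrates the
# "spec drives all gateway configs" idea only; the repository's CLI and
# its conventions are documented separately.
import yaml  # pip install pyyaml

def nginx_locations(spec_path: str) -> str:
    with open(spec_path) as f:
        spec = yaml.safe_load(f)
    blocks = []
    for path in spec.get("paths", {}):
        service = path.strip("/").split("/")[0] or "root"  # assumed convention
        blocks.append(f"location /{service}/ {{ proxy_pass http://{service}:8080; }}")
    return "\n".join(sorted(set(blocks)))

if __name__ == "__main__":
    print(nginx_locations("openapi.yaml"))
```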

Services Overview

| Service | Purpose | Port(s) | Status | Auth |
|---------|---------|---------|--------|------|
| **Processing Pipeline** | | | | |
| Ingestion | Fetches mailing list archives from remote sources | 8001 (localhost) | Production | Admin |
| Parsing | Extracts and normalizes email messages from archives | - | Production | Processor |
| Chunking | Splits messages into semantic chunks for embedding | - | Production | Processor |
| Embedding | Generates vector embeddings for semantic search | - | Production | Processor |
| Orchestrator | Coordinates RAG workflow and summarization | - | Production | Orchestrator |
| Summarization | Creates summaries using configurable LLM backends | - | Production | Processor |
| **User-Facing** | | | | |
| API Gateway | Reverse proxy unifying service endpoints | 443 (HTTPS, public) | Production | - |
| Reporting API | HTTP API for accessing summaries and insights | via 443 (/reporting) | Production | Reader |
| Web UI | React SPA for viewing reports | via 443 (/ui) | Production | Public |
| Auth Service | OIDC authentication with local JWT minting | via 443 (/auth) | Production | Public |
| **Infrastructure** | | | | |
| MongoDB | Document storage for messages and summaries | 27017 (localhost) | Production | - |
| Qdrant | Vector database for semantic search | 6333 (localhost) | Production | - |
| RabbitMQ | Message broker for event-driven communication | 5672, 15672 (localhost) | Production | - |
| Ollama | Local LLM runtime (offline capable) | 11434 (localhost) | Production | - |
| llama.cpp | Alternative LLM runtime with AMD GPU support | 8081 (localhost) | Optional | - |
| **Observability** | | | | |
| Prometheus | Metrics collection and aggregation | 9090 (localhost) | Production | - |
| Grafana | Monitoring dashboards and visualization | via 8080 (/grafana) | Production | - |
| Loki | Log aggregation | 3100 (localhost) | Production | - |
| Promtail | Log scraping from Docker containers | - | Production | - |
| Pushgateway | Metrics push gateway for batch jobs | - | Production | - |

Note: Services marked as "public" (0.0.0.0) are accessible from outside the host. All other services are bound to localhost (127.0.0.1) for security. See docs/operations/exposed-ports.md for security details.

Microservices

Processing Pipeline

  • Ingestion Service: Fetches mailing list archives from various sources (rsync, IMAP, HTTP)
  • Parsing Service: Extracts and normalizes email messages from .mbox files, identifies threads, detects RFC/draft mentions
  • Chunking Service: Splits messages into token-aware, semantically coherent chunks suitable for embedding
  • Embedding Service: Generates vector embeddings using local (SentenceTransformers, Ollama) or cloud (Azure OpenAI) models
  • Orchestrator Service: Coordinates workflow across services, manages retrieval-augmented generation (RAG)
  • Summarization Service: Creates summaries using LLMs with configurable backends (OpenAI, Azure OpenAI, Ollama, llama.cpp)

User-Facing Services

  • Reporting Service: Provides HTTP API for accessing summaries and insights (port 8080)
  • Web UI: React SPA for browsing reports and insights (port 8084)
  • Auth Service: OIDC authentication with local JWT token minting (port 8090)
    • Supports GitHub, Google, and Microsoft authentication
    • Issues service-scoped JWTs with custom claims
    • Provides JWKS endpoint for distributed token validation (see the validation sketch after this list)
    • See auth/README.md for details
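
As an illustration of distributed token validation, a downstream service can verify a JWT against the auth service's JWKS endpoint with PyJWT. The JWKS path, algorithm, and audience below are assumptions; consult auth/README.md for the actual values.

```python
# Sketch: verify a service-scoped JWT via the JWKS endpoint using PyJWT.
# The URL, algorithm, and audience are assumptions for illustration.
import jwt  # pip install "PyJWT[crypto]"

JWKS_URL = "http://localhost:8080/auth/.well-known/jwks.json"  # assumed path

def validate_token(token: str) -> dict:
    jwks_client = jwt.PyJWKClient(JWKS_URL)
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],  # assumed signing algorithm
        audience="reporting",  # assumed service-scoped audience claim
    )
```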

Infrastructure Components

Storage Layer

  • MongoDB (documentdb): Document storage for messages, chunks, threads, and summaries
  • Qdrant (vectorstore): Vector database for semantic search and similarity queries

Integration Layer

  • RabbitMQ (messagebus): Message broker enabling asynchronous, event-driven communication between services
  • Ollama: Local LLM runtime for embeddings and text generation (fully offline capable)
  • llama.cpp (optional): Alternative local LLM runtime with AMD GPU support (Vulkan/ROCm) - see AMD GPU Setup Guide

Observability Stack

The system includes a production-ready observability stack with comprehensive metrics, logging, alerting, and tracing capabilities. See the Observability RFC for complete details.

Metrics (Prometheus + Grafana)

  • Prometheus (port 9090) scrapes metrics from all services
  • Grafana provides visualization dashboards accessible via http://localhost:8080/grafana/
  • 57 alert rules covering service health, queue lag, latency SLOs, error rates, and resource limits
  • Pre-configured dashboards for:
    • System health and service uptime
    • Service metrics (latency P95/P99, throughput, error rates)
    • Queue status (depth, age, consumer count)
    • Document processing pipeline
    • Resource usage (CPU, memory, disk, network)
    • Failed queue monitoring
    • MongoDB, RabbitMQ, Qdrant status

Access Grafana via the Gateway at http://localhost:8080/grafana/ (default credentials: admin/admin).

For Developers: See Metrics Integration Guide for adding metrics to services.
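
For a taste of what instrumentation looks like, here is a minimal sketch using prometheus_client; the metric and label names are illustrative, and the guide defines the project's actual conventions.

```python
# Sketch: expose a counter and a latency histogram with prometheus_client.
# Metric and label names are illustrative only.
from prometheus_client import Counter, Histogram, start_http_server

MESSAGES_PROCESSED = Counter(
    "messages_processed_total", "Messages processed", ["service"]
)
PROCESS_LATENCY = Histogram(
    "message_process_seconds", "Time spent processing one message"
)

@PROCESS_LATENCY.time()
def process(message: str) -> None:
    MESSAGES_PROCESSED.labels(service="parsing").inc()

if __name__ == "__main__":
    start_http_server(8000)  # scrape target; port is an assumption
    process("example")
```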

Logging (Loki + Promtail)

  • Loki aggregates logs from all services on port 3100
  • Promtail scrapes Docker container logs automatically
  • Structured JSON logging with trace correlation (a minimal sketch follows this list)
  • Logs are labeled by service, container, and level
  • Query logs through Grafana's Explore interface
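
A standard-library sketch of the structured logging format described above; the exact field set and trace propagation in copilot_logging may differ.

```python
# Sketch: JSON-formatted logs with a trace_id field, stdlib only.
# Field names mirror the labels above but are illustrative.
import json
import logging

class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "service": getattr(record, "service", "unknown"),
            "trace_id": getattr(record, "trace_id", None),
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("parsing")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("parsed archive", extra={"service": "parsing", "trace_id": "abc123"})
```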

Alerting and Runbooks

  • Prometheus Alertmanager evaluates alert rules every 60 seconds
  • Alert severity levels: Info → Warning → Error → Critical → Emergency
  • Operator Runbooks with diagnosis, resolution, and escalation procedures
  • SLO-based alerts for latency (P95/P99), error rate, and queue lag

View active alerts: http://localhost:9090/alerts

Failed Queue Management

  • Failed queues capture messages that fail after retry exhaustion
  • Automated alerts for queue buildup (Warning >50, Critical >200, Emergency >1000)
  • CLI tool for inspection, requeue, and purge operations: scripts/manage_failed_queues.py (a minimal programmatic sketch follows this list)
  • Dedicated Grafana dashboard: Failed Queues Overview
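
For a sense of what such an inspection does under the hood, a few lines of pika can read a queue's depth without consuming it. The queue name here is an assumption; the CLI above remains the supported tool.

```python
# Sketch: read a failed queue's depth without consuming messages.
# "parsing.failed" is an assumed queue name; use the CLI for real work.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
# passive=True inspects only; it raises if the queue does not exist.
result = channel.queue_declare(queue="parsing.failed", passive=True)
depth = result.method.message_count
print(f"parsing.failed depth: {depth}")
if depth > 50:
    print("above the Warning threshold (>50)")
connection.close()
```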


Adapters

The system uses adapter modules to decouple core business logic from external dependencies:

  • copilot_archive_fetcher: Fetches archives from remote sources
  • copilot_archive_store: Archive storage abstraction
  • copilot_auth: Authentication and authorization
  • copilot_chunking: Text chunking algorithms
  • copilot_config: Unified configuration management with schema validation
  • copilot_consensus: Consensus detection logic
  • copilot_draft_diff: RFC draft difference tracking
  • copilot_embedding: Embedding generation abstraction
  • copilot_message_bus: Event publishing, subscription, and schema validation
  • copilot_logging: Structured logging
  • copilot_metrics: Metrics collection (Prometheus)
  • copilot_error_reporting: Error reporting
  • copilot_schema_validation: JSON schema validation for messages and events
  • copilot_startup: Service startup coordination
  • copilot_storage: Document store abstraction (MongoDB, in-memory)
  • copilot_summarization: Summarization logic abstraction
  • copilot_vectorstore: Vector store abstraction (Qdrant, FAISS)

See adapters/README.md for detailed adapter documentation.
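
The pattern itself is small enough to sketch: core code depends on an abstract interface, and each backend implements it. The method names below are illustrative; the real interfaces live in the adapter packages.

```python
# The adapter pattern in miniature. Method names are illustrative;
# see adapters/README.md for the actual interfaces.
from abc import ABC, abstractmethod

class DocumentStore(ABC):
    """Only what the pipeline needs from a document store."""

    @abstractmethod
    def put(self, collection: str, doc: dict) -> str: ...

    @abstractmethod
    def get(self, collection: str, doc_id: str) -> dict | None: ...

class InMemoryStore(DocumentStore):
    """A test double, analogous to copilot_storage's in-memory backend."""

    def __init__(self) -> None:
        self._data: dict[str, dict[str, dict]] = {}

    def put(self, collection: str, doc: dict) -> str:
        doc_id = doc["message_id"]
        self._data.setdefault(collection, {})[doc_id] = doc
        return doc_id

    def get(self, collection: str, doc_id: str) -> dict | None:
        return self._data.get(collection, {}).get(doc_id)
```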


Quick Start

For detailed local development setup, see docs/LOCAL_DEVELOPMENT.md.

Prerequisites

  • Docker and Docker Compose
  • 8GB+ RAM recommended
  • 20GB+ disk space for models and data

Running the Stack

Using Pre-built Docker Images

Pre-built Docker images are automatically published to GitHub Container Registry (GHCR) on every successful CI run on the main branch. Images are tagged with:

  • latest: Most recent build from main
  • <commit-sha>: Specific commit SHA for reproducible deployments

To use pre-built images, update your docker-compose.yml to reference GHCR images:

services:
  parsing:
    image: ghcr.io/alan-jowett/copilot-for-consensus/parsing:latest
  chunking:
    image: ghcr.io/alan-jowett/copilot-for-consensus/chunking:latest
  # ... and so on for other services

Building Locally

  1. Clone the repository:
git clone https://github.com/Alan-Jowett/CoPilot-For-Consensus.git
cd CoPilot-For-Consensus
  2. Start all services:
docker compose up -d

Note: By default, Docker Compose uses docker-compose.yml together with docker-compose.services.yml and will build images locally from the Dockerfiles. To use pre-built images from GHCR instead of building locally, either update the image: references in docker-compose.services.yml as shown above, or (recommended) create a docker-compose.override.yml that overrides the relevant services with your desired image: tags.

  3. Initialize the database:
# Database initialization runs automatically via db-init service
docker compose logs db-init
  4. Access the services:

For the full list of exposed ports and security considerations, see docs/operations/exposed-ports.md.

Note: The Mistral LLM model is automatically downloaded on first startup via the ollama-model-loader service when using the default Ollama backend. This may take several minutes depending on your internet connection. Models are stored in the ollama_models Docker volume for persistence.

  5. (Optional) Enable GPU acceleration for 10-100x faster inference:

    • NVIDIA GPU (recommended): See documents/OLLAMA_GPU_SETUP.md
      • Requires NVIDIA GPU with drivers and nvidia-container-toolkit
      • Verify GPU support:
        • Linux/macOS/WSL2: ./scripts/check_ollama_gpu.sh
        • Windows PowerShell: .\scripts\check_ollama_gpu.ps1
        • Or directly: docker exec ollama nvidia-smi
    • AMD GPU (experimental): See AMD GPU Setup Guide to enable llama.cpp with Vulkan/ROCm
  6. Run ingestion to process test data:

    Option A: Using test fixtures (recommended for first-time users):

    # Start the continuously running ingestion service (exposes REST API on 8001)
    docker compose up -d ingestion
    
    # Copy the sample mailbox into the running container
    INGESTION_CONTAINER=$(docker compose ps -q ingestion)
    docker exec "$INGESTION_CONTAINER" mkdir -p /tmp/test-mailbox
    docker cp tests/fixtures/mailbox_sample/test-archive.mbox "$INGESTION_CONTAINER":/tmp/test-mailbox/test-archive.mbox
    
     # Create the source via REST API
     curl -f -X POST http://localhost:8080/ingestion/api/sources \
       -H "Content-Type: application/json" \
       -d '{"name":"test-mailbox","source_type":"local","url":"/tmp/test-mailbox/test-archive.mbox","enabled":true}'

     # Trigger ingestion via REST API
     curl -f -X POST http://localhost:8080/ingestion/api/sources/test-mailbox/trigger

    Option B: Using PowerShell helper (Windows):

     .\run_ingestion_test.ps1

After ingestion completes, summaries will be available via the Reporting API at http://localhost:8080/reporting/api/reports.

Viewing Logs

View logs for all services:

docker compose logs -f

View logs for a specific service:

docker compose logs -f parsing

Query centralized logs in Grafana:

  1. Open Grafana via the Gateway at http://localhost:8080/grafana/
  2. Navigate to Explore
  3. Select "Loki" datasource
  4. Query: {service="parsing"} to see parsing service logs
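
Logs can also be pulled programmatically from Loki's HTTP API (port 3100, per the table above); a minimal sketch with the requests library:

```python
# Sketch: query Loki's HTTP API directly instead of going through Grafana.
import requests

resp = requests.get(
    "http://localhost:3100/loki/api/v1/query_range",
    params={"query": '{service="parsing"}', "limit": 20},
)
resp.raise_for_status()
for stream in resp.json()["data"]["result"]:
    for _timestamp, line in stream["values"]:
        print(line)
```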

Troubleshooting

Services won't start:

  • Ensure Docker has at least 8GB RAM allocated
  • Check for port conflicts:
    • Linux/macOS: netstat -tuln | grep -E '(3000|8080|27017|5672|6333)'
    • Windows PowerShell: Get-NetTCPConnection -LocalPort 3000,8080,27017,5672,6333 -ErrorAction SilentlyContinue
    • Or check service status: docker compose ps
  • View service logs: docker compose logs <service-name>

Ollama model pull fails:

  • Wait for ollama service to be healthy: docker compose ps ollama
  • Check connectivity: docker compose exec ollama ollama list
  • Retry: The ollama-model-loader service retries up to 5 times

Database connection errors:

  • Verify MongoDB is healthy: docker compose ps documentdb
  • Check credentials match .env file
  • Run smoke test (see Database Smoke Testing section below)

RabbitMQ connection errors:

  • Verify RabbitMQ is healthy: docker compose ps messagebus
  • Check management UI: http://localhost:15672 (guest/guest)

For more troubleshooting, see docs/LOCAL_DEVELOPMENT.md.

Authentication Setup

The system includes an authentication service that supports GitHub, Google, and Microsoft login providers. By default, only the providers you configure will be available.

To enable authentication providers:

  1. Generate JWT signing keys (required for all providers):

    python auth/generate_keys.py
    # This creates secrets/jwt_private_key and secrets/jwt_public_key
  2. Configure provider credentials in the ./secrets/ directory:

    For GitHub:

    • Create an OAuth app at https://github.com/settings/developers
    • Set callback URL: http://localhost:8080/ui/callback
    • Store credentials:
      1. Copy the example files:
        cp secrets/github_oauth_client_id.example secrets/github_oauth_client_id
        cp secrets/github_oauth_client_secret.example secrets/github_oauth_client_secret
      2. Edit the copied files and replace the placeholder values with your actual GitHub OAuth credentials
      3. Save the files

    For Google:

    • Create OAuth credentials at https://console.cloud.google.com/
    • Set authorized redirect URI: http://localhost:8080/ui/callback
    • Store credentials:
      1. Copy the example files:
        cp secrets/google_oauth_client_id.example secrets/google_oauth_client_id
        cp secrets/google_oauth_client_secret.example secrets/google_oauth_client_secret
      2. Edit the copied files and replace the placeholder values with your actual Google OAuth credentials
      3. Save the files

    For Microsoft:

    • Local Development: Create an app registration at https://entra.microsoft.com/

      • Set redirect URI: http://localhost:8080/ui/callback
      • Store credentials:
        1. Copy the example files:
          cp secrets/microsoft_oauth_client_id.example secrets/microsoft_oauth_client_id
          cp secrets/microsoft_oauth_client_secret.example secrets/microsoft_oauth_client_secret
        2. Edit the copied files and replace the placeholder values with your actual Microsoft OAuth credentials
        3. Save the files
    • Azure Deployment (Automated): When deploying to Azure, Microsoft OAuth can be automatically configured. See infra/azure/ENTRA_APP_AUTOMATION.md for details. The Bicep template will:

      • Automatically create the Entra app registration
      • Configure redirect URIs based on your deployed gateway
      • Generate and store client secrets in Azure Key Vault
      • Wire credentials to the auth service via managed identity

    Note: On Windows, use Copy-Item instead of cp in PowerShell.

  3. Restart the auth service to pick up the new credentials:

    docker compose restart auth
  4. Verify provider availability:

    curl http://localhost:8080/auth/providers

For detailed setup instructions, including production deployment with HTTPS, see auth/README.md.

Security Note - First User Admin Access:

  • By default, auto-promotion of the first user to admin is disabled for security
  • For production deployments, the initial admin must be created in a controlled manner:
    • Perform initial setup in a strictly isolated environment (private network, maintenance window)
    • Temporarily enable auto-promotion (AUTH_FIRST_USER_AUTO_PROMOTION_ENABLED=true)
    • Have the intended administrator authenticate once to receive admin role
    • Immediately disable auto-promotion (AUTH_FIRST_USER_AUTO_PROMOTION_ENABLED=false) and restart before exposing to untrusted users
  • A dedicated bootstrap-token mechanism is planned but not yet implemented

Note: If you don't configure any providers, the login page will show buttons but clicking them will return an error indicating the provider is not configured.

Demo vs Production Setup

Current setup is optimized for local development and integration testing:

  • Uses local Ollama for LLM inference (no API keys needed)
  • All services run in Docker Compose on a single host
  • A single-node RabbitMQ broker and local container storage
  • No authentication or TLS on most services

For production deployments, consider:

  • LLM Backend: Use Azure OpenAI or OpenAI API for better performance and reliability
    • Set LLM_BACKEND=azure and configure AZURE_OPENAI_KEY and AZURE_OPENAI_ENDPOINT
  • Message Queue: Use managed RabbitMQ (e.g., CloudAMQP) or Azure Service Bus for durability
  • Storage: Use Azure Cosmos DB or managed MongoDB for high availability
  • Vector Store: Use Qdrant in Azure Container Apps (default, ~$8-15/month) or Azure AI Search (optional, ~$74/month)
  • Observability: Use Azure Monitor, Datadog, or New Relic for production monitoring
  • Security: Enable TLS, authentication, and network policies (see SECURITY.md)
  • Scaling: Deploy services independently with Kubernetes or Azure Container Apps

Azure Deployment:

  • We provide an Azure Resource Manager (ARM) template for automated deployment to Azure
  • All services deploy as Azure Container Apps with managed identities
  • Azure-optimized Docker images reduce image size by ~70% (from ~8.7GB to ~2.7GB total)
    • Excludes local LLM models (uses Azure OpenAI instead)
    • Uses lightweight base images (python:3.11-slim vs pytorch/pytorch)
    • See docs/AZURE_OPTIMIZED_IMAGES.md for details
  • See infra/azure/README.md for complete deployment guide
  • One-click deployment with proper RBAC, networking, and observability

See docs/architecture/overview.md for detailed production architecture guidance.


Database Smoke Testing

A simple smoke test script is available to verify MongoDB connectivity and schema acceptance:

# Run from within a MongoDB container (after stack is up)
docker compose exec documentdb mongosh \
  "mongodb://${DOC_DB_ADMIN_USERNAME:-admin}:${DOC_DB_ADMIN_PASSWORD:-PLEASE_CHANGE_ME}@localhost:27017/admin" \
  /test/test_insert.js

# Or from the host (if mongosh is installed locally)
mongosh "mongodb://${DOC_DB_ADMIN_USERNAME:-admin}:${DOC_DB_ADMIN_PASSWORD:-PLEASE_CHANGE_ME}@localhost:27017/admin" \
  ./infra/test/test_insert.js

This script:

  • Inserts a minimal messages document with required fields
  • Verifies the insert was acknowledged by MongoDB
  • Prints success confirmation with the inserted message_id
  • Helps isolate connection vs. schema issues during troubleshooting

Note: The test inserts a document with message_id: "smoke-test-message-001". You may want to clean it up after testing:

docker compose exec documentdb mongosh \
  -u ${DOC_DB_ADMIN_USERNAME:-admin} \
  -p ${DOC_DB_ADMIN_PASSWORD:-PLEASE_CHANGE_ME} \
  --authenticationDatabase admin \
  copilot \
  --eval 'db.messages.deleteOne({message_id: "smoke-test-message-001"})'
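
The equivalent check can be run from Python with pymongo, assuming the default placeholder credentials; note that the real messages collection may enforce a schema requiring more fields than this minimal document.

```python
# Sketch: connectivity smoke test with pymongo. Credentials are the
# compose defaults (placeholders); the collection's schema validation
# may require more fields than shown here.
from pymongo import MongoClient

client = MongoClient("mongodb://admin:PLEASE_CHANGE_ME@localhost:27017/admin")
db = client["copilot"]

result = db.messages.insert_one({"message_id": "smoke-test-message-001"})
print("acknowledged:", result.acknowledged, "id:", result.inserted_id)

# Clean up the test document.
db.messages.delete_one({"message_id": "smoke-test-message-001"})
```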

Documentation

Comprehensive documentation is available throughout the repository, organized into core documentation, technical documentation, gateway documentation, and development guides. Each microservice has a comprehensive README, and the adapters are documented in adapters/README.md.


Development

Local Development Setup

For detailed local development instructions, see docs/LOCAL_DEVELOPMENT.md.

Quick setup:

  1. Clone the repository
  2. Install pre-commit hooks: pip install pre-commit && pre-commit install
  3. Start services: docker compose up -d
  4. Run tests: See docs/TESTING_STRATEGY.md

Pre-commit Hooks

This project uses pre-commit hooks to enforce code quality and license headers:

# Install pre-commit
pip install pre-commit

# Install the hooks
pre-commit install

# Run manually on all files
pre-commit run --all-files

Running Tests

Integration tests:

# Run all integration tests
docker compose -f docker-compose.yml up -d
python -m pytest tests/integration/

# Run specific service tests
python -m pytest tests/integration/test_parsing.py

Unit tests:

# Run unit tests for a specific service
cd parsing
python -m pytest tests/

Port exposure validation:

python tests/test_port_exposure.py
python tests/validate_port_changes.py

For comprehensive testing documentation, see docs/TESTING_STRATEGY.md.

Code Quality

Static Analysis and Validation: This project uses comprehensive static analysis to catch attribute errors, type issues, and other problems before deployment:

# Install validation tools
pip install -r requirements-dev.txt

# Run all validation checks
python scripts/validate_python.py

# Run specific checks
python scripts/validate_python.py --tool ruff      # Fast linting
python scripts/validate_python.py --tool mypy      # Type checking
python scripts/validate_python.py --tool pyright   # Advanced type checking
python scripts/validate_python.py --tool pylint    # Attribute checking
python scripts/validate_python.py --tool import-tests  # Import smoke tests

# Auto-fix issues (where possible)
python scripts/validate_python.py --tool ruff --fix

The CI pipeline enforces:

  • Ruff: Fast Python linter for syntax and style
  • MyPy: Static type checker with strict mode
  • Pyright: Advanced type checker for catching attribute errors
  • Pylint: Attribute and member access validation
  • Import Tests: Ensures all modules load without errors

License headers: All source files must include SPDX license headers. Verify compliance:

python scripts/check_license_headers.py --root .

Linting and formatting: The project uses ruff for Python formatting and pre-commit hooks for enforcement.


Contributing

We welcome contributions from the community! This project follows open-source governance principles.

How to Contribute

  1. Read the guidelines: See CONTRIBUTING.md for detailed instructions
  2. Follow the Code of Conduct: Read CODE_OF_CONDUCT.md
  3. Understand governance: Review GOVERNANCE.md for project structure
  4. Report security issues: Follow SECURITY.md for vulnerability reporting

Contribution Areas

  • Core features: Implement new microservices or enhance existing ones
  • Adapters: Add support for new storage, messaging, or LLM backends
  • Documentation: Improve guides, add examples, fix errors
  • Testing: Add test coverage, improve CI/CD
  • Performance: Optimize processing pipelines, reduce latency
  • Observability: Enhance metrics, dashboards, and logging

Development Workflow

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/your-feature
  3. Make your changes with appropriate tests
  4. Ensure all tests pass and pre-commit hooks succeed
  5. Submit a pull request with a clear description

All pull requests are reviewed according to the governance process documented in GOVERNANCE.md.


License

This project is distributed under the MIT License. See LICENSE for details.

License Headers

All files that support comments must include an SPDX license identifier and copyright header:

SPDX-License-Identifier: MIT
Copyright (c) 2025 Copilot-for-Consensus contributors

Header formats by file type:

  • Python, Bash, Docker Compose: Use # comments
    # SPDX-License-Identifier: MIT
    # Copyright (c) 2025 Copilot-for-Consensus contributors
  • Markdown, HTML, XML: Use <!-- --> comments
    <!-- SPDX-License-Identifier: MIT
         Copyright (c) 2025 Copilot-for-Consensus contributors -->
  • JavaScript/TypeScript: Use // comments
    // SPDX-License-Identifier: MIT
    // Copyright (c) 2025 Copilot-for-Consensus contributors

All contributions must include appropriate SPDX headers. The pre-commit hook and CI enforce this requirement.