
# Pipeline Events

Docs Home | API | Configuration | Examples | Basic | Caching | LLM | Architecture | Agent-Native | Benchmarks | Ecosystem

`Pipeline` emits lifecycle events so you can instrument runs without modifying processors.

## Event Names

- `PipelineEvent.RunStart` (`run.start`)
- `PipelineEvent.RunEnd` (`run.end`)
- `PipelineEvent.ProcessorStart` (`processor.start`)
- `PipelineEvent.ProcessorEnd` (`processor.end`)
- `PipelineEvent.Error` (`error`)
- `PipelineEvent.LLMCall` (`llm.call`, currently reserved)

## Subscribe/Unsubscribe

```ts
import { Pipeline, PipelineEvent } from 'qirrel';

const pipeline = new Pipeline();

const onRunStart = ({ context }: any) => {
  console.log('run started', context.meta?.requestId);
};

pipeline.on(PipelineEvent.RunStart, onRunStart);
await pipeline.process('Contact support@example.com');
pipeline.off(PipelineEvent.RunStart, onRunStart);
```

## Payload Contracts

### RunStart

```ts
{ context: QirrelContext }
```

### RunEnd

```ts
{ context: QirrelContext, duration: number }
```

### ProcessorStart

```ts
{ processorName: string, context: QirrelContext }
```

### ProcessorEnd

```ts
{ processorName: string, context: QirrelContext, duration: number }
```

### Error

```ts
{ error: Error, context?: QirrelContext, stage?: 'run' | 'processor' | 'llm' }
```
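If typed payload interfaces are not exported by the package, the contracts above can be mirrored locally so handlers avoid `any`. A minimal sketch; the `QirrelContext` shape here is an illustrative stand-in, not the real exported type:

```ts
// Illustrative local mirrors of the payload contracts above.
// QirrelContext is sketched minimally; the real type lives in 'qirrel'.
interface QirrelContext {
  meta?: Record<string, unknown>;
}

interface RunEndPayload {
  context: QirrelContext;
  duration: number;
}

interface ProcessorEndPayload {
  processorName: string;
  context: QirrelContext;
  duration: number;
}

interface ErrorPayload {
  error: Error;
  context?: QirrelContext;
  stage?: 'run' | 'processor' | 'llm';
}

// A typed handler catches field typos at compile time instead of at runtime.
const onProcessorEnd = (payload: ProcessorEndPayload): string =>
  `processor=${payload.processorName} took ${payload.duration}ms`;
```

A handler typed this way can be passed to `pipeline.on(PipelineEvent.ProcessorEnd, ...)` without the `any` cast used in the examples.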

## Error Semantics

- If an event handler throws, Qirrel logs the handler error and continues pipeline execution.
- If a processor throws during `process`, Qirrel emits `PipelineEvent.Error` and rethrows.
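The first rule can be sketched as an emit helper that isolates handler failures, assuming Qirrel wraps each handler in a try/catch. The names below are illustrative, not Qirrel internals:

```ts
type Handler = (payload: unknown) => void;

// Sketch of the stated semantics: a throwing handler is logged and
// skipped; the remaining handlers still run, and the caller never
// sees a handler error.
function safeEmit(handlers: Handler[], payload: unknown): number {
  let delivered = 0;
  for (const handler of handlers) {
    try {
      handler(payload);
      delivered++;
    } catch (err) {
      // Handler errors are logged, never propagated to the pipeline.
      console.error('event handler failed:', (err as Error).message);
    }
  }
  return delivered;
}
```

The second rule is the inverse: processor errors are surfaced to both the `Error` event and the caller, so subscribing to `PipelineEvent.Error` does not replace a `try`/`catch` around `process`.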

## Metrics Pattern

```ts
import { Pipeline, PipelineEvent } from 'qirrel';

const pipeline = new Pipeline();

pipeline.on(PipelineEvent.ProcessorEnd, ({ processorName, duration }: any) => {
  console.log(`[metric] processor=${processorName} duration_ms=${duration}`);
});

pipeline.on(PipelineEvent.Error, ({ error, stage }: any) => {
  console.error(`[metric] stage=${stage ?? 'unknown'} error=${error.message}`);
});
```
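Beyond logging, the same events can feed an in-memory aggregate. A sketch of a per-processor duration tally (illustrative, not part of qirrel):

```ts
// Tallies count and total duration per processor, fed from
// ProcessorEnd events.
class DurationStats {
  private totals = new Map<string, { count: number; totalMs: number }>();

  record(processorName: string, durationMs: number): void {
    const entry = this.totals.get(processorName) ?? { count: 0, totalMs: 0 };
    entry.count++;
    entry.totalMs += durationMs;
    this.totals.set(processorName, entry);
  }

  // Mean duration in ms, or undefined if the processor was never seen.
  mean(processorName: string): number | undefined {
    const entry = this.totals.get(processorName);
    return entry ? entry.totalMs / entry.count : undefined;
  }
}
```

A `ProcessorEnd` handler would then call `stats.record(processorName, duration)`, and the aggregate can be exported periodically instead of emitting one log line per event.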

## Operational Guidance

- Keep handlers lightweight; they execute inside the request path.
- Avoid blocking I/O in high-volume paths.
- Prefer async fire-and-forget queueing if your telemetry backend is slow.
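The last point can be sketched as a small buffering queue that keeps handlers O(1) and flushes batches off the request path. This is an illustrative pattern, not part of qirrel:

```ts
// Minimal fire-and-forget telemetry buffer. Event handlers call push(),
// which only appends to an in-memory array; actual I/O happens in
// batched flushes that never block the pipeline.
class TelemetryQueue {
  private buffer: string[] = [];

  constructor(
    private flushFn: (batch: string[]) => Promise<void>,
    private maxBatch = 100,
  ) {}

  // Called from event handlers: O(1), no I/O.
  push(line: string): void {
    this.buffer.push(line);
    if (this.buffer.length >= this.maxBatch) void this.flush();
  }

  async flush(): Promise<void> {
    if (this.buffer.length === 0) return;
    const batch = this.buffer;
    this.buffer = [];
    try {
      await this.flushFn(batch);
    } catch {
      // Drop on failure; telemetry must never break the request path.
    }
  }
}
```

A `ProcessorEnd` handler would call `queue.push(...)` with a formatted metric line, and the queue ships batches to the slow backend in the background.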