The `Pipeline` emits lifecycle events so you can instrument runs without modifying processors.
| Event | Emitted as |
| --- | --- |
| `PipelineEvent.RunStart` | `run.start` |
| `PipelineEvent.RunEnd` | `run.end` |
| `PipelineEvent.ProcessorStart` | `processor.start` |
| `PipelineEvent.ProcessorEnd` | `processor.end` |
| `PipelineEvent.Error` | `error` |
| `PipelineEvent.LLMCall` | `llm.call` (currently reserved) |
```ts
import { Pipeline, PipelineEvent } from 'qirrel';

const pipeline = new Pipeline();

const onRunStart = ({ context }: any) => {
  console.log('run started', context.meta?.requestId);
};

pipeline.on(PipelineEvent.RunStart, onRunStart);
await pipeline.process('Contact support@example.com');
pipeline.off(PipelineEvent.RunStart, onRunStart);
```

Event payloads:

| Event | Payload |
| --- | --- |
| `PipelineEvent.RunStart` | `{ context: QirrelContext }` |
| `PipelineEvent.RunEnd` | `{ context: QirrelContext, duration: number }` |
| `PipelineEvent.ProcessorStart` | `{ processorName: string, context: QirrelContext }` |
| `PipelineEvent.ProcessorEnd` | `{ processorName: string, context: QirrelContext, duration: number }` |
| `PipelineEvent.Error` | `{ error: Error, context?: QirrelContext, stage?: 'run' \| 'processor' \| 'llm' }` |

Error semantics:

- If an event handler throws, Qirrel logs the handler error and continues pipeline execution.
- If a processor throws during `process`, Qirrel emits `PipelineEvent.Error` and rethrows.
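The two error-handling rules above can be sketched with a small stand-in emitter (this is illustrative only, not qirrel's actual internals; `MiniPipeline` is a hypothetical class invented for this example): a throwing handler is logged and skipped, while a throwing processor triggers an `error` event and then rethrows to the caller.

```typescript
// Illustrative stand-in for the documented semantics (not qirrel itself).
type Handler = (payload: unknown) => void;

class MiniPipeline {
  private handlers = new Map<string, Handler[]>();

  on(event: string, handler: Handler): void {
    const list = this.handlers.get(event) ?? [];
    list.push(handler);
    this.handlers.set(event, list);
  }

  private emit(event: string, payload: unknown): void {
    for (const handler of this.handlers.get(event) ?? []) {
      try {
        handler(payload);
      } catch (err) {
        // Rule 1: a throwing handler must not abort the run.
        console.error('handler error:', err);
      }
    }
  }

  async process(input: string): Promise<string> {
    this.emit('run.start', { input });
    try {
      // Stand-in "processor": fails on a magic input to show the error path.
      if (input === 'boom') throw new Error('processor failed');
      return input.toUpperCase();
    } catch (error) {
      // Rule 2: emit the error event, then rethrow to the caller.
      this.emit('error', { error, stage: 'processor' });
      throw error;
    }
  }
}
```

Callers should therefore still wrap `process` in a `try`/`catch` even when they subscribe to the error event; the event is for observability, not recovery.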
```ts
import { Pipeline, PipelineEvent } from 'qirrel';

const pipeline = new Pipeline();

pipeline.on(PipelineEvent.ProcessorEnd, ({ processorName, duration }: any) => {
  console.log(`[metric] processor=${processorName} duration_ms=${duration}`);
});

pipeline.on(PipelineEvent.Error, ({ error, stage }: any) => {
  console.error(`[metric] stage=${stage ?? 'unknown'} error=${error.message}`);
});
```

- Keep handlers lightweight; they execute inside the request path.
- Avoid blocking I/O in high-volume paths.
- Prefer async fire-and-forget queueing if your telemetry backend is slow.