Conversation
…gest under debug
- `analyze` now runs the full ingest pipeline (Drain + semantic labeling + DuckDB storage) before launching the AI agent
- Move `ingest` command under `debug ingest` for step-by-step debugging
- Extract shared pipeline helpers into `cmd/lapp/pipeline.go`
- Remove top-level `templates` command
- Add a workspace path constraint to the analyzer system prompt to prevent the agent from scanning files outside the workspace directory
- Add Langfuse tracing support with docker-compose for local dev
- Update CLAUDE.md with the new CLI structure and code style notes
Instrument the entire pipeline with OTel spans: CLI commands, multiline merge, Drain parsing, semantic labeling, DuckDB storage, and the analyzer. HTTP clients for LLM calls use the otelhttp transport for deep request traces.
- Add `pkg/tracing/otel.go` with an OTLP HTTP exporter (env-gated via `OTEL_TRACING_ENABLED`)
- Add a Jaeger service to docker-compose.yml (UI on port 16686)
- Wire `InitOTel` in main.go with graceful shutdown
- Add a `ctx` parameter to `DrainParser.Feed`/`Templates` and `multiline.Merge`/`MergeSlice`
- Wrap eino OpenRouter HTTP clients with `otelhttp.NewTransport`
@codex review please
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 5bd60cfeb6
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
Extract `AnalyzeWithTemplates()`, which accepts pre-computed templates, so the analyze command passes the same DrainParser output to both DuckDB storage and the workspace builder. Previously, `Analyze()` created a second DrainParser with fresh UUIDs, causing template IDs in the workspace to diverge from those in the database.
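The single-parser flow can be sketched as follows. All names here are illustrative (the actual helper lives in `cmd/lapp/pipeline.go`); the point is that one set of templates, with IDs already assigned, feeds both sinks:

```go
package main

// Template stands in for a Drain template with a generated ID; in the real
// pipeline the ID is a fresh UUID per DrainParser instance, which is why
// re-parsing with a second parser produced divergent IDs.
type Template struct {
	ID      string
	Pattern string
}

// analyzeWithTemplates accepts pre-computed templates so the same IDs reach
// both the database and the workspace builder, instead of each side
// deriving its own.
func analyzeWithTemplates(templates []Template, saveToDB, addToWorkspace func(Template)) {
	for _, t := range templates {
		saveToDB(t)       // same ID stored in DuckDB…
		addToWorkspace(t) // …and written into the workspace
	}
}
```

With this shape, ID agreement is guaranteed by construction rather than by hoping two parsers happen to generate the same UUIDs.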
@codex review
💡 Codex Review
Reviewed commit: 3ba13ad2d1
```go
if err := s.Init(ctx); err != nil {
	return errors.Errorf("store init: %w", err)
}

semanticIDMap, patternCount, templateCount, err := discoverAndSavePatterns(ctx, s, drainParser, mergedLines, semantic.Config{
```
Avoid appending analyze runs into the shared DB
`analyze` now always opens `--db` and inserts patterns/log entries, but this flow never scopes data to a run or clears prior rows before writing. Because the schema/query APIs have no source-file/run discriminator, rerunning `lapp analyze` (especially on different files) silently mixes datasets and inflates counts, which corrupts later PatternSummaries/query results for that DB path. This command previously had no persistent side effects, so the new default behavior is a data-integrity regression for repeated analyses.
Adds a pre-flight check in both the `analyze` and `debug ingest` commands that errors out if the target DB file already exists, preventing accidental re-ingestion that would silently corrupt query results.
@codex review please
💡 Codex Review
Reviewed commit: 8af29fbad2
```go
if _, err := os.Stat(dbPath); err == nil {
	return errors.Errorf("database %q already exists; remove it first or choose a different --db path", dbPath)
}
```
Remove partially created DB on ingest failure
This existence guard makes retries fail after transient errors: runAnalyze creates and initializes dbPath before the LLM labeling/storage steps, but if discoverAndSavePatterns or later ingest work returns an error (for example, temporary OpenRouter/API failures), the command exits without deleting the new file; the next invocation with the same --db immediately aborts here with "database already exists." That turns a recoverable failure into manual cleanup work for every retry.
Keep workspace-based CLI from our branch, discard old analyze/debug/ingest commands and pipeline.go that master modified. Preserve OTel tracing from both branches.