This document explains how the tekton-dag project integrates with Tekton Pipelines and ArgoCD.
The goal is to provide context so that future developers (and tools such as Cursor AI) understand:
- What tekton-dag is responsible for
- What Tekton is responsible for
- What ArgoCD is responsible for
- How builds/tests are executed
- What resources are managed via GitOps vs runtime
The system consists of three main layers:
Git Events / CLI
↓
tekton-dag (orchestrator)
↓
Tekton Pipelines (execution engine)
↓
Kubernetes Pods (build + test)
Infrastructure is managed by ArgoCD, which:
- installs Tekton
- installs Tekton Tasks
- installs Tekton Pipelines
- installs the tekton-dag service
ArgoCD manages desired cluster state.
It installs and maintains:
- Tekton CRDs
- Tekton controllers
- Tekton Tasks
- Tekton Pipelines
- Tekton Triggers
- RBAC
- tekton-dag deployment
ArgoCD does NOT run pipelines; it only reconciles declarative YAML resources. The following resources are GitOps-managed and stored in Git:
- Task
- Pipeline
- TriggerTemplate
- TriggerBinding
- EventListener
- ServiceAccount
- Role / RoleBinding
- Deployment (tekton-dag)
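As a sketch, a minimal Task stored in Git and reconciled by ArgoCD might look like the following (the task name, image, and script are illustrative, not taken from the actual repo):

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: unit-test
spec:
  params:
    - name: repo
      type: string
  steps:
    - name: run-tests
      image: node:20        # illustrative test image
      script: |
        npm ci
        npm test
```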
In contrast, the following are runtime resources:
- PipelineRun
- TaskRun
- Pod
These are created dynamically by tekton-dag or Tekton Triggers.
Tekton is the pipeline execution engine.
Conceptual model:
Step → container command
Task → group of steps
Pipeline → DAG of tasks
PipelineRun → execution instance
Example pipeline graph:
clone
↓
build
↓
unit-tests
├── integration-tests
└── security-scan
↓
deploy-preview
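The graph above can be expressed as a Pipeline whose `runAfter` fields encode the DAG edges. This is a hedged sketch: the task names follow the diagram, while the referenced Task names are assumptions.

```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: pr-pipeline
spec:
  tasks:
    - name: clone
      taskRef:
        name: git-clone
    - name: build
      taskRef:
        name: build
      runAfter: ["clone"]
    - name: unit-tests
      taskRef:
        name: unit-test
      runAfter: ["build"]
    # fan-out: both run in parallel after unit-tests
    - name: integration-tests
      taskRef:
        name: integration-test
      runAfter: ["unit-tests"]
    - name: security-scan
      taskRef:
        name: security-scan
      runAfter: ["unit-tests"]
    # fan-in: waits for both parallel branches
    - name: deploy-preview
      taskRef:
        name: deploy-preview
      runAfter: ["integration-tests", "security-scan"]
```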
Each task runs as a Kubernetes Pod.
tekton-dag is a pipeline orchestration tool built on top of Tekton.
Responsibilities:
- Define high-level pipeline workflows
- Generate or trigger Tekton PipelineRuns
- Coordinate multi-platform builds
- Run test DAGs
- Collect test results
- Store pipeline output in database or storage
Conceptually:
tekton-dag
- receives PR event
- determines pipeline DAG
- creates PipelineRun
↓
Tekton
↓
Pods execute tasks
GitHub or Gitea PR triggers:
Webhook
↓
tekton-dag
Example resource created dynamically:
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: pr-test-
spec:
  pipelineRef:
    name: pr-pipeline
  params:
    - name: repo
      value: https://github.com/org/app
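A minimal sketch of how tekton-dag might build this PipelineRun manifest in code before submitting it to the cluster. The function and parameter names are illustrative, not from the tekton-dag codebase.

```python
# Hypothetical helper: build a PipelineRun manifest equivalent to the
# YAML above. Names here are assumptions for illustration.
def build_pipelinerun(repo_url: str, pipeline: str = "pr-pipeline") -> dict:
    return {
        "apiVersion": "tekton.dev/v1",
        "kind": "PipelineRun",
        "metadata": {
            # generateName lets the API server append a unique suffix
            "generateName": "pr-test-",
        },
        "spec": {
            "pipelineRef": {"name": pipeline},
            "params": [
                {"name": "repo", "value": repo_url},
            ],
        },
    }

manifest = build_pipelinerun("https://github.com/org/app")
# Submitting it would typically go through the Kubernetes API, e.g. the
# dynamic client from the official kubernetes Python package.
```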
Tekton schedules tasks:
clone
↓
build
↓
unit-tests
↓
integration-tests
↘ playwright
↓
publish-results
Each task runs in a pod.
Recommended repo structure:
repo
├── argocd
│ └── applications
│
├── tekton
│ ├── tasks
│ │ ├── build.yaml
│ │ ├── unit-test.yaml
│ │ └── playwright.yaml
│ │
│ └── pipelines
│ └── pr-pipeline.yaml
│
└── tekton-dag
└── deployment.yaml
ArgoCD points at this repository.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: tekton-dag
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/jmjava/tekton-dag
    targetRevision: main
    path: k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: tekton-dag
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
Tekton CRDs must exist before pipelines are applied.
Recommended sync order:
- Wave -1: Tekton CRDs
- Wave 0: RBAC, ServiceAccounts
- Wave 1: Tasks
- Wave 2: Pipelines
- Wave 3: Triggers, EventListeners
- Wave 4: tekton-dag deployment
Annotation example:
argocd.argoproj.io/sync-wave: "2"
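In context, the annotation sits in each resource's metadata; for example, a Pipeline synced in wave 2 might look like this (sketch):

```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: pr-pipeline
  annotations:
    argocd.argoproj.io/sync-wave: "2"
```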
Pipeline outputs (logs, test results, artifacts) should be exported.
Typical storage options:
- S3 or other object storage
- Database
Example artifact structure:
test-results/
- junit.xml
- playwright-report.html
- coverage.json
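One way to keep these artifacts addressable is to key them by PipelineRun. This is a sketch under assumptions: the bucket layout and helper name are illustrative, not defined by the project.

```python
# Hypothetical helper: build object-storage keys that group test
# artifacts by PipelineRun. The "test-results/<run-id>/" layout is an
# assumption for illustration.
ARTIFACTS = ["junit.xml", "playwright-report.html", "coverage.json"]

def artifact_keys(pipeline_run_id: str, files=ARTIFACTS) -> list:
    return [f"test-results/{pipeline_run_id}/{name}" for name in files]

keys = artifact_keys("pr-test-abc12")
# Uploading could then use any S3-compatible client, e.g. boto3's
# client("s3").upload_file(local_path, bucket, key).
```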
These can be visualized via:
- Retool
- Grafana
- Custom dashboard
Tekton YAML (Tasks, Pipelines) defines the execution graph and lives in Git.
PipelineRuns are runtime objects and should not be stored in Git.
ArgoCD installs pipelines but does not execute them.
tekton-dag decides when and how pipelines run.
Future work may include dynamic pipeline generation: instead of static pipelines, tekton-dag would generate the pipeline spec at runtime.
Example fan-out:
- java
- python
- node
Tekton supports:
- fan-out
- fan-in
- conditional execution
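The fan-out/fan-in idea can be sketched as a generator that emits one test task per platform after a shared clone, then joins them at a publish step. All names here are illustrative assumptions.

```python
# Hypothetical sketch of dynamic pipeline generation: one task per
# platform, fanned out after "clone" and fanned back in at
# "publish-results". Task and TaskRef names are assumptions.
def generate_pipeline_spec(platforms: list) -> dict:
    tasks = [{"name": "clone", "taskRef": {"name": "git-clone"}}]
    for p in platforms:
        tasks.append({
            "name": f"test-{p}",
            "taskRef": {"name": f"{p}-test"},
            "runAfter": ["clone"],                       # fan-out
        })
    tasks.append({
        "name": "publish-results",
        "taskRef": {"name": "publish"},
        "runAfter": [f"test-{p}" for p in platforms],    # fan-in
    })
    return {"tasks": tasks}

spec = generate_pipeline_spec(["java", "python", "node"])
```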
A centralized results system would collect:
- test results
- build artifacts
- coverage
- logs

The results database would store:
- pipeline_run_id
- status
- duration
- artifacts
- logs

A dashboard would display:
- pipeline DAG
- task logs
- test reports
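The stored fields above could be modeled as a simple record type. This is a sketch; the field names mirror the list and are assumptions, not an existing schema.

```python
# Hypothetical record a results database might store per PipelineRun.
from dataclasses import dataclass, field

@dataclass
class PipelineRunRecord:
    pipeline_run_id: str
    status: str                      # e.g. "Succeeded", "Failed"
    duration_seconds: float
    artifacts: list = field(default_factory=list)  # object-storage keys
    logs: str = ""                   # location of captured task logs

record = PipelineRunRecord(
    pipeline_run_id="pr-test-abc12",
    status="Succeeded",
    duration_seconds=421.5,
    artifacts=["test-results/pr-test-abc12/junit.xml"],
)
```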
Pipeline definitions should be tied to the Git SHA for reproducibility.
System architecture:
ArgoCD → installs infrastructure
tekton-dag → orchestrates workflows
Tekton → executes DAG pipelines
Kubernetes → runs containers
This separation provides:
- reproducible infrastructure
- scalable pipeline execution
- flexible orchestration logic
- GitOps compliance