92 changes: 59 additions & 33 deletions examples/otel-demo/deploy-all.sh
@@ -7,6 +7,14 @@ set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
ROLLOUT_TIMEOUT="${ROLLOUT_TIMEOUT:-600}"

# Versions
DEFAULT_OPENSEARCH_CHART_VERSION=3.3.2
OPENSEARCH_CHART_VERSION="${OPENSEARCH_CHART_VERSION:-${DEFAULT_OPENSEARCH_CHART_VERSION}}"
DEFAULT_OPENSEARCH_DASHBOARDS_CHART_VERSION=3.3.0
OPENSEARCH_DASHBOARDS_CHART_VERSION="${OPENSEARCH_DASHBOARDS_CHART_VERSION:-${DEFAULT_OPENSEARCH_DASHBOARDS_CHART_VERSION}}"
DEFAULT_JAEGER_CHART_VERSION=4.2.3
JAEGER_CHART_VERSION="${JAEGER_CHART_VERSION:-${DEFAULT_JAEGER_CHART_VERSION}}"
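The pins above rely on Bash parameter expansion, `${VAR:-default}`, so a caller can override any chart version per invocation without editing the script. A minimal sketch of the pattern (the `CHART_VERSION` name is illustrative, not part of the script):

```shell
#!/usr/bin/env bash
# ${VAR:-default} substitutes the default when VAR is unset or empty.
DEFAULT_CHART_VERSION=3.3.2
CHART_VERSION="${CHART_VERSION:-${DEFAULT_CHART_VERSION}}"
echo "chart version: ${CHART_VERSION}"
```

Running it as `CHART_VERSION=4.0.0 ./sketch.sh` would print the override; with nothing exported it falls back to `3.3.2`.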

MODE="${1:-upgrade}"
IMAGE_TAG="${2:-latest}"

@@ -22,6 +30,11 @@ case "$MODE" in
echo " upgrade - Upgrade existing deployment or install if not present (default)"
echo " clean - Clean install (removes existing deployment first)"
echo ""
echo "Environment Variables:"
echo " OPENSEARCH_CHART_VERSION - Version of OpenSearch Helm Chart (default: $DEFAULT_OPENSEARCH_CHART_VERSION)"
echo " OPENSEARCH_DASHBOARDS_CHART_VERSION - Version of OpenSearch Dashboards Helm Chart (default: $DEFAULT_OPENSEARCH_DASHBOARDS_CHART_VERSION)"
echo " JAEGER_CHART_VERSION - Version of Jaeger Helm Chart (default: $DEFAULT_JAEGER_CHART_VERSION)"
echo ""
echo "Examples:"
echo " $0 # Upgrade mode with latest tag"
echo " $0 clean # Clean install"
@@ -206,32 +219,7 @@ deploy_ingress() {
log " • https://shop.demo.jaegertracing.io"
}

# Clone Jaeger Helm chart and prepare dependencies
clone_jaeger_v2() {
local dest="$SCRIPT_DIR/helm-charts"
if [[ ! -d "$dest" ]]; then
log "Cloning Jaeger Helm Charts..."
git clone https://github.com/jaegertracing/helm-charts.git "$dest"
(
cd "$dest"
log "Using v2 branch for Jaeger v2..."
git checkout v2
log "Adding required Helm repositories..."
helm repo add bitnami https://charts.bitnami.com/bitnami >/dev/null 2>&1 || true
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts >/dev/null 2>&1 || true
helm repo add incubator https://charts.helm.sh/incubator >/dev/null 2>&1 || true
helm repo update >/dev/null
helm dependency build ./charts/jaeger
)
else
log "Jaeger Helm Charts already exist. Skipping clone."
# Ensure required repos exist even if charts folder already exists
helm repo add bitnami https://charts.bitnami.com/bitnami >/dev/null 2>&1 || true
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts >/dev/null 2>&1 || true
helm repo add incubator https://charts.helm.sh/incubator >/dev/null 2>&1 || true
helm repo update >/dev/null
fi
}




@@ -255,27 +243,28 @@ main() {
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts >/dev/null 2>&1 || true
helm repo add jaegertracing https://jaegertracing.github.io/helm-charts >/dev/null 2>&1 || true
helm repo update >/dev/null
clone_jaeger_v2


log "Deploying OpenSearch"
helm upgrade --install opensearch opensearch/opensearch \
--namespace opensearch --create-namespace \
--version 2.19.0 \
--set image.tag=2.11.0 \
--version "${OPENSEARCH_CHART_VERSION}" \
-f "$SCRIPT_DIR/opensearch-values.yaml" \
--wait --timeout 10m
wait_for_statefulset opensearch opensearch-cluster-single "${ROLLOUT_TIMEOUT}s"

log "Deploying OpenSearch Dashboards"
helm upgrade --install opensearch-dashboards opensearch/opensearch-dashboards \
--namespace opensearch \
--version "${OPENSEARCH_DASHBOARDS_CHART_VERSION}" \
-f "$SCRIPT_DIR/opensearch-dashboard-values.yaml" \
--wait --timeout 10m
wait_for_deployment opensearch opensearch-dashboards "${ROLLOUT_TIMEOUT}s"


log "Deploying Jaeger (all-in-one, no storage)"
Copilot AI (Feb 16, 2026):
The log message "Deploying Jaeger (all-in-one, no storage)" is misleading. While storage.type is set to "none" for the Helm chart, the jaeger-config.yaml userconfig file (line 273) actually configures OpenSearch as the storage backend. This discrepancy between the message and the actual configuration could confuse users. Consider updating the message to accurately reflect that OpenSearch storage is configured via userconfig.

Suggested change:
-log "Deploying Jaeger (all-in-one, no storage)"
+log "Deploying Jaeger (all-in-one with OpenSearch storage via userconfig)"
helm $HELM_JAEGER_CMD jaeger "$SCRIPT_DIR/helm-charts/charts/jaeger" \
helm $HELM_JAEGER_CMD jaeger jaegertracing/jaeger \
--version "${JAEGER_CHART_VERSION}" \
--namespace jaeger --create-namespace \
--set allInOne.enabled=true \
--set storage.type=none \
@@ -286,13 +275,17 @@ main() {
--wait --timeout 10m
wait_for_deployment jaeger jaeger "${ROLLOUT_TIMEOUT}s"

log "Deploying HotROD app..."
kubectl apply -n jaeger -f "$SCRIPT_DIR/hotrod.yaml"
Copilot AI (Feb 16, 2026), on lines +278 to +279:
HotROD is being deployed twice, which will cause conflicts. The Jaeger Helm chart deployment includes HotROD (jaeger-values.yaml line 13 has enabled: true), and then a standalone HotROD deployment is also applied from hotrod.yaml. Both deployments create resources with the same name (jaeger-hotrod), which will cause the second deployment to overwrite the first. Either disable HotROD in jaeger-values.yaml by setting hotrod.enabled: false, or remove the standalone hotrod.yaml deployment step.

Suggested change:
-log "Deploying HotROD app..."
-kubectl apply -n jaeger -f "$SCRIPT_DIR/hotrod.yaml"
+log "Waiting for HotROD app deployment..."
wait_for_deployment jaeger jaeger-hotrod "${ROLLOUT_TIMEOUT}s"


log "Creating Jaeger query ClusterIP service..."
kubectl apply -n jaeger -f "$SCRIPT_DIR/jaeger-query-service.yaml"
log "Jaeger query ClusterIP service created"

log "Ensuring Jaeger Collector service endpoints are ready before deploying the demo"
wait_for_service_endpoints jaeger jaeger-collector 180
wait_for_service_endpoints jaeger jaeger 180

log "Ensuring HotROD service endpoints are ready"
wait_for_service_endpoints jaeger jaeger-hotrod 180
@@ -316,8 +309,41 @@ main() {
# Deploy HTTPS ingress
deploy_ingress

log "🎉 Deployment complete! Stack is ready."


# Deploy Spark Dependencies CronJob
log "Deploying Spark Dependencies CronJob"
if kubectl apply -f "$SCRIPT_DIR/spark-dependencies-cronjob-opensearch.yaml"; then
log "Spark Dependencies CronJob deployed"

# Trigger the job immediately
log "Triggering initial Spark Dependencies job..."
JOB_NAME="init-spark-dep-$(date +%s)"

# Create a manual job from the cronjob template
if kubectl create job --from=cronjob/jaeger-spark-dependencies "$JOB_NAME" -n jaeger; then
log "Initial job '$JOB_NAME' triggered successfully"

log "Waiting for initial Spark Dependencies job to complete (timeout: ${ROLLOUT_TIMEOUT}s)..."
if kubectl wait --for=condition=complete "job/$JOB_NAME" -n jaeger --timeout="${ROLLOUT_TIMEOUT}s"; then
log "Initial job '$JOB_NAME' completed successfully"
else
log "Initial job '$JOB_NAME' failed to complete or timed out"
kubectl describe job "$JOB_NAME" -n jaeger || true
kubectl logs "job/$JOB_NAME" -n jaeger || true
exit 1
fi
else
log "Failed to trigger initial job"
exit 1
fi
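The manual trigger above derives a unique Job name from the current epoch, since Kubernetes rejects a `kubectl create job` whose name already exists. The naming scheme in isolation (a sketch, independent of any cluster):

```shell
#!/usr/bin/env bash
# Epoch-seconds suffix keeps one-off job names unique across runs.
JOB_NAME="init-spark-dep-$(date +%s)"
echo "$JOB_NAME"
```

Two triggers within the same second would still collide; `date +%s%N` (nanoseconds, GNU date) would narrow that window further.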
else
log "Failed to deploy Spark Dependencies CronJob"
Copilot AI (Feb 16, 2026):
If the Spark Dependencies CronJob fails to deploy (line 316 returns false), the script logs an error message but continues execution without exiting. This is inconsistent with the error handling for job creation and completion failures (lines 334, 338), which do exit with code 1. For consistency, either add "exit 1" after line 341, or make all Spark-related failures non-blocking as suggested in a separate comment.

Suggested change:
 log "Failed to deploy Spark Dependencies CronJob"
+exit 1
Copilot AI (Feb 16, 2026), on lines +314 to +341:
The Spark Dependencies deployment should not be a blocking step that causes deployment failure. If the Spark job fails, it exits with code 1 (lines 334, 338), which will stop the entire deployment script. Since this is a dependency calculation job that runs periodically via CronJob, its initial failure should not block the deployment of the rest of the stack. Consider making this step non-blocking or providing a flag to skip it.

Suggested change (replace the blocking block with a guarded, optional version):
# Deploy Spark Dependencies CronJob (optional)
if [[ "${SKIP_SPARK_DEPENDENCIES:-false}" == "true" ]]; then
  log "Skipping Spark Dependencies CronJob deployment (SKIP_SPARK_DEPENDENCIES=true)"
else
  log "Deploying Spark Dependencies CronJob"
  if kubectl apply -f "$SCRIPT_DIR/spark-dependencies-cronjob-opensearch.yaml"; then
    log "Spark Dependencies CronJob deployed"
    # Trigger the job immediately
    log "Triggering initial Spark Dependencies job..."
    JOB_NAME="init-spark-dep-$(date +%s)"
    # Create a manual job from the cronjob template
    if kubectl create job --from=cronjob/jaeger-spark-dependencies "$JOB_NAME" -n jaeger; then
      log "Initial job '$JOB_NAME' triggered successfully"
      log "Waiting for initial Spark Dependencies job to complete (timeout: ${ROLLOUT_TIMEOUT}s)..."
      if kubectl wait --for=condition=complete "job/$JOB_NAME" -n jaeger --timeout="${ROLLOUT_TIMEOUT}s"; then
        log "Initial job '$JOB_NAME' completed successfully"
      else
        log "Initial job '$JOB_NAME' failed to complete or timed out"
        kubectl describe job "$JOB_NAME" -n jaeger || true
        kubectl logs "job/$JOB_NAME" -n jaeger || true
        if [[ "${REQUIRE_SPARK_DEPENDENCIES:-false}" == "true" ]]; then
          log "Spark Dependencies job failure is configured as fatal (REQUIRE_SPARK_DEPENDENCIES=true); aborting deployment."
          exit 1
        else
          log "Continuing deployment despite Spark Dependencies job failure."
        fi
      fi
    else
      log "Failed to trigger initial Spark Dependencies job"
      if [[ "${REQUIRE_SPARK_DEPENDENCIES:-false}" == "true" ]]; then
        log "Spark Dependencies job trigger failure is configured as fatal (REQUIRE_SPARK_DEPENDENCIES=true); aborting deployment."
        exit 1
      else
        log "Continuing deployment despite failure to trigger Spark Dependencies job."
      fi
    fi
  else
    log "Failed to deploy Spark Dependencies CronJob"
    if [[ "${REQUIRE_SPARK_DEPENDENCIES:-false}" == "true" ]]; then
      log "Spark Dependencies CronJob deployment failure is configured as fatal (REQUIRE_SPARK_DEPENDENCIES=true); aborting deployment."
      exit 1
    else
      log "Continuing deployment despite Spark Dependencies CronJob deployment failure."
    fi
  fi
fi
Copilot AI (Apr 12, 2026):
If kubectl apply fails, the script logs an error but continues and will still report the stack as ready later. If Spark dependencies are required for a healthy demo, this should exit 1 (or at least return non-zero) to avoid a false-success deployment outcome.

Suggested change:
 log "Failed to deploy Spark Dependencies CronJob"
+exit 1
fi
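Under `set -euo pipefail`, any unguarded non-zero exit aborts the script, so making a step like the Spark CronJob optional means wrapping it explicitly. A minimal sketch of that non-blocking pattern (the `REQUIRE_STEP` flag and `log` helper are illustrative; `false` stands in for a real kubectl call):

```shell
#!/usr/bin/env bash
set -euo pipefail

log() { echo "$*"; }

# A step that may fail without aborting, unless REQUIRE_STEP=true.
run_optional_step() {
  if ! false; then            # 'false' stands in for the real kubectl call
    if [[ "${REQUIRE_STEP:-false}" == "true" ]]; then
      log "step failed; aborting"
      return 1
    fi
    log "step failed; continuing"
  fi
}

run_optional_step
log "deployment finished"
```

With the flag unset the script logs the failure and keeps going; exporting `REQUIRE_STEP=true` restores the fail-fast behavior.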

Copilot AI (Feb 16, 2026):
This line contains trailing whitespace after "fi". Consider removing it for consistency with code style.

log "🎉 Deployment complete! Stack is ready."
}

main

main
51 changes: 51 additions & 0 deletions examples/otel-demo/hotrod.yaml
@@ -0,0 +1,51 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: jaeger-hotrod
namespace: jaeger
labels:
app: jaeger-hotrod
spec:
replicas: 1
selector:
matchLabels:
app: jaeger-hotrod
template:
metadata:
labels:
app: jaeger-hotrod
spec:
containers:
- name: jaeger-hotrod
image: docker.io/jaegertracing/example-hotrod:2.14.0
Copilot AI (Feb 16, 2026):
There is a version mismatch between the two HotROD deployments. The standalone hotrod.yaml uses image version "2.14.0" while jaeger-values.yaml uses version "1.72.0". This inconsistency could lead to different behavior and features between the two deployments. Consider standardizing on the same version across both configurations.

Suggested change:
-image: docker.io/jaegertracing/example-hotrod:2.14.0
+image: docker.io/jaegertracing/example-hotrod:1.72.0
imagePullPolicy: IfNotPresent
Copilot AI (Feb 16, 2026):
The imagePullPolicy is set to "IfNotPresent", which is inconsistent with the standardization mentioned in the PR description where Jaeger components were standardized to use "Always". For consistency, and to ensure the latest image is always pulled (especially important for development/demo environments), consider changing this to "Always".

Suggested change:
-imagePullPolicy: IfNotPresent
+imagePullPolicy: Always
args: ["all", "--jaeger-ui=https://jaeger.demo.jaegertracing.io"]
env:
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: "http://jaeger:4318"
- name: OTEL_EXPORTER_OTLP_PROTOCOL
value: "http/protobuf"
- name: OTEL_EXPORTER_OTLP_TRACES_ENDPOINT
value: "http://jaeger:4318/v1/traces"
- name: OTEL_SERVICE_NAME
value: "hotrod"
- name: OTEL_RESOURCE_ATTRIBUTES
value: "service.name=hotrod"
Copilot AI (Feb 16, 2026), on lines +32 to +33:
The OTEL_RESOURCE_ATTRIBUTES environment variable is redundant because it sets "service.name=hotrod", which duplicates the OTEL_SERVICE_NAME environment variable already set to "hotrod" on line 31. The OTEL SDK automatically includes service.name from OTEL_SERVICE_NAME in resource attributes, making this redundant and potentially confusing.

Suggested change:
-- name: OTEL_RESOURCE_ATTRIBUTES
-  value: "service.name=hotrod"
Copilot AI (Feb 16, 2026), on lines +31 to +33:
There is an inconsistency in OTEL_SERVICE_NAME between the two HotROD deployments. In jaeger-values.yaml line 33, the service is named "hotrod-frontend", while in this standalone hotrod.yaml deployment it's named "hotrod". This inconsistency could lead to confusion when analyzing traces. Consider using the same service name across both configurations, or document why they differ if intentional.

Suggested change:
-  value: "hotrod"
+  value: "hotrod-frontend"
-  value: "service.name=hotrod"
+  value: "service.name=hotrod-frontend"
ports:
- containerPort: 8080
Copilot AI (Apr 12, 2026), on lines +17 to +35:
This manifest is invalid YAML: the containers: value is a list but the - name: jaeger-hotrod item is not indented under containers:. The same indentation issue repeats for the env: and ports: lists, which will cause kubectl apply to fail.
---
apiVersion: v1
kind: Service
metadata:
name: jaeger-hotrod
namespace: jaeger
labels:
app: jaeger-hotrod
spec:
selector:
app: jaeger-hotrod
ports:
- port: 80
targetPort: 8080
protocol: TCP
Copilot AI (Apr 12, 2026), on lines +47 to +50:
The PR description mentions correcting the jaeger-hotrod service port to 8080, but this Service exposes port 80 (with targetPort: 8080). If ingress/backends are configured to route to service port 8080, this will break routing; align the Service port with what the ingress expects (e.g., expose port: 8080), or update ingress/backend references to use port 80.
name: http
4 changes: 3 additions & 1 deletion examples/otel-demo/ingress/ingress-jaeger.yaml
@@ -31,5 +31,7 @@ spec:
tls:
- hosts:
- jaeger.demo.jaegertracing.io
secretName: jaeger-ui-only-tls
- hosts:
- hotrod.demo.jaegertracing.io
secretName: jaeger-demo-tls
secretName: hotrod-ui-only-tls
1 change: 1 addition & 0 deletions examples/otel-demo/jaeger-config.yaml
@@ -23,6 +23,7 @@ extensions:
backends:
some_storage: &opensearch_config
opensearch:
create_mappings: true
server_urls:
- http://opensearch-cluster-single.opensearch.svc.cluster.local:9200
indices:
10 changes: 5 additions & 5 deletions examples/otel-demo/jaeger-values.yaml
@@ -3,6 +3,8 @@ global:

allInOne:
enabled: true
image:
pullPolicy: Always
extraEnv: []


@@ -22,15 +24,13 @@ hotrod:
path: /
extraEnv:
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://jaeger-collector:4318
value: http://jaeger:4318
- name: OTEL_EXPORTER_OTLP_TRACES_ENDPOINT
value: http://jaeger-collector:4318/v1/traces
value: http://jaeger:4318/v1/traces
- name: OTEL_EXPORTER_OTLP_PROTOCOL
value: http/protobuf
- name: OTEL_SERVICE_NAME
value: hotrod
- name: OTEL_LOG_LEVEL
value: debug
value: hotrod-frontend


query:
3 changes: 0 additions & 3 deletions examples/otel-demo/opensearch-dashboard-values.yaml
@@ -1,7 +1,5 @@

image:
repository: docker.io/opensearchproject/opensearch-dashboards
Copilot AI (Apr 12, 2026):
With image.tag removed, the deployed dashboards version becomes dependent on the chart default (or potentially 'latest' depending on chart behavior). For reproducible deployments, consider explicitly pinning image.tag here to match the intended OpenSearch Dashboards version (especially if users may install without deploy-all.sh).

Suggested change:
 repository: docker.io/opensearchproject/opensearch-dashboards
+tag: "2.11.1"
tag: "2.11.0"

opensearchHosts: "http://opensearch-cluster-single:9200"

@@ -17,4 +15,3 @@ config:
opensearch.password: "admin123"
opensearch.ssl.verificationMode: none
opensearch_security.enabled: false

15 changes: 12 additions & 3 deletions examples/otel-demo/otel-demo-values.yaml
@@ -15,7 +15,7 @@ default:
envOverrides:
# Narrower service namespace + explicit environment tag
- name: OTEL_RESOURCE_ATTRIBUTES
value: service.name=$(OTEL_SERVICE_NAME),service.namespace=otel-demo,deployment.environment=oke-dev
value: service.namespace=otel-demo,deployment.environment=oke-dev
# Send OTLP over HTTP by default and disable metrics/logs exporters (traces only)
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://otel-collector:4318
@@ -31,6 +31,11 @@ default:
value: otlp

components:
postgresql:
imageOverride:
repository: docker.io/library/postgres
tag: "14-alpine"
Copilot AI (Feb 16, 2026):
The imageOverride specifies "14-alpine" for the PostgreSQL tag. While this provides a specific major version, using a mutable tag like "14-alpine" instead of a pinned version tag (e.g., "14.15-alpine") could lead to unexpected behavior when the image is updated. Consider using a specific version tag for reproducibility and stability.

Suggested change:
-tag: "14-alpine"
+tag: "14.15-alpine"

accounting:
initContainers:
- name: wait-for-kafka
@@ -73,6 +78,11 @@ components:
- name: LOCUST_SPAWN_RATE
value: "2"

frontend:
envOverrides:
- name: OTEL_SERVICE_NAME
value: otelstore-frontend-ui

valkey-cart:
imageOverride:
repository: docker.io/valkey/valkey
@@ -102,13 +112,12 @@ opentelemetry-collector:
- key: service.instance.id
from_attribute: k8s.pod.uid
action: insert
transform: null
batch: {}
connectors:
spanmetrics: null
exporters:
otlp/jaeger:
endpoint: jaeger-collector.jaeger.svc.cluster.local:4317
endpoint: jaeger.jaeger.svc.cluster.local:4317
tls:
insecure: true
opensearch: null
39 changes: 39 additions & 0 deletions examples/otel-demo/spark-dependencies-cronjob-opensearch.yaml
@@ -0,0 +1,39 @@
apiVersion: batch/v1
kind: CronJob
metadata:
name: jaeger-spark-dependencies
namespace: jaeger
spec:
schedule: "*/15 * * * *"
successfulJobsHistoryLimit: 2
failedJobsHistoryLimit: 2
concurrencyPolicy: Forbid
jobTemplate:
spec:
template:
spec:
restartPolicy: OnFailure
containers:
- name: spark-dependencies
image: ghcr.io/jaegertracing/spark-dependencies/spark-dependencies:v0.7.2-opensearch
imagePullPolicy: IfNotPresent
env:
- name: STORAGE
value: "elasticsearch"
Copilot AI (Feb 16, 2026):
The STORAGE environment variable is set to "elasticsearch" but this is an OpenSearch deployment. While OpenSearch is API-compatible with Elasticsearch, using "elasticsearch" as the storage type may be confusing and could potentially lead to compatibility issues. Verify that the spark-dependencies image (v0.7.2-opensearch) correctly interprets this value or if it should be set to "opensearch" instead.

Suggested change:
-value: "elasticsearch"
+value: "opensearch"
- name: OS_NODES
value: "opensearch-cluster-single.opensearch.svc.cluster.local:9200"
- name: OS_INDEX_PREFIX
value: "jaeger-main"
- name: SPARK_MASTER
value: "local[*]"
Copilot AI (Apr 12, 2026), on lines +16 to +28:
This CronJob manifest is not valid YAML due to incorrect indentation: list items under containers: and env: must be indented beneath their parent keys. As written, - name: spark-dependencies and the env entries align with the keys instead of being nested, which will fail to apply.
- name: JAVA_OPTS
value: "-Xmx1g -Xms512m"
- name: OS_NODES_WAN_ONLY
value: "true"
resources:
requests:
cpu: 200m
memory: 1Gi
limits:
cpu: 500m
memory: 2Gi