
[receiver/k8seventsreceiver] feat: add reporting controller and reporting instance data to k8s event data#45289

Merged
ChrsMark merged 6 commits intoopen-telemetry:mainfrom
woehrl01:k8sevents_reporting_data
Apr 20, 2026
Conversation

Contributor

@woehrl01 woehrl01 commented Jan 8, 2026

Description

Adds the `reportingController` and `reportingInstance` data to the log event for the k8s events receiver.

It also fixes the `EnsureCapacity` values.

The Use Case & Need:
The primary driver here is that the Source field (which typically contained this info) is deprecated in the `events.k8s.io/v1` API and replaced by `reportingController` and `reportingInstance`.

Modern API Alignment:
Kubernetes explicitly recommends moving away from Source (component/host) in favor of these two fields. As the collector aims to support modern K8s environments, we need to expose these first-class citizens of the v1 Event API.

Observability & Debugging: `reportingController` is critical for distinguishing who generated an event (e.g., was it the kubelet, statefulset-controller, or a custom operator?). This is distinct from the regarding object (what the event is about).

High-Availability (HA) Context: `reportingInstance` allows users to identify exactly which replica of a controller emitted the event, which is vital for debugging issues in HA control planes or clustered operators.

On Schema Stability:
Since these fields are direct mappings of the official Kubernetes `v1.Event` struct (and not custom logic we are inventing), they share the stability guarantees of the Kubernetes API itself.
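The field-to-attribute mapping described above can be sketched as follows. This is a minimal illustration, not the receiver's actual code: a plain `map[string]string` stands in for pdata's attribute map, and the `k8sEvent` struct is a simplified stand-in for the real Kubernetes `v1.Event` type (only the field names match the real struct).

```go
package main

import "fmt"

// k8sEvent is a simplified stand-in for the Kubernetes v1.Event struct,
// keeping only the fields relevant to this PR. In events.k8s.io/v1 the
// deprecated Source fields are replaced by these two.
type k8sEvent struct {
	Name                string
	ReportingController string
	ReportingInstance   string
}

// eventToAttributes sketches the mapping from event fields to log record
// attributes, using the attribute keys added by this PR.
func eventToAttributes(ev k8sEvent) map[string]string {
	return map[string]string{
		"k8s.event.name":                 ev.Name,
		"k8s.event.reporting_controller": ev.ReportingController,
		"k8s.event.reporting_instance":   ev.ReportingInstance,
	}
}

func main() {
	ev := k8sEvent{
		Name:                "pod-restart",
		ReportingController: "kubelet",   // who generated the event
		ReportingInstance:   "node-123",  // which replica emitted it
	}
	fmt.Println(eventToAttributes(ev)["k8s.event.reporting_controller"])
}
```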

Link to tracking issue

Fixes

Testing

Documentation

@github-actions
Contributor

github-actions Bot commented Jan 8, 2026

Welcome, contributor! Thank you for your contribution to opentelemetry-collector-contrib.

Important reminders:

A maintainer will review your pull request soon. Thank you for helping make OpenTelemetry better!

@github-actions github-actions Bot added first-time contributor PRs made by new contributors receiver/k8sevents labels Jan 8, 2026
@woehrl01 woehrl01 changed the title feat: add reporting controller and reporting instance data to k8s event data [receiver/k8seventsreceiver] feat: add reporting controller and reporting instance data to k8s event data Jan 8, 2026
Comment thread receiver/k8seventsreceiver/k8s_event_to_logdata.go
Comment thread receiver/k8seventsreceiver/k8s_event_to_logdata.go
@woehrl01
Contributor Author

Hi all, checking in to see if there are any outstanding blockers on this PR? I've provided the additional details requested, please let me know if further clarification is needed. I'd love to help get this merged.

attrs.PutStr("k8s.event.name", ev.Name)
attrs.PutStr("k8s.event.uid", string(ev.UID))
attrs.PutStr(string(conventions.K8SNamespaceNameKey), ev.InvolvedObject.Namespace)
attrs.PutStr("k8s.event.reporting_controller", ev.ReportingController)
Contributor

@jinja2 jinja2 Jan 21, 2026
I think the attribute names k8s.event.reporter.name and k8s.event.reporter.instance might be more aligned with our current K8s semantic conventions.

Existing K8s attributes generally follow the {object}.{property} pattern, where the namespace establishes the object type. If we treat k8s.event.reporter as the namespace, .name seems consistent and pairs with .instance:

k8s.event.reporter.name (or k8s.event.reporter.controller though I am partial to name as it aligns with existing property conventions for K8s objects) -> the controller type (e.g. kubelet, my-custom-controller)
k8s.event.reporter.instance -> the specific emitter instance (e.g. node-123, pod-xyz)

Wdyt?

Contributor Author
Thank you for your comment.

The reason I was using _ was consistency: we currently already use api_version and resource_version.

I also would like to keep the name matching the field of the Kubernetes event, mainly for consistency as well. Especially the subtle reporting vs. reporter difference will likely cause more trouble in the future than it would bring benefits.

Wdyt?

Contributor
I think `api_version` isn't quite the same case - it's a multi-word property of the k8s.object namespace, whereas reporter or reporting could be modeled as a sub-namespace (like `k8s.hpa.scaletargetref.kind`). That said, I don't want to block on naming and this can evolve when semconv standardizes K8s Event attributes (see issue).

A couple of change requests -

  1. Please add a changelog entry
  2. Could you add unit tests for the new attributes?

Contributor Author
@jinja2 added tests and the changelog entry. thanks!

@github-actions
Contributor

This PR was marked stale due to lack of activity. It will be closed in 14 days.

@github-actions github-actions Bot added the Stale label Feb 21, 2026
@woehrl01
Contributor Author

/not stale

@github-actions github-actions Bot removed the Stale label Feb 22, 2026
@paulojmdias
Member

/workflow-approve

@woehrl01 woehrl01 force-pushed the k8sevents_reporting_data branch from 2442493 to 0c94971 Compare February 25, 2026 15:32
@woehrl01
Contributor Author

@paulojmdias I just pushed a rebase, as it seems the failing workflow was due to a missing `if` that was already in main

@paulojmdias
Member

/workflow-approve

@woehrl01
Contributor Author

@paulojmdias what is missing to get this merged?

@paulojmdias
Member

@paulojmdias what is missing to get this merged?

A review from the code owners 👍

@woehrl01 woehrl01 requested a review from jinja2 March 10, 2026 07:50
Comment thread .chloggen/k8sevents_reporting_data.yaml Outdated
Co-authored-by: Jina Jain <jjain@splunk.com>
attrs.PutStr("k8s.event.uid", string(ev.UID))
attrs.PutStr(string(conventions.K8SNamespaceNameKey), ev.InvolvedObject.Namespace)
attrs.PutStr("k8s.event.reporting_controller", ev.ReportingController)
attrs.PutStr("k8s.event.reporting_instance", ev.ReportingInstance)
Contributor
These fields are optional in the spec. The k8sevents improvement issue discusses dropping empty string attributes from event logs. I think we should check if the fields are set and drop if not set.

Contributor
Looking through the spec, it states that empty attribute values must be kept. But we already drop k8s.event.count when it is zero in this receiver.

Do we want a consistent schema on every log record, or are we okay omitting empty fields? I was leaning towards omitting, but am reconsidering. On the query side, is filtering on an empty attribute easier than handling a missing one?

cc @dmitryax @ChrsMark wdyt?
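The "drop when empty" alternative discussed in this thread can be sketched with a small conditional helper. This is an illustration of the pattern only: the helper name `putStrIfSet` is hypothetical, and a plain `map[string]string` stands in for pdata's attribute map.

```go
package main

import "fmt"

// putStrIfSet records an attribute only when the (optional) event field is
// actually set, so empty-string attributes never reach the log record.
// Hypothetical helper; the map stands in for pdata's pcommon.Map.
func putStrIfSet(attrs map[string]string, key, value string) {
	if value != "" {
		attrs[key] = value
	}
}

func main() {
	attrs := map[string]string{}
	putStrIfSet(attrs, "k8s.event.reporting_controller", "kubelet")
	putStrIfSet(attrs, "k8s.event.reporting_instance", "") // optional field unset: dropped
	fmt.Println(len(attrs))                                // only the controller attribute was recorded
}
```

The trade-off the thread raises remains: with this pattern the schema varies per record, so queries must handle missing attributes rather than empty ones.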

Contributor Author

@woehrl01 woehrl01 Mar 12, 2026
Dropping count when it's zero absolutely makes sense, as a zero indicates that it's not a grouped event. That is semantically a different case than a missing value.

Contributor Author
@jinja2 Just following up on this thread, please clarify which approach you'd prefer me to take here. I'm happy to implement whichever pattern the team decides on.

Contributor
@woehrl01 I'd lean towards emitting the attribute as-is for now. But I don't feel strongly. cc @dmitryax for preference.

Comment on lines +19 to +20
This change adds a reporting controller and reporting instance to the kubernetes events receiver.
This allows for better monitoring and reporting of events collected by the receiver.
Contributor
Suggested change
This change adds a reporting controller and reporting instance to the kubernetes events receiver.
This allows for better monitoring and reporting of events collected by the receiver.
When present on the K8s event, `k8s.event.reporting_controller` and
`k8s.event.reporting_instance` are now included as log record attributes.

 const (
 	// Number of log attributes to add to the plog.LogRecordSlice.
-	totalLogAttributes = 7
+	totalLogAttributes = 9
Contributor
Imo, totalLogAttributes shouldn't change here if we implement the change for attributes to only be emitted when non-empty. This constant is a capacity hint, and I think it should reflect the guaranteed minimum.
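The capacity-hint point above can be illustrated with a small sketch. This is not the receiver's code: a plain Go map stands in for pdata's attribute map, and pre-sizing works the same way in spirit - the hint only avoids re-allocation, and growing past it is still perfectly valid.

```go
package main

import "fmt"

// totalLogAttributes mirrors the constant discussed above: as a capacity
// hint it should reflect the guaranteed minimum number of attributes,
// not the optimistic maximum, since optional attributes may or may not
// be added on any given record.
const totalLogAttributes = 7

func main() {
	// Pre-size for the attributes every record is guaranteed to carry.
	attrs := make(map[string]string, totalLogAttributes)

	// Optional attributes can still be added beyond the hint; the map
	// simply grows, at the cost of a possible extra allocation.
	attrs["k8s.event.reporting_controller"] = "kubelet"
	attrs["k8s.event.reporting_instance"] = "node-123"
	fmt.Println(len(attrs))
}
```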

@woehrl01 woehrl01 requested a review from jinja2 March 18, 2026 08:01
@github-actions
Contributor

github-actions Bot commented Apr 3, 2026

This PR was marked stale due to lack of activity. It will be closed in 14 days.

@github-actions github-actions Bot added the Stale label Apr 3, 2026
@woehrl01
Contributor Author

woehrl01 commented Apr 3, 2026

/not-stale

@woehrl01
Contributor Author

woehrl01 commented Apr 3, 2026

@jinja2 @dmitryax I'd appreciate it if we could move this forward.

@github-actions github-actions Bot removed the Stale label Apr 4, 2026
@jinja2
Contributor

jinja2 commented Apr 9, 2026

I'm okay with the current approach of emitting the attributes as-is instead of dropping when empty. But I am not a code owner on the component, so this still needs a stamp from code owners.

@woehrl01 woehrl01 requested a review from ChrsMark April 18, 2026 07:58
@woehrl01
Contributor Author

woehrl01 commented Apr 18, 2026

I am writing to express my deep frustration with the lack of progress on this PR. I understand that the maintainers have a lot on their plates, but I have a full schedule as well, and it is incredibly discouraging to see my donated time go to waste like this.

It has been over three months since this was opened, and I have meticulously addressed every piece of feedback provided.

I followed up for clarification on naming conventions over a month ago and received no response, only for the PR to be recently flagged as 'stale' by a bot. In my 10 years of contributing to open source, this has been one of the most discouraging and friction-heavy experiences I’ve encountered. It is incredibly disheartening to put in the work to align this receiver with modern Kubernetes API standards only to be met with weeks of total silence. I would like a clear path to closure here.

Can a maintainer please make a final decision so we can merge this, or should I stop wasting my time on this contribution?

Member

@ChrsMark ChrsMark left a comment
In general, for new attributes we request defining them as Semantic Conventions first as per the project's guidelines:

## Semantic Conventions compatibility

However we don't have any K8s Events attributes yet defined in SemConv, something that should be tackled by open-telemetry/semantic-conventions#2318.

Having said this, I'm fine merging this PR as is and revisiting it with open-telemetry/semantic-conventions#2318 and #45566.

As is obvious, the whole area around K8s Events in OTel right now is in quite an experimental phase, with multiple pieces missing, especially around standardisation. From my perspective, adding new experimental attributes there only increases the maintenance burden and the potential for breaking changes once the standardisation is done. (This is partially the reason for not prioritising this PR from my side.)

@woehrl01
Contributor Author

I appreciate the context and the honesty regarding the prioritization.

Regarding the concern about breaking changes: since OTel K8s Event attributes aren't standardized yet, a breaking change is inevitable once the Semantic Conventions are finalized anyway. Delaying the merge doesn't avoid that future work; it just keeps these useful fields out of users' hands in the meantime.

I’m glad to hear you're fine merging this 'as is' for now. Thank you.

@ChrsMark ChrsMark merged commit 3970ee3 into open-telemetry:main Apr 20, 2026
191 checks passed
@otelbot
Contributor

otelbot Bot commented Apr 20, 2026

Thank you for your contribution @woehrl01! 🎉 We would like to hear from you about your experience contributing to OpenTelemetry by taking a few minutes to fill out this survey. If you are getting started contributing, you can also join the CNCF Slack channel #opentelemetry-new-contributors to ask for guidance and get help.

AndrewCharlesHay pushed a commit to AndrewCharlesHay/opentelemetry-collector-contrib that referenced this pull request Apr 23, 2026
…ting instance data to k8s event data (open-telemetry#45289)

Co-authored-by: Jina Jain <jjain@splunk.com>
Co-authored-by: Christos Markou <chrismarkou92@gmail.com>
gracewehner pushed a commit to gracewehner/opentelemetry-collector-contrib that referenced this pull request Apr 29, 2026
…ting instance data to k8s event data (open-telemetry#45289)


5 participants