Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: thiagomedina. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment.
Codecov Report ❌ Patch coverage is
Additional details and impacted files:

@@            Coverage Diff             @@
##             main   #16419      +/-   ##
==========================================
- Coverage   80.20%   80.18%   -0.02%
==========================================
  Files         217      217
  Lines       13511    13506       -5
==========================================
- Hits        10836    10830       -6
- Misses       2310     2311       +1
  Partials      365      365

☔ View full report in Codecov by Sentry.
Hey @thiagomedina - I've poked at the scale subresource before. One thing I didn't see in my prior investigation was whether you could create informers that return the autoscaling.v1.Scale object. Curious if you've played around with that? Then the autoscaler informers wouldn't need the entire Deployment in memory.
Hey @dprotaso I couldn't find a way to do that. The /scale subresource has no list or watch endpoints, so there's nothing to build an informer on.
dprotaso left a comment
So a few options here:

1. We can simplify the PR to use the /scale resource when updating - this will speed up cold-start by cutting down json serialization.
2. If we want to support the scale subresource we shouldn't be performing an API call to get the current scale - that will slow things down and hit API rate limiting at scale. Since we already have the podscalable informer, this data is already cached in memory. We could try to switch that to use autoscaling.v1.Scale as the storage type. But in order for this to work for all resources that support /scale we probably need to leverage the scale client package (https://pkg.go.dev/k8s.io/client-go/scale) in order to know which field maps to desired & actual replicas etc.

Option 2 is a pretty advanced use of client-go - I haven't spent much time exploring it, but it's probably the path we want to take. We could also, in the short term, just assume the same PodSpecable shape for now and then add support for other, more niche setups in the future.
If you need help let me know and we can figure out how to split the work.
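The hard part of option 2 is knowing, for an arbitrary /scale-enabled resource, which fields hold the desired and actual replica counts - that is what k8s.io/client-go/scale normalizes. A minimal stdlib-only sketch of the underlying idea (the `lookupInt64` helper and the hard-coded paths are illustrative assumptions, not client-go API; the real scale client resolves the mapping per resource):

```go
package main

import "fmt"

// lookupInt64 walks a dotted path through a generic object map, the way
// fields of an unstructured Kubernetes object are addressed.
func lookupInt64(obj map[string]interface{}, path ...string) (int64, bool) {
	cur := obj
	for i, key := range path {
		v, ok := cur[key]
		if !ok {
			return 0, false
		}
		if i == len(path)-1 {
			n, ok := v.(int64)
			return n, ok
		}
		cur, ok = v.(map[string]interface{})
		if !ok {
			return 0, false
		}
	}
	return 0, false
}

func main() {
	// A Deployment-shaped object: desired replicas live at spec.replicas,
	// actual replicas at status.replicas. Other /scale resources may put
	// them elsewhere, which is why a generic client needs a field mapping.
	deploy := map[string]interface{}{
		"spec":   map[string]interface{}{"replicas": int64(3)},
		"status": map[string]interface{}{"replicas": int64(2)},
	}
	desired, _ := lookupInt64(deploy, "spec", "replicas")
	actual, _ := lookupInt64(deploy, "status", "replicas")
	fmt.Println(desired, actual) // 3 2
}
```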
| return err |
| } |
| psNew := ps.DeepCopy() |
Just wanted to point out that by dropping this DeepCopy and json marshalling/unmarshalling we are now speeding up this code by a bit.
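For context, the pattern being removed is roughly "deep-copy the object, round-trip it through JSON, then mutate one field." A stdlib-only sketch of the two paths (the `spec` struct and both function names are illustrative stand-ins, not the actual Knative types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type spec struct {
	Replicas int32  `json:"replicas"`
	Name     string `json:"name"`
}

// viaJSON mimics the removed path: serialize, deserialize into a fresh
// copy, then mutate - extra allocations and CPU in the cold-start path.
func viaJSON(s spec, n int32) spec {
	b, _ := json.Marshal(s)
	var out spec
	_ = json.Unmarshal(b, &out)
	out.Replicas = n
	return out
}

// direct mimics the new path: the struct is copied by value and the
// field is set directly, with no serialization at all.
func direct(s spec, n int32) spec {
	s.Replicas = n
	return s
}

func main() {
	s := spec{Replicas: 1, Name: "activator"}
	fmt.Println(viaJSON(s, 3).Replicas, direct(s, 3).Replicas) // 3 3
}
```

Both produce the same result; the difference is purely the serialization work skipped on every scale decision.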
| if err != nil { |
| return desiredScale, err |
| } |
| scaleObj, err := ks.dynamicClient.Resource(*gvr).Namespace(pa.Namespace).Get(ctx, name, metav1.GetOptions{}, "scale") |
We don't want this extra Get call - it will slow down scaling from zero since it's in the critical path.
For now let's use ps, err := resources.GetScaleResource(pa.Namespace, pa.Spec.ScaleTargetRef, ks.listerFactory)
This still requires the resources to conform to PodScalable.
| currentScale := int32(1) |
| if ps.Spec.Replicas != nil { |
| currentScale = *ps.Spec.Replicas |
| if err := runtime.DefaultUnstructuredConverter.FromUnstructured(scaleObj.UnstructuredContent(), |
Drop this since PodScalable already has the replica count.
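The point of the suggestion above is that once the target is already a typed PodScalable, the current replica count is one nil-guarded field read away, so no unstructured conversion is needed. A hedged sketch with an illustrative stand-in type (`podScalable` here is not the real duck type):

```go
package main

import "fmt"

// podScalable stands in for the typed PodScalable duck shape, where
// Spec.Replicas is a pointer because the field is optional.
type podScalable struct {
	Spec struct {
		Replicas *int32
	}
}

// currentScale returns the replica count, defaulting to 1 when the
// field is unset - mirroring the diff above.
func currentScale(ps *podScalable) int32 {
	if ps.Spec.Replicas != nil {
		return *ps.Spec.Replicas
	}
	return 1
}

func main() {
	var ps podScalable
	fmt.Println(currentScale(&ps)) // 1 (field unset, default applies)
	three := int32(3)
	ps.Spec.Replicas = &three
	fmt.Println(currentScale(&ps)) // 3
}
```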
Fixes #15528
Proposed Changes
Release Note