
chore(deps): update vite (major) - autoclosed #480

Closed
renovate[bot] wants to merge 1 commit into main from renovate/major-vite

Conversation


renovate bot commented Feb 26, 2026

This PR contains the following updates:

Package                     Change
@vitejs/plugin-react-swc    ^3.11.0 → ^4.0.0
vite                        ^7.2.7 → ^8.0.0

Release Notes

vitejs/vite-plugin-react (@​vitejs/plugin-react-swc)

v4.3.0

Add Vite 8 to peerDependencies range #​1142

This plugin is compatible with Vite 8.

v4.2.3

v4.2.2

Update code to support newer rolldown-vite (#​978)

rolldown-vite will remove optimizeDeps.rollupOptions in favor of optimizeDeps.rolldownOptions soon. This plugin now uses optimizeDeps.rolldownOptions to support newer rolldown-vite. Please update rolldown-vite to the latest version if you are using an older version.
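The rename can be sketched as follows. Only the option path comes from the release note above; the plugin sets this internally under rolldown-vite, and the surrounding config shape shown here is an illustrative assumption, not a verified rolldown-vite configuration:

```typescript
// Illustrative sketch of the option-path rename (not a verified config).
export default {
  optimizeDeps: {
    // before (being removed): rollupOptions: { /* ... */ }
    rolldownOptions: {
      /* transform options, now resolved by the plugin under this key */
    },
  },
}
```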

v4.2.1

Fix @vitejs/plugin-react-swc/preamble on build (#​962)

v4.2.0

Add @vitejs/plugin-react-swc/preamble virtual module for SSR HMR (#​890)

SSR applications can now initialize HMR runtime by importing @vitejs/plugin-react-swc/preamble at the top of their client entry instead of manually calling transformIndexHtml. This simplifies SSR setup for applications that don't use the transformIndexHtml API.
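A minimal client-entry sketch of this setup. Only the preamble import path is taken from the release note above; the file name, App component, and hydrateRoot call are illustrative assumptions:

```typescript
// Hypothetical SSR client entry (e.g. src/entry-client.tsx).
// The preamble import must run before any React code so the HMR
// runtime is initialized first.
import '@vitejs/plugin-react-swc/preamble'

import { hydrateRoot } from 'react-dom/client'
import App from './App' // assumed application root component

hydrateRoot(document.getElementById('root')!, <App />)
```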

Use SWC when useAtYourOwnRisk_mutateSwcOptions is provided (#​951)

Previously, this plugin did not use SWC when plugins was not provided, even if useAtYourOwnRisk_mutateSwcOptions was. This is now fixed.

v4.1.0

Set SWC cacheRoot options

This is set to {viteCacheDir}/swc and overrides the default of .swc.

Perf: simplify refresh wrapper generation (#​835)

v4.0.1

Set optimizeDeps.rollupOptions.transform.jsx instead of optimizeDeps.rollupOptions.jsx for rolldown-vite (#​735)

optimizeDeps.rollupOptions.jsx is going to be deprecated in favor of optimizeDeps.rollupOptions.transform.jsx.
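In vite.config terms the move looks roughly like this. The option path is from the release note; the inner jsx value is a placeholder assumption, and the plugin applies this setting for you:

```typescript
// Sketch of the deprecation described above -- not a verified config.
export default {
  optimizeDeps: {
    rollupOptions: {
      // old location (deprecated): jsx: { /* ... */ }
      transform: {
        jsx: { /* jsx transform options */ },
      },
    },
  },
}
```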

v4.0.0

vitejs/vite (vite)

v8.0.0

Features
Bug Fixes

Configuration

📅 Schedule: Branch creation - "before 10am on friday" in timezone Europe/London, Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

renovate bot added the dependencies (Renovatebot and dependabot updates), frontend, and javascript (Pull requests that update javascript code) labels on Feb 26, 2026
renovate bot force-pushed the renovate/major-vite branch 2 times, most recently from 44180ee to 7c87580 on March 12, 2026 at 14:34
renovate bot changed the title from "chore(deps): update dependency @vitejs/plugin-react-swc to v4" to "chore(deps): update vite (major)" on Mar 12, 2026

renovate bot commented Mar 12, 2026

⚠️ Artifact update problem

Renovate failed to update an artifact related to this branch. You probably do not want to merge this PR as-is.

♻ Renovate will retry this branch, including artifacts, only when one of the following happens:

  • any of the package files in this branch needs updating, or
  • the branch becomes conflicted, or
  • you click the rebase/retry checkbox if found above, or
  • you rename this PR's title to start with "rebase!" to trigger it manually

The artifact failure details are included below:

File name: modules/api-server/demo-app/package-lock.json
npm warn Unknown env config "store". This will stop working in the next major version of npm.
npm error code ERESOLVE
npm error ERESOLVE could not resolve
npm error
npm error While resolving: lovable-tagger@1.1.13
npm error Found: vite@8.0.2
npm error node_modules/vite
npm error   dev vite@"^8.0.0" from the root project
npm error   peer vite@"^4 || ^5 || ^6 || ^7 || ^8" from @vitejs/plugin-react-swc@4.3.0
npm error   node_modules/@vitejs/plugin-react-swc
npm error     dev @vitejs/plugin-react-swc@"^4.0.0" from the root project
npm error
npm error Could not resolve dependency:
npm error peer vite@">=5.0.0 <8.0.0" from lovable-tagger@1.1.13
npm error node_modules/lovable-tagger
npm error   dev lovable-tagger@"^1.1.13" from the root project
npm error
npm error Conflicting peer dependency: vite@7.3.1
npm error node_modules/vite
npm error   peer vite@">=5.0.0 <8.0.0" from lovable-tagger@1.1.13
npm error   node_modules/lovable-tagger
npm error     dev lovable-tagger@"^1.1.13" from the root project
npm error
npm error Fix the upstream dependency conflict, or retry
npm error this command with --force or --legacy-peer-deps
npm error to accept an incorrect (and potentially broken) dependency resolution.
npm error
npm error
npm error For a full report see:
npm error /runner/cache/others/npm/_logs/2026-03-24T22_27_13_817Z-eresolve-report.txt
npm error A complete log of this run can be found in: /runner/cache/others/npm/_logs/2026-03-24T22_27_13_817Z-debug-0.log
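Until lovable-tagger publishes a release whose peer range admits Vite 8, one hedged local workaround (beyond the --force / --legacy-peer-deps flags the log itself suggests) is an npm overrides entry in package.json that pins the nested peer to the root version. This is a sketch, not something taken from the PR:

```json
{
  "devDependencies": {
    "@vitejs/plugin-react-swc": "^4.0.0",
    "lovable-tagger": "^1.1.13",
    "vite": "^8.0.0"
  },
  "overrides": {
    "lovable-tagger": {
      "vite": "$vite"
    }
  }
}
```

Note that overriding a peer range the package author deliberately capped (>=5.0.0 <8.0.0) can still break at runtime; the safer path is waiting for, or contributing, a lovable-tagger release that supports Vite 8.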

github-actions bot commented Mar 12, 2026

Open in Overmind ↗


✨ Encryption Key State Risk · ✨ KMS Key Creation

🔴 Change Signals

Routine 🔴 Multiple infrastructure resources show unusual change activity of only 1-2 events/week over the last 2-3 months, which is infrequent compared to typical patterns.
Policies 🔴 Multiple infrastructure resources show policy violations that may need review: the S3 bucket does not have server-side encryption configured and is missing required tags, and the security group allows SSH (port 22) access from anywhere (0.0.0.0/0).

View signals ↗


🔥 Risks

Simultaneous public IP churn on both production nodes can break direct consumers while health-check alerting has no confirmed recipient (‼️ High, Open Risk ↗)
Both production EC2 instances are changing their instance-level public IPs and public DNS names in the same apply, and neither instance has an Elastic IP attached. That means the current direct endpoints 18.175.147.19 / ec2-18-175-147-19.eu-west-2.compute.amazonaws.com and 35.179.137.86 / ec2-35-179-137-86.eu-west-2.compute.amazonaws.com will be replaced together, so any unmanaged consumers that talk to the nodes directly instead of using the ALB will lose connectivity until their allowlists, scripts, caches, or health checks are updated.

At the same time, the only production health-check alarm I found publishes to production-api-alerts, and that SNS topic currently has 0 confirmed subscriptions and 1 pending subscription. The new email subscription in this change uses endpoint_auto_confirms=false, so it will stay pending until someone manually confirms it. If the public-address cutover breaks direct consumers, the team will have a monitoring blind spot during the highest-risk part of the rollout.


🧠 Reasoning · ✔ 1 · ✖ 3

Fleet-wide public IP/DNS churn with unmanaged consumers and alerting gap

Observations 3

Hypothesis

Both production EC2 instances in eu-west-2 are changing public DNS names and public IPs in the same Terraform apply. Any external consumers that rely on the current instance-level addresses—such as downstream clients, firewall or API allowlists, health checks, admin scripts, or sticky clients—will stop reaching the service once new addresses are allocated, because Terraform does not model these out-of-band dependencies. Performing fleet-wide address churn simultaneously removes any stable node that could serve as a fallback while caches and allowlists are updated, so an active/active or active/standby pair can effectively behave like a single point of failure during cutover. In parallel, the new email-based SNS subscription to production-api-alerts will remain in a pending state until manually confirmed, creating an alerting blind spot: if connectivity or availability issues occur during the address cutover, notifications may not reach on-call quickly enough, weakening detection and response when the risk is highest.

Investigation

Evidence Gathered

I first checked the relevant organizational guidance. The aws-high-availability knowledge file explicitly flags applying changes to all instances simultaneously without gradual rollout as a reliability risk, and the aws-monitoring-detection guidance flags alarms without effective notification targets as a monitoring risk. I also checked infrastructure-quick-reference and security-compliance-requirements; those confirmed this environment contains some intentionally simplified infrastructure, but nothing that would make simultaneous public-address churn or unconfirmed SNS email subscriptions safe.

I then examined the planned diffs. Both production EC2 instances, 540044833068.eu-west-2.ec2-instance.i-0464c4413cb0c54aa and 540044833068.eu-west-2.ec2-instance.i-09d6479fb9b97d123, are changing public_ip and public_dns from concrete current values to (known after apply), which is strong evidence that each instance will be assigned a new public address. I queried both instances in the blast radius and confirmed they currently do have public internet identities: 18.175.147.19 / ec2-18-175-147-19.eu-west-2.compute.amazonaws.com and 35.179.137.86 / ec2-35-179-137-86.eu-west-2.compute.amazonaws.com. Neither instance has an Elastic IP attached; the only Elastic IPs in this blast radius belong to the ALB, not the instances. AWS documentation states that when an EC2 instance receives a new public IPv4 address, its public DNS name changes with it, and that public IP/DNS changes on stop/start unless an Elastic IP is used. (docs.aws.amazon.com)

I also checked what stable ingress exists today. There is an internet-facing ALB, 540044833068.eu-west-2.elbv2-load-balancer.api-207c90ee-alb, with two static public Elastic IPs (13.43.201.81 and 16.60.242.196) and a listener forwarding to target group api-207c90ee-tg. That ALB provides a stable entry point for traffic sent to the load balancer DNS name, but it does not eliminate risk for unmanaged consumers that connect directly to the instance public IPs or instance public DNS names. The blast radius itself includes global DNS/IP objects for those instance-level addresses, which is additional evidence that those identities are externally visible and modeled as distinct endpoints.

For alerting, I checked the current SNS topic and the planned subscription. The existing topic arn:aws:sns:eu-west-2:540044833068:production-api-alerts already shows SubscriptionsConfirmed: "0" and SubscriptionsPending: "1", meaning there is currently no confirmed subscriber on that topic. The new Terraform resource is an aws_sns_topic_subscription using protocol: email, endpoint: alerts@example.com, endpoint_auto_confirms: false, and confirmation_timeout_in_minutes: 1. AWS SNS documentation says email subscriptions must be confirmed before they receive notifications and remain in PendingConfirmation until the recipient confirms them; Terraform provider documentation also notes email subscriptions are not auto-confirming and are problematic because they do not produce a usable ARN until validated. (docs.aws.amazon.com)

I also verified the monitoring path. The CloudWatch alarm production-api-health-check-failed is configured to publish to arn:aws:sns:eu-west-2:540044833068:production-api-alerts. Both Route53 health checks related to this workload are currently in HEALTH_ERROR, so this alarm path matters operationally right now. However, because the SNS topic has no confirmed subscriptions, notifications sent to that topic are unlikely to reach an on-call human. That is a real alerting gap, not just a hypothetical one.

Impact Assessment

There are 2 directly affected production EC2 instances: i-0464c4413cb0c54aa and i-09d6479fb9b97d123. Both are in eu-west-2a, and both are having their instance-level public IP/DNS identities churned in the same apply. One of them (i-09d6479fb9b97d123) is confirmed as an ALB target and currently healthy; the other still has a production tag set and a public endpoint, so both are externally reachable nodes whose current public identities may have consumers outside Terraform.

The downstream impact splits into two paths. For consumers using the ALB DNS name, the stable ALB Elastic IPs mean they should continue to have a stable entry point. But for any direct consumers using 18.175.147.19, 35.179.137.86, or the corresponding ec2-*.compute.amazonaws.com names, both addresses will change at once, leaving 0 unchanged instance-level endpoints during cutover. That can break external allowlists, health checks, scripts, sticky clients, or manual operational access until those consumers are updated. In parallel, the alerting path for the production health-check alarm points to an SNS topic with 0 confirmed subscriptions, and the newly created email subscription will also remain pending until someone confirms it. So if the address churn causes a reachability incident for those unmanaged direct consumers, detection and human response are weakened exactly during the change window.

The scope of disruption is production-only but real: direct instance-level access to both public nodes can break simultaneously, and the single documented alarm route for the production health check currently has no confirmed delivery target. This does not prove total public service outage through the ALB, but it does prove a combined availability-and-detection risk affecting the production API environment.

Conclusion

I conclude the risk is real. The key evidence is that both production instances are losing their current public IP/DNS identities in the same apply with no instance-level stable address, while the SNS topic used by the production health-check alarm has 0 confirmed subscriptions and the new email subscription requires manual confirmation before it can receive alerts.

✔ Hypothesis proven


EC2 instance lifecycle changes risking EBS data loss and stale DNS/IP mappings

Observations 2

Hypothesis

Updating EC2 instance i-0464c4413cb0c54aa can change how the attached EBS volume vol-0a61278f4602fc12b is treated on lifecycle events (e.g., instance replacement, termination behavior changes), risking data loss if the volume is configured with delete-on-termination and is not backed up or detached properly. The same instance update can also implicitly affect the private IP / ENI used by DNS naming (e.g., global.dns.ip-10-0-101-133... pointing to 10.0.101.133). If the primary ENI or private IP is reassigned or replaced, DNS A records tied to the old IP may become stale, causing traffic to be routed to an unused or incorrect address and breaking services that depend on that internal name. These risks relate to coupling of compute lifecycle with persistent storage and IP/DNS identity without safeguards (snapshots, static ENIs, updated DNS automation).

Investigation

Evidence Gathered

I first checked the relevant organizational knowledge for compute, storage, availability, infrastructure notes, multi-region architecture, and security requirements. The storage guidance says EBS volumes with critical data should have automated snapshots, and the security guidance says EBS volumes should be encrypted. Those are real baseline concerns in this environment, but they only become change risk if this plan alters volume lifecycle, attachment, or encryption.

I then examined the planned diffs for all resources in this change. For 540044833068.eu-west-2.ec2-instance.i-0464c4413cb0c54aa, the only concrete change shown is public_dns and public_ip becoming (known after apply). The same pattern appears on 540044833068.eu-west-2.ec2-instance.i-09d6479fb9b97d123. There is no diff showing instance replacement, no change to private_ip, no change to block device mappings, no change to root volume attributes, no change to ENI attachment, and no change to any Route 53 or DNS resource tied to the private address. I also queried the current state of the instance, its root volume vol-0a61278f4602fc12b, the primary ENI eni-069a58a392f35dce3, and the internal DNS record global.dns.ip-10-0-101-133.eu-west-2.compute.internal.

That current-state evidence shows the root EBS volume is attached as /dev/xvda with DeleteOnTermination: true, but it is simply the instance's root volume and the plan does not modify that attachment or any lifecycle setting. The ENI is the primary interface for the instance and currently carries private IP 10.0.101.133; the internal DNS name is the standard AWS private hostname that resolves to that same private IP with TTL 55. The instance also has an auto-assigned public IP 18.175.147.19, which AWS documents as changeable on stop/start, while attached ENIs and their private IPs persist across stop/start for EBS-backed instances. AWS also documents that the private IPv4 DNS hostname resolves to the instance's private IPv4 address, i.e. it tracks the private IP, not the ephemeral public IP. The only thing this plan clearly leaves unresolved until apply is the auto-assigned public address, which is expected for EC2 and not evidence of replacement or private-IP churn. (docs.aws.amazon.com)

Impact Assessment

The hypothesis names two concern areas: EBS data loss on vol-0a61278f4602fc12b and stale DNS/IP mapping for global.dns.ip-10-0-101-133.eu-west-2.compute.internal. After investigation, I found no planned change that would affect either concern area directly. The directly affected planned resources are 2 EC2 instances and 1 SNS subscription resource, but for this instance the visible diff does not touch storage lifecycle, private addressing, ENI identity, or DNS automation.

If this plan were replacing the instance or altering block device settings, the blast radius could include the root volume and any services depending on the instance's private name. But that evidence is absent here. The only concrete attribute becoming unknown-after-apply is the public IP/public DNS, which affects internet-reachable identity, not the internal private hostname cited in the hypothesis. The internal DNS record ip-10-0-101-133.eu-west-2.compute.internal maps to the private IP on the attached primary ENI, and there is no sign in the diff that the private IP 10.0.101.133 or ENI eni-069a58a392f35dce3 is changing. So the scope of disruption from the hypothesized mechanism is effectively zero based on this plan.

There are separate environmental issues worth noting but not attributable to this change: the root volume is unencrypted, the instance has a public IP despite security guidance preferring private-only instances, and the Route 53 health checks queried are already failing. None of those conditions are introduced by this change, so they do not make the hypothesis true.

Conclusion

I conclude the risk is not real for this change. The key evidence is that the plan only makes the instance's ephemeral public_ip and public_dns unknown until apply; it does not change the root EBS lifecycle, the attached ENI, the private IP 10.0.101.133, or the internal DNS record that the hypothesis says would become stale.

✖ Hypothesis disproven


Overly permissive security group with unstable ENI/instance attachment during EC2 updates

Observations 4

Hypothesis

Security group sg-0437857de45b640ce is overly permissive, allowing ingress from 0.0.0.0/0 on SSH (22) and HTTP (80) and egress to 0.0.0.0/0 on all protocols. It is attached via ENIs to EC2 instance i-0464c4413cb0c54aa and also affects instance i-060c5af731ee54cc9. Updating i-0464c4413cb0c54aa can involve ENI reattachment or instance replacement, which may change which ENIs and instances are associated with this security group. Such lifecycle changes can silently alter which resources are exposed to the internet on SSH/HTTP or modify segmentation boundaries without review, violating least-privilege and network hardening guidance (e.g., SEC05-BP02, SEC06). Risk includes unintended exposure of additional instances or loss of expected protections if security group mappings shift.

Investigation

Evidence Gathered

I first checked the organization’s security guidance because this hypothesis is about security groups and internet exposure. The internal knowledge clearly says EC2 instances must not be directly reachable from the internet, SSH must never be open to 0.0.0.0/0, and a shared permissive security group increases blast radius. I then queried the current state of sg-0437857de45b640ce, its rules, the attached instances i-0464c4413cb0c54aa and i-060c5af731ee54cc9, and their ENIs. The security group is indeed permissive today: ingress from 0.0.0.0/0 on ports 22 and 80, and egress to 0.0.0.0/0 on all protocols. It is currently attached to exactly two ENIs: eni-069a58a392f35dce3 on i-0464c4413cb0c54aa and eni-06f63df9fa5b5a639 on i-060c5af731ee54cc9.

I then examined the planned changes for the EC2 instances in this change. For both i-0464c4413cb0c54aa and i-09d6479fb9b97d123, the only diff is public_ip and public_dns becoming (known after apply). There is no diff changing vpc_security_group_ids, subnet, ENI attachments, primary network interface, or any security group resource or rule. I also checked AWS documentation: stop/start or similar EC2 updates can cause an auto-assigned public IPv4 address to change, while attached network interfaces persist and are reattached; security groups are attached to ENIs, and the evidence here shows no ENI or SG association change is planned. AWS docs also note that auto-assigned public IP behavior depends on subnet settings and launch configuration, which explains why Terraform may show a recomputed public IP without implying a security group remap. (docs.aws.amazon.com)

Impact Assessment

There is a real existing security issue in the environment today: 2 EC2 instances are attached to the shared permissive security group sg-0437857de45b640ce. Of those, i-0464c4413cb0c54aa currently has a public IP (18.175.147.19) and is therefore directly internet-reachable on 22 and 80, which violates internal policy. i-060c5af731ee54cc9 shares the same permissive group but currently has no public IP, so it is not directly exposed from the internet at present. However, that is the current-state posture, not something introduced by this Terraform change.

For this specific change, the blast radius of the hypothesized failure mechanism is not supported. The change affects 2 instances (i-0464c4413cb0c54aa and i-09d6479fb9b97d123), but only as a recomputation of ephemeral public IP/DNS values. There is no evidence that additional ENIs will attach to sg-0437857de45b640ce, that i-060c5af731ee54cc9 will gain a public interface, or that segmentation boundaries will change. The one downstream service I checked, the ALB target group api-207c90ee-tg, is targeting i-09d6479fb9b97d123, not either instance attached to sg-0437857de45b640ce, so the shared SG concern is also not propagating into that application path.

Conclusion

I conclude the hypothesized risk is not real for this change. The security group is unquestionably over-permissive and non-compliant today, but the planned diff does not modify security groups, ENIs, or attachments; it only causes EC2 public IP/DNS values to be recomputed, which AWS documents as normal behavior for auto-assigned public IPs rather than evidence of a security-group remap.

✖ Hypothesis disproven


ALB and Route53 health/availability risk from single-instance EC2 update

Observations 10

Hypothesis

EC2 instance i-09d6479fb9b97d123 is a target in an internet-facing ALB target group backing api-207c90ee-alb and is also used as a direct IP endpoint for a Route53 HTTPS health check (/health on port 443 against addresses such as 44.207.52.17 and 13.134.236.98). Updating this instance (software, networking, security groups, or instance replacement) can cause it to fail ALB health checks or become unreachable, leading to unhealthy targets, potential deregistration from the target group, CloudWatch alarm api-207c90ee-unhealthy-targets firing, and reduced availability of the public API. The same lifecycle changes can also cause Route53 health checks to fail more frequently, degrading DNS-based failover and monitoring. Concentrating traffic and health checks on a single instance-backed public endpoint, sometimes referenced directly by IP instead of via stable DNS/LB abstractions, increases the chance that an EC2 update or network/security-group change will cause a visible outage or misrouted traffic, and may violate guidance that public-facing services be fronted by managed endpoints (ALB/CloudFront) with proper network segmentation and redundancy.

Investigation

Evidence Gathered

I first checked the relevant organizational guidance. The aws-high-availability knowledge file says production workloads should avoid single points of failure and specifically flags ELBs with targets in only one AZ as a reliability risk. The aws-network-security and security-compliance-requirements files also reinforce that direct public EC2 exposure is an anti-pattern in production, but those documents are about architectural posture, not necessarily a change-induced failure from this plan.

I then inspected the current state of the changed instance 540044833068.eu-west-2.ec2-instance.i-09d6479fb9b97d123. It is a running EC2 instance in eu-west-2a, with public IP 35.179.137.86, private IP 10.0.101.11, and security group sg-0b35287bf0a8a338c. I checked the ALB target group api-207c90ee-tg and its target health record. The target group uses HTTP health checks on /health over port 80, and the specific target registration for i-09d6479fb9b97d123 is currently healthy. I also checked the ALB api-207c90ee-alb, which is internet-facing and spans two subnets / AZs (eu-west-2a and eu-west-2b).

To test the hypothesis about direct IP health checks, I inspected both Route 53 health checks. They do not point to this instance’s current public IP 35.179.137.86, and they do not point to the ALB’s current public EIPs 13.43.201.81 and 16.60.242.196 either. Instead, they are configured against fixed IPs 44.207.52.17 and 13.134.236.98 on HTTPS /health port 443, and both health checks are already in HEALTH_ERROR with repeated timeout observations from multiple Route 53 checker regions. I also checked the ALB EIPs and ENIs to confirm the currently associated public addresses are 13.43.201.81 and 16.60.242.196, attached to ALB-managed network interfaces, not to the EC2 instance.

I examined the plan diff for the changed instance and the only visible change is public_dns and public_ip becoming (known after apply). I also queried the other changed EC2 instance i-0464c4413cb0c54aa and found the same pattern there: only computed public_dns/public_ip fields changing to (known after apply). I did not find any planned change to the target group, ALB, Route 53 health checks, security groups, ENIs, or Elastic IPs. I also checked AWS and Terraform documentation. Terraform documentation shows that EC2 public_ip / public_dns often become (known after apply) as computed attributes during instance updates or replacement planning, and this alone does not prove a disruptive networking change. AWS Route 53 documentation confirms that health checks explicitly probe the configured IP/port/path, and AWS recommends using an Elastic IP if you want stable IP monitoring for an EC2 endpoint. AWS documentation also confirms Route 53 health checks fail if the configured endpoint is not reachable from health checker IP ranges.

Impact Assessment

The hypothesis’s concern area is public API availability through the ALB and Route 53 monitoring/failover. Today, the directly affected serving path visible in the blast radius is 1 EC2 instance target: i-09d6479fb9b97d123, registered as a healthy target in api-207c90ee-tg. If that instance were actually replaced or made unhealthy, the ALB would lose 1 healthy target and the api-207c90ee-unhealthy-targets alarm could fire. That would matter because the target group snapshot only shows this single registered healthy target, so there is little redundancy behind the ALB.

However, this specific change does not provide evidence that such a disruption will occur. The plan does not show any concrete modification to the instance’s networking, security groups, ALB registration, listener, or target group health check behavior. The only diffed fields are computed public addressing attributes becoming unknown until apply, which is normal Terraform behavior and explicitly not enough on its own to infer an outage. On the Route 53 side, the blast radius shows 2 health checks, both already failing against unrelated IPs (44.207.52.17 and 13.134.236.98). Because those health checks are already unhealthy and are not configured to check this EC2 instance’s current public IP or the ALB’s current EIPs, this EC2 update is not the cause of their degraded state and is not what will newly break DNS failover.

Conclusion

I conclude the risk is not real for this specific change. The infrastructure does have an underlying architecture concern — a production ALB apparently backed by only one healthy EC2 target, plus Route 53 health checks already pointed at failing direct IP endpoints — but the investigated plan does not contain the concrete networking, security-group, target-group, or health-check changes needed to make this EC2 update a demonstrated availability risk.

✖ Hypothesis disproven


💥 Blast Radius

Items 47

Edges 114

github-actions bot left a comment

Overmind

✅ Auto-Approved


🟢 Decision

Auto-approved: All safety checks passed


🔥 Risks Summary

High 0 · Medium 0 · Low 0


View full analysis in Overmind ↗

renovate bot force-pushed the renovate/major-vite branch from 7c87580 to 2393de4 on March 19, 2026 at 04:56
github-actions bot left a comment

Overmind

⛔ Auto-Blocked


🔴 Decision

Auto-blocked: Policy signal (-3) is below threshold (-2)


📊 Signals Summary

Policies 🔴 -3


🔥 Risks Summary

High 0 · Medium 0 · Low 0


View full analysis in Overmind ↗

renovate bot force-pushed the renovate/major-vite branch from 2393de4 to 4d03e0f on March 23, 2026 at 13:52
github-actions bot left a comment

Overmind

⛔ Auto-Blocked


🔴 Decision

Auto-blocked: Policy signal (-3) is below threshold (-2); Routine score (-5) is below minimum (-1)


📊 Signals Summary

Routine 🔴 -5

Policies 🔴 -3


🔥 Risks Summary

High 0 · Medium 0 · Low 0


💥 Blast Radius

Items 1 · Edges 0


View full analysis in Overmind ↗

renovate bot force-pushed the renovate/major-vite branch from 4d03e0f to a19b04d on March 24, 2026 at 22:27
github-actions bot left a comment

Overmind

⛔ Auto-Blocked


🔴 Decision

Found 1 high risk requiring review


📊 Signals Summary

Routine 🔴 -5

Policies 🔴 -3


🔥 Risks Summary

High 1 · Medium 0 · Low 0


💥 Blast Radius

Items 47 · Edges 114


View full analysis in Overmind ↗

renovate bot changed the title from "chore(deps): update vite (major)" to "chore(deps): update vite (major) - autoclosed" on Mar 24, 2026
renovate bot closed this Mar 24, 2026
renovate bot deleted the renovate/major-vite branch on March 24, 2026 at 22:36