
fix: add .npmrc with legacy-peer-deps for eslint compatibility#509

Merged
dylanratcliffe merged 1 commit into main from fix/npmrc-legacy-peer-deps on Mar 24, 2026

Conversation

@dylanratcliffe
Member

Summary

  • Adds .npmrc with legacy-peer-deps=true to the demo app
  • eslint-plugin-react-hooks@7 doesn't yet declare eslint@10 as a supported peer dependency, which causes npm install to fail without --legacy-peer-deps
  • This is what's been causing the renovate/artifacts failure on every Renovate PR (including chore(deps): lock file maintenance #499)

Once this merges, Renovate should be able to regenerate lock files successfully and #499 can be rebased and merged.

Test plan

  • Verified npm install succeeds in demo-app with this .npmrc
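For reference, the .npmrc added by this PR is a one-line config file, as described in the summary (the demo-app/ path is taken from the test plan above):

```ini
# demo-app/.npmrc
# Fall back to npm v6-style peer-dependency resolution, so install
# succeeds even though eslint-plugin-react-hooks@7 does not yet
# declare eslint@10 as a supported peer.
legacy-peer-deps=true
```

The same effect can be had per-invocation with npm install --legacy-peer-deps; committing the .npmrc makes the behavior apply to Renovate's lock-file regeneration as well.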

Made with Cursor

eslint-plugin-react-hooks@7 doesn't yet declare eslint@10 as a
supported peer, which causes npm install to fail without
--legacy-peer-deps. This unblocks Renovate lock file maintenance.

Made-with: Cursor
@dylanratcliffe dylanratcliffe merged commit 9aebd55 into main Mar 24, 2026
4 of 5 checks passed
@dylanratcliffe dylanratcliffe deleted the fix/npmrc-legacy-peer-deps branch March 24, 2026 22:47
@github-actions

Open in Overmind ↗


✨ Encryption Key State Risk · ✨ KMS Key Creation

🔴 Change Signals

Routine 🔴 ▇▅▃▂▁ Multiple compute resources are showing unusual, infrequent weekly changes (about 1 event/week over the last 2-3 months), which is rare compared to typical patterns.

View signals ↗


🔥 Risks

Shared security group leaves the production EC2 api-server publicly reachable on SSH and HTTP ‼️ High · Open Risk ↗
The change updates 540044833068.eu-west-2.ec2-instance.i-0464c4413cb0c54aa, but it does not change the shared security group sg-0437857de45b640ce that is attached to that instance and to 540044833068.eu-west-2.ec2-instance.i-060c5af731ee54cc9. Current state shows that security group still allows inbound tcp/22 and tcp/80 from 0.0.0.0/0, and the api-server instance i-0464c4413cb0c54aa still has a public IP and public DNS name. Under the organization’s security requirements, that combination is critical severity, and it violates SEC06-BP03 and SEC05-BP02.

Because this plan only refreshes the instance’s public IP/DNS fields and leaves the permissive ingress rules intact, the exposure will persist after apply. The production api-server will remain directly reachable from the internet on SSH and HTTP, and the shared security group will continue to extend that same exposure pattern to any other instance that uses it now or in future lifecycle changes.


🧠 Reasoning · ✔ 1 · ✖ 2

Public SSH/HTTP exposure via EC2 security groups

Observations 4

Hypothesis

Multiple EC2-related resources are configured with public SSH (port 22) access from 0.0.0.0/0, creating a high-severity attack surface and violating least-privilege and compute hardening best practices (aws-compute-configuration SEC06-BP03, aws-network-security SEC05-BP02). EC2 instance i-0464c4413cb0c54aa has a security group permitting SSH from the entire internet, and shared security group sg-0437857de45b640ce allows SSH (22) and HTTP (80) from 0.0.0.0/0. Instance lifecycle actions (replacement, reattachment, subnet/ENI changes) will not remediate this exposure; the risk persists as long as these security group rules remain. Public SSH should be replaced with Session Manager or tightly restricted CIDR ranges, and HTTP should be limited to intended client networks or fronted by controlled ingress (e.g., ALB/WAF).

Investigation

Evidence Gathered

I first checked the organization’s security and architecture guidance because the hypothesis concerns EC2 hardening and network exposure. The relevant knowledge files explicitly say EC2 instances must not be directly reachable from the internet, SSH on port 22 must never be open to 0.0.0.0/0, and that an instance with both a public IP and an open security group is critical severity. The AWS compute and network guidance also classifies SSH from 0.0.0.0/0 as a High risk under SEC06-BP03 and SEC05-BP02.

I then queried the current blast-radius state for the affected EC2 instances, security group, security-group rules, and ENIs. That confirmed sg-0437857de45b640ce (internet-access) currently has inbound tcp/22 from 0.0.0.0/0 and inbound tcp/80 from 0.0.0.0/0. The same shared security group is attached to two running instances: i-0464c4413cb0c54aa (api-server) and i-060c5af731ee54cc9 (data-processor). Of those, i-0464c4413cb0c54aa currently has a public IP 18.175.147.19 and public DNS name ec2-18-175-147-19.eu-west-2.compute.amazonaws.com, while i-060c5af731ee54cc9 is private-only. I also checked the planned changes for both changed instances. The only change shown is that public_dns and public_ip become (known after apply); there is no change to the attached security group or to any of its ingress rules, so the broad exposure is not being remediated by this plan.

To verify the semantics, I checked AWS documentation. AWS EC2 documentation says that if you authorize port 22, you should authorize only the specific IPs or ranges that need access, not Anywhere-IPv4. AWS Config’s restricted-ssh rule marks a security group compliant only when incoming SSH is restricted to CIDRs other than 0.0.0.0/0 or ::/0. AWS Security Hub also defines controls stating EC2 instances should not have a public IPv4 address and security groups should not allow ingress from 0.0.0.0/0 to port 22.
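The intent of AWS Config's restricted-ssh rule described above can be sketched as a small predicate over ingress rules. This is a simplified illustration, not AWS Config's actual implementation; the rule dictionaries and their field names are hypothetical, shaped after the rules observed on sg-0437857de45b640ce:

```python
def allows_public_ssh(ingress_rules):
    """Return True if any ingress rule opens tcp/22 to the whole internet.

    Mirrors the intent of AWS Config's restricted-ssh check: a security
    group is non-compliant when SSH is reachable from 0.0.0.0/0 or ::/0.
    """
    for rule in ingress_rules:
        # "-1" is AWS shorthand for "all protocols" (implies all ports).
        if rule.get("protocol") not in ("tcp", "-1"):
            continue
        from_port = rule.get("from_port", 0)
        to_port = rule.get("to_port", 65535)
        if not (from_port <= 22 <= to_port):
            continue
        if rule.get("cidr") in ("0.0.0.0/0", "::/0"):
            return True
    return False

# The two rules this analysis observed on the shared security group:
sg_rules = [
    {"protocol": "tcp", "from_port": 22, "to_port": 22, "cidr": "0.0.0.0/0"},
    {"protocol": "tcp", "from_port": 80, "to_port": 80, "cidr": "0.0.0.0/0"},
]
print(allows_public_ssh(sg_rules))  # True: SSH is open to the internet
```

Restricting the SSH rule's CIDR to a specific range (or removing it in favor of Session Manager) would make this predicate return False.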

Impact Assessment

The directly affected resources are 3 network-control resources and 2 compute resources: the shared security group 540044833068.eu-west-2.ec2-security-group.sg-0437857de45b640ce, its ingress rules 540044833068.eu-west-2.ec2-security-group-rule.sgr-069fb02dd388e8991 (SSH) and 540044833068.eu-west-2.ec2-security-group-rule.sgr-0af27f51db289ce19 (HTTP), plus the two attached instances 540044833068.eu-west-2.ec2-instance.i-0464c4413cb0c54aa and 540044833068.eu-west-2.ec2-instance.i-060c5af731ee54cc9.

The highest-severity impact is on exactly 1 internet-reachable instance today: the api-server instance i-0464c4413cb0c54aa. Its ENI eni-069a58a392f35dce3 has public IP 18.175.147.19, and the attached shared security group permits inbound SSH and HTTP from the entire internet. That means this host is directly reachable on ports 22 and 80 from anywhere, creating a live remote attack surface rather than a hypothetical future one. The second attached instance, data-processor (i-060c5af731ee54cc9), does not currently have a public IP, so it is not directly reachable from the public internet right now; however, the same permissive shared security group expands blast radius because any future public exposure, subnet move, ENI reassociation, or other routing change would inherit the same open ingress immediately.

Operationally, this is not an apply-time failure; it is a persistent security exposure. The current Terraform change only refreshes or replaces public-IP-related computed values on the EC2 instances and leaves the shared security-group rules intact. Because the permissive rules remain attached to the same shared group, the attack surface survives this change unchanged. The scope is production, as the security group and api-server instance are tagged Environment=production, and the exposed endpoint is a directly addressable EC2 public DNS name rather than a controlled ingress layer such as an ALB/WAF.

Conclusion

I conclude the risk is real. The key evidence is that the change does not modify sg-0437857de45b640ce, while blast-radius data confirms that security group still allows 0.0.0.0/0 on SSH and HTTP and remains attached to a production EC2 instance that currently has a public IP, leaving a directly exploitable internet-facing path in place.

✔ Hypothesis proven


Instance lifecycle changes risking ENI, IP, and EBS-backed data

Observations 2

Hypothesis

Planned updates to EC2 instance i-0464c4413cb0c54aa, with an attached ENI eni-069a58a392f35dce3 and EBS volume vol-0a61278f4602fc12b (DeleteOnTermination=true), may trigger replacement or termination flows that change public_ip/public_dns, alter ENI attachment, or delete the data volume. This can break consumers that rely on the existing instance IP/DNS or ENI placement (anti-pattern of treating instance public IPs as stable endpoints) and can cause permanent data loss if the volume is terminated with the instance. It is necessary to confirm whether the instance remains in the same subnet/AZ with the same ENI attachment, ensure that any DNS A records or client configurations are updated atomically, and ensure that data volumes needing persistence have DeleteOnTermination disabled or are backed up/snapshotted.

Investigation

Evidence Gathered

I first checked the relevant organizational knowledge for compute, storage, availability, infrastructure notes, multi-region context, and security requirements. The most relevant guidance here was that EC2 public IPs are not considered stable endpoints, EBS-backed data should have backup protections if it is important, and public-IP exposure is itself undesirable.

I then examined the planned change for 540044833068.eu-west-2.ec2-instance.i-0464c4413cb0c54aa and the only concrete diff is that public_ip and public_dns move from explicit current values to (known after apply). I also checked the only other resource in the same plan, 540044833068.eu-west-2.ec2-instance.i-09d6479fb9b97d123, and it shows the same pattern. There are no diffs showing subnet changes, AZ changes, ENI changes, block device changes, instance replacement, or volume deletion.

Using blast-radius-query, I verified the current state of the affected instance, ENI, volume, and DNS records. The instance is currently running in subnet subnet-07b5b1fb2ba02f964 in AZ eu-west-2a with primary ENI eni-069a58a392f35dce3, private IP 10.0.101.133, public IP 18.175.147.19, and root EBS volume vol-0a61278f4602fc12b. The ENI is the primary device-index-0 interface attached to that instance, and the volume is attached as /dev/xvda with DeleteOnTermination: true. The internal DNS ip-10-0-101-133.eu-west-2.compute.internal resolves to the private IP, and the public DNS ec2-18-175-147-19.eu-west-2.compute.amazonaws.com resolves to the current public IP.

I also checked AWS documentation on EC2 stop/start and instance addressing. AWS documents that stopping and starting an EBS-backed instance typically gives it a new public IPv4 address, while the private IP, attached EBS volumes, and attached ENIs persist across stop/start. AWS also documents that automatically assigned public IPs are not stable and are released on stop/start or terminate unless an Elastic IP is used. Terraform documentation and provider behavior commonly surface these values as computed attributes, so seeing them as (known after apply) by itself does not indicate replacement or termination.
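The distinction drawn above, computed attributes becoming (known after apply) versus an actual replacement, can be checked mechanically against terraform show -json output. A sketch, assuming the standard resource_changes schema; the sample plan entry below is hypothetical, shaped after the diff seen in this analysis:

```python
def is_replacement(resource_change):
    """A resource is replaced when its plan actions include both delete and create."""
    actions = set(resource_change["change"]["actions"])
    return {"delete", "create"} <= actions

def only_computed_refresh(resource_change, attrs=("public_ip", "public_dns")):
    """True when the resource is merely updated (not replaced) and the only
    unknown-after-apply fields are the given computed attributes."""
    change = resource_change["change"]
    unknown = {k for k, v in change.get("after_unknown", {}).items() if v}
    return not is_replacement(resource_change) and unknown <= set(attrs)

# Hypothetical resource_changes entry mirroring the observed plan:
plan_entry = {
    "address": "aws_instance.api_server",
    "change": {
        "actions": ["update"],
        "after_unknown": {"public_ip": True, "public_dns": True},
    },
}
print(is_replacement(plan_entry))         # False: update only, no delete+create
print(only_computed_refresh(plan_entry))  # True: only public_ip/public_dns unknown
```

A replacement would instead surface as actions like ["delete", "create"] (or ["create", "delete"] for create-before-destroy), which is what this investigation looked for and did not find.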

Impact Assessment

Directly affected by the planned diff is 1 EC2 instance: i-0464c4413cb0c54aa (api-server). Its currently attached primary ENI eni-069a58a392f35dce3, root volume vol-0a61278f4602fc12b, private IP 10.0.101.133, and the corresponding public/private DNS records are in the blast radius because they are related to the instance.

However, there is no evidence in the change that the instance will be replaced, terminated, moved to another subnet or AZ, or have its ENI attachment altered. There is also no evidence that the root volume configuration is changing or that Terraform plans to delete vol-0a61278f4602fc12b. The only thing we can say confidently is that if this update path involves a stop/start or similar lifecycle action, the auto-assigned public IP and public DNS may be re-evaluated by AWS. That is normal EC2 behavior for non-EIP public addressing, not evidence of a new risk introduced by this plan. By contrast, the private IP, private DNS, ENI attachment, and EBS volume are all documented by AWS as persisting across stop/start for EBS-backed instances, which directly undercuts the hypothesis’s stronger claims about ENI drift and data loss from this specific change.

The operational scope therefore appears limited: consumers incorrectly depending on the current public IP or public DNS are already relying on an unstable EC2 property. This plan does not provide evidence of a new destructive action against the ENI or volume, and the private address-based identity of the instance remains unchanged in the observed plan.

Conclusion

I conclude the risk is not real for this specific change. The evidence shows only computed refresh of public_ip/public_dns; there is no planned replacement, ENI reattachment, subnet/AZ move, or EBS volume deletion, so the hypothesis overstates the actual lifecycle risk here.

✖ Hypothesis disproven


Simultaneous public endpoint and instance changes eliminating redundancy

Observations 3

Hypothesis

Both EC2 instances 540044833068.eu-west-2.ec2-instance.i-09d6479fb9b97d123 and 540044833068.eu-west-2.ec2-instance.i-0464c4413cb0c54aa are planned to change public_ip and public_dns in the same deployment, potentially alongside instance replacement. This simultaneous churn can break any direct consumers (clients, partner firewalls, DNS A records, scripts, allowlists) and management/observability tooling (SSH-based operations, health checks, log shippers, monitoring jobs) that are hard-coded to the current IPs/DNS. Changing both instances together can also eliminate redundancy if they form a small service pair or active/standby set, causing a full outage if bootstrap or configuration fails on the new nodes. A phased rollout or additional capacity should be used, and all external dependencies should be updated to use indirection (load balancers, DNS records, service discovery) or be coordinated in the same change window. (REL02-BP01, SEC05-BP02, REL06-BP03, REL10-BP01, REL11-BP01, REL11-BP05, OPS04-BP02)

Investigation

Evidence Gathered

I first loaded the relevant organizational knowledge for EC2 availability, network security, monitoring, compute configuration, security compliance, and the infrastructure quick reference. Those sources do establish that directly exposed EC2 public endpoints are an anti-pattern, that simultaneous rollout to all instances can be risky in production, and that production EC2 instances should not have public IPs. However, the question here is whether this specific change creates a new outage risk in the concern area raised by the hypothesis.

I then examined the full planned diffs for both changed resources, 540044833068.eu-west-2.ec2-instance.i-09d6479fb9b97d123 and 540044833068.eu-west-2.ec2-instance.i-0464c4413cb0c54aa. The only before/after values shown are public_ip and public_dns, both changing from concrete current values to (known after apply). There is no diff showing instance replacement, AMI change, instance type change, subnet change, security group change, user data change, or any other configuration drift that would support the stronger part of the hypothesis about bootstrap failure or simultaneous loss of a service pair.

I queried the current blast-radius state for both instances plus related ENIs and security groups. The results show both instances are currently running and each has a public IP assigned from Amazon's pool. The private IPs stay fixed (10.0.101.11 and 10.0.101.133), and neither instance has an Elastic IP attached. One of the instances, i-0464c4413cb0c54aa, is attached to security group sg-0437857de45b640ce, which allows inbound 80/tcp and 22/tcp from 0.0.0.0/0; that is a standing security problem, but it is not being changed by this plan. I also saw a third instance using the same security group, which means this SG is shared, but again there is no SG modification in this change.

To verify the semantics of the changed attributes, I checked AWS EC2 documentation. AWS documents that an automatically assigned public IPv4 address is released when an instance is stopped and a new public IPv4 address is assigned when it starts again, unless an Elastic IP is attached. AWS also documents that changing instance type requires a stop/start cycle and that a restarted instance receives a new public IPv4 address. Terraform documentation/search results also support that public_ip and public_dns are computed attributes, so (known after apply) by itself reflects that Terraform cannot know the new value until apply time. (docs.aws.amazon.com)

Impact Assessment

The direct scope of the planned change is 2 EC2 instances: i-09d6479fb9b97d123 and i-0464c4413cb0c54aa. Based on the diff, the only concrete impact evidenced is that each instance's public endpoint may change during apply. That could affect any external consumer hard-coded to 35.179.137.86 or 18.175.147.19, but there is no evidence in the blast radius or the plan of such consumers: no Route 53 records, no load balancer target groups, no security group rule updates keyed to those IPs, no monitoring resources for these two instances, and no dependent resources in the change itself.

The hypothesis's stronger outage claim depends on both instances forming a redundant pair and both being disrupted in the same rollout. I could not verify that from the available evidence. Both instances are in the same Availability Zone (eu-west-2a), which is not ideal for resilience, but there is no evidence they are the only members of a service, no evidence of an active/standby relationship, and no evidence that this change is replacing or rebooting both in a way that would eliminate capacity simultaneously. The quick-reference knowledge file also notes that this environment contains testing infrastructure patterns and some EC2 instances exist for relationship density rather than live workload serving, which further weakens any assumption that these two public IPs are production endpoints with critical external dependencies.

Conclusion

I do not find the risk real for this specific hypothesis. The plan only shows computed public endpoint values becoming unknown until apply, which is expected for EC2 instances with auto-assigned public IPs, and there is no supporting evidence of instance replacement, simultaneous service-pair disruption, or any actual hard-coded consumers that would turn this into a substantiated outage risk.

✖ Hypothesis disproven


💥 Blast Radius

Items 13

Edges 44


@github-actions github-actions bot left a comment


Overmind

⛔ Auto-Blocked


🔴 Decision

Found 1 high risk requiring review


📊 Signals Summary

Routine 🔴 -5


🔥 Risks Summary

High 1 · Medium 0 · Low 0


💥 Blast Radius

Items 13 · Edges 44


View full analysis in Overmind ↗

