Commit 7a428a8

Merge pull request #373 from softwareone-platform/bp-sync-661c043
⚠️ Sync upstream/integration (661c043) -> release/4 2026-03-10 (deployment changes present)
2 parents ceb9d56 + f5d17f9 commit 7a428a8

227 files changed

Lines changed: 6292 additions & 2494 deletions


.github/CODEOWNERS

Lines changed: 3 additions & 3 deletions
@@ -1,7 +1,8 @@
-# Frontend only for ngui
+# Frontend
 ngui/ @hystax/ui
+jira_ui/ @hystax/ui
 
-# Backend for other directories
+# Backend
 auth/ @hystax/backend
 bi_exporter/ @hystax/backend
 bumischeduler/ @hystax/backend
@@ -13,7 +14,6 @@ gemini/ @hystax/backend
 herald/ @hystax/backend
 insider/ @hystax/backend
 jira_bus/ @hystax/backend
-jira_ui/ @hystax/backend
 katara/ @hystax/backend
 keeper/ @hystax/backend
 metroculus/ @hystax/backend
Lines changed: 58 additions & 25 deletions
@@ -1,25 +1,58 @@
-name: Auto-assign PR author
-
-on:
-  pull_request:
-    types: [opened, reopened]
-
-jobs:
-  auto-assign:
-    runs-on: ubuntu-latest
-    permissions:
-      pull-requests: write
-
-    steps:
-      - name: Assign PR author
-        uses: actions/github-script@v8
-        with:
-          script: |
-            const reporter = context.actor
-            await github.rest.issues.addAssignees({
-              owner: context.repo.owner,
-              repo: context.repo.repo,
-              issue_number: context.payload.pull_request.number,
-              assignees: [reporter]
-            })
-
+name: Auto-assign PR author
+
+on:
+  pull_request:
+    types: [opened, reopened]
+
+jobs:
+  auto-assign:
+    runs-on: ubuntu-latest
+    permissions:
+      pull-requests: write
+      issues: write    # Required because assignees API works via issues
+      contents: read   # Minimal read access to repository contents
+
+    steps:
+      - name: Assign PR author
+        uses: actions/github-script@v8
+        with:
+          script: |
+            // Repository owner (organization or user who owns the repo)
+            const owner = context.repo.owner;
+
+            // Repository name
+            const repo = context.repo.repo;
+
+            // Login of the pull request author
+            const prAuthor = context.payload.pull_request.user.login;
+
+            try {
+              // Check the permission level of the PR author
+              const { data: perm } =
+                await github.rest.repos.getCollaboratorPermissionLevel({
+                  owner,
+                  repo,
+                  username: prAuthor
+                });
+
+              // If the author has write/maintain/admin rights → assign them to the PR
+              if (["write", "maintain", "admin"].includes(perm.permission)) {
+                await github.rest.issues.addAssignees({
+                  owner,
+                  repo,
+                  issue_number: context.payload.pull_request.number,
+                  assignees: [prAuthor]
+                });
+                console.log(`Assigned PR to ${prAuthor}`);
+              } else {
+                // If the author has insufficient rights → skip assignment
+                console.log(
+                  `Skipping assignment for ${prAuthor}: permission=${perm.permission}`
+                );
+              }
+            } catch (error) {
+              // If the author is not a collaborator (e.g., PR from a fork) → skip without failing
+              console.log(
+                `Skipping assignment for ${prAuthor}: not a collaborator`
+              );
+            }
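The heart of the rewritten workflow is the permission gate. It can be exercised outside of Actions as a plain function (a sketch; `shouldAssign` is a hypothetical name, not part of the workflow):

```javascript
// Hypothetical helper mirroring the workflow's gate: only collaborators
// whose permission level is write, maintain, or admin get auto-assigned.
function shouldAssign(permission) {
  return ["write", "maintain", "admin"].includes(permission);
}

console.log(shouldAssign("admin")); // true
console.log(shouldAssign("read"));  // false
```

PRs from forks never reach the assignment call at all: `getCollaboratorPermissionLevel` throws for non-collaborators, which the workflow's `try`/`catch` turns into a logged skip instead of a failed run.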

.gitignore

Lines changed: 5 additions & 8 deletions
@@ -17,6 +17,7 @@ rest_api/.clickhouse
 **/.pytest_cache/
 **/.ruff_cache/
 # Build files
+**/build
 **/dist
 **/*egg-info/
 **/*.tar.gz
@@ -33,14 +34,6 @@ ngui/*/storybook
 ngui/server/package-lock.json
 ngui/ui/package-lock.json
 
-# ffc_ngui
-ffc_ngui/server/.env
-ffc_ngui/ui/.env
-ffc_ngui/*/node_modules
-ffc_ngui/*/build/
-ffc_ngui/*/dist/
-ffc_ngui/*/storybook
-
 **/npm-debug.log*
 **/yarn-debug.log*
 **/yarn-error.log*
@@ -59,3 +52,7 @@ jira_ui/*/build/
 /e2etests/.cache/
 /e2etests/.cache/*
 /e2etests/tests/downloads/*
+
+
+# Vagrant
+optscale-deploy/.vagrant/

README.md

Lines changed: 52 additions & 14 deletions
@@ -2,12 +2,15 @@
 ⭐ Drop a star to support OptScale ⭐
 </p>
 
-# FinOps and cloud cost management platform to run any cloud workload with optimal performance and cost
+# Open-Source FinOps & Cloud Cost Optimization Platform
 
 <p align="center">
 <a href="documentation/images/cover-GitHub.png"><img src="documentation/images/FinOps-platform.png" width="40%" align="middle"></a>
 </p>
-OptScale is an open source FinOps platform that optimizes cloud costs and performance for any workload, providing effective cloud cost management for all types of organizations.
+
+<br>OptScale is an open-source [FinOps and cloud cost optimization platform](https://hystax.com/optscale/finops-overview/) that helps engineering and finance teams control and reduce spend across AWS, Microsoft Azure, GCP, Alibaba Cloud, and Kubernetes clusters.
+It provides deep visibility into infrastructure costs, automated optimization recommendations, and governance tools for R&D and data platforms.
+
 <br>
 <br>
 <p align="center">
@@ -29,21 +32,56 @@ OptScale is an open source FinOps platform that optimizes cloud costs and perfor
 ![Average cloud cost savings](https://img.shields.io/badge/Average_cloud_cost_savings-38%25-yellow)
 
 </div>
+
+<br>
+
+<div>
+<br>
+<img src="documentation/images/Max_Kuzkin.png" width="80" align="left" style="border-radius: 50%; margin-right: 15px">
+<i>
+“Hystax OptScale has been a game-changer for our FinOps practice. Its powerful capabilities, flexibility, and seamless integration have empowered us to deliver unprecedented transparency, control, and cost optimization for our clients. We truly value our partnership with Hystax and are excited to innovate further together.”
+</i>
+<div align="right">
+<i><b>Max Kuzkin</b>, General Manager, SoftwareOne Platform</i>
+</div>
+</div>
+
+<br>
+
 <br>
 
-## OptScale FinOps and cloud cost optimization capabilities
+## Overview
+OptScale connects to your cloud accounts and Kubernetes clusters, ingests billing and usage data, and analyzes infrastructure consumption to surface actionable insights that eliminate waste and optimize resource usage.
+It supports multi-cloud environments and integrates with popular data platforms, including Databricks, Amazon S3, and Amazon Redshift.
+
+<br>
+
+## Key Features
+### Cost optimization
+<li>Unused and idle resource detection for VMs, volumes, databases, and other cloud resources</li>
+<li>Rightsizing recommendations for overprovisioned instances and workloads</li>
+<li>R&D resource power management to automatically stop non-production environments outside working hours</li>
+<li>Commitment utilization analysis for Reserved Instances, Savings Plans, and Spot Instances</li>
+
+### FinOps and governance
+<li>FinOps dashboards for engineering, finance, and product teams to track and allocate cloud spend</li>
+<li>Budgeting and alerting for cost anomalies, spikes, and budget overruns</li>
+<li>Tagging and ownership visibility to attribute costs to teams, projects, and environments</li>
+<li>Policy-driven governance and automation controls</li>
+
+### Data and AI/ML workloads
+<li>Databricks cost analytics with detailed visibility into cluster usage and idle time</li>
+<li>S3 and object storage optimization (lifecycle, unused buckets, storage class recommendations)</li>
+
+### Kubernetes and multi-cloud
+<li>Kubernetes cluster cost allocation per namespace, workload, and label with workload-level visibility</li>
+<li>Multi-cloud support for AWS, Microsoft Azure, Google Cloud, and Alibaba Cloud from a single OptScale instance</li>
 
-<li>Optimal utilization of Reserved Instances, Savings Plans, and Spot Instances</li>
-<li>Unused resource detection</li>
-<li>R&D resource power management and rightsizing</li>
-<li>S3 duplicate object finder</li>
-<li>Resource bottleneck identification</li>
-<li>Optimal instance type and family selection</li>
-<li>Databricks support</li>
-<li>S3 and Redshift instrumentation</li>
-<li>VM Power Schedules</li>
+<br><br>Learn more about [OptScale features for FinOps and multi-cloud cost management](https://hystax.com/optscale/finops-capabilities-and-benefits/).
 
-
 <br>You can check OptScale [live demo](https://my.optscale.com/live-demo) to explore product features on a pre-generated demo organization.
 <br>Learn more about the Hystax OptScale platform and its capabilities at [our website](https://hystax.com).
 
@@ -83,7 +121,7 @@ NVMe SSD is recommended.
 
 **OS Required**: [Ubuntu 24.04](https://releases.ubuntu.com/noble/).
 
-_The current installation process should work also on Ubuntu 22.04_
+_The current installation process should also work on Ubuntu 22.04_
 
 #### Updating old installation
 please follow [this document](documentation/update_to_24.04.md) to upgrade your existing installation on Ubuntu 20.04.

build.sh

Lines changed: 6 additions & 0 deletions
@@ -110,6 +110,12 @@ do
   else
     echo "Building image for ${COMPONENT}, build tag: ${BUILD_TAG}"
     $BUILD_TOOL build $FLAGS -t ${COMPONENT}:${BUILD_TAG} -f ${DOCKERFILE} .
+
+    # If the build fails, exit with the same status code as the build command
+    build_status_code="$?"
+    if [ "$build_status_code" -gt 0 ]; then
+      exit $build_status_code
+    fi
   fi
 
   if use_registry; then
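This change closes a gap where a failed image build was silently ignored and the script moved on to pushing. The capture-and-propagate pattern can be sketched in isolation (`run_step` is a hypothetical wrapper, not part of build.sh; `true`/`false` stand in for the build command):

```shell
#!/usr/bin/env bash

# Hypothetical wrapper showing the same idea as the build.sh change:
# read "$?" immediately after the command, before anything else can clobber it.
run_step() {
    "$@"
    local status="$?"
    if [ "$status" -gt 0 ]; then
        echo "step failed with status ${status}" >&2
        return "$status"
    fi
}

run_step true && echo "ok"
run_step false || echo "caught failure"
```

Note that blank lines and comments do not change `$?`, so the capture in build.sh still sees the build command's exit status even with the comment line in between.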

bumiworker/bumiworker/modules/archive/instance_subscription.py

Lines changed: 3 additions & 2 deletions
@@ -27,12 +27,13 @@ def __init__(self, *args, **kwargs):
     def supported_cloud_types(self):
         return SUPPORTED_CLOUD_TYPES
 
-    def _has_discounts(self, raw_info):
+    @staticmethod
+    def _has_discounts(raw_info):
         if raw_info.get('cost') == 0:
             # savings plan applied
             return True
         for key in ['coupons_discount', 'resource_package_discount']:
-            if key in raw_info and float(raw_info[key]):
+            if key in raw_info and float(raw_info[key] or 0):
                 return True
 
     def _get(self, previous_options, optimizations, cloud_accounts_map,
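The `or 0` guard matters because discount fields in raw billing records can be `None` (or an empty string), and `float(None)` raises `TypeError`. A standalone sketch of the guarded logic (hypothetical function and data, not the module's API):

```python
# Sketch of the guarded discount check: `value or 0` maps None/'' to 0
# before float(), so missing discount data reads as "no discount".
def has_discounts(raw_info: dict) -> bool:
    if raw_info.get('cost') == 0:
        # savings plan applied
        return True
    for key in ('coupons_discount', 'resource_package_discount'):
        if key in raw_info and float(raw_info[key] or 0):
            return True
    return False

print(has_discounts({'cost': 0}))                            # True
print(has_discounts({'cost': 5, 'coupons_discount': None}))  # False; would raise without `or 0`
print(has_discounts({'cost': 5, 'coupons_discount': '0.3'})) # True
```

The real `_has_discounts` has no explicit `return False`, so it returns `None` on fall-through; that is equally falsy to callers, and the sketch merely makes it explicit.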

docker_images/cleanelkdb/clean-elk-db.sh

Lines changed: 3 additions & 3 deletions
@@ -22,7 +22,8 @@ remove_line_from_filebeat() {
 
 remove_index_from_elk() {
   echo "DELETING "$2" INDEX FROM ELK"
-  curl -s -X DELETE $1':'$ELK_PORT'/'$2
+  encoded_index=$(jq -rn --arg index "$2" '$index|@uri')
+  curl -s -X DELETE $1':'$ELK_PORT'/'$encoded_index
 }
 
 m_total_log_size=$(get_size_of_logs $ELK_IP)
@@ -34,12 +35,11 @@ fi
 
 echo "SIZE OF LOGS BIGGER "$LOG_SIZE_MAX"Mb -> START TO REMOVE LOGS"
 curl -s -X GET "$ELK_IP:$ELK_PORT/_cat/indices?v" > curl_test.txt
-cat curl_test.txt | awk '/filebeat/ { print $3 }' | sort --reverse > filebeat.txt
+grep -oE '(%{[^}]+}|[a-zA-Z_]+)-[0-9]{4}\.[0-9]{2}\.[0-9]{2}' curl_test.txt | sort -t '-' -k2,2 > filebeat.txt
 
 while [ $m_total_log_size -gt $LOG_SIZE_MAX ]; do
   m_filebeat=$(tail -n -1 filebeat.txt)
   filebeat_date=$(echo $m_filebeat | awk -F '-' '{ print $2 }')
-
   if [ "$m_filebeat" = "" ] ; then
     break
   else
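The `jq` change percent-encodes the index name before it is spliced into the DELETE URL, so names containing characters like `%`, `{`, or `[` (as in unresolved Filebeat patterns such as `%{[agent.version]}`, which the new `grep` deliberately matches) can no longer mangle the request path. The encoding step in isolation (requires jq; the sample index name is illustrative):

```shell
# Percent-encode an arbitrary index name with jq's @uri filter,
# as the updated remove_index_from_elk() does before calling curl.
index='filebeat-%{[agent.version]}-2026.03.10'
encoded_index=$(jq -rn --arg index "$index" '$index|@uri')
echo "$encoded_index"
```

`@uri` leaves unreserved characters (letters, digits, `-`, `_`, `.`, `~`) intact and replaces everything else with `%XX` sequences, so plain date-suffixed index names pass through unchanged.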
Lines changed: 31 additions & 0 deletions
@@ -0,0 +1,31 @@
+#!/usr/bin/env bash
+
+# TODO: Instead of this script we should use multi-stage Docker builds,
+# but this is good enough until we get around to it.
+# Or even better -- see if we need this tool at all or if there is a better
+# way to install it (e.g. via a package manager)
+
+set -x
+
+arch="$(uname -m)"
+dest_bin_path="/usr/local/bin/peer-finder"
+
+apt-get update
+apt-get install -y --no-install-recommends openssl ca-certificates wget
+rm -rf /var/lib/apt/lists/*
+
+if [[ "$arch" == "x86_64" || "$arch" == "amd64" ]]; then
+    wget -O $dest_bin_path https://storage.googleapis.com/kubernetes-release/pets/peer-finder
+elif [[ "$arch" == "aarch64" || "$arch" == "arm64" ]]; then
+    wget https://github.com/kmodules/peer-finder/releases/download/v1.0.2/peer-finder-linux-arm64.tar.gz \
+        -O /tmp/peer-finder-linux-arm64.tar.gz
+    tar -xzf /tmp/peer-finder-linux-arm64.tar.gz -C /tmp
+    mv /tmp/peer-finder-linux-arm64 $dest_bin_path
+else
+    echo "Unsupported architecture: $arch"
+    exit 1
+fi
+
+chmod +x $dest_bin_path
+apt-get purge -y --auto-remove ca-certificates wget
+
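The new install script branches on `uname -m` output, accepting both kernel-style and Go/Docker-style names per architecture. That mapping can be factored into a small helper (a sketch; `normalize_arch` is hypothetical and not part of the script):

```shell
# Map the kernel's machine name to the canonical arch names the script
# cares about; unknown values fail loudly, like the script's else branch.
normalize_arch() {
    case "$1" in
        x86_64|amd64)  echo "amd64" ;;
        aarch64|arm64) echo "arm64" ;;
        *) echo "Unsupported architecture: $1" >&2; return 1 ;;
    esac
}

normalize_arch x86_64
normalize_arch arm64
```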
Lines changed: 8 additions & 1 deletion
@@ -1,3 +1,10 @@
-FROM ingressnginx/custom-error-pages:v1.2.0
+# TODO: The base image doesn't support arm64 yet but shouldn't be too hard to change that,
+# though it will require a change in the `kubernetes/ingress-nginx` repo.
+# References:
+# * Base image's Dockerfile: https://github.com/kubernetes/ingress-nginx/blob/main/images/custom-error-pages/rootfs/Dockerfile
+# * Relevant issue on GitHub: https://github.com/kubernetes/ingress-nginx/issues/10245
+
+ARG arch=amd64
+FROM --platform="linux/${arch}" ingressnginx/custom-error-pages:v1.2.0
 
 COPY docker_images/error_pages/www /www

docker_images/etcd/Dockerfile

Lines changed: 17 additions & 6 deletions
@@ -1,7 +1,18 @@
-FROM gcr.io/etcd-development/etcd:v3.2.13
-RUN apk update
-# https://github.com/Yelp/dumb-init/issues/73#issuecomment-240439732
-RUN apk add ca-certificates wget && update-ca-certificates
-RUN apk --no-cache add curl
-RUN wget $(curl -H "Accept: application/vnd.github.v3+json" https://api.github.com/repos/nexusriot/etcd-walker/releases/tags/0.0.11 | grep -Eo 'https://(.*linux_x64_static)') -O /bin/etcd-walker
+# etcd is a distroless image starting from v3.5, meaning we don't have access to a shell or package manager.
+# This is why we use multi-stage builds to build and copy the binary into the final image.
+# ref: https://github.com/GoogleContainerTools/distroless?tab=readme-ov-file#docker
+
+FROM golang:1.24.6 AS build-etcd-walker
+
+RUN git clone https://github.com/nexusriot/etcd-walker/ /tmp/etcd-walker-src
+
+WORKDIR /tmp/etcd-walker-src
+RUN git checkout 0.2.1
+RUN go build -ldflags "-linkmode external -extldflags -static" -o etcd-walker cmd/etcd-walker/main.go
+
+RUN mv etcd-walker /bin/etcd-walker
 RUN chmod +x /bin/etcd-walker
+
+# NOTE: v3.6+ require significant changes as they removed support for the V2 API, see https://etcd.io/docs/v3.6/upgrades/upgrade_3_6/
+FROM gcr.io/etcd-development/etcd:v3.2.13
+COPY --from=build-etcd-walker /bin/etcd-walker /bin/etcd-walker
