Finalizers for postgrescluster #1748

Draft
limak9182 wants to merge 3 commits into feature/database-controllers from finalizers-for-postgrescluster

Conversation

@limak9182

Description

What does this PR have in it?

Key Changes

Highlight the updates in specific files

Testing and Verification

How did you test these changes? What automated tests are added?

Related Issues

Jira tickets, GitHub issues, Support tickets...

PR Checklist

  • Code changes adhere to the project's coding standards.
  • Relevant unit and integration tests are included.
  • Documentation has been updated accordingly.
  • All tests pass locally.
  • The PR description follows the project's guidelines.

@github-actions
Contributor

github-actions bot commented Mar 4, 2026

CLA Assistant Lite bot: All contributors have signed the CLA ✍️ ✅

@DmytroPI-dev force-pushed the finalizers-for-postgrescluster branch from 5de1272 to f8ec473 on March 4, 2026 at 10:20
@DmytroPI-dev requested a review from mploski on March 4, 2026 at 15:08

// ConnectionPoolerConfig defines PgBouncer connection pooler configuration.
// When enabled, creates RW and RO pooler deployments for clusters using this class.
// +kubebuilder:validation:XValidation:rule="!self.connectionPoolerEnabled || self.connectionPoolerConfig != null || (self.cnpg != null && self.cnpg.connectionPooler != null)",message="connectionPoolerConfig must be set in cluster spec or class when connectionPoolerEnabled is true"
Collaborator


Why do we remove this validation?


Fixed. The rule caused an error when placed here because it references fields (connectionPoolerEnabled and cnpg) that do not exist within ConnectionPoolerConfig, so I moved the validation rule to the top level.
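For illustration, the relocated rule would look something like the following as a kubebuilder marker on the spec type rather than on ConnectionPoolerConfig (the type name PostgresClusterSpec and exact placement are assumptions; the CEL expression is the one quoted in the diff above):

```go
// Hypothetical sketch: the same XValidation rule attached at the level
// where self.connectionPoolerEnabled and self.cnpg actually exist.
// +kubebuilder:validation:XValidation:rule="!self.connectionPoolerEnabled || self.connectionPoolerConfig != null || (self.cnpg != null && self.cnpg.connectionPooler != null)",message="connectionPoolerConfig must be set in cluster spec or class when connectionPoolerEnabled is true"
type PostgresClusterSpec struct {
	// ... fields elided ...
}
```

At this level, `self` in the CEL expression resolves to the whole spec object, so sibling fields can be referenced.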

type: object
x-kubernetes-validations:
- message: Storage size cannot be removed and can only be increased
- messageExpression: '!has(self.postgresVersion) ? ''postgresVersion cannot
Collaborator


This seems weird; why do we have validation for Postgres here?


These validation rules were failing at the field level, because the CEL type 'string' does not support field selection and PostgresVersion is a string. The validation rule for Storage also failed at the field level, so I moved both to the PostgresClusterSpec level, where whole objects can be compared. When the CRDs are generated, the rules appear at the cluster spec (spec) level, which I assume is expected.
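Assuming the rules land on the spec object as described, the generated CRD would carry something along these lines (a hedged reconstruction; the excerpt above is truncated, so the exact expressions and messages in the PR may differ):

```yaml
# Hypothetical reconstruction of the spec-level transition rules.
type: object
x-kubernetes-validations:
  - rule: "!has(oldSelf.storage) || has(self.storage)"
    message: Storage size cannot be removed and can only be increased
  - rule: "has(self.postgresVersion) || !has(oldSelf.postgresVersion)"
    messageExpression: "'postgresVersion cannot be removed'"
```

Transition rules like these need `oldSelf`, which is only available on update, so they naturally belong on the object rather than on a scalar field.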

# Takes precedence over the class-level connectionPoolerEnabled value
connectionPoolerEnabled: true
connectionPoolerConfig:
Collaborator


If I recall correctly, we agreed with @limak9182 that on the cluster side we can only enable or disable the pooler, and the details are treated as provider-specific class details. What is the reason we changed it?


Fixed this; the pooler is now created only if it is enabled and a config is present in the class.
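The fixed condition can be sketched as follows (the type and field names are hypothetical, mirroring the class spec fields discussed in this thread, not the actual code from this PR):

```go
package main

// PoolerConfig stands in for the class-level PgBouncer configuration.
type PoolerConfig struct {
	MaxConnections int
}

// ClassSpec stands in for the database class spec discussed above.
type ClassSpec struct {
	ConnectionPoolerEnabled bool
	ConnectionPoolerConfig  *PoolerConfig
}

// shouldCreatePooler reflects the fixed behavior: the RW/RO pooler
// deployments are created only when the pooler is enabled AND a config
// is present in the class.
func shouldCreatePooler(s ClassSpec) bool {
	return s.ConnectionPoolerEnabled && s.ConnectionPoolerConfig != nil
}
```

Keeping the check in one predicate makes the "enabled but unconfigured" case an explicit no-op instead of a partial creation.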

logger.Error(err, "Failed to add finalizer to PostgresCluster")
return ctrl.Result{}, err
}
return ctrl.Result{}, nil // requeue via watch
Collaborator


Why do we have this comment here?

// Adding finalizer logic here to ensure poolers and other resources are cleaned up when PostgresCluster is deleted.
if !postgresCluster.ObjectMeta.DeletionTimestamp.IsZero() {
// TODO - to delete CNPG Cluster as well, currently relying on owner reference for cleanup, but we might want to have more control and update status during deletion, so implementing explicit deletion here as well.
if err := r.deleteCNPGCluster(ctx, postgresCluster); err != nil {
Collaborator


should we have finalizer logic as a separate function?

Collaborator


In that case, our status update would also be simplified.


You are correct; moved it to a separate method.
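Under the hood, finalizer handling boils down to checking and editing the metadata.finalizers string list; controllerutil.AddFinalizer / RemoveFinalizer do exactly this. A minimal, dependency-free sketch of that semantics (the finalizer name and helper names are illustrative, not the actual code from this PR):

```go
package main

// finalizerName is a hypothetical example; the real name is defined in the controller.
const finalizerName = "example.com/postgrescluster-finalizer"

// containsString reports whether s is present in list, mirroring the
// "does this object already carry our finalizer?" check in Reconcile.
func containsString(list []string, s string) bool {
	for _, v := range list {
		if v == s {
			return true
		}
	}
	return false
}

// removeString returns list without any occurrence of s, mirroring what
// controllerutil.RemoveFinalizer does to metadata.finalizers once cleanup
// (e.g. deleteCNPGCluster) has succeeded.
func removeString(list []string, s string) []string {
	out := make([]string, 0, len(list))
	for _, v := range list {
		if v != s {
			out = append(out, v)
		}
	}
	return out
}
```

Splitting this into its own method keeps Reconcile readable: the deletion branch becomes "if deleting, run cleanup, drop the finalizer, return".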

}

// deleteCNPGCluster deletes the CNPG cluster and its associated resources if they exist.
func (r *PostgresClusterReconciler) deleteCNPGCluster(ctx context.Context, postgresCluster *enterprisev4.PostgresCluster) error {
Collaborator


Add retain/delete to config
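One way to express the suggested retain/delete choice would be a deletion-policy field on the spec (field name and values are hypothetical; naming and placement are still to be decided):

```yaml
# Hypothetical sketch of a retain/delete knob for deleteCNPGCluster.
spec:
  deletionPolicy: Retain   # Retain keeps the CNPG Cluster on delete; Delete removes it
```

The controller's deletion path would then branch on this field before calling deleteCNPGCluster.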

}

logger.Info("Deleting CNPG Cluster", "name", cnpgCluster.Name)
if err := r.Delete(ctx, cnpgCluster); err != nil && !apierrors.IsNotFound(err) {
Collaborator


We also need to decide whether we should remove the cluster when customer databases are configured.
