Added support for lifecycle prevent_destroy option #3448
Changes from 21 commits
`databricks.yml` (new file, +20 lines):

```yaml
bundle:
  name: prevent-destroy

lifecycle: &lifecycle_base
  lifecycle:
    prevent_destroy: true

pipeline: &pipeline_base
  resources:
    pipelines:
      my_pipelines:
        name: "test-pipeline"
        libraries:
          - notebook:
              path: "./test-notebook.py"
        <<: *lifecycle_base
        schema: test-schema
        catalog: main

<<: *pipeline_base
```
Contributor
Curious: why use the YAML anchors if they aren't used anywhere?

Contributor (author)
I use them in the test to remove the whole resource from the configuration.

Contributor
An alternative that could be easier to manage and follow than anchors: removing resources with `rm resources/filename.yml`.

Contributor (author)
I don't have a preference here. If you think this makes it easier to understand, I'll change it to that.

Contributor (author)
Done here: 65bc5dd
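The test exercises this removal by blanking the `<<: *pipeline_base` line with a literal search-and-replace helper. Below is a minimal sketch of that approach; the real `update_file.py` belongs to the test harness, so this implementation is illustrative only:

```python
import pathlib
import tempfile

# Sketch of an update_file.py-style helper: literal search-and-replace
# inside a config file (assumed behavior, not the harness's actual code).
def update_file(path, old, new):
    p = pathlib.Path(path)
    text = p.read_text()
    assert old in text, f"pattern not found: {old!r}"
    p.write_text(text.replace(old, new))

with tempfile.TemporaryDirectory() as d:
    cfg = pathlib.Path(d) / "databricks.yml"
    cfg.write_text("bundle:\n  name: prevent-destroy\n\n<<: *pipeline_base\n")
    # Blanking the merge key removes the pipeline resource: nothing else
    # references the *pipeline_base anchor at the document's top level.
    update_file(cfg, "<<: *pipeline_base", "")
    remaining = cfg.read_text()

print("<<:" in remaining)  # → False
```

Because the anchor definition itself stays in the file, only the merge reference needs to change, which keeps the replacement a single-line edit.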
`out.direct-exp.txt` (new file, +48 lines):

```
>>> errcode [CLI] bundle plan

>>> musterr [CLI] bundle destroy --auto-approve
Error: resource my_pipelines has lifecycle.prevent_destroy set, but the plan calls for this resource to be recreated or destroyed. To avoid this error, disable lifecycle.prevent_destroy for pipelines.my_pipelines

Exit code (musterr): 1

>>> errcode [CLI] bundle plan
recreate pipelines.my_pipelines

>>> musterr [CLI] bundle deploy
Uploading bundle files to /Workspace/Users/[USERNAME]/.bundle/prevent-destroy/default/files...
Error: resource my_pipelines has lifecycle.prevent_destroy set, but the plan calls for this resource to be recreated or destroyed. To avoid this error, disable lifecycle.prevent_destroy for pipelines.my_pipelines

Exit code (musterr): 1

>>> errcode [CLI] bundle plan
recreate pipelines.my_pipelines

>>> [CLI] bundle deploy --auto-approve
Uploading bundle files to /Workspace/Users/[USERNAME]/.bundle/prevent-destroy/default/files...

This action will result in the deletion or recreation of the following Lakeflow Declarative Pipelines along with the
Streaming Tables (STs) and Materialized Views (MVs) managed by them. Recreating the pipelines will
restore the defined STs and MVs through full refresh. Note that recreation is necessary when pipeline
properties such as the 'catalog' or 'storage' are changed:
  recreate pipeline my_pipelines
Deploying resources...
Updating deployment state...
Deployment complete!

>>> errcode [CLI] bundle plan
delete pipelines.my_pipelines

>>> [CLI] bundle deploy --auto-approve
Uploading bundle files to /Workspace/Users/[USERNAME]/.bundle/prevent-destroy/default/files...

This action will result in the deletion or recreation of the following Lakeflow Declarative Pipelines along with the
Streaming Tables (STs) and Materialized Views (MVs) managed by them. Recreating the pipelines will
restore the defined STs and MVs through full refresh. Note that recreation is necessary when pipeline
properties such as the 'catalog' or 'storage' are changed:
  delete pipeline my_pipelines
Deploying resources...
Updating deployment state...
Deployment complete!
```
`out.terraform.txt` (new file, +84 lines):

```
>>> errcode [CLI] bundle plan

>>> musterr [CLI] bundle destroy --auto-approve
Error: exit status 1

Error: Instance cannot be destroyed

  on bundle.tf.json line 15, in resource.databricks_pipeline:
  15: "my_pipelines": {

Resource databricks_pipeline.my_pipelines has lifecycle.prevent_destroy set,
but the plan calls for this resource to be destroyed. To avoid this error and
continue with the plan, either disable lifecycle.prevent_destroy or reduce
the scope of the plan using the -target flag.

Exit code (musterr): 1

>>> errcode [CLI] bundle plan
Error: exit status 1

Error: Instance cannot be destroyed

  on bundle.tf.json line 15, in resource.databricks_pipeline:
  15: "my_pipelines": {

Resource databricks_pipeline.my_pipelines has lifecycle.prevent_destroy set,
but the plan calls for this resource to be destroyed. To avoid this error and
continue with the plan, either disable lifecycle.prevent_destroy or reduce
the scope of the plan using the -target flag.

Exit code: 1

>>> musterr [CLI] bundle deploy
Uploading bundle files to /Workspace/Users/[USERNAME]/.bundle/prevent-destroy/default/files...
Error: exit status 1

Error: Instance cannot be destroyed

  on bundle.tf.json line 15, in resource.databricks_pipeline:
  15: "my_pipelines": {

Resource databricks_pipeline.my_pipelines has lifecycle.prevent_destroy set,
but the plan calls for this resource to be destroyed. To avoid this error and
continue with the plan, either disable lifecycle.prevent_destroy or reduce
the scope of the plan using the -target flag.

Exit code (musterr): 1

>>> errcode [CLI] bundle plan
recreate pipelines.my_pipelines

>>> [CLI] bundle deploy --auto-approve
Uploading bundle files to /Workspace/Users/[USERNAME]/.bundle/prevent-destroy/default/files...

This action will result in the deletion or recreation of the following Lakeflow Declarative Pipelines along with the
Streaming Tables (STs) and Materialized Views (MVs) managed by them. Recreating the pipelines will
restore the defined STs and MVs through full refresh. Note that recreation is necessary when pipeline
properties such as the 'catalog' or 'storage' are changed:
  recreate pipeline my_pipelines
Deploying resources...
Updating deployment state...
Deployment complete!

>>> errcode [CLI] bundle plan
delete pipelines.my_pipelines

>>> [CLI] bundle deploy --auto-approve
Uploading bundle files to /Workspace/Users/[USERNAME]/.bundle/prevent-destroy/default/files...

This action will result in the deletion or recreation of the following Lakeflow Declarative Pipelines along with the
Streaming Tables (STs) and Materialized Views (MVs) managed by them. Recreating the pipelines will
restore the defined STs and MVs through full refresh. Note that recreation is necessary when pipeline
properties such as the 'catalog' or 'storage' are changed:
  delete pipeline my_pipelines
Deploying resources...
Updating deployment state...
Deployment complete!
```
Test configuration (new file, +5 lines), running the scenario against both deployment backends:

```toml
Local = true
Cloud = false

[EnvMatrix]
DATABRICKS_CLI_DEPLOYMENT = ["terraform", "direct-exp"]
```
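An `[EnvMatrix]` table fans a test out into one run per combination of values. A hedged sketch of how that expansion presumably works (the harness's actual implementation may differ):

```python
import itertools

# One test run per point in the cartesian product of all matrix variables.
# With a single variable this is just one run per listed value.
env_matrix = {"DATABRICKS_CLI_DEPLOYMENT": ["terraform", "direct-exp"]}

def expand(matrix):
    """Yield one environment dict per combination of matrix values."""
    keys = sorted(matrix)
    for values in itertools.product(*(matrix[k] for k in keys)):
        yield dict(zip(keys, values))

runs = list(expand(env_matrix))
print(len(runs))  # → 2
```

This matches the two per-backend output files above: the script writes to `out.$DATABRICKS_CLI_DEPLOYMENT.txt`, so one run produces `out.terraform.txt` and the other `out.direct-exp.txt`.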
Shared validation and first-deploy output (new file, +15 lines):

```
>>> [CLI] bundle validate
Name: prevent-destroy
Target: default
Workspace:
  User: [USERNAME]
  Path: /Workspace/Users/[USERNAME]/.bundle/prevent-destroy/default

Validation OK!

>>> [CLI] bundle deploy
Uploading bundle files to /Workspace/Users/[USERNAME]/.bundle/prevent-destroy/default/files...
Deploying resources...
Updating deployment state...
Deployment complete!
```
The test script (new file, +22 lines):

```shell
trace $CLI bundle validate

trace $CLI bundle deploy

trace errcode $CLI bundle plan >out.$DATABRICKS_CLI_DEPLOYMENT.txt 2>&1
trace musterr $CLI bundle destroy --auto-approve >>out.$DATABRICKS_CLI_DEPLOYMENT.txt 2>&1

# Changing the catalog name: deploy must fail because the pipeline would be recreated
update_file.py databricks.yml 'catalog: main' 'catalog: mainnew'
trace errcode $CLI bundle plan >>out.$DATABRICKS_CLI_DEPLOYMENT.txt 2>&1
trace musterr $CLI bundle deploy >>out.$DATABRICKS_CLI_DEPLOYMENT.txt 2>&1

# Removing prevent_destroy: deploy must succeed
update_file.py databricks.yml 'prevent_destroy: true' 'prevent_destroy: false'
trace errcode $CLI bundle plan >>out.$DATABRICKS_CLI_DEPLOYMENT.txt 2>&1
trace $CLI bundle deploy --auto-approve >>out.$DATABRICKS_CLI_DEPLOYMENT.txt 2>&1

update_file.py databricks.yml 'prevent_destroy: false' 'prevent_destroy: true'
# Removing the pipeline: deploy must succeed
update_file.py databricks.yml '<<: *pipeline_base' ''
trace errcode $CLI bundle plan >>out.$DATABRICKS_CLI_DEPLOYMENT.txt 2>&1
trace $CLI bundle deploy --auto-approve >>out.$DATABRICKS_CLI_DEPLOYMENT.txt 2>&1
```
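The `errcode` and `musterr` wrappers in the script encode the expectation for each step: `errcode` records a command's exit code without aborting the test, while `musterr` additionally requires the command to fail. The implementations below are guesses for illustration; only the helper names come from the script:

```python
import subprocess
import sys

def errcode(argv):
    """Run a command, report a non-zero exit code, but never abort the test."""
    rc = subprocess.run(argv).returncode
    if rc != 0:
        print(f"Exit code: {rc}")
    return rc

def musterr(argv):
    """Run a command and require it to fail."""
    rc = subprocess.run(argv).returncode
    assert rc != 0, "musterr: expected a non-zero exit code"
    print(f"Exit code (musterr): {rc}")
    return rc

ok = [sys.executable, "-c", "raise SystemExit(0)"]
bad = [sys.executable, "-c", "raise SystemExit(1)"]

errcode(ok)   # exits 0, prints nothing
musterr(bad)  # prints: Exit code (musterr): 1
```

This explains the `Exit code (musterr): 1` lines in both output files: the test asserts that `bundle destroy` and `bundle deploy` fail while `prevent_destroy` is set.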
`test-notebook.py` (new file, +3 lines):

```python
# Databricks notebook source

print("Hello, World!")
```
Additional test configuration (new file, +5 lines):

```toml
EnvVaryOutput = "DATABRICKS_CLI_DEPLOYMENT"

Ignore = [
    ".databricks"
]
```
Additional test configuration (new file, +2 lines):

```toml
Local = true
Cloud = false
```
Change to the `Cluster` resource struct (the embedded `compute.ClusterSpec` is moved up, next to `BaseResource`):

```go
type Cluster struct {
	BaseResource
	compute.ClusterSpec

	Permissions []ClusterPermission `json:"permissions,omitempty"`
}
```

Contributor
Curious about this move -- was it needed for this PR to work?

Contributor (author)
No, just grouping them together; it's less readable when these structs are separated by custom fields.
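Some context on why the move is cosmetic: Go flattens embedded structs into a single JSON object when marshaling, so the declaration order of `BaseResource` and `compute.ClusterSpec` does not change the serialized resource. A rough Python analogue, with field names invented for illustration:

```python
import json

# Modeling a Go struct with embedded parts as the union of several dicts:
# reordering the embedded parts does not change the resulting set of keys.
base_resource = {"id": "1234"}
cluster_spec = {"spark_version": "15.4.x-scala2.12", "num_workers": 2}
own_fields = {"permissions": [{"level": "CAN_MANAGE", "user_name": "someone"}]}

cluster_a = {**base_resource, **cluster_spec, **own_fields}
cluster_b = {**base_resource, **own_fields, **cluster_spec}  # reordered

print(json.dumps(cluster_a, sort_keys=True) == json.dumps(cluster_b, sort_keys=True))  # → True
```

The caveat in Go (not shown here) is that conflicting field names between embedded structs follow promotion rules rather than last-write-wins, but no such conflict exists in this struct.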
Change to the `DatabaseCatalog` resource struct (the blank line between the embedded structs is removed, matching the grouping above):

```go
type DatabaseCatalog struct {
	BaseResource
	database.DatabaseCatalog
}
```
New file in package `resources` (+8 lines), defining the shared lifecycle settings:

```go
package resources

// Lifecycle contains the lifecycle settings for a resource.
// It controls the behavior of the resource when it is deployed or destroyed.
type Lifecycle struct {
	// PreventDestroy protects the resource from being destroyed or recreated.
	PreventDestroy bool `json:"prevent_destroy,omitempty"`
}
```
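The direct-deployment error shown in `out.direct-exp.txt` suggests a plan-time guard along the following lines. This is a hedged sketch, not the CLI's actual `checkForPreventDestroy` implementation; the data shapes are invented for illustration:

```python
# Sketch: walk the planned actions and reject any delete/recreate of a
# resource whose lifecycle has prevent_destroy set.
def check_for_prevent_destroy(actions, lifecycles):
    """actions: list of (action, group, name); lifecycles: {(group, name): bool}."""
    errors = []
    for action, group, name in actions:
        if action in ("delete", "recreate") and lifecycles.get((group, name)):
            errors.append(
                f"resource {name} has lifecycle.prevent_destroy set, but the plan "
                f"calls for this resource to be recreated or destroyed. To avoid "
                f"this error, disable lifecycle.prevent_destroy for {group}.{name}"
            )
    return errors

plan = [("recreate", "pipelines", "my_pipelines"), ("update", "jobs", "my_job")]
protected = {("pipelines", "my_pipelines"): True}
print(len(check_for_prevent_destroy(plan, protected)))  # → 1
```

Note how this differs from the Terraform backend, which delegates the same guard to `lifecycle.prevent_destroy` in the generated `bundle.tf.json` and therefore surfaces Terraform's own "Instance cannot be destroyed" error instead.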
Contributor
Test coverage is unequal for direct and terraform: for TF all resources are checked, while for direct we only have this test. Can we have a test that loops through all resource types?

Contributor (author)
The difficulty with writing an acceptance test for this is that we first need to deploy these resources, and doing that automatically in a loop is hard: each resource needs a valid configuration, which differs per resource. We could manually create configurations for all resources, but I'm not sure it's worth it. I've added a unit test for `checkForPreventDestroy` instead.

Contributor
Would it work if we had one large databricks.yml with all the resources? Then we don't loop but test all at once. This requires all error messages to be printed rather than just the first one, which may be a good idea anyway.

Contributor (author)
It could work, but if you add a new type of resource, you need to remember to add it to that databricks.yml file. The unit test I added will fail if a new resource does not handle prevent_destroy correctly. We can always follow up with additional tests if needed; this is a net new feature.

Contributor
It depends. If the existing resources provide enough variety and the implementation is general (it is), we can say we've covered our ground. It only covers TF, right? Overall I agree we can proceed without full coverage, but it would be interesting to cover more than one resource to see what kind of error messages we get. Do we get them all? One at random?

Contributor (author)
No, there are two separate ones. The pipeline resource seems to provide enough variety; I'll add a job just in case.