# This file is auto-generated. DO NOT EDIT.
github.com/databricks/cli/bundle/config/resources.App:
"active_deployment":
"description": |-
The active deployment of the app. A deployment is considered active when it has been deployed
to the app compute.
"x-databricks-field-behaviors_output_only": |-
true
"app_status":
"x-databricks-field-behaviors_output_only": |-
true
"budget_policy_id": {}
"compute_size": {}
"compute_status":
"x-databricks-field-behaviors_output_only": |-
true
"create_time":
"description": |-
The creation time of the app. Formatted timestamp in ISO 8601.
"x-databricks-field-behaviors_output_only": |-
true
"creator":
"description": |-
The email of the user that created the app.
"x-databricks-field-behaviors_output_only": |-
true
"default_source_code_path":
"description": |-
The default workspace file system path of the source code from which app deployments are
created. This field tracks the workspace source code path of the last active deployment.
"x-databricks-field-behaviors_output_only": |-
true
"description":
"description": |-
The description of the app.
"effective_budget_policy_id":
"x-databricks-field-behaviors_output_only": |-
true
"effective_user_api_scopes":
"description": |-
The effective API scopes granted to the user access token.
"x-databricks-field-behaviors_output_only": |-
true
"id":
"description": |-
The unique identifier of the app.
"x-databricks-field-behaviors_output_only": |-
true
"name":
"description": |-
The name of the app. The name must contain only lowercase alphanumeric characters and hyphens.
It must be unique within the workspace.
"oauth2_app_client_id":
"x-databricks-field-behaviors_output_only": |-
true
"oauth2_app_integration_id":
"x-databricks-field-behaviors_output_only": |-
true
"pending_deployment":
"description": |-
The pending deployment of the app. A deployment is considered pending when it is being prepared
for deployment to the app compute.
"x-databricks-field-behaviors_output_only": |-
true
"resources":
"description": |-
Resources for the app.
"service_principal_client_id":
"x-databricks-field-behaviors_output_only": |-
true
"service_principal_id":
"x-databricks-field-behaviors_output_only": |-
true
"service_principal_name":
"x-databricks-field-behaviors_output_only": |-
true
"update_time":
"description": |-
The update time of the app. Formatted timestamp in ISO 8601.
"x-databricks-field-behaviors_output_only": |-
true
"updater":
"description": |-
The email of the user that last updated the app.
"x-databricks-field-behaviors_output_only": |-
true
"url":
"description": |-
The URL of the app once it is deployed.
"x-databricks-field-behaviors_output_only": |-
true
"user_api_scopes": {}
github.com/databricks/cli/bundle/config/resources.Cluster:
"_":
"description": |-
Contains a snapshot of the latest user specified settings that were used to create/edit the cluster.
"apply_policy_default_values":
"description": |-
When set to true, fixed and default values from the policy will be used for fields that are omitted. When set to false, only fixed values from the policy will be applied.
"autoscale":
"description": |-
Parameters needed in order to automatically scale clusters up and down based on load.
Note: autoscaling works best with DB runtime versions 3.0 or later.
"autotermination_minutes":
"description": |-
Automatically terminates the cluster after it is inactive for this time in minutes. If not set,
this cluster will not be automatically terminated. If specified, the threshold must be between
10 and 10000 minutes.
Users can also set this value to 0 to explicitly disable automatic termination.
"aws_attributes":
"description": |-
Attributes related to clusters running on Amazon Web Services.
If not specified at cluster creation, a set of default values will be used.
"azure_attributes":
"description": |-
Attributes related to clusters running on Microsoft Azure.
If not specified at cluster creation, a set of default values will be used.
"cluster_log_conf":
"description": |-
The configuration for delivering spark logs to a long-term storage destination.
Three kinds of destinations (DBFS, S3 and Unity Catalog volumes) are supported. Only one destination can be specified
for one cluster. If the conf is given, the logs will be delivered to the destination every
`5 mins`. The destination of driver logs is `$destination/$clusterId/driver`, while
the destination of executor logs is `$destination/$clusterId/executor`.
"cluster_name":
"description": |-
Cluster name requested by the user. This doesn't have to be unique.
If not specified at creation, the cluster name will be an empty string.
For job clusters, the cluster name is automatically set based on the job and job run IDs.
"custom_tags":
"description": |-
Additional tags for cluster resources. Databricks will tag all cluster resources (e.g., AWS
instances and EBS volumes) with these tags in addition to `default_tags`. Notes:
- Currently, Databricks allows at most 45 custom tags
- Clusters can only reuse cloud resources if the resources' tags are a subset of the cluster tags
"data_security_mode":
"description": |-
Data security mode decides what data governance model to use when accessing data
from a cluster.
The following modes can only be used when `kind = CLASSIC_PREVIEW`.
* `DATA_SECURITY_MODE_AUTO`: Databricks will choose the most appropriate access mode depending on your compute configuration.
* `DATA_SECURITY_MODE_STANDARD`: Alias for `USER_ISOLATION`.
* `DATA_SECURITY_MODE_DEDICATED`: Alias for `SINGLE_USER`.
The following modes can be used regardless of `kind`.
* `NONE`: No security isolation for multiple users sharing the cluster. Data governance features are not available in this mode.
* `SINGLE_USER`: A secure cluster that can only be exclusively used by a single user specified in `single_user_name`. Most programming languages, cluster features and data governance features are available in this mode.
* `USER_ISOLATION`: A secure cluster that can be shared by multiple users. Cluster users are fully isolated so that they cannot see each other's data and credentials. Most data governance features are supported in this mode. But programming languages and cluster features might be limited.
The following modes are deprecated starting with Databricks Runtime 15.0 and
will be removed for future Databricks Runtime versions:
* `LEGACY_TABLE_ACL`: This mode is for users migrating from legacy Table ACL clusters.
* `LEGACY_PASSTHROUGH`: This mode is for users migrating from legacy Passthrough on high concurrency clusters.
* `LEGACY_SINGLE_USER`: This mode is for users migrating from legacy Passthrough on standard clusters.
* `LEGACY_SINGLE_USER_STANDARD`: This mode provides a way that has neither UC nor passthrough enabled.
"docker_image":
"description": |-
Custom Docker image (BYOC).
"driver_instance_pool_id":
"description": |-
The optional ID of the instance pool to which the driver of the cluster belongs.
The pool cluster uses the instance pool with id (instance_pool_id) if the driver pool is not
assigned.
"driver_node_type_id":
"description": |-
The node type of the Spark driver.
Note that this field is optional; if unset, the driver node type will be set as the same value
as `node_type_id` defined above.
This field, along with node_type_id, should not be set if virtual_cluster_size is set.
If driver_node_type_id, node_type_id, and virtual_cluster_size are all specified, driver_node_type_id and node_type_id take precedence.
"enable_elastic_disk":
"description": |-
Autoscaling Local Storage: when enabled, this cluster will dynamically acquire additional disk
space when its Spark workers are running low on disk space. This feature requires specific AWS
permissions to function correctly - refer to the User Guide for more details.
"enable_local_disk_encryption":
"description": |-
Whether to enable LUKS on cluster VMs' local disks
"gcp_attributes":
"description": |-
Attributes related to clusters running on Google Cloud Platform.
If not specified at cluster creation, a set of default values will be used.
"init_scripts":
"description": |-
The configuration for storing init scripts. Any number of destinations can be specified.
The scripts are executed sequentially in the order provided.
If `cluster_log_conf` is specified, init script logs are sent to `<destination>/<cluster-ID>/init_scripts`.
"instance_pool_id":
"description": |-
The optional ID of the instance pool to which the cluster belongs.
"is_single_node":
"description": |-
This field can only be used when `kind = CLASSIC_PREVIEW`.
When set to true, Databricks will automatically set single node related `custom_tags`, `spark_conf`, and `num_workers`
"kind":
"description": |-
The kind of compute described by this compute specification.
Depending on `kind`, different validations and default values will be applied.
Clusters with `kind = CLASSIC_PREVIEW` support the following fields, whereas clusters with no specified `kind` do not.
* [is_single_node](/api/workspace/clusters/create#is_single_node)
* [use_ml_runtime](/api/workspace/clusters/create#use_ml_runtime)
* [data_security_mode](/api/workspace/clusters/create#data_security_mode) set to `DATA_SECURITY_MODE_AUTO`, `DATA_SECURITY_MODE_DEDICATED`, or `DATA_SECURITY_MODE_STANDARD`
By using the [simple form](https://docs.databricks.com/compute/simple-form.html), your clusters automatically use `kind = CLASSIC_PREVIEW`.
"node_type_id":
"description": |-
This field encodes, through a single value, the resources available to each of
the Spark nodes in this cluster. For example, the Spark nodes can be provisioned
and optimized for memory or compute intensive workloads. A list of available node
types can be retrieved by using the :method:clusters/listNodeTypes API call.
"num_workers":
"description": |-
Number of worker nodes that this cluster should have. A cluster has one Spark Driver
and `num_workers` Executors for a total of `num_workers` + 1 Spark nodes.
Note: When reading the properties of a cluster, this field reflects the desired number
of workers rather than the actual current number of workers. For instance, if a cluster
is resized from 5 to 10 workers, this field will immediately be updated to reflect
the target size of 10 workers, whereas the workers listed in `spark_info` will gradually
increase from 5 to 10 as the new nodes are provisioned.
"policy_id":
"description": |-
The ID of the cluster policy used to create the cluster if applicable.
"remote_disk_throughput":
"description": |-
If set, the configured throughput (in Mb/s) for the remote disk. Currently only supported for GCP HYPERDISK_BALANCED disks.
"runtime_engine":
"description": |-
Determines the cluster's runtime engine, either standard or Photon.
This field is not compatible with legacy `spark_version` values that contain `-photon-`.
Remove `-photon-` from the `spark_version` and set `runtime_engine` to `PHOTON`.
If left unspecified, the runtime engine defaults to standard unless the spark_version
contains -photon-, in which case Photon will be used.
"single_user_name":
"description": |-
Single user name if data_security_mode is `SINGLE_USER`
"spark_conf":
"description": |-
An object containing a set of optional, user-specified Spark configuration key-value pairs.
Users can also pass in a string of extra JVM options to the driver and the executors via
`spark.driver.extraJavaOptions` and `spark.executor.extraJavaOptions` respectively.
"spark_env_vars":
"description": |-
An object containing a set of optional, user-specified environment variable key-value pairs.
Please note that key-value pair of the form (X,Y) will be exported as is (i.e.,
`export X='Y'`) while launching the driver and workers.
In order to specify an additional set of `SPARK_DAEMON_JAVA_OPTS`, we recommend appending
them to `$SPARK_DAEMON_JAVA_OPTS` as shown in the example below. This ensures that all
default databricks managed environmental variables are included as well.
Example Spark environment variables:
`{"SPARK_WORKER_MEMORY": "28000m", "SPARK_LOCAL_DIRS": "/local_disk0"}` or
`{"SPARK_DAEMON_JAVA_OPTS": "$SPARK_DAEMON_JAVA_OPTS -Dspark.shuffle.service.enabled=true"}`
"spark_version":
"description": |-
The Spark version of the cluster, e.g. `3.3.x-scala2.11`.
A list of available Spark versions can be retrieved by using
the :method:clusters/sparkVersions API call.
"ssh_public_keys":
"description": |-
SSH public key contents that will be added to each Spark node in this cluster. The
corresponding private keys can be used to login with the user name `ubuntu` on port `2200`.
Up to 10 keys can be specified.
"total_initial_remote_disk_size":
"description": |-
If set, the total initial volume size (in GB) of the remote disks. Currently only supported for GCP HYPERDISK_BALANCED disks.
"use_ml_runtime":
"description": |-
This field can only be used when `kind = CLASSIC_PREVIEW`.
`effective_spark_version` is determined by `spark_version` (DBR release), this field `use_ml_runtime`, and whether `node_type_id` is a GPU node.
"workload_type":
"description": |-
Cluster attributes showing the workload types for the cluster.
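# Illustrative sketch (not auto-generated): a minimal bundle cluster resource
# using the fields annotated above. Field names come from this file; the
# resource key and all values (Spark version, node type, sizes) are
# hypothetical placeholders.
#
#   resources:
#     clusters:
#       my_cluster:
#         cluster_name: analytics-cluster
#         spark_version: 15.4.x-scala2.12   # list real versions via :method:clusters/sparkVersions
#         node_type_id: i3.xlarge           # list real types via :method:clusters/listNodeTypes
#         autoscale:
#           min_workers: 2
#           max_workers: 8
#         autotermination_minutes: 60       # 0 disables automatic termination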
github.com/databricks/cli/bundle/config/resources.DatabaseCatalog:
"create_database_if_not_exists": {}
"database_instance_name":
"description": |-
The name of the DatabaseInstance housing the database.
"database_name":
"description": |-
The name of the database (in an instance) associated with the catalog.
"name":
"description": |-
The name of the catalog in UC.
"uid":
"x-databricks-field-behaviors_output_only": |-
true
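# Illustrative sketch (not auto-generated): a database catalog backed by a
# Postgres instance, using the fields annotated above. The resource key and
# all names are hypothetical placeholders.
#
#   resources:
#     database_catalogs:
#       my_catalog:
#         name: my_pg_catalog
#         database_instance_name: my-pg-instance
#         database_name: databricks_postgres
#         create_database_if_not_exists: true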
github.com/databricks/cli/bundle/config/resources.DatabaseInstance:
"_":
"description": |-
A DatabaseInstance represents a logical Postgres instance, comprised of both compute and storage.
"capacity":
"description": |-
The sku of the instance. Valid values are "CU_1", "CU_2", "CU_4", "CU_8".
"child_instance_refs":
"description": |-
The refs of the child instances. This is only available if the instance is a
parent instance.
"x-databricks-field-behaviors_output_only": |-
true
"creation_time":
"description": |-
The timestamp when the instance was created.
"x-databricks-field-behaviors_output_only": |-
true
"creator":
"description": |-
The email of the creator of the instance.
"x-databricks-field-behaviors_output_only": |-
true
"custom_tags":
"description": |-
Custom tags associated with the instance. This field is only included on create and update responses.
"effective_capacity":
"description": |-
Deprecated. The SKU of the instance; this field will always match the value of capacity.
"deprecation_message": |-
This field is deprecated
"x-databricks-field-behaviors_output_only": |-
true
"effective_custom_tags":
"description": |-
The recorded custom tags associated with the instance.
"x-databricks-field-behaviors_output_only": |-
true
"effective_enable_pg_native_login":
"description": |-
Whether the instance has PG native password login enabled.
"x-databricks-field-behaviors_output_only": |-
true
"effective_enable_readable_secondaries":
"description": |-
Whether secondaries serving read-only traffic are enabled. Defaults to false.
"x-databricks-field-behaviors_output_only": |-
true
"effective_node_count":
"description": |-
The number of nodes in the instance, composed of 1 primary and 0 or more secondaries. Defaults to
1 primary and 0 secondaries.
"x-databricks-field-behaviors_output_only": |-
true
"effective_retention_window_in_days":
"description": |-
The retention window for the instance. This is the time window in days
for which the historical data is retained.
"x-databricks-field-behaviors_output_only": |-
true
"effective_stopped":
"description": |-
Whether the instance is stopped.
"x-databricks-field-behaviors_output_only": |-
true
"effective_usage_policy_id":
"description": |-
The policy that is applied to the instance.
"x-databricks-field-behaviors_output_only": |-
true
"enable_pg_native_login":
"description": |-
Whether to enable PG native password login on the instance. Defaults to false.
"enable_readable_secondaries":
"description": |-
Whether to enable secondaries to serve read-only traffic. Defaults to false.
"name":
"description": |-
The name of the instance. This is the unique identifier for the instance.
"node_count":
"description": |-
The number of nodes in the instance, composed of 1 primary and 0 or more secondaries. Defaults to
1 primary and 0 secondaries. This field is input only; see effective_node_count for the output.
"parent_instance_ref":
"description": |-
The ref of the parent instance. This is only available if the instance is a
child instance.
Input: For specifying the parent instance to create a child instance. Optional.
Output: Only populated if provided as input to create a child instance.
"pg_version":
"description": |-
The version of Postgres running on the instance.
"x-databricks-field-behaviors_output_only": |-
true
"read_only_dns":
"description": |-
The DNS endpoint to connect to the instance for read only access. This is only available if
enable_readable_secondaries is true.
"x-databricks-field-behaviors_output_only": |-
true
"read_write_dns":
"description": |-
The DNS endpoint to connect to the instance for read+write access.
"x-databricks-field-behaviors_output_only": |-
true
"retention_window_in_days":
"description": |-
The retention window for the instance. This is the time window in days
for which the historical data is retained. The default value is 7 days.
Valid values are 2 to 35 days.
"state":
"description": |-
The current state of the instance.
"x-databricks-field-behaviors_output_only": |-
true
"stopped":
"description": |-
Whether to stop the instance. This is an input-only parameter; see effective_stopped for the output.
"uid":
"description": |-
An immutable UUID identifier for the instance.
"x-databricks-field-behaviors_output_only": |-
true
"usage_policy_id":
"description": |-
The desired usage policy to associate with the instance.
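# Illustrative sketch (not auto-generated): a database instance using the
# input fields annotated above (output-only `effective_*` fields are omitted).
# The resource key and values are hypothetical placeholders.
#
#   resources:
#     database_instances:
#       my_instance:
#         name: my-pg-instance
#         capacity: CU_1
#         node_count: 1
#         retention_window_in_days: 7      # valid values: 2 to 35 days
#         enable_readable_secondaries: false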
github.com/databricks/cli/bundle/config/resources.Job:
"budget_policy_id":
"description": |-
The id of the user specified budget policy to use for this job.
If not specified, a default budget policy may be applied when creating or modifying the job.
See `effective_budget_policy_id` for the budget policy used by this workload.
"continuous":
"description": |-
An optional continuous property for this job. The continuous property will ensure that there is always one run executing. Only one of `schedule` and `continuous` can be used.
"deployment":
"description": |-
Deployment information for jobs managed by external sources.
"description":
"description": |-
An optional description for the job. The maximum length is 27700 characters in UTF-8 encoding.
"edit_mode":
"description": |-
Edit mode of the job.
* `UI_LOCKED`: The job is in a locked UI state and cannot be modified.
* `EDITABLE`: The job is in an editable state and can be modified.
"email_notifications":
"description": |-
An optional set of email addresses that is notified when runs of this job begin or complete as well as when this job is deleted.
"environments":
"description": |-
A list of task execution environment specifications that can be referenced by serverless tasks of this job.
An environment is required to be present for serverless tasks.
For serverless notebook tasks, the environment is accessible in the notebook environment panel.
For other serverless tasks, the task environment is required to be specified using environment_key in the task settings.
"format":
"description": |-
Indicates the format of the job. This field is ignored in Create/Update/Reset calls. When using the Jobs API 2.1 this value is always set to `"MULTI_TASK"`.
"deprecation_message": |-
This field is deprecated
"git_source":
"description": |-
An optional specification for a remote Git repository containing the source code used by tasks. Version-controlled source code is supported by notebook, dbt, Python script, and SQL File tasks.
If `git_source` is set, these tasks retrieve the file from the remote repository by default. However, this behavior can be overridden by setting `source` to `WORKSPACE` on the task.
Note: dbt and SQL File tasks support only version-controlled sources. If dbt or SQL File tasks are used, `git_source` must be defined on the job.
"health":
"description": |-
An optional set of health rules that can be defined for this job.
"job_clusters":
"description": |-
A list of job cluster specifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster. You must declare dependent libraries in task settings.
"max_concurrent_runs":
"description": |-
An optional maximum allowed number of concurrent runs of the job.
Set this value if you want to be able to execute multiple runs of the same job concurrently.
This is useful for example if you trigger your job on a frequent schedule and want to allow consecutive runs to overlap with each other, or if you want to trigger multiple runs which differ by their input parameters.
This setting affects only new runs. For example, suppose the job’s concurrency is 4 and there are 4 concurrent active runs. Then setting the concurrency to 3 won’t kill any of the active runs.
However, from then on, new runs are skipped unless there are fewer than 3 active runs.
This value cannot exceed 1000. Setting this value to `0` causes all new runs to be skipped.
"name":
"description": |-
An optional name for the job. The maximum length is 4096 bytes in UTF-8 encoding.
"notification_settings":
"description": |-
Optional notification settings that are used when sending notifications to each of the `email_notifications` and `webhook_notifications` for this job.
"parameters":
"description": |-
Job-level parameter definitions
"performance_target":
"description": |-
The performance mode on a serverless job. This field determines the level of compute performance or cost-efficiency for the run.
* `STANDARD`: Enables cost-efficient execution of serverless workloads.
* `PERFORMANCE_OPTIMIZED`: Prioritizes fast startup and execution times through rapid scaling and optimized cluster performance.
"queue":
"description": |-
The queue settings of the job.
"run_as":
"description": |-
The user or service principal that the job runs as, if specified in the request.
This field indicates the explicit configuration of `run_as` for the job.
To find the value in all cases, explicit or implicit, use `run_as_user_name`.
"schedule":
"description": |-
An optional periodic schedule for this job. The default behavior is that the job only runs when triggered by clicking “Run Now” in the Jobs UI or sending an API request to `runNow`.
"tags":
"description": |-
A map of tags associated with the job. These are forwarded to the cluster as cluster tags for jobs clusters, and are subject to the same limitations as cluster tags. A maximum of 25 tags can be added to the job.
"tasks":
"description": |-
A list of task specifications to be executed by this job.
It supports up to 1000 elements in write endpoints (:method:jobs/create, :method:jobs/reset, :method:jobs/update, :method:jobs/submit).
Read endpoints return only 100 tasks. If more than 100 tasks are available, you can paginate through them using :method:jobs/get. Use the `next_page_token` field at the object root to determine if more results are available.
"timeout_seconds":
"description": |-
An optional timeout applied to each run of this job. A value of `0` means no timeout.
"trigger":
"description": |-
A configuration to trigger a run when certain conditions are met. The default behavior is that the job runs only when triggered by clicking “Run Now” in the Jobs UI or sending an API request to `runNow`.
"usage_policy_id":
"description": |-
The id of the user specified usage policy to use for this job.
If not specified, a default usage policy may be applied when creating or modifying the job.
See `effective_usage_policy_id` for the usage policy used by this workload.
"x-databricks-preview": |-
PRIVATE
"webhook_notifications":
"description": |-
A collection of system notification IDs to notify when runs of this job begin or complete.
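# Illustrative sketch (not auto-generated): how some of the job fields above
# might be combined in a bundle. The job name, cron expression, and task are
# hypothetical placeholders; task sub-fields are documented elsewhere.
#
#   resources:
#     jobs:
#       nightly_etl:
#         name: Nightly ETL
#         max_concurrent_runs: 1
#         timeout_seconds: 3600
#         schedule:
#           quartz_cron_expression: "0 0 2 * * ?"
#           timezone_id: UTC
#         tasks:
#           - task_key: ingest
#             notebook_task:
#               notebook_path: ./src/ingest.ipynb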
github.com/databricks/cli/bundle/config/resources.MlflowExperiment:
"artifact_location":
"description": |-
Location where all artifacts for the experiment are stored.
If not provided, the remote server will select an appropriate default.
"name":
"description": |-
Experiment name.
"tags":
"description": |-
A collection of tags to set on the experiment. Maximum tag size and number of tags per request
depends on the storage backend. All storage backends are guaranteed to support tag keys up
to 250 bytes in size and tag values up to 5000 bytes in size. All storage backends are also
guaranteed to support up to 20 tags per request.
github.com/databricks/cli/bundle/config/resources.MlflowModel:
"description":
"description": |-
Optional description for registered model.
"name":
"description": |-
Register models under this name
"tags":
"description": |-
Additional metadata for registered model.
github.com/databricks/cli/bundle/config/resources.ModelServingEndpoint:
"ai_gateway":
"description": |-
The AI Gateway configuration for the serving endpoint. NOTE: External model, provisioned throughput, and pay-per-token endpoints are fully supported; agent endpoints currently only support inference tables.
"budget_policy_id":
"description": |-
The budget policy to be applied to the serving endpoint.
"config":
"description": |-
The core config of the serving endpoint.
"description": {}
"email_notifications":
"description": |-
Email notification settings.
"name":
"description": |-
The name of the serving endpoint. This field is required and must be unique across a Databricks workspace.
An endpoint name can consist of alphanumeric characters, dashes, and underscores.
"rate_limits":
"description": |-
Rate limits to be applied to the serving endpoint. NOTE: this field is deprecated, please use AI Gateway to manage rate limits.
"deprecation_message": |-
This field is deprecated
"route_optimized":
"description": |-
Enable route optimization for the serving endpoint.
"tags":
"description": |-
Tags to be attached to the serving endpoint and automatically propagated to billing logs.
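# Illustrative sketch (not auto-generated): a serving endpoint using the
# fields annotated above. The `config.served_entities` sub-fields are assumed
# from the serving API (they are not annotated in this file); all names and
# values are hypothetical placeholders.
#
#   resources:
#     model_serving_endpoints:
#       my_endpoint:
#         name: my-endpoint
#         route_optimized: false
#         config:
#           served_entities:
#             - entity_name: main.models.my_model
#               entity_version: "1"
#               workload_size: Small
#               scale_to_zero_enabled: true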
github.com/databricks/cli/bundle/config/resources.Pipeline:
"allow_duplicate_names":
"description": |-
If false, deployment will fail if the name conflicts with that of another pipeline.
"budget_policy_id":
"description": |-
Budget policy of this pipeline.
"catalog":
"description": |-
A catalog in Unity Catalog to publish data from this pipeline to. If `target` is specified, tables in this pipeline are published to a `target` schema inside `catalog` (for example, `catalog`.`target`.`table`). If `target` is not specified, no data is published to Unity Catalog.
"channel":
"description": |-
DLT Release Channel that specifies which version to use.
"clusters":
"description": |-
Cluster settings for this pipeline deployment.
"configuration":
"description": |-
String-String configuration for this pipeline execution.
"continuous":
"description": |-
Whether the pipeline is continuous or triggered. This replaces `trigger`.
"deployment":
"description": |-
Deployment type of this pipeline.
"development":
"description": |-
Whether the pipeline is in Development mode. Defaults to false.
"dry_run": {}
"edition":
"description": |-
Pipeline product edition.
"environment":
"description": |-
Environment specification for this pipeline used to install dependencies.
"event_log":
"description": |-
Event log configuration for this pipeline
"filters":
"description": |-
Filters on which Pipeline packages to include in the deployed graph.
"gateway_definition":
"description": |-
The definition of a gateway pipeline to support change data capture.
"x-databricks-preview": |-
PRIVATE
"id":
"description": |-
Unique identifier for this pipeline.
"ingestion_definition":
"description": |-
The configuration for a managed ingestion pipeline. These settings cannot be used with the 'libraries', 'schema', 'target', or 'catalog' settings.
"libraries":
"description": |-
Libraries or code needed by this deployment.
"name":
"description": |-
Friendly identifier for this pipeline.
"notifications":
"description": |-
List of notification settings for this pipeline.
"photon":
"description": |-
Whether Photon is enabled for this pipeline.
"restart_window":
"description": |-
Restart window of this pipeline.
"x-databricks-preview": |-
PRIVATE
"root_path":
"description": |-
Root path for this pipeline.
This is used as the root directory when editing the pipeline in the Databricks user interface and it is
added to sys.path when executing Python sources during pipeline execution.
"run_as":
"description": |-
Write-only setting, available only in Create/Update calls. Specifies the user or service principal that the pipeline runs as. If not specified, the pipeline runs as the user who created the pipeline.
Only `user_name` or `service_principal_name` can be specified. If both are specified, an error is thrown.
"schema":
"description": |-
The default schema (database) where tables are read from or published to.
"serverless":
"description": |-
Whether serverless compute is enabled for this pipeline.
"storage":
"description": |-
DBFS root directory for storing checkpoints and tables.
"tags":
"description": |-
A map of tags associated with the pipeline.
These are forwarded to the cluster as cluster tags, and are therefore subject to the same limitations.
A maximum of 25 tags can be added to the pipeline.
"target":
"description": |-
Target schema (database) to add tables in this pipeline to. Exactly one of `schema` or `target` must be specified. To publish to Unity Catalog, also specify `catalog`. This legacy field is deprecated for pipeline creation in favor of the `schema` field.
"deprecation_message": |-
This field is deprecated
"trigger":
"description": |-
Which pipeline trigger to use. Deprecated: Use `continuous` instead.
"deprecation_message": |-
This field is deprecated
"usage_policy_id":
"description": |-
Usage policy of this pipeline.
"x-databricks-preview": |-
PRIVATE
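# Illustrative sketch (not auto-generated): a pipeline resource combining the
# fields annotated above. Names and paths are hypothetical; `schema` is used
# instead of the deprecated `target`.
#
#   resources:
#     pipelines:
#       my_pipeline:
#         name: my-pipeline
#         catalog: main
#         schema: etl
#         serverless: true
#         continuous: false
#         libraries:
#           - notebook:
#               path: ./src/transform.ipynb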
github.com/databricks/cli/bundle/config/resources.QualityMonitor:
"assets_dir":
"description": |-
[Create:REQ Update:IGN] Field for specifying the absolute path to a custom directory to store data-monitoring
assets. Normally prepopulated to a default user location via UI and Python APIs.
"baseline_table_name":
"description": |-
[Create:OPT Update:OPT] Baseline table name.
Baseline data is used to compute drift from the data in the monitored `table_name`.
The baseline table and the monitored table shall have the same schema.
"custom_metrics":
"description": |-
[Create:OPT Update:OPT] Custom metrics.
"data_classification_config":
"description": |-
[Create:OPT Update:OPT] Data classification related config.
"x-databricks-preview": |-
PRIVATE
"inference_log": {}
"latest_monitor_failure_msg":
"description": |-
[Create:ERR Update:IGN] The latest error message for a monitor failure.
"notifications":
"description": |-
[Create:OPT Update:OPT] Field for specifying notification settings.
"output_schema_name":
"description": |-
[Create:REQ Update:REQ] Schema where output tables are created. Needs to be in 2-level format {catalog}.{schema}
"schedule":
"description": |-
[Create:OPT Update:OPT] The monitor schedule.
"skip_builtin_dashboard":
"description": |-
Whether to skip creating a default dashboard summarizing data quality metrics.
"slicing_exprs":
"description": |-
[Create:OPT Update:OPT] List of column expressions to slice data with for targeted analysis. The data is grouped by
each expression independently, resulting in a separate slice for each predicate and its
complements. For example `slicing_exprs=[“col_1”, “col_2 > 10”]` will generate the following
slices: two slices for `col_2 > 10` (True and False), and one slice per unique value in
`col1`. For high-cardinality columns, only the top 100 unique values by frequency will
generate slices.
"snapshot":
"description": |-
Configuration for monitoring snapshot tables.
"time_series":
"description": |-
Configuration for monitoring time series tables.
"warehouse_id":
"description": |-
Optional argument to specify the warehouse for dashboard creation. If not specified, the first running
warehouse will be used.
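# Illustrative sketch (not auto-generated): a quality monitor using the fields
# annotated above. The monitored `table_name` key is assumed from the bundle
# schema (it is not annotated in this file); all values are hypothetical.
#
#   resources:
#     quality_monitors:
#       orders_monitor:
#         table_name: main.sales.orders
#         output_schema_name: main.monitoring   # 2-level {catalog}.{schema}
#         assets_dir: /Workspace/Users/someone@example.com/monitoring
#         snapshot: {}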
github.com/databricks/cli/bundle/config/resources.RegisteredModel:
"aliases":
"description": |-
List of aliases associated with the registered model
"browse_only":
"description": |-
Indicates whether the principal is limited to retrieving metadata for the associated object through the BROWSE privilege when include_browse is enabled in the request.
"catalog_name":
"description": |-
The name of the catalog where the schema and the registered model reside
"comment":
"description": |-
The comment attached to the registered model
"created_at":
"description": |-
Creation timestamp of the registered model in milliseconds since the Unix epoch
"created_by":
"description": |-
The identifier of the user who created the registered model
"full_name":
"description": |-
The three-level (fully qualified) name of the registered model
"metastore_id":
"description": |-
The unique identifier of the metastore
"name":
"description": |-
The name of the registered model
"owner":
"description": |-
The identifier of the user who owns the registered model
"schema_name":
"description": |-
The name of the schema where the registered model resides
"storage_location":
"description": |-
The storage location on the cloud under which model version data files are stored
"updated_at":
"description": |-
Last-update timestamp of the registered model in milliseconds since the Unix epoch
"updated_by":
"description": |-
The identifier of the user who updated the registered model last time
github.com/databricks/cli/bundle/config/resources.Schema:
"catalog_name":
"description": |-
Name of parent catalog.
"comment":
"description": |-
User-provided free-form text description.
"name":
"description": |-
Name of schema, relative to parent catalog.
"properties":
"description": |-
A map of key-value properties attached to the securable.
"storage_root":
"description": |-
Storage root URL for managed tables within schema.
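# Illustrative sketch (not auto-generated): a Unity Catalog schema resource
# using the fields annotated above; names and properties are hypothetical
# placeholders.
#
#   resources:
#     schemas:
#       analytics_schema:
#         catalog_name: main
#         name: analytics
#         comment: Tables for the analytics team
#         properties:
#           team: data-platform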
github.com/databricks/cli/bundle/config/resources.SqlWarehouse:
"_":
"description": |-
Creates a new SQL warehouse.
"auto_stop_mins":
"description": |-
The amount of time in minutes that a SQL warehouse must be idle (i.e., no
RUNNING queries) before it is automatically stopped.
Supported values:
- Must be == 0 or >= 10 mins
- 0 indicates no autostop.
Defaults to 120 mins
"channel":
"description": |-
Channel Details
"cluster_size":
"description": |-
Size of the clusters allocated for this warehouse.
Increasing the size of a spark cluster allows you to run larger queries on
it. If you want to increase the number of concurrent queries, please tune
max_num_clusters.
Supported values:
- 2X-Small
- X-Small
- Small
- Medium
- Large
- X-Large
- 2X-Large
- 3X-Large
- 4X-Large
"creator_name":
"description": |-
warehouse creator name
"enable_photon":
"description": |-
Configures whether the warehouse should use Photon optimized clusters.
Defaults to false.
"enable_serverless_compute":
"description": |-
Configures whether the warehouse should use serverless compute
"instance_profile_arn":
"description": |-
Deprecated. Instance profile used to pass IAM role to the cluster
"deprecation_message": |-
This field is deprecated
"max_num_clusters":
"description": |-
Maximum number of clusters that the autoscaler will create to handle
concurrent queries.
Supported values:
- Must be >= min_num_clusters
- Must be <= 40.
Defaults to min_num_clusters if unset.
"min_num_clusters":
"description": |-
Minimum number of available clusters that will be maintained for this SQL
warehouse. Increasing this will ensure that a larger number of clusters are
always running and therefore may reduce the cold start time for new
queries. This is similar to reserved vs. revocable cores in a resource
manager.
Supported values:
- Must be > 0
- Must be <= min(max_num_clusters, 30)
Defaults to 1
"name":
"description": |-
Logical name for the cluster.
Supported values:
- Must be unique within an org.
- Must be less than 100 characters.
"spot_instance_policy":
"description": |-
Configures whether the endpoint should use spot instances.
"tags":
"description": |-
A set of key-value pairs that will be tagged on all resources (e.g., AWS instances and EBS volumes) associated
with this SQL warehouse.
Supported values:
- Number of tags < 45.
"warehouse_type":
"description": |-
Warehouse type: `PRO` or `CLASSIC`. If you want to use serverless compute,
you must set to `PRO` and also set the field `enable_serverless_compute` to `true`.
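# Illustrative sketch (not auto-generated): a SQL warehouse resource honoring
# the constraints noted above (auto_stop_mins must be 0 or >= 10; serverless
# requires PRO). The resource key, name, and sizes are hypothetical.
#
#   resources:
#     sql_warehouses:
#       analytics_wh:
#         name: analytics-warehouse
#         cluster_size: Small
#         min_num_clusters: 1
#         max_num_clusters: 4
#         auto_stop_mins: 30
#         enable_serverless_compute: true
#         warehouse_type: PRO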
github.com/databricks/cli/bundle/config/resources.SyncedDatabaseTable:
"_":
"description": |-
Next field marker: 18
"data_synchronization_status":
"description": |-
Synced Table data synchronization status
"x-databricks-field-behaviors_output_only": |-
true
"database_instance_name":
"description": |-
Name of the target database instance. This is required when creating synced database tables in standard catalogs.
This is optional when creating synced database tables in registered catalogs. If this field is specified
when creating synced database tables in registered catalogs, the database instance name MUST
match that of the registered catalog (or the request will be rejected).
"effective_database_instance_name":
"description": |-
The name of the database instance that this table is registered to. This field is always returned, and for
tables inside database catalogs is inferred from the database instance associated with the catalog.
"x-databricks-field-behaviors_output_only": |-
true
"effective_logical_database_name":
"description": |-
The name of the logical database that this table is registered to.
"x-databricks-field-behaviors_output_only": |-
true
"logical_database_name":
"description": |-
Target Postgres database object (logical database) name for this table.
When creating a synced table in a registered Postgres catalog, the
target Postgres database name is inferred to be that of the registered catalog.
If this field is specified in this scenario, the Postgres database name MUST
match that of the registered catalog (or the request will be rejected).
When creating a synced table in a standard catalog, this field is required.
In this scenario, specifying this field will allow targeting an arbitrary postgres database.
Note that this has implications for the `create_database_objects_if_missing` field in `spec`.
"name":
"description": |-
Full three-part (catalog, schema, table) name of the table.
"spec":
"description": |-
Specification of a synced database table.
"unity_catalog_provisioning_state":
"description": |-
The provisioning state of the synced table entity in Unity Catalog. This is distinct from the
state of the data synchronization pipeline (i.e. the table may be in "ACTIVE" but the pipeline
may be in "PROVISIONING" as it runs asynchronously).
"x-databricks-field-behaviors_output_only": |-
true
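# Illustrative sketch (not auto-generated): a synced database table in a
# standard catalog, where `database_instance_name` and `logical_database_name`
# are required per the annotations above. All names are hypothetical; `spec`
# sub-fields are documented elsewhere.
#
#   resources:
#     synced_database_tables:
#       orders_synced:
#         name: main.analytics.orders_synced   # three-part UC name
#         database_instance_name: my-pg-instance
#         logical_database_name: databricks_postgres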
github.com/databricks/cli/bundle/config/resources.Volume:
"catalog_name":
"description": |-
The name of the catalog where the schema and the volume are
"comment":
"description": |-
The comment attached to the volume
"name":
"description": |-
The name of the volume
"schema_name":
"description": |-
The name of the schema where the volume is
"storage_location":
"description": |-
The storage location on the cloud
"volume_type":
"description": |-
The type of the volume. An external volume is located in the specified external location.
A managed volume is located in the default location which is specified by the parent schema, or the parent catalog, or the Metastore.
[Learn more](https://docs.databricks.com/aws/en/volumes/managed-vs-external)
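# Illustrative sketch (not auto-generated): a managed volume resource using
# the fields annotated above; all names are hypothetical placeholders.
#
#   resources:
#     volumes:
#       raw_files:
#         catalog_name: main
#         schema_name: analytics
#         name: raw_files
#         volume_type: MANAGED
#         comment: Landing zone for raw files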
github.com/databricks/databricks-sdk-go/service/apps.AppDeployment:
"create_time":
"description": |-
The creation time of the deployment. Formatted timestamp in ISO 8601.
"x-databricks-field-behaviors_output_only": |-
true
"creator":
"description": |-
The email of the user that created the deployment.
"x-databricks-field-behaviors_output_only": |-
true
"deployment_artifacts":
"description": |-
The deployment artifacts for an app.
"x-databricks-field-behaviors_output_only": |-
true
"deployment_id":
"description": |-
The unique id of the deployment.
"mode":
"description": |-
The mode in which the deployment will manage the source code.
"source_code_path":
"description": |-
The workspace file system path of the source code used to create the app deployment. This is different from
`deployment_artifacts.source_code_path`, which is the path used by the deployed app. The former refers
to the original source code location of the app in the workspace during deployment creation, whereas
the latter provides a system generated stable snapshotted source code path used by the deployment.
"status":
"description": |-
Status and status message of the deployment