From 389fc764f1bbfe8243515f6b4425a1e55bd5a89f Mon Sep 17 00:00:00 2001 From: Myasnikov Daniil Date: Thu, 19 Mar 2026 10:45:14 +0500 Subject: [PATCH 1/3] Added docs covering VMinstance and VMDisk backups Signed-off-by: Myasnikov Daniil --- .../docs/v1/kubernetes/backup-and-recovery.md | 95 ------- .../services/velero-backup-configuration.md | 80 +++++- .../v1/virtualization/backup-and-recovery.md | 244 ++++++++++++++++++ 3 files changed, 310 insertions(+), 109 deletions(-) delete mode 100644 content/en/docs/v1/kubernetes/backup-and-recovery.md create mode 100644 content/en/docs/v1/virtualization/backup-and-recovery.md diff --git a/content/en/docs/v1/kubernetes/backup-and-recovery.md b/content/en/docs/v1/kubernetes/backup-and-recovery.md deleted file mode 100644 index 4bced493..00000000 --- a/content/en/docs/v1/kubernetes/backup-and-recovery.md +++ /dev/null @@ -1,95 +0,0 @@ ---- -title: Backup and Recovery -linkTitle: Backup and Recovery -description: "How to create and manage backups in your Kubernetes cluster using BackupJobs and Plans." -weight: 40 -aliases: - - /docs/v1/guides/backups ---- - -Cluster backup **strategies** and **BackupClasses** are configured by cluster administrators. If your tenant does not have a BackupClass yet, ask your administrator to follow the [Velero Backup Configuration]({{% ref "/docs/v1/operations/services/velero-backup-configuration" %}}) guide to set up storage, strategies, and BackupClasses. - -This guide is for **tenant users**: how to run one-off and scheduled backups using existing BackupClasses, check backup status, and where to look for restore options. - -Cozystack uses [Velero](https://velero.io/docs/v1.17/) under the hood. Backups and restores run in the `cozy-velero` namespace (management cluster) or the equivalent namespace in your tenant cluster, depending on your setup. - -## Prerequisites - -- The Velero add-on is enabled for your cluster (by an administrator). 
-- At least one **BackupClass** is available for your tenant or namespace (provided by an administrator). -- `kubectl` and kubeconfig for the cluster you are backing up. - -## 1. List available BackupClasses - -BackupClasses define where and how backups are stored. You can only use those that administrators have created and made available to you. -Check available BackupClass **names**, and use in the next steps when creating a BackupJob or Plan. - -```bash -kubectl get backupclasses -NAME AGE -velero 14m -``` - -## 2. Create a one-off backup (BackupJob) - -Use a **BackupJob** when you want to run a backup once (for example, before a risky change). - -Example BackupJob for VMInstance: - -```yaml -apiVersion: backups.cozystack.io/v1alpha1 -kind: BackupJob -metadata: - name: my-manual-backup - namespace: tenant-root -spec: - applicationRef: - apiGroup: apps.cozystack.io - kind: VMInstance - name: vm1 - backupClassName: velero -``` - -Apply and check status: - -```bash -kubectl apply -f backupjob.yaml -kubectl get backupjobs -n tenant-root -kubectl describe backupjob my-manual-backup -n tenant-root -``` - -## 3. Create scheduled backups (Plan) - -Use a **Plan** to run backups on a schedule (e.g. daily or every 6 hours). - -Example: - -```yaml -apiVersion: backups.cozystack.io/v1alpha1 -kind: Plan -metadata: - name: my-backup-plan - namespace: tenant-root -spec: - applicationRef: - apiGroup: apps.cozystack.io - kind: VMInstance - name: vm1 - backupClassName: velero - schedule: "0 */6 * * *" # Every 6 hours (cron) -``` - -Apply and check: - -```bash -kubectl apply -f plan.yaml -kubectl get plans -n tenant-root -kubectl describe plan my-backup-plan -n tenant-root -kubectl get backups.velero.io -n tenant-root -``` - -## 4. 
Check backup status - -- **BackupJobs**: `kubectl get backupjobs -n tenant-root` and `kubectl describe backupjob -n tenant-root` -- **Plans**: `kubectl get plans -n tenant-root` and `kubectl describe plan -n tenant-root` -- **Velero backups**: `kubectl get backups.velero.io -n tenant-root` diff --git a/content/en/docs/v1/operations/services/velero-backup-configuration.md b/content/en/docs/v1/operations/services/velero-backup-configuration.md index 187b86ed..88e4d3e2 100644 --- a/content/en/docs/v1/operations/services/velero-backup-configuration.md +++ b/content/en/docs/v1/operations/services/velero-backup-configuration.md @@ -5,7 +5,7 @@ description: "Configure backup storage, strategies, and BackupClasses for cluste weight: 30 --- -This guide is for **cluster administrators** who configure the backup infrastructure in Cozystack: S3 storage, Velero locations, backup **strategies**, and **BackupClasses**. Tenant users then use existing BackupClasses to create [BackupJobs and Plans]({{% ref "/docs/v1/kubernetes/backup-and-recovery" %}}). +This guide is for **cluster administrators** who configure the backup infrastructure in Cozystack: S3 storage, Velero locations, backup **strategies**, and **BackupClasses**. Tenant users then use existing BackupClasses to create [BackupJobs and Plans]({{% ref "/docs/v1/virtualization/backup-and-recovery" %}}). 
## Prerequisites @@ -119,42 +119,84 @@ kubectl get crd | grep -i backup kubectl explain --recursive ``` -Example strategy: +Example strategy for VMInstance (includes all VM resources and attached volumes): ```yaml apiVersion: strategy.backups.cozystack.io/v1alpha1 kind: Velero metadata: - name: velero-backup-strategy + name: vminstance-strategy spec: template: + restoreSpec: + existingResourcePolicy: update + spec: # see https://velero.io/docs/v1.17/api-types/backup/ includedNamespaces: - - '{{ .Application.metadata.namespace }}' - - # Resources related VMInstance + - '{{ .Application.metadata.namespace }}' + orLabelSelectors: + # VM resources (VirtualMachine, DataVolume, PVC, etc.) + - matchLabels: + app.kubernetes.io/instance: 'vm-instance-{{ .Application.metadata.name }}' + # HelmRelease (the Cozystack app object) + - matchLabels: + apps.cozystack.io/application.kind: '{{ .Application.kind }}' + apps.cozystack.io/application.name: '{{ .Application.metadata.name }}' includedResources: - helmreleases.helm.toolkit.fluxcd.io - virtualmachines.kubevirt.io - virtualmachineinstances.kubevirt.io + - pods - datavolumes.cdi.kubevirt.io - persistentvolumeclaims - - services - configmaps - secrets - + includeClusterResources: false storageLocation: '{{ .Parameters.backupStorageLocationName }}' - volumeSnapshotLocations: - '{{ .Parameters.backupStorageLocationName }}' snapshotVolumes: true snapshotMoveData: true + ttl: 720h0m0s + itemOperationTimeout: 24h0m0s +``` + +Example strategy for VMDisk (disk and its volume only): +```yaml +apiVersion: strategy.backups.cozystack.io/v1alpha1 +kind: Velero +metadata: + name: vmdisk-strategy +spec: + template: + restoreSpec: + existingResourcePolicy: update + + spec: + includedNamespaces: + - '{{ .Application.metadata.namespace }}' + orLabelSelectors: + - matchLabels: + app.kubernetes.io/instance: 'vm-disk-{{ .Application.metadata.name }}' + - matchLabels: + apps.cozystack.io/application.kind: '{{ .Application.kind }}' + 
apps.cozystack.io/application.name: '{{ .Application.metadata.name }}' + includedResources: + - helmreleases.helm.toolkit.fluxcd.io + - datavolumes.cdi.kubevirt.io + - persistentvolumeclaims + includeClusterResources: false + storageLocation: '{{ .Parameters.backupStorageLocationName }}' + volumeSnapshotLocations: + - '{{ .Parameters.backupStorageLocationName }}' + snapshotVolumes: true + snapshotMoveData: true ttl: 720h0m0s itemOperationTimeout: 24h0m0s ``` -Template context for substitutions in template spec will be resolved according to defined Parameters in BackupClass and desired ApplicationRef defined in BackupJob / Plan. +Template variables (`{{ .Application.* }}` and `{{ .Parameters.* }}`) are resolved from the ApplicationRef in the BackupJob/Plan and the parameters defined in the BackupClass. Don't forget to apply it into management cluster: @@ -183,9 +225,19 @@ spec: - strategyRef: apiGroup: strategy.backups.cozystack.io kind: Velero - name: velero-backup-strategy + name: vminstance-strategy application: kind: VMInstance + apiGroup: apps.cozystack.io + parameters: + backupStorageLocationName: default + - strategyRef: + apiGroup: strategy.backups.cozystack.io + kind: Velero + name: vmdisk-strategy + application: + kind: VMDisk + apiGroup: apps.cozystack.io parameters: backupStorageLocationName: default ``` @@ -201,8 +253,8 @@ kubectl get backupclasses Once strategies and BackupClasses are in place, **tenant users** can run backups without touching Velero or storage configuration: -- **One-off backup**: create a [BackupJob]({{% ref "/docs/v1/kubernetes/backup-and-recovery#create-a-one-off-backup-backupjob" %}}) that references a BackupClass. -- **Scheduled backups**: create a [Plan]({{% ref "/docs/v1/kubernetes/backup-and-recovery#create-scheduled-backups-plan" %}}) with a cron schedule and a BackupClass reference. 
+- **One-off backup**: create a [BackupJob]({{% ref "/docs/v1/virtualization/backup-and-recovery#one-off-backup" %}}) that references a BackupClass. +- **Scheduled backups**: create a [Plan]({{% ref "/docs/v1/virtualization/backup-and-recovery#scheduled-backup" %}}) with a cron schedule and a BackupClass reference. Direct use of Velero CRDs (`Backup`, `Schedule`, `Restore`) remains available for advanced or recovery scenarios: @@ -228,4 +280,4 @@ kubectl logs -n cozy-velero -l app.kubernetes.io/name=velero --tail=100 ## 5. Restore from a backup -For a description of restore procedures (including listing backups and checking restore progress), see [Restore from a backup (all resources)]({{% ref "/docs/v0/kubernetes/backup-and-recovery#3-restore-from-a-backup-all-resources" %}}). +Once strategies and BackupClasses are in place, tenant users can restore from a backup using **RestoreJob** resources. See the [Backup and Recovery]({{% ref "/docs/v1/virtualization/backup-and-recovery" %}}) guide for restore instructions covering VMInstance and VMDisk in-place restores. diff --git a/content/en/docs/v1/virtualization/backup-and-recovery.md b/content/en/docs/v1/virtualization/backup-and-recovery.md new file mode 100644 index 00000000..6e02119d --- /dev/null +++ b/content/en/docs/v1/virtualization/backup-and-recovery.md @@ -0,0 +1,244 @@ +--- +title: Backup and Recovery +linkTitle: Backup and Recovery +description: "How to create and manage backups of VMInstance and VMDisk resources using BackupJobs and Plans." +weight: 40 +aliases: + - /docs/v1/guides/backups + - /docs/v1/kubernetes/backup-and-recovery +--- + +Cluster backup **strategies** and **BackupClasses** are configured by cluster administrators. If your tenant does not have a BackupClass yet, ask your administrator to follow the [Velero Backup Configuration]({{% ref "/docs/v1/operations/services/velero-backup-configuration" %}}) guide to set up storage, strategies, and BackupClasses. 
+ +This guide covers backing up and restoring **VMInstance** and **VMDisk** resources as a tenant user: running one-off and scheduled backups, checking backup status, and restoring from a backup using RestoreJobs. + +Cozystack uses [Velero](https://velero.io/docs/v1.17/) under the hood for backup storage and volume snapshots. + +## Prerequisites + +- The Velero add-on is enabled for your cluster (by an administrator). +- At least one **BackupClass** is available for your tenant namespace (provided by an administrator). +- `kubectl` and kubeconfig for the cluster you are backing up. + +## List available BackupClasses + +BackupClasses define where and how backups are stored. You can only use those that administrators have created. + +```bash +kubectl get backupclasses +``` + +Example output: + +``` +NAME AGE +velero 14m +``` + +Use the BackupClass name when creating a BackupJob or Plan. + +## Back up a VMInstance + +A VMInstance backup captures the VM configuration and all attached VMDisk volumes. + +### One-off backup + +Use a **BackupJob** when you want to run a backup once — for example, before a risky change. + +```yaml +apiVersion: backups.cozystack.io/v1alpha1 +kind: BackupJob +metadata: + name: my-vm-backup + namespace: tenant-user +spec: + applicationRef: + apiGroup: apps.cozystack.io + kind: VMInstance + name: my-vm + backupClassName: velero +``` + +Apply it and watch the status: + +```bash +kubectl apply -f backupjob.yaml +kubectl get backupjobs -n tenant-user +kubectl describe backupjob my-vm-backup -n tenant-user +``` + +When the BackupJob completes successfully, it creates a **Backup** object with the same name (`my-vm-backup`). You will use that name when restoring. + +### Scheduled backup + +Use a **Plan** to run backups on a schedule. 
+ +```yaml +apiVersion: backups.cozystack.io/v1alpha1 +kind: Plan +metadata: + name: my-vm-daily + namespace: tenant-user +spec: + applicationRef: + apiGroup: apps.cozystack.io + kind: VMInstance + name: my-vm + backupClassName: velero + schedule: + cron: "0 2 * * *" # Every day at 02:00 +``` + +Apply it and check: + +```bash +kubectl apply -f plan.yaml +kubectl get plans -n tenant-user +kubectl describe plan my-vm-daily -n tenant-user +``` + +Each scheduled run creates a BackupJob (and, on success, a Backup object) named after the Plan with a timestamp suffix. + +## Back up a VMDisk + +You can back up a VMDisk independently — for example, to capture a specific disk without the VM configuration. + +{{% alert color="info" %}} +The BackupClass must include a strategy for `VMDisk`. Ask your administrator to add one if it is missing (see [Velero Backup Configuration]({{% ref "/docs/v1/operations/services/velero-backup-configuration" %}})). +{{% /alert %}} + +```yaml +apiVersion: backups.cozystack.io/v1alpha1 +kind: BackupJob +metadata: + name: my-disk-backup + namespace: tenant-user +spec: + applicationRef: + apiGroup: apps.cozystack.io + kind: VMDisk + name: my-disk + backupClassName: velero +``` + +Apply and check status: + +```bash +kubectl apply -f backupjob-disk.yaml +kubectl get backupjobs -n tenant-user +kubectl describe backupjob my-disk-backup -n tenant-user +``` + +## Check backup status + +List all BackupJobs in a namespace: + +```bash +kubectl get backupjobs -n tenant-user +``` + +Describe a specific BackupJob to see phase and any errors: + +```bash +kubectl describe backupjob my-vm-backup -n tenant-user +``` + +List the Backup objects that were produced (one per completed BackupJob): + +```bash +kubectl get backups -n tenant-user +``` + +List BackupJobs created by a Plan: + +```bash +kubectl get backupjobs -n tenant-user -l backups.cozystack.io/plan=my-vm-daily +``` + +## Restore a VMInstance in place + +An in-place restore overwrites the existing 
VMInstance and its volumes with data from a backup. Use this when you want to roll back a running VM to a previous state. + +{{% alert color="warning" %}} +The restore will update existing resources. Make sure the VMInstance is in a state where overwriting it is safe (e.g., quiesce any running workloads if needed). +{{% /alert %}} + +First, find the Backup object you want to restore from: + +```bash +kubectl get backups -n tenant-user +``` + +Example output: + +``` +NAME AGE +my-vm-backup 2h +``` + +Create a RestoreJob referencing that Backup: + +```yaml +apiVersion: backups.cozystack.io/v1alpha1 +kind: RestoreJob +metadata: + name: restore-my-vm + namespace: tenant-user +spec: + backupRef: + name: my-vm-backup +``` + +Apply it and check progress: + +```bash +kubectl apply -f restorejob.yaml +kubectl get restorejobs -n tenant-user +kubectl describe restorejob restore-my-vm -n tenant-user +``` + +The RestoreJob goes through `Pending` → `Running` → `Succeeded` (or `Failed`). On success, the VMInstance and its VMDisks are restored to the state captured in the backup. + +If you want to restore into a **different** VMInstance, add `targetApplicationRef` to the spec pointing at that application. 
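+A minimal sketch of such a cross-application restore, assuming `targetApplicationRef` accepts the same `apiGroup`/`kind`/`name` fields as `applicationRef` (the exact schema may differ — check it with `kubectl explain`; the name `my-vm-clone` is a hypothetical target):
+
+```yaml
+apiVersion: backups.cozystack.io/v1alpha1
+kind: RestoreJob
+metadata:
+  name: restore-into-clone
+  namespace: tenant-user
+spec:
+  backupRef:
+    name: my-vm-backup
+  # Hypothetical target: restore the backup into a different VMInstance
+  targetApplicationRef:
+    apiGroup: apps.cozystack.io
+    kind: VMInstance
+    name: my-vm-clone
+```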
+ +## Restore a VMDisk in place + +To restore only a VMDisk without touching the VM configuration: + +```bash +kubectl get backups -n tenant-user +``` + +```yaml +apiVersion: backups.cozystack.io/v1alpha1 +kind: RestoreJob +metadata: + name: restore-my-disk + namespace: tenant-user +spec: + backupRef: + name: my-disk-backup +``` + +Apply and check: + +```bash +kubectl apply -f restorejob-disk.yaml +kubectl get restorejobs -n tenant-user +kubectl describe restorejob restore-my-disk -n tenant-user +``` + +## Troubleshooting + +If a BackupJob or RestoreJob ends in `Failed` phase, check the `message` field in its status: + +```bash +kubectl get backupjob my-vm-backup -n tenant-user -o jsonpath='{.status.message}' +kubectl get restorejob restore-my-vm -n tenant-user -o jsonpath='{.status.message}' +``` + +For lower-level details, check the Velero logs in the management cluster: + +```bash +kubectl logs -n cozy-velero -l app.kubernetes.io/name=velero --tail=100 +``` From 4f79d558984b4d6a3de542114d3a425213d7fbf2 Mon Sep 17 00:00:00 2001 From: Myasnikov Daniil Date: Thu, 19 Mar 2026 11:44:59 +0500 Subject: [PATCH 2/3] Fixed restore instruction Signed-off-by: Myasnikov Daniil --- .../v1/virtualization/backup-and-recovery.md | 26 ++++++++++++++++--- 1 file changed, 23 insertions(+), 3 deletions(-) diff --git a/content/en/docs/v1/virtualization/backup-and-recovery.md b/content/en/docs/v1/virtualization/backup-and-recovery.md index 6e02119d..e2643e4f 100644 --- a/content/en/docs/v1/virtualization/backup-and-recovery.md +++ b/content/en/docs/v1/virtualization/backup-and-recovery.md @@ -157,10 +157,20 @@ kubectl get backupjobs -n tenant-user -l backups.cozystack.io/plan=my-vm-daily ## Restore a VMInstance in place -An in-place restore overwrites the existing VMInstance and its volumes with data from a backup. Use this when you want to roll back a running VM to a previous state. +An in-place restore updates the existing VMInstance configuration from a backup. 
Use this when you want to roll back a running VM to a previous state. {{% alert color="warning" %}} -The restore will update existing resources. Make sure the VMInstance is in a state where overwriting it is safe (e.g., quiesce any running workloads if needed). +Velero skips existing DataVolumes during restore to avoid overwriting live data. If you need to restore the actual disk contents from the backup, delete the DataVolumes before creating the RestoreJob. Use the disk names from the VMInstance spec to find them: + +```bash +# List disk names for the VM +kubectl get vminstance my-vm -n tenant-user -o jsonpath='{.spec.disks[*].name}' + +# Delete the corresponding DataVolumes (one per disk, prefixed with vm-disk-) +kubectl delete datavolume vm-disk- -n tenant-user +``` + +The RestoreJob will then recreate the DataVolumes and download disk data from the backup storage. {{% /alert %}} First, find the Backup object you want to restore from: @@ -203,7 +213,17 @@ If you want to restore into a **different** VMInstance, add `targetApplicationRe ## Restore a VMDisk in place -To restore only a VMDisk without touching the VM configuration: +To restore only a VMDisk without touching the VM configuration. + +{{% alert color="warning" %}} +Velero skips an existing DataVolume during restore. To restore the actual disk contents from the backup, delete the DataVolume first: + +```bash +kubectl delete datavolume vm-disk-my-disk -n tenant-user +``` + +The RestoreJob will then recreate it and download disk data from the backup storage. 
+{{% /alert %}} ```bash kubectl get backups -n tenant-user From f34f4ac45e1b034cbedba73acec07304d77f4283 Mon Sep 17 00:00:00 2001 From: Myasnikov Daniil Date: Fri, 20 Mar 2026 10:42:30 +0500 Subject: [PATCH 3/3] Updated docs about Backup and Restore Signed-off-by: Myasnikov Daniil --- .../services/velero-backup-configuration.md | 1 + .../v1/virtualization/backup-and-recovery.md | 56 ++++++++++++++++++- 2 files changed, 55 insertions(+), 2 deletions(-) diff --git a/content/en/docs/v1/operations/services/velero-backup-configuration.md b/content/en/docs/v1/operations/services/velero-backup-configuration.md index 88e4d3e2..695cf6ab 100644 --- a/content/en/docs/v1/operations/services/velero-backup-configuration.md +++ b/content/en/docs/v1/operations/services/velero-backup-configuration.md @@ -151,6 +151,7 @@ spec: - persistentvolumeclaims - configmaps - secrets + - controllerrevisions.apps includeClusterResources: false storageLocation: '{{ .Parameters.backupStorageLocationName }}' volumeSnapshotLocations: diff --git a/content/en/docs/v1/virtualization/backup-and-recovery.md b/content/en/docs/v1/virtualization/backup-and-recovery.md index e2643e4f..7fa7dcd6 100644 --- a/content/en/docs/v1/virtualization/backup-and-recovery.md +++ b/content/en/docs/v1/virtualization/backup-and-recovery.md @@ -155,9 +155,9 @@ List BackupJobs created by a Plan: kubectl get backupjobs -n tenant-user -l backups.cozystack.io/plan=my-vm-daily ``` -## Restore a VMInstance in place +## Restore a VMInstance -An in-place restore updates the existing VMInstance configuration from a backup. Use this when you want to roll back a running VM to a previous state. +You can restore a VMInstance both **in place** (rolling back a running VM) and **from scratch** (after the VM and its disks have been deleted). The VMInstance backup includes all attached VMDisk volumes and their data. {{% alert color="warning" %}} Velero skips existing DataVolumes during restore to avoid overwriting live data. 
If you need to restore the actual disk contents from the backup, delete the DataVolumes before creating the RestoreJob. Use the disk names from the VMInstance spec to find them: @@ -173,6 +173,10 @@ kubectl delete datavolume vm-disk- -n tenant-user The RestoreJob will then recreate the DataVolumes and download disk data from the backup storage. {{% /alert %}} +{{% alert color="info" %}} +The VM will receive a **new IP address** after restore because pod network IPs are dynamically assigned by default. +{{% /alert %}} + First, find the Backup object you want to restore from: ```bash @@ -209,6 +213,53 @@ kubectl describe restorejob restore-my-vm -n tenant-user The RestoreJob goes through `Pending` → `Running` → `Succeeded` (or `Failed`). On success, the VMInstance and its VMDisks are restored to the state captured in the backup. +### Post-restore verification + +After the RestoreJob succeeds, verify that the VM is actually running: + +```bash +# Check that the VMInstance and VMDisk are Ready +kubectl get vminstances,vmdisks -n tenant-user + +# Verify the VirtualMachineInstance is running (not just the CR) +kubectl get vmi -n tenant-user + +# Check the VM's new IP address +kubectl get vmi -n tenant-user -o wide +``` + +### Fixing network after restore (cloud-init MAC address mismatch) + +After a VMInstance is restored, the guest OS may lose network connectivity. This is known to happen on **Ubuntu Server**, where cloud-init generates a netplan configuration bound to the old VM's MAC address. After restore, the VM gets a new virtual NIC with a different MAC address, but the guest OS still has the old netplan config bound to the previous MAC — so the network interface is never configured. Other operating systems that do not pin network configuration to a specific MAC address may not be affected by this issue. + +To fix this, update the `cloudInitSeed` field in the VMInstance spec and restart the VM. 
Changing the seed generates a new SMBIOS UUID, which makes cloud-init treat the VM as a new instance and re-run network configuration with the correct MAC address. + +```bash +# Set a new cloudInitSeed value (any string different from the current one) +kubectl patch vminstance my-vm -n tenant-user --type merge \ + -p '{"spec":{"cloudInitSeed":"reseed1"}}' + +# Wait for the VMInstance to reconcile +kubectl wait vminstance/my-vm -n tenant-user --for=condition=Ready --timeout=180s + +# Restart the VM so the new seed takes effect +virtctl restart vm-instance-my-vm -n tenant-user +``` + +After the restart, verify that the VM has network connectivity: + +```bash +# Check that the VMI is running +kubectl get vmi -n tenant-user + +# Verify SSH access +virtctl ssh -i ~/.ssh/my-key -l ubuntu vmi/vm-instance-my-vm -n tenant-user -c "ip a" +``` + +{{% alert color="info" %}} +If you need to change the seed again in the future (e.g. after another restore), use a different value each time (e.g. `reseed2`, `reseed3`, etc.). +{{% /alert %}} + If you want to restore into a **different** VMInstance, add `targetApplicationRef` to the spec pointing at that application. ## Restore a VMDisk in place @@ -262,3 +313,4 @@ For lower-level details, check the Velero logs in the management cluster: ```bash kubectl logs -n cozy-velero -l app.kubernetes.io/name=velero --tail=100 ``` +
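+
+If the status message is not detailed enough, you can also inspect the underlying Velero objects directly. This is a sketch assuming your kubeconfig can read the Velero CRDs (`backups.velero.io`, `restores.velero.io`); the restore object's name and namespace depend on your setup, so `<restore-name>` is a placeholder:
+
+```bash
+# List the Velero-level Backup and Restore objects across namespaces
+kubectl get backups.velero.io -A
+kubectl get restores.velero.io -A
+
+# Show phase, warnings, and errors for a specific Velero restore
+kubectl describe restores.velero.io <restore-name> -n cozy-velero
+```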