diff --git a/data-explorer/cluster-encryption-double.md b/data-explorer/cluster-encryption-double.md index b0696264f6..0fda5b498c 100644 --- a/data-explorer/cluster-encryption-double.md +++ b/data-explorer/cluster-encryption-double.md @@ -1,48 +1,48 @@ --- -title: Enable double encryption for your cluster in Azure Data Explorer +title: Enable Double Encryption for Your Cluster in Azure Data Explorer description: This article describes how to enable infrastructure encryption (double encryption) during cluster creation in Azure Data Explorer. ms.reviewer: toleibov ms.topic: how-to ms.custom: devx-track-arm-template -ms.date: 02/04/2025 +ms.date: 03/15/2026 --- # Enable double encryption for your cluster in Azure Data Explorer -When you create a cluster, data is [automatically encrypted](/azure/storage/common/storage-service-encryption) at the service level. For greater data security, you can additionally enable [double encryption](/azure/storage/common/infrastructure-encryption-enable). +When you create a cluster, the service [automatically encrypts](/azure/storage/common/storage-service-encryption) data at the service level. For greater data security, you can additionally enable [double encryption](/azure/storage/common/infrastructure-encryption-enable). -When double encryption is enabled, data in the storage account is encrypted twice, using two different algorithms. +When you enable double encryption, the cluster encrypts data in the storage account twice by using two different algorithms. > [!IMPORTANT] > -> * Enabling double encryption is only possible during cluster creation. -> * Once infrastructure encryption is enabled on your cluster, you **can't** disable it. +> * You can enable double encryption only during cluster creation. +> * After you enable infrastructure encryption on your cluster, you **can't** disable it. > For code samples based on previous SDK versions, see the [archived article](/previous-versions/azure/data-explorer/cluster-encryption-double). 
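For orientation before the tool-specific tabs, the following Azure Resource Manager resource fragment is a minimal sketch of where the setting lives. The API version, SKU, and parameter names are illustrative assumptions; the ARM template tab later in this article shows the full template.

```json
{
  "type": "Microsoft.Kusto/clusters",
  "apiVersion": "2023-08-15",
  "name": "[parameters('clusterName')]",
  "location": "[resourceGroup().location]",
  "sku": {
    "name": "Standard_E8ads_v5",
    "tier": "Standard"
  },
  "properties": {
    "enableDoubleEncryption": true
  }
}
```

Because you can enable double encryption only during cluster creation, set `enableDoubleEncryption` to `true` in the initial deployment.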
## [Azure portal](#tab/portal) -1. [Create an Azure Data Explorer cluster](create-cluster-and-database.md#create-a-cluster) -1. In the **Security** tab > **Enable Double Encryption**, select **On**. To remove the double encryption, select **Off**. +1. [Create an Azure Data Explorer cluster](create-cluster-and-database.md#create-a-cluster). +1. In the **Security** tab, under **Enable Double Encryption**, select **On**. To remove double encryption, select **Off**. 1. Select **Next:Network>** or **Review + create** to create the cluster. :::image type="content" source="media/double-encryption/double-encryption-portal.png" alt-text="Screenshot of security tab, showing double encryption being enabled on a new cluster."::: ## [C#](#tab/c-sharp) -You can enable infrastructure encryption during cluster creation using C#. +You can enable infrastructure encryption during cluster creation by using C#. ## Prerequisites -Set up a managed identity using the Azure Data Explorer C# client: +Set up a managed identity by using the Azure Data Explorer C# client: * Install the [Azure Data Explorer NuGet package](https://www.nuget.org/packages/Azure.ResourceManager.Kusto/). * Install the [Azure.Identity NuGet package](https://www.nuget.org/packages/Azure.Identity/) for authentication. -* [Create a Microsoft Entra application](/azure/active-directory/develop/howto-create-service-principal-portal) and service principal that can access resources. You add role assignment at the subscription scope and get the required `Directory (tenant) ID`, `Application ID`, and `Client Secret`. +* [Create a Microsoft Entra application](/azure/active-directory/develop/howto-create-service-principal-portal) and service principal that can access resources. Add role assignment at the subscription scope and get the required `Directory (tenant) ID`, `Application ID`, and `Client Secret`. ## Create your cluster -1. Create your cluster using the `enableDoubleEncryption` property: +1. 
Create your cluster by using the `enableDoubleEncryption` property: ```csharp var tenantId = "xxxxxxxx-xxxxx-xxxx-xxxx-xxxxxxxxx"; //Directory (tenant) ID @@ -63,23 +63,23 @@ Set up a managed identity using the Azure Data Explorer C# client: await clusters.CreateOrUpdateAsync(WaitUntil.Completed, clusterName, clusterData); ``` -1. Run the following command to check if your cluster was successfully created: +1. Run the following command to check if you created your cluster successfully: ```csharp clusterData = (await clusters.GetAsync(clusterName)).Value.Data; ``` - If the result contains `ProvisioningState` with the `Succeeded` value, then the cluster was created successfully. + If the result contains `ProvisioningState` with the `Succeeded` value, you created your cluster successfully. ## [ARM template](#tab/arm) -You can enable infrastructure encryption during cluster creation using Azure Resource Manager. +You can enable infrastructure encryption during cluster creation by using Azure Resource Manager. -An Azure Resource Manager template can be used to automate deployment of your Azure resources. To learn more about deploying to Azure Data Explorer, see [Create an Azure Data Explorer cluster and database by using an Azure Resource Manager template](create-cluster-database.md?tabs=arm). +You can use an Azure Resource Manager template to automate deployment of your Azure resources. To learn more about deploying to Azure Data Explorer, see [Create an Azure Data Explorer cluster and database by using an Azure Resource Manager template](create-cluster-database.md?tabs=arm). -## Add a system-assigned identity using an Azure Resource Manager template +## Add a system-assigned identity by using an Azure Resource Manager template -Add the 'EnableDoubleEncryption' type to tell Azure to enable infrastructure encryption (double encryption) for your cluster. 
+## Enable double encryption by using an Azure Resource Manager template +Add the `EnableDoubleEncryption` type to tell Azure to enable infrastructure encryption (double encryption) for your cluster. ```json { diff --git a/data-explorer/customer-managed-keys.md b/data-explorer/customer-managed-keys.md index a7a1900b9e..474e2fd762 100644 --- a/data-explorer/customer-managed-keys.md +++ b/data-explorer/customer-managed-keys.md @@ -1,17 +1,17 @@ --- -title: Configure customer-managed-keys +title: Configure Customer-Managed Keys description: This article describes how to configure customer-managed keys encryption on your data in Azure Data Explorer. ms.reviewer: astauben ms.topic: how-to ms.custom: -ms.date: 05/10/2023 +ms.date: 03/15/2026 --- # Configure customer-managed keys -Azure Data Explorer encrypts all data in a storage account at rest. By default, data is encrypted with Microsoft-managed keys. For extra control over encryption keys, you can supply customer-managed keys to use for data encryption. +Azure Data Explorer encrypts all data in a storage account at rest. By default, it uses Microsoft-managed keys for encryption. If you want more control over the encryption keys, you can provide customer-managed keys for data encryption. -Customer-managed keys must be stored in an [Azure Key Vault](/azure/key-vault/key-vault-overview). You can create your own keys and store them in a key vault, or you can use an Azure Key Vault API to generate keys. The Azure Data Explorer cluster and the key vault must be in the same region, but they can be in different subscriptions. For a detailed explanation on customer-managed keys, see [customer-managed keys with Azure Key Vault](/azure/storage/common/storage-service-encryption). +You must store customer-managed keys in an [Azure Key Vault](/azure/key-vault/key-vault-overview). You can create your own keys and store them in a key vault, or you can use an Azure Key Vault API to generate keys. 
The Azure Data Explorer cluster and the key vault must be in the same region, but they can be in different subscriptions. For a detailed explanation of customer-managed keys, see [customer-managed keys with Azure Key Vault](/azure/storage/common/storage-service-encryption). This article shows you how to configure customer-managed keys. @@ -19,29 +19,29 @@ This article shows you how to configure customer-managed keys. ## Configure Azure Key Vault -To configure customer-managed keys with Azure Data Explorer, you must [set two properties on the key vault](/azure/key-vault/key-vault-ovw-soft-delete): **Soft Delete** and **Do Not Purge**. These properties aren't enabled by default. To enable these properties, perform **Enabling soft-delete** and **Enabling Purge Protection** in [PowerShell](/azure/key-vault/key-vault-soft-delete-powershell) or [Azure CLI](/azure/key-vault/key-vault-soft-delete-cli) on a new or existing key vault. RSA and RSA-HSM keys of size 2048, 3072, and 4096 are supported. To utilize RSA-HSM keys utilize C#, Azure CLI, PowerShell, or ARM Template methods below. For more information about keys, see [Key Vault keys](/azure/key-vault/about-keys-secrets-and-certificates#key-vault-keys). +To configure customer-managed keys with Azure Data Explorer, you must [set two properties on the key vault](/azure/key-vault/key-vault-ovw-soft-delete): **Soft Delete** and **Do Not Purge**. These properties aren't enabled by default. To enable these properties, perform **Enabling soft-delete** and **Enabling Purge Protection** in [PowerShell](/azure/key-vault/key-vault-soft-delete-powershell) or [Azure CLI](/azure/key-vault/key-vault-soft-delete-cli) on a new or existing key vault. Azure Data Explorer supports RSA and RSA-HSM keys of size 2048, 3072, and 4096. To use RSA-HSM keys, use the C#, Azure CLI, PowerShell, or ARM Template methods described in this article. 
For more information about keys, see [Key Vault keys](/azure/key-vault/about-keys-secrets-and-certificates#key-vault-keys). > [!NOTE] > For information about the limitations of using customer managed keys on leader and follower clusters, see [Limitations](follower.md#limitations). ## Assign a managed identity to the cluster -To enable customer-managed keys for your cluster, first assign either a system-assigned or user-assigned managed identity to the cluster. You'll use this managed identity to grant the cluster permissions to access the key vault. To configure managed identities, see [managed identities](configure-managed-identities-cluster.md). +To enable customer-managed keys for your cluster, first assign either a system-assigned or user-assigned managed identity to the cluster. Use this managed identity to grant the cluster permissions to access the key vault. To configure managed identities, see [managed identities](configure-managed-identities-cluster.md). -## Enable encryption with customer-managed keys +## Enable encryption by using customer-managed keys ### [Portal](#tab/portal) -The following steps explain how to enable customer-managed keys encryption using the Azure portal. By default, Azure Data Explorer encryption uses Microsoft-managed keys. Configure your Azure Data Explorer cluster to use customer-managed keys and specify the key to associate with the cluster. +The following steps explain how to enable customer-managed keys encryption by using the Azure portal. By default, Azure Data Explorer encryption uses Microsoft-managed keys. Configure your Azure Data Explorer cluster to use customer-managed keys and specify the key to associate with the cluster. 1. In the [Azure portal](https://portal.azure.com/), go to your [Azure Data Explorer cluster](create-cluster-and-database.md#create-a-cluster) resource. -1. Select **Settings** > **Encryption** in left pane of portal. +1. Select **Settings** > **Encryption** in the left pane of the portal. 1. 
In the **Encryption** pane, select **On** for the **Customer-managed key** setting. 1. Select **Select Key**. :::image type="content" source="media/customer-managed-keys-portal/customer-managed-key-encryption-setting.png" alt-text="Screenshot showing configure customer-managed keys."::: -1. In the **Select key from Azure Key Vault** window, select an existing **Key vault** from the dropdown list. If you select **Create new** to [create a new Key Vault](/azure/key-vault/quick-create-portal#create-a-vault), you'll be routed to the **Create Key Vault** screen. +1. In the **Select key from Azure Key Vault** window, select an existing **Key vault** from the dropdown list. If you select **Create new** to [create a new Key Vault](/azure/key-vault/quick-create-portal#create-a-vault), you're routed to the **Create Key Vault** screen. 1. Select **Key**. 1. Version: @@ -56,23 +56,23 @@ The following steps explain how to enable customer-managed keys encryption using :::image type="content" source="media/customer-managed-keys-portal/customer-managed-key-select-user-type.png" alt-text="Screenshot showing the option to select a managed identity type."::: -1. In the **Encryption** pane that now contains your key, select **Save**. When CMK creation succeeds, you'll see a success message in **Notifications**. +1. In the **Encryption** pane that now contains your key, select **Save**. When CMK creation succeeds, you see a success message in **Notifications**. :::image type="content" source="media/customer-managed-keys-portal/customer-managed-key-before-save.png" alt-text="Screenshot showing option to save a customer-managed key."::: -If you select system assigned identity when enabling customer-managed keys for your Azure Data Explorer cluster, you'll create a system assigned identity for the cluster if one doesn't exist. 
In addition, you'll be providing the required get, wrapKey, and unwrapKey permissions to your Azure Data Explorer cluster on the selected Key Vault and get the Key Vault properties. +If you select a system-assigned identity when enabling customer-managed keys for your Azure Data Explorer cluster, you create a system-assigned identity for the cluster if one doesn't exist. In addition, you provide the required get, wrapKey, and unwrapKey permissions to your Azure Data Explorer cluster on the selected Key Vault and get the Key Vault properties. > [!NOTE] -> Select **Off** to remove the customer-managed key after it has been created. +> Select **Off** to remove an existing customer-managed key. ### [C#](#tab/csharp) -The following sections explain how to configure customer-managed keys encryption using the Azure Data Explorer C# client. +The following sections explain how to configure customer-managed keys encryption by using the Azure Data Explorer C# client. ### Install packages * Install the [Azure Data Explorer (Kusto) NuGet package](https://www.nuget.org/packages/Azure.ResourceManager.Kusto/). -* Install the [Azure.Identity NuGet package](https://www.nuget.org/packages/Azure.Identity/) for authentication with Microsoft Entra ID. +* Install the [Azure.Identity NuGet package](https://www.nuget.org/packages/Azure.Identity/) for authentication by using Microsoft Entra ID. ### Authentication @@ -112,17 +112,17 @@ By default, Azure Data Explorer encryption uses Microsoft-managed keys. Configur await cluster.UpdateAsync(WaitUntil.Completed, clusterPatch); ``` -1. Run the following command to check if your cluster was successfully updated: +1. Run the following command to check if you updated your cluster successfully: ```csharp var clusterData = (await resourceGroup.GetKustoClusterAsync(clusterName)).Value.Data; ``` - If the result contains `ProvisioningState` with the `Succeeded` value, then your cluster was successfully updated. 
+ If the result contains `ProvisioningState` with the `Succeeded` value, you updated your cluster successfully. ### [Azure CLI](#tab/azcli) -The following steps explain how to enable customer-managed keys encryption using Azure CLI client. By default, Azure Data Explorer encryption uses Microsoft-managed keys. Configure your Azure Data Explorer cluster to use customer-managed keys and specify the key to associate with the cluster. +The following steps explain how to enable customer-managed keys encryption by using the Azure CLI. By default, Azure Data Explorer encryption uses Microsoft-managed keys. Configure your Azure Data Explorer cluster to use customer-managed keys and specify the key to associate with the cluster. 1. Run the following command to sign in to Azure: @@ -136,19 +136,19 @@ The following steps explain how to enable customer-managed keys encryption using az account set --subscription MyAzureSub ``` -1. Run the following command to set the new key with the cluster's system assigned identity +1. Run the following command to set the new key with the cluster's system-assigned identity: ```azurecli-interactive az kusto cluster update --cluster-name "mytestcluster" --resource-group "mytestrg" --key-vault-properties key-name="" key-version="" key-vault-uri="" ``` - Alternatively, set the new key with a user assigned identity. + Alternatively, set the new key by using a user-assigned identity: ```azurecli-interactive az kusto cluster update --cluster-name "mytestcluster" --resource-group "mytestrg" --key-vault-properties key-name="" key-version="" key-vault-uri="" key-user-identity="" ``` -1. Run the following command and check the 'keyVaultProperties' property to verify the cluster updated successfully. +1. Run the following command and check the `keyVaultProperties` property to verify the cluster updated successfully. 
```azurecli-interactive az kusto cluster show --cluster-name "mytestcluster" --resource-group "mytestrg" @@ -156,7 +156,7 @@ ### [PowerShell](#tab/powershell) -The following steps explain how to enable customer-managed keys encryption using PowerShell. By default, Azure Data Explorer encryption uses Microsoft-managed keys. Configure your Azure Data Explorer cluster to use customer-managed keys and specify the key to associate with the cluster. +The following steps explain how to enable customer-managed keys encryption by using PowerShell. By default, Azure Data Explorer encryption uses Microsoft-managed keys. Configure your Azure Data Explorer cluster to use customer-managed keys and specify the key to associate with the cluster. 1. Run the following command to sign in to Azure: @@ -170,19 +170,19 @@ The following steps explain how to enable customer-managed keys encryption using Set-AzContext -SubscriptionId "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" ``` -1. Run the following command to set the new key using a system assigned identity. +1. Run the following command to set the new key by using a system-assigned identity: ```azurepowershell-interactive Update-AzKustoCluster -ResourceGroupName "mytestrg" -Name "mytestcluster" -KeyVaultPropertyKeyName "" -KeyVaultPropertyKeyVaultUri "" -KeyVaultPropertyKeyVersion "" ``` - Alternatively, set the new key using a user assigned identity. + Alternatively, set the new key by using a user-assigned identity: ```azurepowershell-interactive Update-AzKustoCluster -ResourceGroupName "mytestrg" -Name "mytestcluster" -KeyVaultPropertyKeyName "" -KeyVaultPropertyKeyVaultUri "" -KeyVaultPropertyKeyVersion "" -KeyVaultPropertyUserIdentity "user-assigned-identity-resource-id" ``` -1. Run the following command and check the 'KeyVaultProperty...' properties to verify the cluster updated successfully. +1. 
Run the following command and check the `KeyVaultProperty...` properties to verify the cluster updated successfully. ```azurepowershell-interactive Get-AzKustoCluster -Name "mytestcluster" -ResourceGroupName "mytestrg" | Format-List @@ -190,11 +190,11 @@ The following steps explain how to enable customer-managed keys encryption using ### [ARM template](#tab/arm) -The following steps explain how to configure customer-managed keys using Azure Resource Manager templates. By default, Azure Data Explorer encryption uses Microsoft-managed keys. In this step, configure your Azure Data Explorer cluster to use customer-managed keys and specify the key to associate with the cluster. +The following steps explain how to configure customer-managed keys by using Azure Resource Manager templates. By default, Azure Data Explorer encryption uses Microsoft-managed keys. In this step, configure your Azure Data Explorer cluster to use customer-managed keys and specify the key to associate with the cluster. If you'd like to use a system assigned identity to access the key vault, leave `userIdentity` empty. Otherwise, set the identity's resource ID. -You can deploy the Azure Resource Manager template by using the Azure portal or using PowerShell. +You can deploy the Azure Resource Manager template by using the Azure portal or by using PowerShell. ```json { @@ -245,7 +245,7 @@ You can deploy the Azure Resource Manager template by using the Azure portal or ## Update the key version -When you create a new version of a key, you'll need to update the cluster to use the new version. First, call `Get-AzKeyVaultKey` to get the latest version of the key. Then update the cluster's key vault properties to use the new version of the key, as shown in [Enable encryption with customer-managed keys](#enable-encryption-with-customer-managed-keys). +When you create a new version of a key, you need to update the cluster to use the new version. 
First, call `Get-AzKeyVaultKey` to get the latest version of the key. Then update the cluster's key vault properties to use the new version of the key, as shown in [Enable encryption by using customer-managed keys](#enable-encryption-by-using-customer-managed-keys). ## Related content diff --git a/data-explorer/kusto/api/get-started/app-managed-streaming-ingest.md b/data-explorer/kusto/api/get-started/app-managed-streaming-ingest.md index 8d20c9d875..ff2e92c561 100644 --- a/data-explorer/kusto/api/get-started/app-managed-streaming-ingest.md +++ b/data-explorer/kusto/api/get-started/app-managed-streaming-ingest.md @@ -1,9 +1,9 @@ --- -title: Create an app to get data using the managed streaming ingestion client +title: Create an App to Get Data Using the Managed Streaming Ingestion Client description: Learn how to create an app to ingest data from a file or in-memory stream using the managed streaming ingestion client. ms.reviewer: yogilad ms.topic: how-to -ms.date: 02/03/2025 +ms.date: 03/15/2026 zone_pivot_groups: ingest-api @@ -15,52 +15,52 @@ zone_pivot_groups: ingest-api > [!INCLUDE [applies](../../includes/applies-to-version/applies.md)] [!INCLUDE [fabric](../../includes/applies-to-version/fabric.md)] [!INCLUDE [azure-data-explorer](../../includes/applies-to-version/azure-data-explorer.md)] -Streaming Ingestion allows writing data to Kusto with near-real-time latencies. It’s also useful when writing small amounts of data to a large number of tables, making batching inefficient. +Streaming ingestion enables you to write data to Kusto with near-real-time latencies. It's also useful when you need to write small amounts of data to a large number of tables, making batching inefficient. -In this article, you’ll learn how to ingest data to Kusto using the managed streaming ingestion client. You'll ingest a data stream in the form of a file or in-memory stream. +In this article, you learn how to ingest data to Kusto by using the managed streaming ingestion client. 
You ingest a data stream in the form of a file or in-memory stream. > [!NOTE] -> Streaming ingestion is a high-velocity ingestion protocol. Ingesting with a *Managed Streaming Ingestion* or *Streaming Ingestion* client isn't the same as ingesting with a *Stream Source*. +> Streaming ingestion is a high-velocity ingestion protocol. Ingesting by using a *Managed Streaming Ingestion* or *Streaming Ingestion* client isn't the same as ingesting by using a *Stream Source*. > -> The type of client refers to the _way_ data is ingested - When ingesting with a *Managed Streaming Ingestion* or *Streaming Ingestion* client, data is sent to Kusto using the streaming ingestion protocol - it uses a *Streaming Service* to allow for low latency ingestion. +> The type of client refers to the _way_ data is ingested. When you ingest by using a *Managed Streaming Ingestion* or *Streaming Ingestion* client, you send data to Kusto by using the streaming ingestion protocol. It uses a *Streaming Service* to allow for low latency ingestion. > -> Ingesting from a *Stream Source* refers to how the data is stored. For example, in C# a *Stream Source* can be created from a `MemoryStream` object. This is as opposed to a *File Source* which is created from a file on disk. +> Ingesting from a *Stream Source* refers to how the data is stored. For example, in C# you can create a *Stream Source* from a `MemoryStream` object. This approach is different from a *File Source*, which you create from a file on disk. > -> The ingestion method depends on the client used: with queued ingestion, the data from the source is first uploaded to blob storage and then queued for ingestion; with streaming ingestion, the data is sent directly to Kusto in the body of a streaming HTTP request. +> The ingestion method depends on the client you use: with queued ingestion, the process first uploads data from the source to blob storage, and then it queues the data for ingestion. 
With streaming ingestion, the process sends data directly to Kusto in the body of a streaming HTTP request. > [!IMPORTANT] > -> The Ingest API now has two versions: V1 and V2. The V1 API is the original API, while the V2 API is a reimagined version that simplifies the ingest API while offering more customization. +> The Ingest API now has two versions: V1 and V2. The V1 API is the original API. The V2 API is a reimagined version that simplifies the ingest API while offering more customization. > > Ingest Version 2 is in **preview** and is available in the following languages: C# > -> Also note, that the Query V2 API is not related to the Ingest V2 API. +> Also, the Query V2 API isn't related to the Ingest V2 API. ## Streaming and Managed Streaming -Kusto SDKs provide two flavors of Streaming Ingestion Clients, A *Streaming Ingestion Client* and *Managed Streaming Ingestion Client* where Managed Streaming has built-in retry and failover logic +Kusto SDKs provide two flavors of streaming ingestion clients: a *Streaming Ingestion Client* and a *Managed Streaming Ingestion Client*. The *Managed Streaming Ingestion Client* includes built-in retry and failover logic. > [!NOTE] -> This article shows how to use *Managed Streaming Ingestion*. If you wish to use plain *Streaming Ingestion* instead of *Managed Streaming*, simply change the instantiated client type to be *Streaming Ingestion Client*. +> This article shows how to use *Managed Streaming Ingestion*. To use plain *Streaming Ingestion* instead of *Managed Streaming*, change the instantiated client type to *Streaming Ingestion Client*. -When ingesting with a *Managed Streaming Ingestion* API, failures and retries are handled automatically as follows: -+ Streaming requests that fail due to server-side size limitations are moved to queued ingestion. -+ Data that's estimated to be larger than the streaming limit is automatically sent to queued ingestion. 
+When you ingest data by using a *Managed Streaming Ingestion* API, it automatically handles failures and retries as follows: ++ It moves streaming requests that fail because of server-side size limitations to queued ingestion. ++ It automatically sends data that's estimated to be larger than the streaming limit to queued ingestion. + The size of the streaming limit depends on the format and compression of the data. - + It's possible to change the limit by setting the *Size Factor* in the *Managed Streaming Ingest Policy*, passed in initialization. -+ Transient failures, for example throttling, are retried three times, then moved to queued ingestion. -+ Permanent failures aren't retried. + + You can change the limit by setting the *Size Factor* in the *Managed Streaming Ingest Policy* that you pass during initialization. ++ It retries transient failures, such as throttling, three times, and then moves the request to queued ingestion. ++ It doesn't retry permanent failures. > [!NOTE] -> If the streaming ingestion fails and the data is moved to queued ingestion, then the data will take longer to be ingested, due to it being batched and queued for ingestion. You can control it via the [batching policy](/kusto/management/batching-policy). +> If streaming ingestion fails and the system moves the data to queued ingestion, the data takes longer to ingest because the process batches and queues the data for ingestion. You can control this delay by using the [batching policy](/kusto/management/batching-policy). ## Limitations -Data Streaming has some limitations compared to queuing data for ingestion. +Data streaming has some limitations compared to queuing data for ingestion. -+ Tags can’t be set on data. -+ Mapping can only be provided using [`ingestionMappingReference`](/kusto/management/mappings?view=azure-data-explorer#mapping-with-ingestionmappingreference&preserve-view=true). Inline mapping isn't supported. 
-+ The payload sent in the request can’t exceed 10 MB, regardless of format or compression. ++ You can't set tags on data. ++ You can only provide mapping by using [`ingestionMappingReference`](/kusto/management/mappings?view=azure-data-explorer#mapping-with-ingestionmappingreference&preserve-view=true). Inline mapping isn't supported. ++ The payload sent in the request can't exceed 10 MB, regardless of format or compression. + The `ignoreFirstRecord` property isn't supported for streaming ingestion, so ingested data must not contain a header row. For more information, see [Streaming Limitations](/azure/data-explorer/ingest-data-streaming#limitations). @@ -73,7 +73,7 @@ For more information, see [Streaming Limitations](/azure/data-explorer/ingest-da ## Before you begin -Before creating the app, the following steps are required. Each step is detailed in the following sections. +Before creating the app, complete the following steps. Each step is detailed in the following sections. 1. Configure streaming ingestion on your Azure Data Explorer cluster. 1. Create a Kusto table to ingest the data into. @@ -88,7 +88,7 @@ To configure streaming ingestion, see [Configure streaming ingestion on your Azu Run the following commands on your database via Kusto Explorer (Desktop) or Kusto Web Explorer. -1. Create a Table Called Storm Events +1. 
Create a table called Storm Events: ```kql .create table MyStormEvents (StartTime:datetime, EndTime:datetime, State:string, DamageProperty:int, DamageCrops:int, Source:string, StormSummary:dynamic) @@ -96,7 +96,7 @@ Run the following commands on your database via Kusto Explorer (Desktop) or Kust ### Enable the streaming ingestion policy -Enable streaming ingestion on the table or on the entire database using one of the following commands: +Enable streaming ingestion on the table or on the entire database by using one of the following commands: Table level: @@ -117,7 +117,7 @@ For more information about streaming policy, see [Streaming ingestion policy](.. ## Create a basic client application -Create a basic client application which connects to the Kusto Help cluster. +Create a basic client application that connects to the Kusto Help cluster. Enter the cluster query and ingest URI and database name in the relevant variables. The app uses two clients: one for querying and one for ingestion. Each client brings up a browser window to authenticate the user. @@ -126,7 +126,7 @@ The app uses two clients: one for querying and one for ingestion. Each client br The code sample includes a service function `PrintResultAsValueList()` for printing query results. -Add the Kusto libraries using the following commands: +Add the Kusto libraries by using the following commands: ```powershell dotnet add package Microsoft.Azure.Kusto.Data @@ -483,7 +483,7 @@ printResultsAsValueList(primaryResults); The code sample includes a service function `PrintResultAsValueList()` for printing query results. -Add the Kusto libraries using the following commands: +Add the Kusto libraries by using the following commands: ```powershell dotnet add package Microsoft.Azure.Kusto.Data @@ -565,22 +565,22 @@ PrintResultAsValueList(result); ### [Python](#tab/python) -Not applicable +Not applicable. ### [TypeScript](#tab/typescript) -Not applicable +Not applicable. 
### [Java](#tab/java)

-Not applicable
+Not applicable.

---

:::zone-end

-The first time you run the application the results are as follows:
+The first time you run the application, you see the following results:

```plaintext
Number of rows in MyStormEvents
@@ -603,7 +603,7 @@ row 1 :

### Stream in-memory data for ingestion

-To ingest data from memory, create a stream containing the data for ingestion.
+To ingest data from memory, create a stream that contains the data for ingestion.

:::zone pivot="latest"
### [C#](#tab/c-sharp)
@@ -702,15 +702,15 @@

### [Python](#tab/python)

-Not applicable
+Not applicable.

### [TypeScript](#tab/typescript)

-Not applicable
+Not applicable.

### [Java](#tab/java)

-Not applicable
+Not applicable.

---

:::zone-end
diff --git a/data-explorer/kusto/api/monaco/host-web-ux-in-iframe.md b/data-explorer/kusto/api/monaco/host-web-ux-in-iframe.md
index f5f1f56024..53182181e5 100644
--- a/data-explorer/kusto/api/monaco/host-web-ux-in-iframe.md
+++ b/data-explorer/kusto/api/monaco/host-web-ux-in-iframe.md
@@ -1,19 +1,20 @@
---
-title: Embed the Azure Data Explorer web UI in an **iframe**.
-description: Learn how to embed the Azure Data Explorer web UI in an **iframe**.
+title: Embed the Azure Data Explorer Web UI in an iframe
+description: Learn how to embed the Azure Data Explorer web UI in an iframe.
ms.reviewer: izlisbon
ms.topic: how-to
ms.custom: has-azure-ad-ps-ref, azure-ad-ref-level-one-done
-ms.date: 08/11/2024
+ms.date: 03/15/2026
monikerRange: "azure-data-explorer"
---
+
# Embed the Azure Data Explorer web UI in an iframe

> [!INCLUDE [applies](../../includes/applies-to-version/applies.md)] [!INCLUDE [fabric](../../includes/applies-to-version/fabric.md)] [!INCLUDE [azure-data-explorer](../../includes/applies-to-version/azure-data-explorer.md)]

-The Azure Data Explorer web UI can be embedded in an iframe and hosted in third-party websites. This article describes how to embed the Azure Data Explorer web UI in an iframe. 
+You can embed the Azure Data Explorer web UI in an iframe and host it on third-party websites. This article describes how to embed the Azure Data Explorer web UI in an iframe.

-:::image type="content" source="../media/host-web-ux-in-iframe/web-ux.png" alt-text="Screenshot of the Azure Data Explorer web U I.":::
+:::image type="content" source="../media/host-web-ux-in-iframe/web-ux.png" alt-text="Screenshot of the Azure Data Explorer web UI.":::

All functionality is tested for accessibility and supports dark and light on-screen themes.

@@ -27,13 +28,13 @@ Add the following code to your website:

> ```

-The `f-IFrameAuth` query parameter tells the web UI *not* to redirect to get an authentication token and the `f-UseMeControl=false` parameter tells the web UI *not* to show the user account information pop-up window. These actions are necessary since the hosting website is responsible for authentication.
+The `f-IFrameAuth` query parameter tells the web UI *not* to redirect to get an authentication token, and the `f-UseMeControl=false` parameter tells the web UI *not* to show the user account information pop-up window. These actions are necessary because the hosting website is responsible for authentication.

-The `workspace=` query parameter creates a separate workspace for the embedded iframe. Workspace is a logic unit that contains tabs, queries, settings and connections. By setting it to a unique value, it prevents data sharing between the embedded and the non-embedded version of `https://dataexplorer.azure.com`.
+The `workspace=` query parameter creates a separate workspace for the embedded iframe. A workspace is a logical unit that contains tabs, queries, settings, and connections. Setting it to a unique value prevents data sharing between the embedded and the nonembedded version of `https://dataexplorer.azure.com`.

### Handle authentication

-When you embed the web UI, the hosting page is responsible for authentication. 
The following diagrams describe the authentication flow. +When you embed the web UI, the hosting page handles authentication. The following diagrams describe the authentication flow. :::image type="content" source="../media/host-web-ux-in-iframe/adx-embed-sequence-diagram.png" lightbox="../media/host-web-ux-in-iframe/adx-embed-sequence-diagram.png" alt-text="Diagram that shows the authentication flow for an embedded web U I iframe."::: @@ -91,7 +92,7 @@ Use the following steps to handle authentication: ``` > [!IMPORTANT] - > You can only use User Principal Name (UPN) for authentication, service principals aren't supported. + > You can only use User Principal Name (UPN) for authentication; service principals aren't supported. 1. Post a **postToken** message with the access token. This code replaces placeholder CODE-2: @@ -105,17 +106,17 @@ Use the following steps to handle authentication: ``` > [!IMPORTANT] - > The hosting window must refresh the token before expiration by sending a new **postToken** message with updated tokens. Otherwise, once the tokens expire, service calls will fail. + > The hosting window must refresh the token before expiration by sending a new **postToken** message with updated tokens. Otherwise, once the tokens expire, service calls fail. > [!TIP] -> In our sample project, you can view an [application](https://github.com/Azure/azure-kusto-webexplorer-embedding/blob/main/src/app.js) that uses authentication. +> In the sample project, you can view an [application](https://github.com/Azure/azure-kusto-webexplorer-embedding/blob/main/src/app.js) that uses authentication. ### Embed dashboards -To embed a dashboard, a trust relationship must be established between the host's Microsoft Entra app and the Azure Data Explorer dashboard service (**RTD Metadata Service**). +To embed a dashboard, you must establish a trust relationship between the host's Microsoft Entra app and the Azure Data Explorer dashboard service (**RTD Metadata Service**). 1. 
Follow the steps in [Perform Single Page Application (SPA) authentication](../rest/authenticate-with-msal.md#perform-single-page-application-spa-authentication). -1. Open the [Azure portal](https://portal.azure.com/) and make sure that you're signed into the correct tenant. In the top-right corner, verify the identity used to sign into the portal. +1. Open the [Azure portal](https://portal.azure.com/) and make sure that you're signed in to the correct tenant. In the top-right corner, verify the identity used to sign in to the portal. 1. In the resources pane, select **Microsoft Entra ID** > **App registrations**. 1. Locate the app that uses the **on-behalf-of** flow and open it. 1. Select **Manifest**. @@ -142,7 +143,7 @@ To embed a dashboard, a trust relationship must be established between the host' In the above code, `388e2b3a-fdb8-4f0b-ae3e-0692ca9efc1c` is the user_impersonation permission. -1. In the **Manifest**, save your changes. +1. Save your changes in the **Manifest**. 1. Select **API permissions** and validate you have a new entry: **RTD Metadata Service**. 1. Under Microsoft Graph, add permissions for `People.Read`, `User.ReadBasic.All`, and `Group.Read.All`. 1. In Azure PowerShell, add the following new service principal for the app: @@ -155,7 +156,7 @@ To embed a dashboard, a trust relationship must be established between the host' ``` - If you encounter the `Request_MultipleObjectsWithSameKeyValue` error, it means that the app is already in the tenant indicating it was added successfully. + If you encounter the `Request_MultipleObjectsWithSameKeyValue` error, the app is already in the tenant. 1. In the **API permissions** page, select **Grant admin consent** to consent for all users. @@ -182,7 +183,7 @@ To embed a dashboard, a trust relationship must be established between the host' > [!IMPORTANT] > The `f-IFrameAuth=true` flag is required for the iframe to work. The other flags are optional. 
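The **getToken**/**postToken** exchange from the authentication steps can be sketched on the host side as follows. This is a minimal sketch, not the article's sample code: the message shapes follow the flow described earlier, while `acquireToken()` and the `kusto-iframe` element ID are hypothetical placeholders for your own MSAL wrapper and iframe markup.

```javascript
// Minimal host-side sketch of the token handshake described earlier.
// Assumptions: the embedded web UI requests tokens with a "getToken"
// message, and acquireToken() (hypothetical) wraps your MSAL token call.

// Build the reply payload the embedded web UI expects.
function buildPostTokenMessage(accessToken) {
  return { type: 'postToken', message: accessToken };
}

if (typeof window !== 'undefined') {
  window.addEventListener('message', async (event) => {
    if (event.data && event.data.type === 'getToken') {
      const token = await acquireToken(); // hypothetical MSAL wrapper
      const iframe = document.getElementById('kusto-iframe'); // hypothetical ID
      // In production, pass the iframe origin instead of '*'.
      iframe.contentWindow.postMessage(buildPostTokenMessage(token), '*');
    }
  });
}
```

Remember to refresh the token before it expires by posting a new **postToken** message, as noted in the steps.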
-The hosting app may want to control certain aspects of the user experience. For example, hide the connection pane, or disable connecting to other clusters.
+The hosting app might want to control certain aspects of the user experience. For example, it might hide the connection pane or disable connecting to other clusters.

For this scenario, the web explorer supports feature flags. A feature flag can be used in the URL as a query parameter. To disable adding other clusters, use in the hosting app.

diff --git a/data-explorer/kusto/api/monaco/monaco-kusto.md b/data-explorer/kusto/api/monaco/monaco-kusto.md
index 3d4af3735f..2ce0f2a0bf 100644
--- a/data-explorer/kusto/api/monaco/monaco-kusto.md
+++ b/data-explorer/kusto/api/monaco/monaco-kusto.md
@@ -1,21 +1,22 @@
---
-title: Integrate the Monaco Editor with Kusto Query Language support in your app
+title: Integrate the Monaco Editor With Kusto Query Language Support in Your App
description: Learn how to integrate the Monaco Editor with Kusto query support in your app.
ms.reviewer: izlisbon
ms.topic: how-to
-ms.date: 08/11/2024
+ms.date: 03/15/2026
monikerRange: "azure-data-explorer || microsoft-fabric"
ms.custom: sfi-ropc-nochange
---
+
# Integrate the Monaco Editor with Kusto Query Language support in your app

> [!INCLUDE [applies](../../includes/applies-to-version/applies.md)] [!INCLUDE [fabric](../../includes/applies-to-version/fabric.md)] [!INCLUDE [azure-data-explorer](../../includes/applies-to-version/azure-data-explorer.md)]

-You can integrate the [Monaco Editor](https://microsoft.github.io/monaco-editor) with Kusto Query Language support (*monaco-kusto*) into your app. Integrating *monaco-kusto* into your app offers you an editing experience such as completion, colorization, refactoring, renaming, and go-to-definition. It requires you to build a solution for authentication, query execution, result display, and schema exploration. It offers you full flexibility to fashion the user experience that fits your needs. 
+You can integrate the [Monaco Editor](https://microsoft.github.io/monaco-editor) with Kusto Query Language support (*monaco-kusto*) into your app. By integrating *monaco-kusto* into your app, you get an editing experience that includes completion, colorization, refactoring, renaming, and go-to-definition. You need to build a solution for authentication, query execution, result display, and schema exploration. This integration gives you full flexibility to create a user experience that fits your needs. -In this article, you'll learn how to add *monaco-kusto* to the Monaco Editor and integrate it into your app. The package is available on [GitHub](https://github.com/Azure/monaco-kusto) and on *npm*. +In this article, you'll learn how to add *monaco-kusto* to the Monaco Editor and integrate it into your app. You can find the package on [GitHub](https://github.com/Azure/monaco-kusto) and on *npm*. -Use the following steps to integrate *monaco-kusto* into your app using *npm*. +Use the following steps to integrate *monaco-kusto* into your app by using *npm*. **Step 1**: [Install the *monaco-kusto* package](#install-the-monaco-kusto-package) @@ -38,7 +39,7 @@ Try out the integration with our [Sample project](#sample-project)! ``` > [!NOTE] - > To customize the native Monaco Editor, see [Monaco Editor GitHub repo](https://github.com/microsoft/monaco-editor). + > To customize the native Monaco Editor, see the [Monaco Editor GitHub repo](https://github.com/microsoft/monaco-editor). 1. Install the *monaco-kusto* npm package: @@ -48,11 +49,11 @@ Try out the integration with our [Sample project](#sample-project)! ## Set up your app to use the *monaco-kusto* package -You can set up your app to use *monaco-kusto* using one of the following methods: +Set up your app to use *monaco-kusto* by using one of the following methods: ### [AMD module system](#tab/amd) -1. Add the following HTML to pages where the Monaco Editor is used, such as your *index.html* file. 
They're required due to a dependency the package has on `kusto-language-service`. +1. Add the following HTML to pages where you use the Monaco Editor, such as your *index.html* file. These elements are required due to a dependency the package has on `kusto-language-service`. ```html @@ -61,12 +62,12 @@ You can set up your app to use *monaco-kusto* using one of the following methods ``` -1. Copy the static files from the *monaco* and *monaco-kusto* packages to the **monaco-editor** folder on your web server. Your app will need to access these static files. +1. Copy the static files from the *monaco* and *monaco-kusto* packages to the **monaco-editor** folder on your web server. Your app needs to access these static files. 1. Use monaco to load them. For examples, see the [samples](https://github.com/Azure/monaco-kusto/tree/master/samples). ### [ESM (webpack)](#tab/esm) -The following steps describe how to set up your app to use *monaco-kusto* using webpack. The default entry point for a project is the *src/index.js* file and the default configuration file is the *src/webpack.config.js* file. The following steps assume that you're using the default webpack project setup to bundle your app. +The following steps describe how to set up your app to use *monaco-kusto* by using webpack. The default entry point for a project is the *src/index.js* file, and the default configuration file is the *src/webpack.config.js* file. The following steps assume that you're using the default webpack project setup to bundle your app. 1. In the configuration file, add the following snippets: 1. Under **resolve.alias**, add the following aliases: @@ -93,7 +94,7 @@ The following steps describe how to set up your app to use *monaco-kusto* using { test: /kustoMonarchLanguageDefinition/, loader: 'imports-loader?Kusto' }, ``` - 1. Under **entry**, add the following entry point. Make a note the worker path returned, you'll need it later. + 1. Under **entry**, add the following entry point. 
Make a note of the worker path returned; you need it later.

      ```javascript
      `@kusto/monaco-kusto/release/esm/kusto.worker.js`
      ```

@@ -138,7 +139,7 @@ The following steps describe how to set up your app to use *monaco-kusto* using

## Add your database schema to the editor

-The *monaco-kusto* package provides a way to add your database schema to the editor. The schema enables the editor to provide auto-complete suggestions and other features.
+The *monaco-kusto* package provides a way to add your database schema to the editor. Adding the schema enables the editor to offer auto-complete suggestions and other helpful features.

Use the following structure to define the schema:

@@ -162,7 +163,7 @@ export function setSchema(editor) {
}
```

-You can get your database schema using one of the following methods:
+You can get your database schema by using one of the following methods:

### [From your query environment](#tab/show)

@@ -237,7 +238,7 @@ Run the following commands from the root of the cloned repo:
npm install
```

-1. Verify the project is working. If successful, the *index.html* will open.
+1. Verify the project is working. If successful, the *index.html* file opens.

```bash
npm run watch
diff --git a/data-explorer/kusto/api/monaco/monaco-overview.md b/data-explorer/kusto/api/monaco/monaco-overview.md
index 1d531a764e..d2a7c1b785 100644
--- a/data-explorer/kusto/api/monaco/monaco-overview.md
+++ b/data-explorer/kusto/api/monaco/monaco-overview.md
@@ -1,17 +1,18 @@
---
-title: Integrate capabilities in your app.
+title: Integrate Capabilities in Your App
description: Learn about the different ways you can integrate capabilities in your apps. 
ms.reviewer: izlisbon -ms.topic: integration -ms.date: 08/11/2024 +ms.topic: integration +ms.date: 03/15/2026 --- + # Integrate query capabilities in your app overview > [!INCLUDE [applies](../../includes/applies-to-version/applies.md)] [!INCLUDE [fabric](../../includes/applies-to-version/fabric.md)] [!INCLUDE [azure-data-explorer](../../includes/applies-to-version/azure-data-explorer.md)] -You can integrate query capabilities in your app with features to suit your needs. Integrating capabilities in your app enables you to: +You can integrate query capabilities in your app by adding features that suit your needs. When you integrate these capabilities, your app can: -- Edit queries (including all language features such as colorization and intellisense) +- Edit queries (including all language features such as colorization and IntelliSense) - Explore table schemas visually - Authenticate to Microsoft Entra ID - Execute queries @@ -22,16 +23,16 @@ You can integrate query capabilities in your app with features to suit your need ## Integration methods -You can integrate capabilities in your apps in the following ways: +You can integrate these capabilities into your apps in the following ways: - [Integrate the Monaco Editor with Kusto Query Language support in your app](monaco-kusto.md) - Integrating the [Monaco Editor](https://microsoft.github.io/monaco-editor/) in your app offers you an editing experience such as completion, colorization, refactoring, renaming, and go-to-definition. It requires you to build a solution for authentication, query execution, result display, and schema exploration. It offers you full flexibility to fashion the user experience that fits your needs. + By integrating the [Monaco Editor](https://microsoft.github.io/monaco-editor/) in your app, you get an editing experience that includes features such as completion, colorization, refactoring, renaming, and go-to-definition. 
You need to build a solution for authentication, query execution, result display, and schema exploration. This approach gives you full flexibility to design the user experience that fits your needs. ::: moniker range="azure-data-explorer" - [Embed the web UI in an IFrame](host-web-ux-in-iframe.md) - Embedding the web UI offers you extensive functionality with little effort, but contains limited flexibility for the user experience. There's a fixed set of query parameters that enable limited control over the system's look and behavior. + By embedding the web UI, you get extensive functionality with little effort, but it offers limited flexibility for the user experience. There's a fixed set of query parameters that provide limited control over the system's look and behavior. ::: moniker-end ## Related content diff --git a/data-explorer/kusto/management/row-level-security-external-sql.md b/data-explorer/kusto/management/row-level-security-external-sql.md index 35036037b1..acda05a7ba 100644 --- a/data-explorer/kusto/management/row-level-security-external-sql.md +++ b/data-explorer/kusto/management/row-level-security-external-sql.md @@ -1,16 +1,17 @@ --- -title: "Use row-level security with Azure SQL external tables" -description: "This document describes how to create a row-level security solution with SQL external tables." +title: Use Row-level Security With Azure SQL External Tables +description: This document describes how to create a row-level security solution with SQL external tables. ms.reviewer: danielkoralek -ms.topic: how-to -ms.date: 08/11/2024 +ms.topic: how-to +ms.date: 03/15/2026 #customer intent: As a Data Administrator, I want to restrict access to the data on Azure SQL External Tables so that each user can see only their data. 
---
+
# Apply row-level security on Azure SQL external tables

> [!INCLUDE [applies](../includes/applies-to-version/applies.md)] [!INCLUDE [fabric](../includes/applies-to-version/fabric.md)] [!INCLUDE [azure-data-explorer](../includes/applies-to-version/azure-data-explorer.md)]

-This document describes how to apply a row-level security (RLS) solution with [SQL external tables](/azure/data-explorer/kusto/management/external-sql-tables). [row-level security](/azure/data-explorer/kusto/management/row-level-security-policy) implements data isolation at the user level, restricting the access to data based on the current user credential. However, Kusto external tables don't support RLS policy definitions, so data isolation on external SQL tables require a different approach. The following solution employs using row-level security in SQL Server, and Microsoft Entra ID Impersonation in the SQL Server connection string. This combination provides the same behavior as applying user access control with RLS on standard Kusto tables, such that the users querying the SQL External Table are able to only see the records addressed to them, based on the row-level security policy defined in the source database.
+This article describes how to apply a row-level security (RLS) solution with [SQL external tables](/azure/data-explorer/kusto/management/external-sql-tables). [Row-level security](/azure/data-explorer/kusto/management/row-level-security-policy) implements data isolation at the user level, restricting access to data based on the current user credential. However, Kusto external tables don't support RLS policy definitions, so data isolation on external SQL tables requires a different approach. The following solution uses row-level security in SQL Server and Microsoft Entra ID impersonation in the SQL Server connection string. This combination provides the same behavior as applying user access control with RLS on standard Kusto tables, so users querying the SQL external table can see only the records addressed to them based on the row-level security policy defined in the source database.

## Prerequisites

@@ -19,7 +20,7 @@ This document describes how to apply a row-level security (RLS) solution with [S

## Sample table

-The example source is a SQL Server table called `SourceTable`, with the following schema. The `systemuser` column contains the user email to whom the data record belongs. This is the same user who should have access to this data.
+The example source is a SQL Server table called `SourceTable`, with the following schema. The `systemuser` column contains the user email to whom the data record belongs. This user is also the one who should have access to the data.

``` sql
CREATE TABLE SourceTable (
@@ -32,9 +33,9 @@ CREATE TABLE SourceTable (

## Configure row-level security in the source SQL Server - SQL Server side

-For general information on SQL Server row-level security, see [row-level security in SQL Server](/sql/relational-databases/security/row-level-security).
+For general information about SQL Server row-level security, see [row-level security in SQL Server](/sql/relational-databases/security/row-level-security).

-1. Create a SQL Function with the logic for the data access policy. In this example, the row-level security is based on the current user's email matching the `systemuser` column. This logic could be modified to meet any other business requirement.
+1. Create a SQL function with the logic for the data access policy. In this example, the row-level security is based on the current user's email matching the `systemuser` column. Modify this logic to meet any other business requirement.

``` sql
CREATE SCHEMA Security;
@@ -49,7 +50,7 @@ For general information on SQL Server row-level securit
GO
```

-1. 
Create the Security Policy on the table `SourceTable` with passing the column name as the parameter: +1. Create the security policy on the table `SourceTable` by passing the column name as the parameter: ``` sql CREATE SECURITY POLICY SourceTableFilter @@ -67,7 +68,7 @@ For general information on SQL Server row-level security, see [row-level securit The following steps depend on the SQL Server version that you're using. -1. Create a sign in and User for each Microsoft Entra ID credential that is going to access the data stored in SQL Server: +1. Create a sign-in and user for each Microsoft Entra ID credential that needs access to the data stored in SQL Server: ``` sql CREATE LOGIN [user@domain.com] FROM EXTERNAL PROVIDER --MASTER @@ -87,11 +88,11 @@ The following steps depend on the SQL Server version that you're using. GRANT SELECT ON dbo.SourceTable to [user@domain.com] ``` -### Define SQL external table connection String - Kusto side +### Define SQL external table connection string - Kusto side For more information on the connection string, see [SQL External Table Connection Strings](/azure/data-explorer/kusto/api/connection-strings/sql-connection-strings). -1. Create a SQL External Table with using Connection String with `Active Directory Integrated` authentication type. For more information, see [Microsoft Entra integrated (impersonation)](/azure/data-explorer/kusto/api/connection-strings/sql-connection-strings#microsoft-entra-integrated-impersonation). +1. Create a SQL external table by using a connection string with `Active Directory Integrated` authentication type. For more information, see [Microsoft Entra integrated (impersonation)](/azure/data-explorer/kusto/api/connection-strings/sql-connection-strings#microsoft-entra-integrated-impersonation). 
``` KQL
-    .create external table SQLSourceTable (id:long, region:string, central:string, systemser:string)
+    .create external table SQLSourceTable (id:long, region:string, central:string, systemuser:string)
@@ -115,13 +116,13 @@ For more information on the connection string, see [SQL External Table Connectio
Server=tcp:[sql server endpoint],1433;Authentication=Active Directory Integrated;Initial Catalog=[database name];
```

-1. Validate the data isolation based on the Microsoft Entra ID, like it would work with row-level security on in Kusto. In this case, the data is filtered based on the SourceTable's `systemuser` column, matching the Microsoft Entra ID user (email address) from the Kusto impersonation:
+1. Validate the data isolation based on the Microsoft Entra ID, as it works with row-level security in Kusto. In this case, the data is filtered based on the SourceTable's `systemuser` column, matching the Microsoft Entra ID user (email address) from the Kusto impersonation:

``` KQL
external_table('SQLSourceTable')
```

> [!NOTE]
- > The policy can be disabled and enabled again, on the SQL Server side, for testing purposes.
+ > For testing purposes, you can disable and enable the policy on the SQL Server side.

To disable and enable the policy, use the following SQL commands:

@@ -135,7 +136,7 @@ ALTER SECURITY POLICY SourceTableFilter
WITH (STATE = ON);
```

-With the Security Policy enabled on the SQL Server side, Kusto users only see the records matching their Microsoft Entra IDs, as the result of the query against the SQL External table. With the Security Policy disabled, all users are able to access the full table content as the result of the query against the SQL External table.
+With the security policy enabled on the SQL Server side, Kusto users see only the records matching their Microsoft Entra IDs when they query the SQL external table. With the security policy disabled, all users can access the full table content.
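Before validating from Kusto, you can also sanity-check the policy directly in SQL Server by impersonating one of the database users. This is a hedged sketch: it assumes the contained database user `user@domain.com` created in the earlier steps and that `EXECUTE AS USER` is permitted in your environment.

``` sql
-- Impersonate a database user and confirm the filter predicate applies.
EXECUTE AS USER = 'user@domain.com';  -- user created in the earlier steps
SELECT * FROM dbo.SourceTable;        -- the policy returns only matching systemuser rows
REVERT;                               -- switch back to your own security context
```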
## Related content diff --git a/data-explorer/kusto/query/tutorials/common-tasks-microsoft-sentinel.md b/data-explorer/kusto/query/tutorials/common-tasks-microsoft-sentinel.md index 6d147b9fe6..be1c1d7976 100644 --- a/data-explorer/kusto/query/tutorials/common-tasks-microsoft-sentinel.md +++ b/data-explorer/kusto/query/tutorials/common-tasks-microsoft-sentinel.md @@ -1,24 +1,25 @@ --- -title: Common tasks with KQL for Microsoft Sentinel -description: This article describes commonly used tasks in Kusto Query Language (KQL) when working with Microsoft Sentinel. +title: Common Tasks With KQL for Microsoft Sentinel +description: This article describes commonly used tasks in Kusto Query Language (KQL) when working with Microsoft Sentinel. ms.topic: concept-article ms.reviewer: batamig -ms.date: 01/20/2025 +ms.date: 03/15/2026 monikerRange: "microsoft-sentinel" #Customer intent: As a security analyst, I want to learn how to perform commonly used tasks with Kusto Query Language so that I can effectively analyze and manipulate data in Microsoft Sentinel for threat detection and incident response. --- + # Common tasks with KQL for Microsoft Sentinel > [!INCLUDE [applies](../../includes/applies-to-version/applies.md)] [!INCLUDE [sentinel](../../includes/applies-to-version/sentinel.md)] -Kusto Query Language (KQL) is a powerful tool for querying and analyzing data in Microsoft Sentinel. As a security analyst, mastering KQL can significantly enhance your ability to detect threats and respond to incidents effectively. This article provides a comprehensive guide to performing common tasks with KQL, helping you to manipulate and analyze data efficiently. +Kusto Query Language (KQL) is a powerful tool for querying and analyzing data in Microsoft Sentinel. As a security analyst, mastering KQL can significantly enhance your ability to detect threats and respond to incidents effectively. 
This article provides a comprehensive guide to performing common tasks with KQL, helping you manipulate and analyze data efficiently. -In this tutorial, we cover the basics of KQL, including understanding query structure, getting, limiting, sorting, and filtering data, summarizing data, and joining tables. Additionally, we explore advanced concepts such as using the `evaluate` operator and `let` statements to create more complex and maintainable queries. +In this tutorial, you learn the basics of KQL, including understanding query structure, getting, limiting, sorting, and filtering data, summarizing data, and joining tables. Additionally, you explore advanced concepts such as using the `evaluate` operator and `let` statements to create more complex and maintainable queries. ## Prerequisites -Before reading this article, make sure that you've familiarized yourself with the basics of Kusto Query Language (KQL). If you're new to KQL, see: +Before reading this article, make sure that you're familiar with the basics of Kusto Query Language (KQL). If you're new to KQL, see: * [Kusto Query Language (KQL) overview](../index.md) * [Syntax conventions for reference documentation](../syntax-conventions.md) @@ -27,16 +28,16 @@ Before reading this article, make sure that you've familiarized yourself with th ## Understanding query structure basics -A good place to start learning Kusto Query Language is to understand the overall query structure. The first thing you notice when looking at a Kusto query is the use of the pipe symbol (` | `). The structure of a Kusto query starts with getting your data from a data source and then passing the data across a "pipeline," and each step provides some level of processing and then passes the data to the next step. At the end of the pipeline, you get your final result. In effect, this is our pipeline: +A good place to start learning Kusto Query Language is to understand the overall query structure. 
The first thing you notice when looking at a Kusto query is the use of the pipe symbol (`|`). The structure of a Kusto query starts with getting your data from a data source and then passing the data across a pipeline. Each step provides some level of processing and then passes the data to the next step. At the end of the pipeline, you get your final result. In effect, this pipeline looks like:

`Get Data | Filter | Summarize | Sort | Select`

This concept of passing data down the pipeline makes for an intuitive structure, as it's easy to create a mental picture of your data at each step.

To illustrate this concept, take a look at the following query, which looks at Microsoft Entra sign-in logs. As you read through each line, you can see the keywords that indicate what's happening to the data. The relevant stage in the pipeline is included as a comment in each line.

> [!NOTE]
> You can add comments to any line in a query by preceding them with a double slash (`//`).

```kusto
SigninLogs                              // Get data
| evaluate bag_unpack(LocationDetails)  // Ignore this line for now; it's explained at the end of this article.
| where RiskLevelDuringSignIn == 'none' // Filter
    and TimeGenerated >= ago(7d)        // Filter
| summarize Count = count() by city     // Summarize
| sort by Count desc                    // Sort
| take 5                                // Select
```

Because the output of every step serves as the input for the following step, the order of the steps can determine the query's results and affect its performance. It's crucial that you order the steps according to what you want to get out of the query.

A good rule of thumb is to filter your data early, so you pass only relevant data down the pipeline. This approach greatly increases performance and ensures that you don't accidentally include irrelevant data in summarization steps. For more information, see [Best practices for Kusto Query Language queries](../best-practices.md).

You now have an appreciation for the overall structure of a query in Kusto Query Language. Now let's look at the actual query operators themselves, which you use to create a query.

## Getting, limiting, sorting, and filtering data

The core vocabulary of Kusto Query Language - the foundation that allows you to accomplish most of your tasks - is a collection of operators for filtering, sorting, and selecting your data. The remaining tasks require you to stretch your knowledge of the language to meet your more advanced needs. Let's expand a bit on some of the commands used [in our earlier example](#understanding-query-structure-basics) and look at `take`, `sort`, and `where`.
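As a quick preview of how these three operators combine - this sketch reuses the *SigninLogs* table and *TimeGenerated* column from the earlier example, and the one-day time window is an arbitrary illustration - a complete query might look like this:

```kusto
SigninLogs
| where TimeGenerated >= ago(1d)   // Filter: keep only the last day of sign-ins
| sort by TimeGenerated desc       // Sort: newest first
| take 10                          // Limit: return only the first 10 rows
```

The sections that follow examine each of these operators in detail.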
For each of these operators, examine its use in the previous *SigninLogs* example, and learn either a useful tip or a best practice.

### Getting data

The first line of any basic query specifies which table you want to work with. In the case of Microsoft Sentinel, this table is likely to be the name of a log type in your workspace, such as *SigninLogs*, *SecurityAlert*, or *CommonSecurityLog*. For example:

`SigninLogs`

In Kusto Query Language, log names are case sensitive, so `SigninLogs` and `signinLogs` aren't the same.

### Limiting data: *take* / *limit*

Use the [*take*](../take-operator.md) operator (or the identical *limit* operator) to limit your results by returning only a specific number of rows. Follow it with an integer that specifies the number of rows to return. Typically, use it at the end of a query after you determine your sort order. In this case, it returns the specified number of rows at the top of the sorted order.

Using `take` earlier in the query can be useful for testing a query when you don't want to return large datasets. However, if you place the `take` operation before any `sort` operations, `take` returns rows selected at random - and possibly a different set of rows every time you run the query. Here's an example of using `take`:

```kusto
SigninLogs
| take 5
```

:::image type="content" source="../media/kql-tutorials/common-tasks-microsoft-sentinel/table-take-5.png" alt-text="Screenshot of sample results for the take operator.":::

> [!TIP]
> When working on a brand-new query where you might not know what the query looks like, it can be useful to put a `take` statement at the beginning to artificially limit your dataset for faster processing and experimentation. Once you're happy with the full query, remove the initial `take` step.

### Sorting data: *sort* / *order*

Use the [*sort*](../sort-operator.md) operator (or the identical *order* operator) to sort your data by a specified column.
In the following example, the query orders the results by *TimeGenerated* and sets the order direction to descending with the *desc* parameter, placing the highest values first. For ascending order, use *asc*.

> [!NOTE]
> The default direction for sorts is descending, so technically you only need to specify if you want to sort in ascending order. However, specifying the sort direction in any case makes your query more readable.

```kusto
SigninLogs
| sort by TimeGenerated desc
| take 5
```

As mentioned earlier, place the `sort` operator before the `take` operator. You need to sort first to make sure you get the appropriate five records.

:::image type="content" source="../media/kql-tutorials/common-tasks-microsoft-sentinel/table-take-sort.png" alt-text="Screenshot of sample results for the sort operator, with a take limit.":::

#### *Top*

The [*top*](../top-operator.md) operator combines the `sort` and `take` operations into a single operator:

```kusto
SigninLogs
| top 5 by TimeGenerated desc
```

In cases where two or more records have the same value in the column you're sorting by, you can add more columns to sort by. Add extra sorting columns in a comma-separated list, located after the first sorting column, but before the sort order keyword. For example:

```kusto
SigninLogs
| sort by TimeGenerated, Identity desc
| take 5
```

Now, if *TimeGenerated* is the same between multiple records, the query tries to sort by the value in the *Identity* column.

> [!NOTE]
> **When to use `sort` and `take`, and when to use `top`**

### Filtering data: *where*

The [*where*](../where-operator.md) operator is the most important operator, because it's the key to making sure you're only working with the subset of data that's relevant to your scenario. Filter your data as early in the query as possible, because doing so improves query performance by reducing the amount of data that needs to be processed in subsequent steps. It also ensures that you're only performing calculations on the desired data. See this example:

```kusto
SigninLogs
| where TimeGenerated >= ago(7d)
| sort by TimeGenerated desc
| take 5
```

The `where` operator specifies a variable, a comparison (*scalar*) operator, and a value. In this example, `>=` denotes that the value in the *TimeGenerated* column needs to be greater than (that is, later than) or equal to seven days ago.

There are two types of comparison operators in Kusto Query Language: string and numerical. String operators support permutations for case sensitivity, substring locations, prefixes, suffixes, and much more.

The `==` operator is both a numeric and string operator, meaning it can be used for both numbers and text. For example, both of the following statements are valid where statements:

* `| where ResultType == 0`
* `| where Category == 'SignInLogs'`

For more information, see [Numerical operators](../numerical-operators.md) and [String operators](../datatypes-string-operators.md).

**Best practice:** In most cases, you probably want to filter your data by more than one column, or filter the same column in more than one way. In these instances, keep two best practices in mind.

Combine multiple `where` statements into a single step by using the *and* keyword.
For example:

```kusto
SigninLogs
| where Resource == ResourceGroup
    and TimeGenerated >= ago(7d)
```

When you have multiple filters joined into a single `where` statement by using the *and* keyword, you get better performance by putting filters that reference only a single column first. So, a better way to write the previous query is:

```kusto
SigninLogs
| where TimeGenerated >= ago(7d)
    and Resource == ResourceGroup
```

In this example, the first filter mentions a single column (*TimeGenerated*), while the second filter references two columns (*Resource* and *ResourceGroup*).

## Summarizing data

[*Summarize*](../summarize-operator.md) is one of the most important tabular operators in Kusto Query Language. It's also one of the more complex operators to learn if you're new to query languages in general. The job of `summarize` is to take in a table of data and output a *new table* that's aggregated by one or more columns.
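Before breaking the syntax down, here's a minimal sketch of that idea. It assumes the *SigninLogs* table and its *ResultType* column, both used in earlier examples, and the one-day window is arbitrary; the query returns a new table with one row per result type rather than one row per sign-in:

```kusto
SigninLogs
| where TimeGenerated >= ago(1d)
| summarize SignInCount = count() by ResultType   // New table: one row per ResultType value
```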
### Structure of the summarize statement

The basic structure of a `summarize` statement is as follows:

`| summarize <aggregation> by <column>`

For example, the following query returns the count of records for each *CounterName* value in the *Perf* table:

```kusto
Perf
| summarize count() by CounterName
```

:::image type="content" source="../media/kql-tutorials/common-tasks-microsoft-sentinel/table-summarize-count.png" alt-text="Screenshot of sample results of the summarize operator with a count aggregation.":::

Because the output of `summarize` is a new table, only the columns you explicitly specify in the `summarize` statement are passed down the pipeline. To illustrate this concept, consider this example:

```kusto
Perf
| project ObjectName, CounterValue, CounterName
| summarize count() by CounterName
| sort by ObjectName asc
```

On the second line, you specify that you only care about the columns *ObjectName*, *CounterValue*, and *CounterName*. You then summarize to get the record count by *CounterName* and finally, you attempt to sort the data in ascending order based on the *ObjectName* column. Unfortunately, this query fails with an error (indicating that *ObjectName* is unknown) because when you summarized, you only included the *Count* and *CounterName* columns in your new table. To avoid this error, add *ObjectName* to the end of your `summarize` step, like this:

```kusto
Perf
| project ObjectName, CounterValue, CounterName
| summarize count() by CounterName, ObjectName
| sort by ObjectName asc
```

The way to read the `summarize` line in your head is: "summarize the count of records by *CounterName*, and group by *ObjectName*." You can continue adding columns, separated by commas, to the end of the `summarize` statement.

:::image type="content" source="../media/kql-tutorials/common-tasks-microsoft-sentinel/table-summarize-group.png" alt-text="Screenshot of results of summarize operator with two arguments.":::

Building on the previous example, if you want to aggregate multiple columns at the same time, add aggregations to the `summarize` operator, separated by commas. In the following example, you get not only a count of all the records but also a sum of the values in the *CounterValue* column across all records (that match any filters in the query):

```kusto
Perf
| project ObjectName, CounterValue, CounterName
| summarize count(), sum(CounterValue) by CounterName, ObjectName
| sort by ObjectName asc
```

#### Renaming aggregated columns

This section explains column names for these aggregated columns. [At the start of this section](#summarizing-data), you learned that the `summarize` operator takes in a table of data and produces a new table, and only the columns you specify in the `summarize` statement continue down the pipeline. Therefore, if you run the preceding example, the resulting columns for your aggregation are *count_* and *sum_CounterValue*.

The Kusto engine automatically creates a column name without you having to be explicit, but often, you prefer your new column to have a friendlier name. You can easily rename your column in the `summarize` statement by specifying a new name, followed by ` = ` and the aggregation, like so:

```kusto
Perf
| project ObjectName, CounterValue, CounterName
| summarize Count = count(), CounterSum = sum(CounterValue) by CounterName, ObjectName
| sort by ObjectName asc
```

Now, your summarized columns are named *Count* and *CounterSum*.
:::image type="content" source="../media/kql-tutorials/common-tasks-microsoft-sentinel/friendly-column-names.png" alt-text="Screenshot of friendly column names for aggregations.":::

There's more to the `summarize` operator than this article can cover, but you should invest the time to learn it because it's a key component to any data analysis you plan to perform on your Microsoft Sentinel data.

### Aggregation reference

Many aggregation functions are available, but some of the most commonly used ones are `sum()`, `count()`, and `avg()`. For more information, see [Aggregation function types at a glance](../aggregation-functions.md).

## Selecting: adding and removing columns

As you work more with queries, you might find that you have more information than you need on your subjects (that is, too many columns in your table). Or you might need more information than you have (that is, you need to add a new column that contains the results of analysis of other columns). Let's look at a few of the key operators for column manipulation.

### *Project* and *project-away*

[*Project*](../project-operator.md) is roughly equivalent to many languages' *select* statements. It allows you to choose which columns to keep. The order of the columns returned matches the order of the columns you list in your `project` statement, as shown in this example:

```kusto
Perf
| project ObjectName, CounterValue, CounterName
```

:::image type="content" source="../media/kql-tutorials/common-tasks-microsoft-sentinel/table-project.png" alt-text="Screenshot of results of project operator.":::

As you can imagine, when you're working with wide datasets, you might have lots of columns you want to keep, and specifying them all by name requires much typing. For those cases, you have [*project-away*](../project-away-operator.md), which lets you specify which columns to remove, rather than which ones to keep, like so:

```kusto
Perf
| project-away TenantId, SourceSystem, CounterPath, MaxValue
```

> [!TIP]
> It can be useful to use `project` in two locations in your queries, at the beginning and again at the end.
Using `project` early in your query can help improve performance by stripping away large chunks of data you don't need to pass down the pipeline. Using it again at the end lets you get rid of any columns that previous steps created and aren't needed in your final output.

### *Extend*

Use [*extend*](../extend-operator.md) to create a new calculated column. This approach is helpful when you want to perform a calculation against existing columns and see the output for every row. Let's look at a simple example where you calculate a new column called *KBytes* by multiplying the MB value (in the existing *Quantity* column) by 1,024.

```kusto
Usage
| where QuantityUnit == "MBytes"
| extend KBytes = Quantity * 1024
| project DataType, MBytes=Quantity, KBytes
```

On the final line in the `project` statement, you rename the *Quantity* column to *MBytes*, so you can easily tell which unit of measure is relevant to each column.

:::image type="content" source="../media/kql-tutorials/common-tasks-microsoft-sentinel/table-extend.png" alt-text="Screenshot of results of extend operator.":::

It's worth noting that `extend` also works with already calculated columns.
For example, you can add one more column called *Bytes* that's calculated from *KBytes*:

```kusto
Usage
| where QuantityUnit == "MBytes"
| extend KBytes = Quantity * 1024
| extend Bytes = KBytes * 1024
| project DataType, MBytes=Quantity, KBytes, Bytes
```

## Joining tables

You can carry out much of your work in Microsoft Sentinel by using a single log type, but there are times when you want to correlate data together or perform a lookup against another set of data. Like most query languages, Kusto Query Language offers a few operators that you can use to perform various types of joins. In this section, you look at the most-used operators, `union` and `join`.

### *Union*

[*Union*](../union-operator.md) takes two or more tables and returns all the rows. For example:

```kusto
OfficeActivity
| union SecurityEvent
```

This query returns all rows from both the *OfficeActivity* and *SecurityEvent* tables. `Union` offers a few parameters that you can use to adjust how the union behaves. Two of the most useful parameters are *withsource* and *kind*:

```kusto
OfficeActivity
| union withsource = SourceTable kind = inner SecurityEvent
```

Use the *withsource* parameter to specify the name of a new column whose value in a given row is the name of the table from which the row came. In the example, you name the column *SourceTable*, and depending on the row, the value is either *OfficeActivity* or *SecurityEvent*.

The other parameter you specify is *kind*, which has two options: *inner* or *outer*. In the example, you specify *inner*, which means the only columns that are kept during the union are those that exist in both tables. Alternatively, if you specify *outer* (which is the default value), then the query returns all columns from both tables.

### *Join*

[*Join*](../join-operator.md) works similarly to `union`, except instead of joining tables to make a new table, it joins *rows* to make a new table. Like most database languages, you can perform multiple types of joins. The general syntax for a `join` is:

```kusto
T1
| join kind=<join type> (
    T2
) on $left.<T1Column> == $right.<T2Column>
```

After the `join` operator, specify the *kind* of join you want to perform followed by an open parenthesis. Within the parentheses, specify the table you want to join, and add any other query statements on *that* table. After the closing parenthesis, use the *on* keyword followed by your left (`$left.<column>`) and right (`$right.<column>`) columns separated by the `==` operator. Here's an example of an *inner join*:

```kusto
OfficeActivity
| where TimeGenerated >= ago(1d)
    and LogonUserSid != ''
| join kind = inner (
    SecurityEvent
    | where TimeGenerated >= ago(1d)
        and SubjectUserSid != ''
) on $left.LogonUserSid == $right.SubjectUserSid
```

> [!NOTE]
> If both tables have the same name for the columns on which you're performing a join, you don't need to use *$left* and *$right*. Instead, just specify the column name. However, using *$left* and *$right* is more explicit and generally considered to be a good practice.

> [!TIP]
> **It's a best practice** to have your smallest table on the left. In some cases, following this rule can give you huge performance benefits, depending on the types of joins you're performing and the size of the tables.

For more information, see [join operator](../join-operator.md).

## Evaluate

You might remember that back [in the first example](#understanding-query-structure-basics), you saw the [*evaluate*](../evaluate-operator.md) operator on one of the lines. The `evaluate` operator is less commonly used than the ones discussed previously. However, knowing how the `evaluate` operator works is well worth your time. Once more, here's that first query, where you see `evaluate` on the second line.

```kusto
SigninLogs
| evaluate bag_unpack(LocationDetails)
| where RiskLevelDuringSignIn == 'none'
    and TimeGenerated >= ago(7d)
| summarize Count = count() by city
| sort by Count desc
| take 5
```

This operator invokes available plugins (built-in functions). Many of these plugins focus on data science, such as [*autocluster*](../autocluster-plugin.md), [*diffpatterns*](../diffpatterns-plugin.md), and [*sequence_detect*](../sequence-detect-plugin.md). By using these plugins, you can perform advanced analysis and discover statistical anomalies and outliers.

The plugin used in the example is called [*bag_unpack*](../bag-unpack-plugin.md). It makes it simple to take a chunk of dynamic data and convert it to columns.
Remember, [dynamic data](../scalar-data-types/dynamic.md) is a data type that looks similar to JSON, as shown in this example:

```json
{
    "countryOrRegion": "US",
    "state": "Washington",
    "city": "Redmond"
}
```

In this case, you want to summarize the data by city, but *city* is contained as a property within the *LocationDetails* column. To use the *city* property in your query, you need to first convert it to a column by using *bag_unpack*.

Going back to the original pipeline steps, you saw this:

`Get Data | Filter | Summarize | Sort | Select`

Now that you've considered the `evaluate` operator, you can see that it represents a new stage in the pipeline, which now looks like this:

`Get Data | `***`Parse`***` | Filter | Summarize | Sort | Select`

Many other examples of operators and functions exist that you can use to parse data sources into a more readable and manipulable format.
Learn about them - and the rest of the Kusto Query Language - in [Kusto Query Language learning resources](../kql-learning-resources.md) and in the [workbook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/advanced-kql-framework-workbook-empowering-you-to-become-kql/ba-p/3033766). ## Let statements diff --git a/data-explorer/manage-cluster-locks.md b/data-explorer/manage-cluster-locks.md index ca071f91ed..07a9c4e335 100644 --- a/data-explorer/manage-cluster-locks.md +++ b/data-explorer/manage-cluster-locks.md @@ -1,14 +1,14 @@ --- -title: Manage cluster locks in Azure Data Explorer +title: Manage Cluster Locks in Azure Data Explorer description: Learn how to manage Azure Data Explorer cluster locks to prevent accidental deletion of data using the Azure portal. ms.reviewer: orhasban ms.topic: how-to -ms.date: 02/26/2023 +ms.date: 03/15/2026 --- # Manage Azure Data Explorer cluster locks to prevent accidental deletion in your cluster -As an administrator, you can lock your cluster to prevent accidental deletion of data. The lock overrides any user permissions set using [Azure Data Explorer role-based access control](/kusto/access-control/role-based-access-control?view=azure-data-explorer&preserve-view=true). +As an administrator, you can lock your cluster to prevent accidental deletion of data. The lock overrides any user permissions set by using [Azure Data Explorer role-based access control](/kusto/access-control/role-based-access-control?view=azure-data-explorer&preserve-view=true). In the Azure portal, you can set **Delete** or **Read-only** locks that prevent either deletions or modifications. The following table describes the permissions that each lock provides: @@ -17,7 +17,7 @@ In the Azure portal, you can set **Delete** or **Read-only** locks that prevent | **Delete** | Authorized users can read and modify a cluster, but they can't delete it. | | **Read-only** | Authorized users can read a cluster, but they can't delete or update it. 
Applying this lock is similar to restricting all authorized users to the permissions that the Reader role provides. | -This article describes how to lock and unlock your cluster using the Azure portal. For more information about locking Azure resources using the portal, see [Lock your resources to protect your infrastructure](/azure/azure-resource-manager/management/lock-resources). For information about how to lock your cluster programmatically, see [Management Locks - Create Or Update At Resource Level](/rest/api/resources/management-locks/create-or-update-at-resource-level). +This article describes how to lock and unlock your cluster by using the Azure portal. For more information about locking Azure resources by using the portal, see [Lock your resources to protect your infrastructure](/azure/azure-resource-manager/management/lock-resources). For information about how to lock your cluster programmatically, see [Management Locks - Create Or Update At Resource Level](/rest/api/resources/management-locks/create-or-update-at-resource-level). ## Lock your cluster in the Azure portal @@ -26,13 +26,13 @@ This article describes how to lock and unlock your cluster using the Azure porta 1. Go to your Azure Data Explorer cluster. 1. In the left-hand menu, under **Settings**, select **Locks**. 1. Select **Add**. -1. Give the lock a name and lock level. Optionally, you can add notes that describe the lock. +1. Enter a name and lock level for the lock. Optionally, add notes that describe the lock. :::image type="content" source="media/manage-cluster-locks/add-cluster-lock.png" alt-text="Screenshot showing add a cluster lock to prevent accidental deletion."::: ## Unlock your cluster in the Azure portal -To delete a lock, in the row where the lock appears, select the **Delete** button. +To delete a lock, select the **Delete** button in the row where the lock appears. 
:::image type="content" source="media/manage-cluster-locks/delete-cluster-lock.png" alt-text="Screenshot showing delete a cluster lock."::: diff --git a/data-explorer/security-network-migrate-vnet-to-private-endpoint.md b/data-explorer/security-network-migrate-vnet-to-private-endpoint.md index dc5c7837b4..5207dca90f 100644 --- a/data-explorer/security-network-migrate-vnet-to-private-endpoint.md +++ b/data-explorer/security-network-migrate-vnet-to-private-endpoint.md @@ -1,22 +1,22 @@ --- -title: Migrate a Virtual Network injected cluster to private endpoints +title: Migrate a Virtual Network Injected Cluster to Private Endpoints description: Learn how to migrate a Virtual Network injected Azure Data Explorer cluster to private endpoints. ms.reviewer: cosh, gunjand ms.topic: how-to -ms.date: 10/07/2024 +ms.date: 03/15/2026 ms.custom: sfi-image-nochange --- # Migrate a Virtual Network injected cluster to private endpoints > [!WARNING] -> Virtual Network Injection was retired for Azure Data Explorer by 1 February 2025. +> By 1 February 2025, Azure Data Explorer retired Virtual Network Injection. -This article describes the migration of a Microsoft Azure Virtual Network injected Azure Data Explorer cluster to an Azure Private Endpoints network security model. +This article describes how to migrate a Microsoft Azure Virtual Network injected Azure Data Explorer cluster to an Azure Private Endpoints network security model. -The process of the migration takes several minutes. The migration creates a new cluster for the engine and data management services, which reside in a virtual network managed by Microsoft. The connection is switched to the newly created services for you. This process results in a minimal downtime for querying the cluster. +The migration process takes several minutes. The migration creates a new cluster for the engine and data management services, which reside in a virtual network managed by Microsoft. 
The connection switches to the newly created services. This process results in minimal downtime for querying the cluster.

-Following the migration, you can still connect to your cluster using the `private-[clustername].[geo-region].kusto.windows.net` (engine) and `ingest-private-[clustername].[geo-region].kusto.windows.net`\\`private-ingest-[clustername].[geo-region].kusto.windows.net` (data management) FQDNs. Nevertheless, we recommend moving to the regular cluster endpoints that aren't prefixed with `private`.
+After the migration, you can still connect to your cluster by using the `private-[clustername].[geo-region].kusto.windows.net` (engine) and `ingest-private-[clustername].[geo-region].kusto.windows.net`\\`private-ingest-[clustername].[geo-region].kusto.windows.net` (data management) FQDNs. However, move to the regular cluster endpoints that aren't prefixed with `private`.

## Prerequisites

@@ -28,19 +28,19 @@ Following the migration, you can still connect to your cluster using the `privat

- (Optional) You have a virtual network and a subnet where you want to create the private endpoint for the Azure Data Explorer cluster.
- (Optional) You have the necessary permissions to establish and oversee private endpoints and private DNS zones within your subscription and resource group.

-> [!Tip]
+> [!TIP]
> Alternatively, while performing the migration, you can temporarily have at least Contributor permissions on the resource group containing your cluster.

## Find clusters that use Virtual Network injection

-You can use Azure Resource Graph to determine which clusters in your subscription use Virtual Network injection by exploring your Azure resources with the Kusto Query Language (KQL).
+You can use Azure Resource Graph to explore your Azure resources with the Kusto Query Language (KQL) and determine which clusters in your subscription use Virtual Network injection.

### [Azure Resource Graph](#tab/arg)

1.
Go to the Resource Graph Explorer in the [Azure portal](https://portal.azure.com/).
1. Copy and paste the following query. Then select **Run query** to list all clusters that use Virtual Network injection:

- The query filters the resources to only include clusters (`microsoft.kusto/clusters`) where the `virtualNetworkConfiguration` property state is set to `Enabled`, indicating that the cluster is using Virtual Network injection.
+ The query filters the resources to only include clusters (`microsoft.kusto/clusters`) where the `virtualNetworkConfiguration` property state is set to `Enabled`, indicating that the cluster uses Virtual Network injection.

```kusto
resources
@@ -63,41 +63,41 @@ az graph query -q "resources | where type == 'microsoft.kusto/clusters' | where

## Prepare to migrate

-We recommend configuring your cluster infrastructure in alignment with the Azure Private Endpoints network security model before initiating the migration process. While it's possible to perform this configuration post-migration, doing so can result in a service disruption.
+Before you start the migration process, configure your cluster infrastructure to align with the Azure Private Endpoints network security model. You can configure this model after migration, but doing so can cause service disruption.

-The following steps ensure that post-migration clients in the virtual network can connect to the cluster and that the cluster can connect to other services. When firewalls for [Azure Storage](/azure/storage/common/storage-network-security) or [Azure Event Hubs](/azure/event-hubs/event-hubs-ip-filtering) are employed, these steps are crucial. For instance, if Service Endpoints were used for Azure Storage and Azure Event Hubs namespace, migrating the cluster out of the virtual network disrupts connections to these services. To restore connectivity, you need to set up managed private endpoints for Azure Data Explorer.
+The following steps ensure that post-migration clients in the virtual network can connect to the cluster and that the cluster can connect to other services. These steps are crucial when you use firewalls for [Azure Storage](/azure/storage/common/storage-network-security) or [Azure Event Hubs](/azure/event-hubs/event-hubs-ip-filtering). For example, if you used Service Endpoints for Azure Storage and Azure Event Hubs namespace, migrating the cluster out of the virtual network disrupts connections to these services. To restore connectivity, you need to set up managed private endpoints for Azure Data Explorer. To prepare your cluster for migration: -1. In the Azure portal, go to the **Azure Data Explorer** cluster you'd like to migrate. +1. In the Azure portal, go to the **Azure Data Explorer** cluster that you want to migrate. 1. From the left menu, select **Networking**. :::image type="content" source="./media/security-network-migrate/vnet-injection-migration-overview.png" lightbox="./media/security-network-migrate/vnet-injection-migration-overview.png" alt-text="Screenshot of the Networking option in the Azure portal for virtual network injected clusters."::: -1. In order to connect to your cluster even if the [public access](security-network-restrict-public-access.md) was set to `Disabled`, select the **Private Endpoints connections** tab and [create a private endpoint](security-network-private-endpoint-create.md). Make sure that you choose a different subnet than the one you used for your Azure Data Explorer cluster with Azure Virtual Network integration, otherwise, the private endpoint deployment will fail. +1. To connect to your cluster even if you set [public access](security-network-restrict-public-access.md) to `Disabled`, select the **Private Endpoints connections** tab and [create a private endpoint](security-network-private-endpoint-create.md). 
Make sure that you choose a different subnet than the one you used for your Azure Data Explorer cluster with Azure Virtual Network integration. Otherwise, the private endpoint deployment fails. :::image type="content" source="./media/security-network-migrate/vnet-injection-migration-pe.png" lightbox="./media/security-network-migrate/vnet-injection-migration-pe.png" alt-text="Screenshot of the Networking option in the Azure portal for virtual network injected clusters. Tab for private endpoints selected."::: > [!NOTE] - > This configuration will take effect only after the migration of your your cluster. + > This configuration takes effect only after the migration of your cluster. -1. In order to allow your cluster to connect to other network secured services, select the **Managed private endpoints tab** and [create a managed private endpoint](security-network-managed-private-endpoint-create.md). +1. To allow your cluster to connect to other network secured services, select the **Managed private endpoints** tab and [create a managed private endpoint](security-network-managed-private-endpoint-create.md). :::image type="content" source="./media/security-network-migrate/vnet-injection-migration-mpe.png" lightbox="./media/security-network-migrate/vnet-injection-migration-mpe.png" alt-text="Screenshot of the Networking option in the Azure portal for virtual network injected clusters. Tab for managed private endpoints selected."::: > [!NOTE] - > This configuration will take effect only after the migration of your your cluster. + > This configuration takes effect only after the migration of your cluster. 1. To restrict outbound access, select the **Restrict outbound access** tab and see the documentation for how to [Restrict outbound access](security-network-restrict-outbound-access.md). These restrictions take immediate effect. 
:::image type="content" source="./media/security-network-migrate/vnet-injection-migration-roa.png" alt-text="Screenshot of the Networking option in the Azure portal for virtual network injected clusters. Tab for restricted outbound access selected."::: > [!WARNING] -> Failure of your cluster to connect to essential services for ingestion and external tables poses a risk of data loss. Additionally, queries calling out to other network-protected services may cease to function. +> If your cluster can't connect to essential services for ingestion and external tables, it risks data loss. Also, queries that call out to other network-protected services might stop working. > [!WARNING] -> The migration step must be performed within a few hours of completing the preparation steps. Delaying the migration might cause the service to malfunction. +> You must perform the migration step within a few hours of completing the preparation steps. If you delay the migration, the service might malfunction. ## Migrate your cluster @@ -105,13 +105,13 @@ To prepare your cluster for migration: To migrate your cluster from the Azure portal: -1. Go to the **Azure Data Explorer** cluster you would like to migrate. +1. Go to the **Azure Data Explorer** cluster you want to migrate. 1. From the left menu, select **Networking**. :::image type="content" source="./media/security-network-migrate/vnet-injection-migration-overview.png" alt-text="Screenshot of the Networking option in the Azure portal for virtual network injected clusters."::: -1. Select on the **Migrate** button. +1. Select the **Migrate** button. :::image type="content" source="./media/security-network-migrate/vnet-injection-migration-migrate.png" alt-text="Screenshot of the Networking option in the Azure portal for virtual network injected clusters. Migration tab is selected."::: @@ -121,7 +121,7 @@ To migrate your cluster from the Azure portal: To migrate your cluster by modifying the ARM template: -1. 
Locate the [**VirtualNetworkConfiguration**](/azure/templates/microsoft.kusto/clusters?pivots=deployment-language-arm-template#virtualnetworkconfiguration-1) in the ARM template of your cluster
+1. Locate the [**VirtualNetworkConfiguration**](/azure/templates/microsoft.kusto/clusters?pivots=deployment-language-arm-template#virtualnetworkconfiguration-1) in the ARM template of your cluster.

```json
"virtualNetworkConfiguration": {
@@ -143,13 +143,13 @@ To migrate your cluster by modifying the ARM template:
},
```

-1. [**Deploy**](/azure/azure-resource-manager/templates/deployment-tutorial-local-template?tabs=azure-powershell) the ARM template
+1. [**Deploy**](/azure/azure-resource-manager/templates/deployment-tutorial-local-template?tabs=azure-powershell) the ARM template.

### [Python script](#tab/python)

-You can use a Python script to automate the migration of multiple your clusters. The script [migrateAzure Data Explorerclusters.py](https://github.com/Azure/azure-kusto-vnet-migration/blob/main/python/migrateADXclusters.py) available in the [Azure Kusto Virtual Network Migration GitHub repository](https://github.com/Azure/azure-kusto-vnet-migration) can be used for this purpose.
+You can use a Python script to automate the migration of multiple clusters. Use the script [migrateADXclusters.py](https://github.com/Azure/azure-kusto-vnet-migration/blob/main/python/migrateADXclusters.py) in the [Azure Kusto Virtual Network Migration GitHub repository](https://github.com/Azure/azure-kusto-vnet-migration).

-Detailed steps on how to use this script are provided in the [README](https://github.com/Azure/azure-kusto-vnet-migration/blob/main/python/README.md) file in the same repository. For instructions on how to clone the repository, refer to the [README](https://github.com/Azure/azure-kusto-vnet-migration/blob/main/python/README.md). Install the required Python packages, and run the script with the necessary configuration.
+The [README](https://github.com/Azure/azure-kusto-vnet-migration/blob/main/python/README.md) file in the same repository provides detailed steps on how to use this script, including how to clone the repository, install the required Python packages, and run the script with the necessary configuration.

This script migrates the specified clusters in one go, saving you the time and effort of migrating them individually.

@@ -159,9 +159,9 @@ This script migrates the specified clusters in one go, saving you the time and e

After migrating to private endpoints, perform the following checks to verify the migration was successful:

-1. If you created new private endpoints, check that they are working as expected. If needed, refer to the [troubleshooting guide](security-network-private-endpoint-troubleshoot.md).
+1. If you created new private endpoints, check that they're working as expected. If needed, refer to the [troubleshooting guide](security-network-private-endpoint-troubleshoot.md).

-1. Check that ingestion is working properly with the [.show ingestion failures command](/kusto/management/ingestion-failures?view=azure-data-explorer&preserve-view=true) or refer to the guidance in [Monitor queued ingestion with metrics](monitor-queued-ingestion.md). This verification is especially relevant if you need to connect to network secured services for ingestion with services like [Azure Event Hubs](ingest-data-event-hub.md).
+1.
## Rollback @@ -191,7 +191,7 @@ To roll back your migration to the previous Virtual Network injected configurati 1. [**Deploy**](/azure/azure-resource-manager/templates/deployment-tutorial-local-template?tabs=azure-powershell) the ARM template to apply the changes. -This restores your cluster to the previous Virtual Network injected configuration. +This change restores your cluster to the previous Virtual Network injected configuration. ## Related content diff --git a/data-explorer/security-network-private-endpoint-create.md b/data-explorer/security-network-private-endpoint-create.md index 831f4f13e2..9b9b611719 100644 --- a/data-explorer/security-network-private-endpoint-create.md +++ b/data-explorer/security-network-private-endpoint-create.md @@ -1,9 +1,9 @@ --- -title: Create a private endpoint for Azure Data Explorer -description: In this article, you'll learn how to create a private endpoint for Azure Data Explorer. +title: Create a Private Endpoint for Azure Data Explorer +description: In this article, youll learn how to create a private endpoint for Azure Data Explorer. ms.reviewer: eladb ms.topic: how-to -ms.date: 04/05/2022 +ms.date: 03/15/2026 ms.custom: sfi-image-nochange --- @@ -22,7 +22,7 @@ Private endpoints use private IP addresses from your virtual network to connect ## Create a private endpoint -There are several ways to create a private endpoint for a cluster. +You can create a private endpoint for a cluster in several ways: * During the deployment of your cluster in the portal * By [creating a private endpoint](/azure/private-link/create-private-endpoint-portal) resource directly @@ -30,9 +30,9 @@ There are several ways to create a private endpoint for a cluster. ### Create a private endpoint during the deployment of your cluster in the portal -Use the following information to create a private endpoint whilst [creating your cluster](create-cluster-and-database.md). 
+Use the following information to create a private endpoint while [creating your cluster](create-cluster-and-database.md). -1. In the **Create an Azure Data Explorer cluster** page, select the **Network** tab. +1. In **Create an Azure Data Explorer cluster**, select the **Network** tab. 1. Under **Connectivity method**, select **Private Endpoints**. 1. Under **Private Endpoint**, select **Add**. @@ -46,7 +46,7 @@ Use the following information to create a private endpoint whilst [creating your Use the following information to create a private endpoint on an existing cluster. -1. In the Azure portal, navigate to your cluster and then select **Networking**. +1. In the Azure portal, go to your cluster and then select **Networking**. 1. Select **Private endpoint connections**, and then select **+ Private endpoint**. @@ -56,7 +56,7 @@ Use the following information to create a private endpoint on an existing cluste ### Configure your private endpoint -1. On the **Basics** tab, fill out the basic cluster details with the following information, and then select on **Next**. +1. On the **Basics** tab, fill out the basic cluster details with the following information, and then select **Next**. :::image type="content" source="media/security-network-private-endpoint/pe-create-2.png" alt-text="Screenshot of the create private endpoint page, showing the basic information."::: @@ -80,11 +80,11 @@ Use the following information to create a private endpoint on an existing cluste | Target sub-resource | *cluster* | There's no other option | | | | | - Alternatively, you can select **Connect to an Azure resource by resource ID or alias**. This enables you to create a private endpoint to a cluster in another tenant or if you don't have at least **Reader** access on the resource. + Alternatively, you can select **Connect to an Azure resource by resource ID or alias**. 
This option enables you to create a private endpoint to a cluster in another tenant, or to a cluster where you don't have at least **Reader** access on the resource.

| **Setting** | **Suggested value** | **Field description** |
|---|---|---|
- | ResourceId or alias | /subscriptions/... | The resource ID or alias that someone has shared with you. The easiest way to get the resource ID is to navigate to the cluster in the Azure portal and copy the Resource ID from the **Properties** sections |
+ | ResourceId or alias | /subscriptions/... | The resource ID or alias that someone shared with you. The easiest way to get the resource ID is to go to the cluster in the Azure portal and copy the Resource ID from the **Properties** section |
| Target sub-resource | *cluster* | There's no other option |
| Request message | *Please approve* | The resource owner sees this message while managing private endpoint connection |
| | | |
@@ -93,12 +93,12 @@ Use the following information to create a private endpoint on an existing cluste

1. Under **Private IP configuration**, select **Dynamically allocate IP address**.

> [!NOTE]
- > The **Statically allocate IP address** option is not supported.
+ > The **Statically allocate IP address** option isn't supported.

1. Under **Private DNS integration**, turn on the **Integrate with the private DNS zone**. It's needed to resolve the engine and data management endpoints including the storage accounts required for ingestion and export features.

> [!NOTE]
- > We recommend that you use the **Private DNS integration** option. If you have a situation where you can't use the option, follow the instructions under [Use a custom DNS server](#use-a-custom-dns-server).
+ > Use the **Private DNS integration** option. If you have a situation where you can't use the option, see [Use a custom DNS server](#use-a-custom-dns-server).

1. Select **Next**.
@@ -112,15 +112,15 @@ Use the following information to create a private endpoint on an existing cluste

### Verify the private endpoint creation

-Once the creation of the private endpoint is complete, you'll be able to access it in the Azure portal.
+After the private endpoint is created, you can access it in the Azure portal.

:::image type="content" source="media/security-network-private-endpoint/pe-create-6.png" alt-text="Screenshot of the create private endpoint page, showing the results of the private endpoint creation.":::

-To see all the private endpoints created for your cluster:
+To see all the private endpoints that you created for your cluster:

-1. In the Azure portal, navigate to your cluster and then select **Networking**
+1. In the Azure portal, go to your cluster and select **Networking**.

-1. Select **Private endpoint**. In the table, you can see all private endpoints created for your cluster.
+1. Select **Private endpoint**. In the table, you can see all private endpoints for your cluster.

:::image type="content" source="media/security-network-private-endpoint/pe-create-7.png" alt-text="Screenshot of the networking page, showing the all private endpoints of the cluster in the Azure portal.":::

@@ -128,16 +128,16 @@ To see all the private endpoints created for your cluster:

### Use a custom DNS server

-In some situations, you may not be able to integrate with the private DNS zone of the virtual network. For example, you may be using your own DNS server or you create DNS records using the host files on your virtual machines. This section describes how to get to the DNS zones.
+In some situations, you can't integrate with the private DNS zone of the virtual network. For example, you might be using your own DNS server, or you might create DNS records by using the host files on your virtual machines. This section describes how to get the required DNS zones.

-1. Install [choco](https://chocolatey.org/install)
-1. Install *ARMClient*

+1.
Install [choco](https://chocolatey.org/install).
+1. Install *ARMClient*.

```powerShell
choco install armclient
```

-1. Log in with ARMClient
+1. Sign in by using *ARMClient*.

```powerShell
armclient login
@@ -150,7 +150,7 @@ In some situations, you may not be able to integrate with the private DNS zone o
armclient GET /subscriptions//resourceGroups//providers/Microsoft.Kusto/clusters//privateLinkResources?api-version=2022-02-01
```

-1. Check the response. The required DNS zones are in the "requiredZoneNames" array in the response of the result.
+1. Check the response. The required DNS zones are in the `requiredZoneNames` array in the response.

```json
{
@@ -185,12 +185,12 @@ In some situations, you may not be able to integrate with the private DNS zone o
}
```

-1. in the Azure portal, navigate to your private endpoint, and **select DNS configuration**. On this page, you can get the required information for the IP address mapping to the DNS name.
+1. In the Azure portal, go to your private endpoint, and then select **DNS configuration**. On this page, you can get the required information for the IP address mapping to the DNS name.

:::image type="content" source="media/security-network-private-endpoint/pe-dns-config-inline.png" alt-text="Screenshot of the DNS configuration page, showing the DNS configuration of the private endpoint." lightbox="media/security-network-private-endpoint/pe-dns-config.png":::

> [!WARNING]
- > This information allows you to propagate your custom DNS server with the necessary records. We highly recommend that you integrate with the private DNS Zones of the virtual network and don't configure your own custom DNS server. The nature of private endpoints for Azure Data Explorer clusters is different than for other Azure PaaS services.
In some situations, such as high ingestion loads, in order to increase throughput it might be necessary for the service to scale out the number of storage accounts that are accessible via the private endpoint. If you choose to propagate your own custom DNS server, it is your responsibility to take care of updating the DNS records in such situations, and later removing records i the number of storage accounts is scaled back in.
+ > This information allows you to propagate your custom DNS server with the necessary records. Integrate with the private DNS zones of the virtual network and don't configure your own custom DNS server. The nature of private endpoints for Azure Data Explorer clusters is different than for other Azure PaaS services. In some situations, such as high ingestion loads, the service might need to scale out the number of storage accounts that are accessible through the private endpoint to increase throughput. If you choose to propagate your own custom DNS server, you're responsible for updating the DNS records in such situations, and later removing records if the number of storage accounts scales back.

## Related content

diff --git a/data-explorer/security.md b/data-explorer/security.md
index 43af2a0712..cf179da223 100644
--- a/data-explorer/security.md
+++ b/data-explorer/security.md
@@ -1,22 +1,22 @@
---
-title: Secure Azure Data Explorer clusters in Azure
+title: Secure Azure Data Explorer Clusters in Azure
description: Learn about how to secure clusters in Azure Data Explorer.
ms.reviewer: itsagui
ms.topic: concept-article
-ms.date: 12/26/2023
+ms.date: 03/15/2026
---

# Security in Azure Data Explorer

-This article provides an introduction to security in Azure Data Explorer to help you protect your data and resources in the cloud and meet the security needs of your business. It's important to keep your clusters secure. Securing your clusters includes one or more Azure features that include secure access and storage.
This article provides information to help you keep your cluster secure.
+This article provides an introduction to security in Azure Data Explorer to help you protect your data and resources in the cloud and meet the security needs of your business. It's important to keep your clusters secure. Securing your clusters involves one or more Azure features that provide secure access and storage, and this article describes how they help keep your cluster secure.

For more resources regarding compliance for your business or organization, see the [Azure compliance documentation](/azure/compliance).

## Network security

-Network security is a requirement shared by many of our security-conscious enterprise customers. The intent is to isolate the network traffic and limit the attack surface for Azure Data Explorer and corresponding communications. You can therefore block traffic originating from non-Azure Data Explorer network segments and assure that only traffic from known sources reach Azure Data Explorer end points. This includes traffic originating on-premises or outside of Azure, with an Azure destination and vice versa.
+Network security is a requirement shared by many security-conscious enterprise customers. The intent is to isolate the network traffic and limit the attack surface for Azure Data Explorer and corresponding communications. You can block traffic originating from non-Azure Data Explorer network segments and ensure that only traffic from known sources reaches Azure Data Explorer endpoints. This protection includes traffic originating on-premises or outside of Azure, with an Azure destination and vice versa.

-Azure Data Explorer supports private endpoints to achieve network isolation and security. Private endpoints provide a secure way to connect to your Azure Data Explorer cluster by using a private IP address from your virtual network, effectively bringing the service into your VNet.
This ensures that traffic between your VNet and the service travels over the Microsoft backbone network, eliminating exposure from the public internet. +Azure Data Explorer supports private endpoints to achieve network isolation and security. Private endpoints provide a secure way to connect to your Azure Data Explorer cluster by using a private IP address from your virtual network, effectively bringing the service into your VNet. This configuration ensures that traffic between your VNet and the service travels over the Microsoft backbone network, eliminating exposure from the public internet. For more information about configuring private endpoints for your cluster, see [Private endpoint](security-network-overview.md#private-endpoint). @@ -24,52 +24,52 @@ For more information about configuring private endpoints for your cluster, see [ ### Role-based access control -Use [role-based access control (RBAC)](/azure/role-based-access-control/overview) to segregate duties and grant only the required access to cluster users. Instead of giving everybody unrestricted permissions on the cluster, you can allow only users assigned to specific roles to perform certain actions. You can configure [access control for the databases](manage-database-permissions.md) in the [Azure portal](/azure/role-based-access-control/role-assignments-portal), using the [Azure CLI](/azure/role-based-access-control/role-assignments-cli), or [Azure PowerShell](/azure/role-based-access-control/role-assignments-powershell). +Use [role-based access control (RBAC)](/azure/role-based-access-control/overview) to segregate duties and grant only the required access to cluster users. Instead of giving everyone unrestricted permissions on the cluster, allow only users assigned to specific roles to perform certain actions. 
You can configure [access control for the databases](manage-database-permissions.md) in the [Azure portal](/azure/role-based-access-control/role-assignments-portal), by using the [Azure CLI](/azure/role-based-access-control/role-assignments-cli), or [Azure PowerShell](/azure/role-based-access-control/role-assignments-powershell). ### Managed identities for Azure resources -A common challenge when building cloud applications is credentials management in your code for authenticating to cloud services. Keeping the credentials secure is an important task. The credentials shouldn't be stored in developer workstations or checked into source control. Azure Key Vault provides a way to securely store credentials, secrets, and other keys, but your code has to authenticate to Key Vault to retrieve them. +A common challenge when building cloud applications is credentials management in your code for authenticating to cloud services. Keeping the credentials secure is an important task. You shouldn't store the credentials on developer workstations or check them into source control. Azure Key Vault provides a way to securely store credentials, secrets, and other keys, but your code has to authenticate to Key Vault to retrieve them. -The Microsoft Entra managed identities for Azure resources feature solves this problem. The feature provides Azure services with an automatically managed identity in Microsoft Entra ID. You can use the identity to authenticate to any service that supports Microsoft Entra authentication, including Key Vault, without any credentials in your code. For more information about this service, see [managed identities for Azure resources](/azure/active-directory/managed-identities-azure-resources/overview) overview page. +The Microsoft Entra managed identities for Azure resources feature solves this problem. The feature provides Azure services with an automatically managed identity in Microsoft Entra ID. 
You can use the identity to authenticate to any service that supports Microsoft Entra authentication, including Key Vault, without any credentials in your code. For more information about this service, see the [managed identities for Azure resources](/azure/active-directory/managed-identities-azure-resources/overview) overview page. ## Data protection ### Azure disk encryption -[Azure Disk Encryption](/azure/security/azure-security-disk-encryption-overview) helps protect and safeguard your data to meet your organizational security and compliance commitments. It provides volume encryption for the OS and data disks of your cluster's virtual machines. Azure Disk Encryption also integrates with [Azure Key Vault](/azure/key-vault/), which allows us to control and manage the disk encryption keys and secrets, and ensure all data on the VM disks is encrypted. +[Azure Disk Encryption](/azure/security/azure-security-disk-encryption-overview) helps protect and safeguard your data so you can meet your organizational security and compliance commitments. It provides volume encryption for the OS and data disks of your cluster's virtual machines. Azure Disk Encryption also integrates with [Azure Key Vault](/azure/key-vault/), which you can use to control and manage the disk encryption keys and secrets, and ensure all data on the VM disks is encrypted. ### Customer-managed keys with Azure Key Vault -By default, data is encrypted with Microsoft-managed keys. For additional control over encryption keys, you can supply customer-managed keys to use for data encryption. You can manage encryption of your data at the storage level with your own keys. A customer-managed key is used to protect and control access to the root encryption key, which is used to encrypt and decrypt all data. Customer-managed keys offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data. 
+By default, Microsoft-managed keys encrypt data. For extra control over encryption keys, provide customer-managed keys for data encryption. You can manage the encryption of your data at the storage level by using your own keys. A customer-managed key protects and controls access to the root encryption key, which encrypts and decrypts all data. Customer-managed keys give you greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys that protect your data. -Use Azure Key Vault to store your customer-managed keys. You can create your own keys and store them in a key vault, or you can use an Azure Key Vault API to generate keys. The Azure Data Explorer cluster and the Azure Key Vault must be in the same region, but they can be in different subscriptions. For more information about Azure Key Vault, see [What is Azure Key Vault?](/azure/key-vault/key-vault-overview). For a detailed explanation on customer-managed keys, see [Customer-managed keys with Azure Key Vault](/azure/storage/common/storage-service-encryption). Configure customer-managed keys in your Azure Data Explorer cluster using the [Portal](customer-managed-keys.md?tabs=portal), [C#](customer-managed-keys.md?tabs=csharp), [Azure Resource Manager template](customer-managed-keys.md?tabs=arm), [CLI](customer-managed-keys.md?tabs=azcli), or the [PowerShell](customer-managed-keys.md?tabs=powershell). +Use Azure Key Vault to store your customer-managed keys. You can create your own keys and store them in a key vault, or you can use an Azure Key Vault API to generate keys. The Azure Data Explorer cluster and the Azure Key Vault must be in the same region, but they can be in different subscriptions. For more information about Azure Key Vault, see [What is Azure Key Vault?](/azure/key-vault/key-vault-overview). 
For a detailed explanation about customer-managed keys, see [Customer-managed keys with Azure Key Vault](/azure/storage/common/storage-service-encryption). Configure customer-managed keys in your Azure Data Explorer cluster by using the [Portal](customer-managed-keys.md?tabs=portal), [C#](customer-managed-keys.md?tabs=csharp), [Azure Resource Manager template](customer-managed-keys.md?tabs=arm), [CLI](customer-managed-keys.md?tabs=azcli), or the [PowerShell](customer-managed-keys.md?tabs=powershell). > [!NOTE] > Customer-managed keys rely on managed identities for Azure resources, a feature of Microsoft Entra ID. To configure customer-managed keys in the Azure portal, configure a managed identity to your cluster as described in [Configure managed identities for your Azure Data Explorer cluster](configure-managed-identities-cluster.md). #### Store customer-managed keys in Azure Key Vault -To enable customer-managed keys on a cluster, use an Azure Key Vault to store your keys. You must enable both the **Soft Delete** and **Do Not Purge** properties on the key vault. The key vault must be located in the same region as the cluster. Azure Data Explorer uses managed identities for Azure resources to authenticate to the key vault for encryption and decryption operations. Managed identities don't support cross-directory scenarios. +To enable customer-managed keys on a cluster, use an Azure Key Vault to store your keys. You must enable both the **Soft Delete** and **Do Not Purge** properties on the key vault. The key vault must be in the same region as the cluster. Azure Data Explorer uses managed identities for Azure resources to authenticate to the key vault for encryption and decryption operations. Managed identities don't support cross-directory scenarios. ##### Rotate customer-managed keys -You can rotate a customer-managed key in Azure Key Vault according to your compliance policies. 
To rotate a key, in Azure Key Vault, update the key version or create a new key, and then update the cluster to encrypt data using the new key URI. You can do these steps using the Azure CLI or in the portal. Rotating the key doesn't trigger re-encryption of existing data in the cluster. +You can rotate a customer-managed key in Azure Key Vault according to your compliance policies. To rotate a key, update the key version or create a new key in Azure Key Vault, and then update the cluster to encrypt data by using the new key URI. You can do these steps by using the Azure CLI or in the portal. Rotating the key doesn't trigger re-encryption of existing data in the cluster. -When rotating a key, typically you specify the same identity used when creating the cluster. Optionally, configure a new user-assigned identity for key access, or enable and specify the cluster's system-assigned identity. +When you rotate a key, typically you specify the same identity used when creating the cluster. Optionally, configure a new user-assigned identity for key access, or enable and specify the cluster's system-assigned identity. > [!NOTE] > Ensure that the required **Get**, **Unwrap Key**, and **Wrap Key** permissions are set for the identity you configure for key access. ##### Update key version -A common scenario is to update the version of the key used as a customer-managed key. Depending on how the cluster encryption is configured, the customer-managed key in the cluster is automatically updated, or must be manually updated. +A common scenario is to update the version of the key used as a customer-managed key. Depending on how you configure cluster encryption, the customer-managed key in the cluster is automatically updated or you must manually update it. ##### Revoke access to customer-managed keys To revoke access to customer-managed keys, use PowerShell or Azure CLI. 
For more information, see [Azure Key Vault PowerShell](/powershell/module/az.keyvault/) or [Azure Key Vault CLI](/cli/azure/keyvault). Revoking access blocks access to all data in the cluster's storage level, since the encryption key is consequently inaccessible by Azure Data Explorer. -> [!Note] -> When Azure Data Explorer identifies that access to a customer-managed key is revoked, it will automatically suspend the cluster to delete any cached data. Once access to the key is returned, the cluster will be resumed automatically. +> [!NOTE] +> When Azure Data Explorer identifies that access to a customer-managed key is revoked, it automatically suspends the cluster to delete any cached data. Once access to the key is returned, the cluster automatically resumes. ## Related content