diff --git a/TOC-tidb-cloud-premium.md b/TOC-tidb-cloud-premium.md
index 64bae50b8cb26..e3571c5d7b8b3 100644
--- a/TOC-tidb-cloud-premium.md
+++ b/TOC-tidb-cloud-premium.md
@@ -202,6 +202,8 @@
- Migrate or Import Data
- [Overview](/tidb-cloud/tidb-cloud-migration-overview.md)
- Migrate Data into TiDB Cloud
+ - [Migrate Existing and Incremental Data Using Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md)
+ - [Migrate Incremental Data Using Data Migration](/tidb-cloud/migrate-incremental-data-from-mysql-using-data-migration.md)
- [Migrate from TiDB Self-Managed to TiDB Cloud Premium](/tidb-cloud/premium/migrate-from-op-tidb-premium.md)
- [Migrate and Merge MySQL Shards of Large Datasets](/tidb-cloud/migrate-sql-shards.md)
- [Migrate from Amazon RDS for Oracle Using AWS DMS](/tidb-cloud/migrate-from-oracle-using-aws-dms.md)
diff --git a/tidb-cloud/migrate-from-mysql-using-data-migration.md b/tidb-cloud/migrate-from-mysql-using-data-migration.md
index 3ad766aca2a85..f86d8baede181 100644
--- a/tidb-cloud/migrate-from-mysql-using-data-migration.md
+++ b/tidb-cloud/migrate-from-mysql-using-data-migration.md
@@ -6,7 +6,7 @@ aliases: ['/tidbcloud/migrate-data-into-tidb','/tidbcloud/migrate-incremental-da
# Migrate MySQL-Compatible Databases to TiDB Cloud Using Data Migration
-This document guides you through migrating your MySQL databases from Amazon Aurora MySQL, Amazon RDS, Azure Database for MySQL - Flexible Server, Google Cloud SQL for MySQL, or self-managed MySQL instances to {{{ .dedicated }}}{{{ .essential }}} using the Data Migration feature in the [TiDB Cloud console](https://tidbcloud.com/).
+This document guides you through migrating your MySQL databases from Amazon Aurora MySQL, Amazon RDS, Azure Database for MySQL - Flexible Server, Google Cloud SQL for MySQL, or self-managed MySQL instances to {{{ .dedicated }}}{{{ .essential }}}{{{ .premium }}} using the Data Migration feature in the [TiDB Cloud console](https://tidbcloud.com/).
@@ -16,6 +16,14 @@ This document guides you through migrating your MySQL databases from Amazon Auro
+
+
+> **Note:**
+>
+> Currently, the Data Migration feature is in public preview for {{{ .premium }}}.
+
+
+
This feature enables you to migrate your existing MySQL data and continuously replicate ongoing changes (binlog) from your MySQL-compatible source databases directly to TiDB Cloud, maintaining data consistency whether in the same region or across different regions. The streamlined process eliminates the need for separate dump and load operations, reducing downtime and simplifying your migration from MySQL to a more scalable platform.
If you only want to replicate ongoing binlog changes from your MySQL-compatible database to TiDB Cloud, see [Migrate Incremental Data from MySQL-Compatible Databases to TiDB Cloud Using Data Migration](/tidb-cloud/migrate-incremental-data-from-mysql-using-data-migration.md).
@@ -34,6 +42,17 @@ If you only want to replicate ongoing binlog changes from your MySQL-compatible
- Amazon Aurora MySQL writer instances support both existing data and incremental data migration. Amazon Aurora MySQL reader instances only support existing data migration and do not support incremental data migration.
+
+
+- The Data Migration feature for {{{ .premium }}} is in public preview.
+
+ - You cannot save or reuse source connection details across migration jobs.
+ - During public preview, additional restrictions might apply to migration jobs as the feature matures. For more information, contact [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md).
+
+
+
+
+
### Maximum number of migration jobs
@@ -47,6 +66,8 @@ You can create up to 100 migration jobs on {{{ .essential }}} instances for each
+
+
### Filtered out and deleted databases
- The system databases will be filtered out and not migrated to TiDB Cloud even if you select all of the databases to migrate. That is, `mysql`, `information_schema`, `performance_schema`, and `sys` will not be migrated using this feature.
@@ -57,8 +78,6 @@ You can create up to 100 migration jobs on {{{ .essential }}} instances for each
-
-
### Limitations of Alibaba Cloud RDS
When using Alibaba Cloud RDS as a data source, every table must have an explicit primary key. For tables without one, RDS appends a hidden primary key to the binlog, which leads to a schema mismatch with the source table and causes the migration to fail.
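+
+To find affected tables before you start, you can run a query like the following against the source. This is a hedged sketch; `<source-host>`, `<user>`, and `<your_db>` are placeholders for your own connection details and schema:
+
+```shell
+# List base tables in <your_db> that have no explicit primary key.
+# Any table returned here needs a primary key added before migration.
+mysql -h <source-host> -P 3306 -u <user> -p -e "
+  SELECT t.table_schema, t.table_name
+  FROM information_schema.tables t
+  LEFT JOIN information_schema.table_constraints c
+    ON  c.table_schema = t.table_schema
+    AND c.table_name   = t.table_name
+    AND c.constraint_type = 'PRIMARY KEY'
+  WHERE t.table_schema = '<your_db>'
+    AND t.table_type   = 'BASE TABLE'
+    AND c.constraint_name IS NULL;"
+```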
@@ -69,8 +88,6 @@ During full data migration, PolarDB-X schemas might contain keywords that are in
To prevent this, create the target tables in the downstream database before starting the migration process.
-
-
### Limitations of existing data migration
- During existing data migration, if the target database already contains the table to be migrated and there are duplicate keys, the rows with duplicate keys will be replaced.
@@ -86,18 +103,30 @@ To prevent this, create the target tables in the downstream database before star
+
+
+- For {{{ .premium }}}, both logical mode (default) and physical mode are supported. Logical mode exports data from MySQL source databases as SQL statements and then executes them on the target {{{ .premium }}} instance, which consumes Request Capacity Units (RCUs) during the load. Physical mode uses `IMPORT INTO` on the target {{{ .premium }}} instance and is recommended for large datasets when load throughput and cost are priorities.
+- When you use physical mode and the migration job has started, do **NOT** enable PITR (Point-in-time Recovery) or have any changefeed on the {{{ .premium }}} instance. Otherwise, the migration job stops. If you need to enable PITR or have any changefeed, use logical mode instead to migrate data.
+- When you use physical mode, you cannot create a second migration job or import task for the {{{ .premium }}} instance before the existing data migration is completed.
+
+
+
### Limitations of incremental data migration
-- During incremental data migration, if the table to be migrated already exists in the target database with duplicate keys, an error is reported and the migration is interrupted. In this situation, you need to verify that the MySQL source data is accurate. If it is accurate, click the **Restart** button of the migration job, and the migration job will replace the conflicting records in the target {{{ .dedicated }}} cluster{{{ .essential }}} instance with the MySQL source records.
+
+
+- During incremental data migration, if the table to be migrated already exists in the target database with duplicate keys, an error is reported and the migration is interrupted. In this situation, you need to verify that the MySQL source data is accurate. If it is accurate, click the **Restart** button of the migration job, and the migration job will replace the conflicting records in the target {{{ .dedicated }}} cluster with the MySQL source records.
+
-- During incremental data migration (migrating ongoing changes to your {{{ .essential }}} instance), if the migration job recovers from an abrupt error, it might open the safe mode for 60 seconds. During the safe mode, `INSERT` statements are migrated as `REPLACE`, `UPDATE` statements as `DELETE` and `REPLACE`, and then these transactions are migrated to the target {{{ .essential }}} instance to ensure that all the data during the abrupt error has been migrated smoothly to the target {{{ .essential }}} instance. In this scenario, for MySQL source tables without primary keys or non-null unique indexes, some data might be duplicated in the target {{{ .essential }}} instance because the data might be inserted repeatedly into the target {{{ .essential }}} instance.
+- During incremental data migration, if the table to be migrated already exists in the target database with duplicate keys, an error is reported and the migration is interrupted. In this situation, you need to verify that the MySQL source data is accurate. If it is accurate, click the **Restart** button of the migration job, and the migration job will replace the conflicting records in the target {{{ .essential }}} instance with the MySQL source records.
+- During incremental data migration (migrating ongoing changes to your {{{ .essential }}} instance), if the migration job recovers from an abrupt error, it might enter safe mode for 60 seconds. During safe mode, TiDB Cloud migrates `INSERT` statements as `REPLACE` and `UPDATE` statements as `DELETE` and `REPLACE`, and then applies these transactions to the target {{{ .essential }}} instance so that all data during the abrupt error reaches the target safely. For source tables without primary keys or non-null unique indexes, this can result in duplicated rows on the target {{{ .essential }}} instance.
-- During incremental data migration (migrating ongoing changes to your {{{ .dedicated }}} cluster), if the migration job recovers from an abrupt error, it might open the safe mode for 60 seconds. During the safe mode, `INSERT` statements are migrated as `REPLACE`, `UPDATE` statements as `DELETE` and `REPLACE`, and then these transactions are migrated to the target {{{ .dedicated }}} cluster to ensure that all the data during the abrupt error has been migrated smoothly to the target {{{ .dedicated }}} cluster. In this scenario, for MySQL source tables without primary keys or non-null unique indexes, some data might be duplicated in the target {{{ .dedicated }}} cluster because the data might be inserted repeatedly into the target {{{ .dedicated }}} cluster.
+- During incremental data migration (migrating ongoing changes to your {{{ .dedicated }}} cluster), if the migration job recovers from an abrupt error, it might enter safe mode for 60 seconds. During safe mode, TiDB Cloud migrates `INSERT` statements as `REPLACE` and `UPDATE` statements as `DELETE` and `REPLACE`, and then applies these transactions to the target {{{ .dedicated }}} cluster so that all data during the abrupt error reaches the target safely. For source tables without primary keys or non-null unique indexes, this can result in duplicated rows on the target {{{ .dedicated }}} cluster.
- In the following scenarios, if the migration job takes longer than 24 hours, do not purge binary logs in the source database. This allows Data Migration to get consecutive binary logs for incremental data migration:
@@ -106,9 +135,15 @@ To prevent this, create the target tables in the downstream database before star
+
+
+- During incremental data migration (migrating ongoing changes to your {{{ .premium }}} instance), if the migration job recovers from an abrupt error, it might enter safe mode for 60 seconds. During safe mode, TiDB Cloud migrates `INSERT` statements as `REPLACE` and `UPDATE` statements as `DELETE` and `REPLACE`, and then applies these transactions to the target {{{ .premium }}} instance so that all data during the abrupt error reaches the target safely. For source tables without primary keys or non-null unique indexes, this can result in duplicated rows on the target {{{ .premium }}} instance.
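+
+    The following hedged sketch illustrates the duplication mechanism with a hypothetical table and placeholder connection details. Without a primary key or non-null unique index, `REPLACE` cannot match an existing row, so re-applying the same statement inserts a second copy:
+
+    ```shell
+    # Illustration only: REPLACE on a table with no PRIMARY KEY or UNIQUE index
+    # behaves like INSERT, so a safe-mode redelivery duplicates the row.
+    mysql -h <target-host> -P 4000 -u <user> -p -e "
+      CREATE TABLE test.t (id INT, val VARCHAR(10));
+      REPLACE INTO test.t VALUES (1, 'a');
+      REPLACE INTO test.t VALUES (1, 'a');
+      SELECT COUNT(*) FROM test.t;"  # returns 2: the row was duplicated
+    ```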
+
+
+
## Prerequisites
-Before migrating, check whether your data source is supported, enable binary logging in your MySQL-compatible database, ensure network connectivity, and grant required privileges for both the source database and the target {{{ .dedicated }}} cluster{{{ .essential }}} instance database.
+Before migrating, check whether your data source is supported, enable binary logging in your MySQL-compatible database, ensure network connectivity, and grant required privileges for both the source database and the target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance database.
### Make sure your data source and version are supported
@@ -141,9 +176,24 @@ For {{{ .essential }}}, the Data Migration feature supports the following data s
+
+
+For {{{ .premium }}}, the Data Migration feature supports any MySQL-compatible source database, and **MySQL** is the only data source type available in the migration job wizard. For supported connection methods, see [Ensure network connectivity](#ensure-network-connectivity).
+
+| Data source | Supported versions |
+|:-------------------------------------------------|:-------------------|
+| Self-managed MySQL (on-premises or public cloud) | 8.0, 5.7 |
+| Amazon Aurora MySQL | 8.0, 5.7 |
+| Amazon RDS MySQL | 8.0, 5.7 |
+| Azure Database for MySQL - Flexible Server | 8.0, 5.7 |
+| Google Cloud SQL for MySQL | 8.0, 5.7 |
+| Alibaba Cloud RDS MySQL | 8.0, 5.7 |
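+
+Before creating a migration job, you can quickly confirm that the source reports a supported version. A minimal check, assuming `<source-host>` and `<user>` stand in for your own connection details:
+
+```shell
+# Confirm the source MySQL version is supported (5.7 or 8.0).
+mysql -h <source-host> -P 3306 -u <user> -p -e "SELECT VERSION();"
+```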
+
+
+
### Enable binary logs in the source MySQL-compatible database for replication
-To continuously replicate incremental changes from the source MySQL-compatible database to the target {{{ .dedicated }}} cluster{{{ .essential }}} instance using DM, you need the following configurations to enable binary logs in the source database:
+To continuously replicate incremental changes from the source MySQL-compatible database to the target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance using DM, you need the following configurations to enable binary logs in the source database:
| Configuration | Required value | Why |
|:---------------------------------|:---------------|:----|
@@ -255,7 +305,7 @@ For more information, see [Set instance parameters](https://www.alibabacloud.com
### Ensure network connectivity
-Before creating a migration job, you need to plan and set up proper network connectivity between your source MySQL instance, the TiDB Cloud Data Migration (DM) service, and your target {{{ .dedicated }}} cluster{{{ .essential }}} instance.
+Before creating a migration job, you need to plan and set up proper network connectivity between your source MySQL instance, the TiDB Cloud Data Migration (DM) service, and your target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance.
@@ -277,6 +327,16 @@ For {{{ .essential }}}, the available connection methods are as follows:
| Public endpoints or IP addresses | All cloud providers supported by TiDB Cloud | Quick proof-of-concept migrations, testing, or when private connectivity is unavailable |
| Private links or private endpoints | AWS and Alibaba Cloud only | Production workloads without exposing data to the public internet |
+
+
+
+For {{{ .premium }}}, the available connection methods are as follows:
+
+| Connection method | Availability | Recommended for |
+|:---------------------|:-------------|:----------------|
+| Public endpoints or IP addresses | All cloud providers supported by {{{ .premium }}} | Quick proof-of-concept migrations, testing, or when private connectivity is unavailable |
+| Private links | AWS only | Production workloads without exposing data to the public internet |
+
Choose a connection method that best fits your cloud provider, network topology, and security requirements, and then follow the setup instructions for that method.
@@ -335,23 +395,51 @@ If you use a provider-native private link or private endpoint, create a private
Set up AWS PrivateLink and Private Endpoint for the MySQL source database
-AWS does not support direct PrivateLink access to RDS or Aurora. Therefore, you need to create a Network Load Balancer (NLB) and publish it as an endpoint service associated with your source MySQL instance.
+AWS does not support direct PrivateLink access to RDS or Aurora. Therefore, you need to create a Network Load Balancer (NLB), publish it as an endpoint service associated with your source MySQL instance, and authorize TiDB Cloud's AWS principal to consume the service.
+
+1. In the [Amazon EC2 console](https://console.aws.amazon.com/ec2/), create an internal NLB with a TCP listener on port `3306` that forwards to a target group containing your database's private IP. Key configuration:
-1. In the [Amazon EC2 console](https://console.aws.amazon.com/ec2/), create an NLB in the same subnet(s) as your RDS or Aurora writer. Configure the NLB with a TCP listener on port `3306` that forwards traffic to the database endpoint.
+ - **Scheme**: **Internal**. The load balancer stays inside your VPC; only the endpoint service in the next step exposes it to TiDB Cloud.
+ - **VPC**: the same VPC as your RDS or Aurora instance. The form defaults to your account's default VPC, which is usually not where your database is deployed, so switch the **VPC** dropdown before continuing.
+ - **Availability Zones**: select subnets in **at least 2 Availability Zones**. An NLB requires multi-AZ for endpoint service availability. If your RDS is single-AZ, you still need a second subnet in a different AZ in the same VPC.
+ - **Listener port**: `3306`. The wizard default is `80`; change it before creating the listener.
+ - **Target group**: target type **IP addresses**, protocol **TCP**, port **3306**, in the same VPC as your database. RDS endpoints are not directly registerable, so you register the database's private IP.
+
+ To find your database's private IP, in the [Amazon EC2 console](https://console.aws.amazon.com/ec2/), click **Network Interfaces** in the left navigation pane, and filter by **Description** = `RDSNetworkInterface` and **VPC** = your VPC. Use the **Primary private IPv4 address** shown in the matching network interface.
+
+ > **Note:**
+ >
+ > The RDS private IP can change on failover, maintenance, or storage scaling. For production deployments, see [Access Amazon RDS across VPCs using AWS PrivateLink and Network Load Balancer](https://aws.amazon.com/blogs/database/access-amazon-rds-across-vpcs-using-aws-privatelink-and-network-load-balancer/) in the AWS Database Blog for an automated IP-rotation pattern.
For detailed instructions, see [Create a Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-network-load-balancer.html) in AWS documentation.
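+
+   If you prefer the command line, the following sketch performs the same network interface lookup with the AWS CLI. It assumes the CLI is configured for your database's region, and `<your-vpc-id>` is a placeholder:
+
+   ```shell
+   # Look up the database's current primary private IPv4 address.
+   aws ec2 describe-network-interfaces \
+     --filters "Name=description,Values=RDSNetworkInterface" "Name=vpc-id,Values=<your-vpc-id>" \
+     --query 'NetworkInterfaces[].PrivateIpAddress' --output text
+   ```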
-2. In the [Amazon VPC console](https://console.aws.amazon.com/vpc/), click **Endpoint Services** in the left navigation pane, and then create an endpoint service. During the setup, select the NLB created in the previous step as the backing load balancer, and enable the **Require acceptance for endpoint** option. After the endpoint service is created, copy the service name (in the `com.amazonaws.vpce-svc-xxxxxxxxxxxxxxxxx` format) for later use.
+2. In the [Amazon VPC console](https://console.aws.amazon.com/vpc/), click **Endpoint Services** in the left navigation pane, and click **Create endpoint service**. Configure the following:
+
+ - **Load balancer type**: **Network**, and select the NLB created in the previous step. If the **Available load balancers** list is empty, wait until the NLB shows the **Active** state and click the refresh icon next to the list.
+ - **Acceptance required**: enabled (this is the default).
+ - **Supported IP address types**: check **IPv4**.
+
+ After the endpoint service is created, copy the service name for later use. The service name is in the `com.amazonaws.vpce.<region>.vpce-svc-<endpoint-service-id>` format, for example, `com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0`.
For detailed instructions, see [Create an endpoint service](https://docs.aws.amazon.com/vpc/latest/privatelink/create-endpoint-service.html) in AWS documentation.
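+
+   If you need to look up the service name again later, the following hedged AWS CLI sketch lists the endpoint services in your account:
+
+   ```shell
+   # List the service names of your VPC endpoint services.
+   aws ec2 describe-vpc-endpoint-service-configurations \
+     --query 'ServiceConfigurations[].ServiceName' --output text
+   ```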
-3. Optional: Test connectivity from a bastion or client inside the same VPC or VNet before starting the migration:
+3. Authorize TiDB Cloud's AWS principal to use your endpoint service. On the endpoint service detail page in the [Amazon VPC console](https://console.aws.amazon.com/vpc/), open the **Allow principals** tab, click **Allow principals**, and add the following ARN:
+
+ ```text
+ arn:aws:iam::886436925895:root
+ ```
+
+ Without this step, TiDB Cloud cannot create the VPC endpoint that connects to your service, and the **Create Private Endpoint for External Services** dialog in TiDB Cloud hangs indefinitely with no error.
+
+ For detailed instructions, see [Manage permissions](https://docs.aws.amazon.com/vpc/latest/privatelink/configure-endpoint-service.html#add-remove-permissions) in AWS documentation.
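+
+   The same authorization can be done with the AWS CLI. A sketch, assuming `<vpce-svc-id>` is the ID of the endpoint service you created in the previous step:
+
+   ```shell
+   # Allow TiDB Cloud's AWS principal to connect to your endpoint service.
+   aws ec2 modify-vpc-endpoint-service-permissions \
+     --service-id <vpce-svc-id> \
+     --add-allowed-principals arn:aws:iam::886436925895:root
+   ```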
+
+4. Optional: Test connectivity from a bastion or client inside the same VPC or VNet before starting the migration:
```shell
mysql -h <host> -P 3306 -u <user> -p --ssl-ca=<path-to-ca.pem> -e "SELECT version();"
```
-4. Later, when configuring TiDB Cloud DM to connect via PrivateLink, you will need to return to the AWS console and approve the pending connection request from TiDB Cloud to this private endpoint.
+5. Later, when configuring TiDB Cloud DM to connect via PrivateLink, you will need to return to the AWS console and approve the pending connection request from TiDB Cloud to this private endpoint.
@@ -386,6 +474,79 @@ To add a new private endpoint, take the following steps:
If you use a provider-native private link or private endpoint, create a [Private Link Connection](/tidb-cloud/serverless-private-link-connection.md) for your source MySQL instance.
+
+
+
+For {{{ .premium }}} instances hosted on AWS, you can use AWS PrivateLink to connect to your source MySQL instance without exposing the database to the public internet. You can reuse a private endpoint across multiple Data Migration jobs and changefeeds on the same {{{ .premium }}} instance.
+
+
+ Set up AWS PrivateLink and Private Endpoint for the MySQL source database
+
+AWS does not support direct PrivateLink access to RDS or Aurora. Therefore, you need to create a Network Load Balancer (NLB), publish it as an endpoint service associated with your source MySQL instance, and authorize TiDB Cloud's AWS principal to consume the service.
+
+1. In the [Amazon EC2 console](https://console.aws.amazon.com/ec2/), create an internal NLB with a TCP listener on port `3306` that forwards to a target group containing your database's private IP. Key configuration:
+
+ - **Scheme**: **Internal**. The load balancer stays inside your VPC; only the endpoint service in the next step exposes it to TiDB Cloud.
+ - **VPC**: the same VPC as your RDS or Aurora instance. The form defaults to your account's default VPC, which is usually not where your database is deployed, so switch the **VPC** dropdown before continuing.
+ - **Availability Zones**: select subnets in **at least 2 Availability Zones**. An NLB requires multi-AZ for endpoint service availability. If your RDS is single-AZ, you still need a second subnet in a different AZ in the same VPC.
+ - **Listener port**: `3306`. The wizard default is `80`; change it before creating the listener.
+ - **Target group**: target type **IP addresses**, protocol **TCP**, port **3306**, in the same VPC as your database. RDS endpoints are not directly registerable, so you register the database's private IP.
+
+ To find your database's private IP, in the [Amazon EC2 console](https://console.aws.amazon.com/ec2/), click **Network Interfaces** in the left navigation pane, and filter by **Description** = `RDSNetworkInterface` and **VPC** = your VPC. Use the **Primary private IPv4 address** shown in the matching network interface.
+
+ > **Note:**
+ >
+ > The RDS private IP can change on failover, maintenance, or storage scaling. For production deployments, see [Access Amazon RDS across VPCs using AWS PrivateLink and Network Load Balancer](https://aws.amazon.com/blogs/database/access-amazon-rds-across-vpcs-using-aws-privatelink-and-network-load-balancer/) in the AWS Database Blog for an automated IP-rotation pattern.
+
+ For detailed instructions, see [Create a Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-network-load-balancer.html) in AWS documentation.
+
+2. In the [Amazon VPC console](https://console.aws.amazon.com/vpc/), click **Endpoint Services** in the left navigation pane, and click **Create endpoint service**. Configure the following:
+
+ - **Load balancer type**: **Network**, and select the NLB created in the previous step. If the **Available load balancers** list is empty, wait until the NLB shows the **Active** state and click the refresh icon next to the list.
+ - **Acceptance required**: enabled (this is the default).
+ - **Supported IP address types**: check **IPv4**.
+
+ After the endpoint service is created, copy the service name for later use. The service name is in the `com.amazonaws.vpce.<region>.vpce-svc-<endpoint-service-id>` format, for example, `com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0`.
+
+ For detailed instructions, see [Create an endpoint service](https://docs.aws.amazon.com/vpc/latest/privatelink/create-endpoint-service.html) in AWS documentation.
+
+3. Authorize TiDB Cloud's AWS principal to use your endpoint service. On the endpoint service detail page in the [Amazon VPC console](https://console.aws.amazon.com/vpc/), open the **Allow principals** tab, click **Allow principals**, and add the following ARN:
+
+ ```text
+ arn:aws:iam::886436925895:root
+ ```
+
+ Without this step, TiDB Cloud cannot create the VPC endpoint that connects to your service, and the **Create Private Endpoint for External Services** dialog in TiDB Cloud hangs indefinitely with no error.
+
+ For detailed instructions, see [Manage permissions](https://docs.aws.amazon.com/vpc/latest/privatelink/configure-endpoint-service.html#add-remove-permissions) in AWS documentation.
+
+4. Optional: Test connectivity from a bastion or client inside the same VPC or VNet before starting the migration:
+
+ ```shell
+ mysql -h <host> -P 3306 -u <user> -p --ssl-ca=<path-to-ca.pem> -e "SELECT version();"
+ ```
+
+5. Later, when configuring TiDB Cloud DM to connect via PrivateLink, you will need to return to the AWS console and approve the pending connection request from TiDB Cloud to this private endpoint.
+
+
+
+You can create the private endpoint either on the **Networking** page of your {{{ .premium }}} instance or during Data Migration job creation (see [Step 2](#step-2-configure-the-source-and-target-connections)).
+
+To create a private endpoint from the **Networking** page, take the following steps:
+
+1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the overview page of your {{{ .premium }}} instance.
+2. In the left navigation pane, click **Settings** > **Networking**.
+3. In the **AWS Private Endpoint for External Services** section, click **Create Private Endpoint for External Services**.
+4. In the **Create Private Endpoint for External Services** dialog, enter a name for the private endpoint and the **Endpoint Service Name** you copied when setting up AWS PrivateLink for the MySQL source database.
+
+ > **Note:**
+ >
+ > Before clicking **Create**, ensure that you have authorized TiDB Cloud's AWS principal (`arn:aws:iam::886436925895:root`) on your endpoint service in AWS, as described in Step 3 of **Set up AWS PrivateLink and Private Endpoint for the MySQL source database** above. Otherwise, this dialog hangs indefinitely with no error.
+
+5. Click **Create**.
+
+ After the private endpoint becomes available, you can select it when creating a Data Migration job.
+
@@ -399,7 +560,7 @@ If you use AWS VPC peering or Google Cloud VPC network peering, see the followin
If your MySQL service is in an AWS VPC, take the following steps:
-1. [Set up a VPC peering connection](/tidb-cloud/set-up-vpc-peering-connections.md) between the VPC of the MySQL service and your {{{ .dedicated }}} cluster{{{ .essential }}} instance.
+1. [Set up a VPC peering connection](/tidb-cloud/set-up-vpc-peering-connections.md) between the VPC of the MySQL service and your {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance.
2. Modify the inbound rules of the security group that the MySQL service is associated with.
@@ -451,7 +612,7 @@ If your MySQL service is in a Google Cloud VPC, take the following steps:
### Grant required privileges for migration
-Before starting migration, you need to set up appropriate database users with the required privileges on both the source and target databases. These privileges enable TiDB Cloud DM to read data from MySQL, replicate changes, and write to your {{{ .dedicated }}} cluster{{{ .essential }}} instance securely. Because the migration involves both full data dumps for existing data and binlog replication for incremental changes, your migration user requires specific permissions beyond basic read access.
+Before starting migration, you need to set up appropriate database users with the required privileges on both the source and target databases. These privileges enable TiDB Cloud DM to read data from MySQL, replicate changes, and write to your {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance securely. Because the migration involves both full data dumps for existing data and binlog replication for incremental changes, your migration user requires specific permissions beyond basic read access.
#### Grant required privileges to the migration user in the source MySQL database
@@ -477,11 +638,11 @@ GRANT SELECT, RELOAD, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'dm_source
GRANT SELECT, RELOAD, LOCK TABLES, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'dm_source_user'@'%';
```
-#### Grant required privileges in the target {{{ .dedicated }}} cluster{{{ .essential }}} instance
+#### Grant required privileges in the target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance
-For testing purposes, you can use the `root` account of your {{{ .dedicated }}} cluster{{{ .essential }}} instance.
+For testing purposes, you can use the `root` account of your {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance.
-For production workloads, it is recommended to have a dedicated user for replication in the target {{{ .dedicated }}} cluster{{{ .essential }}} instance and grant only the necessary privileges:
+For production workloads, it is recommended to have a dedicated user for replication in the target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance and grant only the necessary privileges:
| Privilege | Scope | Purpose |
|:----------|:------|:--------|
@@ -495,7 +656,7 @@ For production workloads, it is recommended to have a dedicated user for replica
| `INDEX` | Tables | Creates and modifies indexes |
| `CREATE VIEW` | Views | Creates views used by migration |
-For example, you can execute the following `GRANT` statement in your target {{{ .dedicated }}} cluster{{{ .essential }}} instance to grant corresponding privileges:
+For example, you can execute the following `GRANT` statement in your target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance to grant corresponding privileges:
```sql
GRANT CREATE, SELECT, INSERT, UPDATE, DELETE, ALTER, DROP, INDEX ON *.* TO 'dm_target_user'@'%';
@@ -505,7 +666,7 @@ GRANT CREATE, SELECT, INSERT, UPDATE, DELETE, ALTER, DROP, INDEX ON *.* TO 'dm_t
1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**My TiDB**](https://tidbcloud.com/tidbs) page.
-2. Click the name of your target {{{ .dedicated }}} cluster{{{ .essential }}} instance to go to its overview page, and then click **Data** > **Data Migration** in the left navigation pane.
+2. Click the name of your target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance to go to its overview page, and then click **Data** > **Data Migration** in the left navigation pane.
3. On the **Data Migration** page, click **Create Migration Job** in the upper-right corner. The **Create Migration Job** page is displayed.
@@ -535,6 +696,14 @@ On the **Create Migration Job** page, configure the source and target connection
- **Public**: available for all cloud providers (recommended for testing and proof-of-concept migrations).
- **Private Link**: available for AWS and Alibaba Cloud only (recommended for production workloads requiring private connectivity).
+
+
+
+ - **Connectivity method**: select a connection method for your data source based on your security requirements and cloud provider:
+
+ - **Public**: available for all cloud providers supported by {{{ .premium }}} (recommended for testing and proof-of-concept migrations).
+ - **Private Link**: available for AWS only (recommended for production workloads requiring private connectivity).
+
@@ -543,7 +712,7 @@ On the **Create Migration Job** page, configure the source and target connection
- If **Public IP** or **VPC Peering** is selected, fill in the **Hostname or IP address** field with the hostname or IP address of the data source.
- If **Private Link** is selected, fill in the following information:
- - **Endpoint Service Name** (available if **Data source** is from AWS): enter the VPC endpoint service name (format: `com.amazonaws.vpce-svc-xxxxxxxxxxxxxxxxx`) that you created for your RDS or Aurora instance.
+ - **Endpoint Service Name** (available if **Data source** is from AWS): enter the VPC endpoint service name (format: `com.amazonaws.vpce.<region>.vpce-svc-<endpoint-service-id>`, for example, `com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0`) that you created for your RDS or Aurora instance.
- **Private Endpoint Resource ID** (available if **Data source** is from Azure): enter the resource ID of your MySQL Flexible Server instance (format: `/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.DBforMySQL/flexibleServers/<server-name>`).
@@ -554,6 +723,14 @@ On the **Create Migration Job** page, configure the source and target connection
- If **Public** is selected, fill in the **Hostname or IP address** field with the hostname or IP address of the data source.
- If **Private Link** is selected, select the private link connection that you created in the [Private link or private endpoint](#private-link-or-private-endpoint) section.
+
+
+
+ - Based on the selected **Connectivity method**, do the following:
+
+ - If **Public** is selected, fill in the **Hostname or IP address** field with the hostname or IP address of the data source.
+ - If **Private Link** is selected, in the **Private Endpoint** field, select an existing private endpoint, or click **Create a Private Endpoint here** to create one. Private endpoints are managed under **Networking** > **Private Endpoint for External Services** for your {{{ .premium }}} instance. You can reuse a private endpoint across multiple Data Migration jobs and changefeeds. For setup details, see [Private link or private endpoint](#private-link-or-private-endpoint).
+
- **Port**: the port of the data source.
@@ -589,7 +766,7 @@ On the **Create Migration Job** page, configure the source and target connection
3. Fill in the target connection profile.
- - **User Name**: enter the username of the target {{{ .dedicated }}} cluster{{{ .essential }}} instance in TiDB Cloud.
+ - **User Name**: enter the username of the target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance in TiDB Cloud.
- **Password**: enter the password of the TiDB Cloud username.
4. Click **Validate Connection and Next** to validate the information you have entered.
@@ -600,7 +777,7 @@ On the **Create Migration Job** page, configure the source and target connection
- If you use **Public IP** or **VPC Peering** as the connectivity method, you need to add the Data Migration service's IP addresses to the IP Access List of your source database and firewall (if any).
- If you use **Private Link** as the connectivity method, you are prompted to accept the endpoint request:
- - For AWS: go to the [AWS VPC console](https://us-west-2.console.aws.amazon.com/vpc/home), click **Endpoint services**, and accept the endpoint request from TiDB Cloud.
+ - For AWS: in the [AWS VPC console](https://console.aws.amazon.com/vpc/home), switch to the AWS region where you created the endpoint service, click **Endpoint services**, and accept the endpoint request from TiDB Cloud.
- For Azure: go to the [Azure portal](https://portal.azure.com), search for your MySQL Flexible Server by name, click **Setting** > **Networking** in the left navigation pane, locate the **Private endpoint** section on the right side, and then approve the pending connection request from TiDB Cloud.
@@ -609,6 +786,12 @@ On the **Create Migration Job** page, configure the source and target connection
If you use Public IP, you need to add the Data Migration service's IP addresses to the IP Access List of your source database and firewall (if any).
+
+
+ - If you use **Public** as the connectivity method, you need to add the Data Migration service's IP addresses to the IP Access List of your source database and firewall (if any).
+ - If you use **Private Link** and the selected private endpoint has not yet been accepted in AWS, in the [AWS VPC console](https://console.aws.amazon.com/vpc/home), switch to the AWS region where you created the endpoint service, select **Endpoint services**, and accept the endpoint connection request from TiDB Cloud.
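+
+      Alternatively, you can accept the request with the AWS CLI. A sketch, assuming `<vpce-svc-id>` and `<vpce-id>` are the IDs of your endpoint service and the pending endpoint connection:
+
+      ```shell
+      # Accept the pending endpoint connection request from TiDB Cloud.
+      aws ec2 accept-vpc-endpoint-connections \
+        --service-id <vpce-svc-id> \
+        --vpc-endpoint-ids <vpce-id>
+      ```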
+
+
## Step 3: Choose migration job type
@@ -638,8 +821,8 @@ You can use **physical mode** or **logical mode** to migrate **existing data** a
> **Note:**
>
-> - When you use physical mode, you cannot create a second migration job or import task for the {{{ .dedicated }}} cluster{{{ .essential }}} instance before the existing data migration is completed.
-> - When you use physical mode and the migration job has started, do **NOT** enable PITR (Point-in-time Recovery) or have any changefeed on the {{{ .dedicated }}} cluster{{{ .essential }}} instance. Otherwise, the migration job will be stuck. If you need to enable PITR or have any changefeed, use logical mode instead to migrate data.
+> - When you use physical mode, you cannot create a second migration job or import task for the {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance before the existing data migration is completed.
+> - When you use physical mode and the migration job has started, do **NOT** enable PITR (Point-in-time Recovery) or have any changefeed on the {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance. Otherwise, the migration job stops. If you need to enable PITR or have any changefeed, use logical mode instead to migrate data.
Physical mode exports the MySQL source data as fast as possible, so [different specifications](/tidb-cloud/tidb-cloud-billing-dm.md#specifications-for-data-migration) have different performance impacts on QPS and TPS of the MySQL source database during data export. The following table shows the performance regression of each specification.
@@ -755,7 +938,7 @@ When scaling a migration job specification, note the following:
1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**My TiDB**](https://tidbcloud.com/tidbs) page.
-2. Click the name of your target {{{ .dedicated }}} cluster{{{ .essential }}} instance to go to its overview page, and then click **Data** > **Data Migration** in the left navigation pane.
+2. Click the name of your target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance to go to its overview page, and then click **Data** > **Data Migration** in the left navigation pane.
3. On the **Data Migration** page, locate the migration job you want to scale. In the **Action** column, click **...** > **Scale Up/Down**.
diff --git a/tidb-cloud/migrate-incremental-data-from-mysql-using-data-migration.md b/tidb-cloud/migrate-incremental-data-from-mysql-using-data-migration.md
index 5d0a3d1d10d19..03573453c2cee 100644
--- a/tidb-cloud/migrate-incremental-data-from-mysql-using-data-migration.md
+++ b/tidb-cloud/migrate-incremental-data-from-mysql-using-data-migration.md
@@ -5,7 +5,7 @@ summary: Learn how to migrate incremental data from MySQL-compatible databases h
# Migrate Only Incremental Data from MySQL-Compatible Databases to TiDB Cloud Using Data Migration
-This document describes how to migrate incremental data from a MySQL-compatible database on a cloud provider (Amazon Aurora MySQL, Amazon Relational Database Service (RDS), Google Cloud SQL for MySQL, Azure Database for MySQL, or Alibaba Cloud RDS) or self-hosted source database to {{{ .dedicated }}}{{{ .essential }}} using the Data Migration feature of the TiDB Cloud console.
+This document describes how to migrate incremental data from a MySQL-compatible database on a cloud provider (Amazon Aurora MySQL, Amazon Relational Database Service (RDS), Google Cloud SQL for MySQL, Azure Database for MySQL, or Alibaba Cloud RDS) or self-hosted source database to {{{ .dedicated }}}{{{ .essential }}}{{{ .premium }}} using the Data Migration feature of the TiDB Cloud console.
@@ -15,6 +15,14 @@ This document describes how to migrate incremental data from a MySQL-compatible
+
+
+> **Note:**
+>
+> Currently, the Data Migration feature is in public preview for {{{ .premium }}}.
+
+
+
For instructions about how to migrate existing data or both existing data and incremental data, see [Migrate MySQL-Compatible Databases to TiDB Cloud Using Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md).
## Limitations
@@ -148,7 +156,7 @@ To enable the GTID mode for a self-hosted MySQL instance, follow these steps:
>
> If you are in multiple organizations, use the combo box in the upper-left corner to switch to your target organization first.
-2. Click the name of your target {{{ .dedicated }}} cluster{{{ .essential }}} instance to go to its overview page, and then click **Data** > **Data Migration** in the left navigation pane.
+2. Click the name of your target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance to go to its overview page, and then click **Data** > **Data Migration** in the left navigation pane.
3. On the **Data Migration** page, click **Create Migration Job** in the upper-right corner. The **Create Migration Job** page is displayed.
@@ -162,7 +170,7 @@ On the **Create Migration Job** page, configure the source and target connection
- **Data source**: the data source type.
- **Region**: the region of the data source, which is required for cloud databases only.
- - **Connectivity method**: the connection method for the data source. Currently, you can choose public IP, VPC Peering, or Private Link according to your connection method.You can choose public IP or Private Link according to your connection method.
+ - **Connectivity method**: the connection method for the data source. Currently, you can choose public IP, VPC Peering, or Private Link according to your connection method. You can choose public IP or Private Link according to your connection method. You can choose Public or Private Link (AWS only) according to your connection method.
@@ -175,6 +183,12 @@ On the **Create Migration Job** page, configure the source and target connection
- **Hostname or IP address** (for public IP): the hostname or IP address of the data source.
- **Private Link Connection** (for Private Link): the private link connection that you created in the [Private Link Connections](/tidb-cloud/serverless-private-link-connection.md) section.
+
+
+
+ - **Hostname or IP address** (for Public): the hostname or IP address of the data source.
+ - **Private Endpoint** (for Private Link): the private endpoint that you created in **Networking** > **Private Endpoint for External Services** for your {{{ .premium }}} instance. Alternatively, click **Create a Private Endpoint here** to create one. For setup details, see the [Private link or private endpoint](/tidb-cloud/migrate-from-mysql-using-data-migration.md#private-link-or-private-endpoint) section in the Data Migration guide.
+
- **Port**: the port of the data source.
@@ -187,7 +201,7 @@ On the **Create Migration Job** page, configure the source and target connection
3. Fill in the target connection profile.
- - **Username**: enter the username of the target {{{ .dedicated }}} cluster{{{ .essential }}} instance.
+ - **Username**: enter the username of the target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance.
- **Password**: enter the password of the TiDB Cloud username.
4. Click **Validate Connection and Next** to validate the information you have entered.
@@ -197,7 +211,7 @@ On the **Create Migration Job** page, configure the source and target connection
- If you use Public IP or VPC Peering, you need to add the Data Migration service's IP addresses to the IP Access List of your source database and firewall (if any).
- - If you use AWS Private Link, you are prompted to accept the endpoint request. Go to the [AWS VPC console](https://us-west-2.console.aws.amazon.com/vpc/home), and click **Endpoint services** to accept the endpoint request.
+ - If you use AWS Private Link, you are prompted to accept the endpoint request. In the [AWS VPC console](https://console.aws.amazon.com/vpc/home), switch to the AWS region where you created the endpoint service, and click **Endpoint services** to accept the endpoint request.
@@ -206,6 +220,13 @@ On the **Create Migration Job** page, configure the source and target connection
+
+
+ - If you use **Public** as the connectivity method, you need to add the Data Migration service's IP addresses to the IP Access List of your source database and firewall (if any).
+ - If you use **Private Link** and the selected private endpoint has not yet been accepted in AWS, in the [AWS VPC console](https://console.aws.amazon.com/vpc/home), switch to the AWS region where you created the endpoint service, click **Endpoint services**, and accept the endpoint connection request from TiDB Cloud.
+
+
+
## Step 3: Choose migration job type
To migrate only the incremental data of the source database to TiDB Cloud, select **Incremental data migration** and do not select **Existing data migration**. In this way, the migration job only migrates ongoing changes of the source database to TiDB Cloud.