From 5eb1a46fd8f9d70f0a6550663cccb6d317ce5db2 Mon Sep 17 00:00:00 2001 From: huoyao1125 <90880576+huoyao1125@users.noreply.github.com> Date: Fri, 20 Mar 2026 17:27:47 +0800 Subject: [PATCH 01/10] Create prometheus-grafana-integration-premium --- .../prometheus-grafana-integration-premium | 148 ++++++++++++++++++ 1 file changed, 148 insertions(+) create mode 100644 tidb-cloud/premium/prometheus-grafana-integration-premium diff --git a/tidb-cloud/premium/prometheus-grafana-integration-premium b/tidb-cloud/premium/prometheus-grafana-integration-premium new file mode 100644 index 0000000000000..49c61ddb418c1 --- /dev/null +++ b/tidb-cloud/premium/prometheus-grafana-integration-premium @@ -0,0 +1,148 @@ +--- +title: Integrate TiDB Cloud with Prometheus and Grafana +summary: Learn how to monitor your TiDB cluster with the Prometheus and Grafana integration. +--- + +# Integrate TiDB Cloud with Prometheus and Grafana + +TiDB Cloud provides a [Prometheus](https://prometheus.io/) API endpoint. If you have a Prometheus service, you can monitor key metrics of TiDB Cloud from the endpoint easily. + +This document describes how to configure your Prometheus service to read key metrics from the TiDB Cloud endpoint and how to view the metrics using [Grafana](https://grafana.com/). + +## Prometheus integration versions + +TiDB Cloud has supported the project-level Prometheus integration (Beta) since March 15, 2022. Starting from October 21, 2025, TiDB Cloud introduces the cluster-level Prometheus integration (Preview). Starting from December 2, 2025, the cluster-level Prometheus integration becomes generally available (GA). + +- **Cluster-level Prometheus integration**: if no legacy project-level Prometheus integration remains undeleted within your organization by October 21, 2025, TiDB Cloud provides the cluster-level Prometheus integration for your organization to experience the latest enhancements. 
+ +- **Legacy project-level Prometheus integration (Beta)**: if at least one legacy project-level Prometheus integration remains undeleted within your organization by October 21, 2025, TiDB Cloud retains both existing and new integrations at the project level for your organization to avoid affecting current dashboards. + + > **Note** + > + > The legacy project-level Prometheus integrations will be deprecated on January 9, 2026. If your organization is still using these legacy integrations, follow [Migrate Prometheus Integrations](/tidb-cloud/migrate-prometheus-metrics-integrations.md) to migrate to the new cluster-level integrations and minimize disruptions to your metrics-related services. + +## Prerequisites + +- To integrate TiDB Cloud with Prometheus, you must have a self-hosted or managed Prometheus service. + +- To set up third-party metrics integration for TiDB Cloud, you must have the `Organization Owner` or `Project Owner` access in TiDB Cloud. To view the integration page, you need at least the `Project Viewer` role to access the target clusters under your project in TiDB Cloud. + +## Limitation + +- Prometheus and Grafana integrations now are only available for [TiDB Cloud Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-cloud-dedicated) clusters. +- Prometheus and Grafana integrations are not available when the cluster status is **CREATING**, **RESTORING**, **PAUSED**, or **RESUMING**. + +## Steps + +### Step 1. Get a scrape_config file for Prometheus + +Before configuring your Prometheus service to read metrics of TiDB Cloud, you need to generate a `scrape_config` YAML file in TiDB Cloud first. The `scrape_config` file contains a unique bearer token that allows the Prometheus service to monitor your target clusters. + +Depending on your [Prometheus integration version](#prometheus-integration-versions), the steps to get the `scrape_config` file for Prometheus and access the integration page are different. + + +
+ +1. In the [TiDB Cloud console](https://tidbcloud.com/), navigate to the [**Clusters**](https://tidbcloud.com/project/clusters) page of your project, and then click the name of your target cluster to go to its overview page. +2. In the left navigation pane, click **Settings** > **Integrations**. +3. On the **Integrations** page, click **Integration to Prometheus**. +4. Click **Add File** to generate and show the `scrape_config` file for the current cluster. +5. Make a copy of the `scrape_config` file content for later use. + +
+
+ +1. In the [TiDB Cloud console](https://tidbcloud.com), switch to your target project using the combo box in the upper-left corner. +2. In the left navigation pane, click **Project Settings** > **Integrations**. +3. On the **Integrations** page, click **Integration to Prometheus (BETA)**. +4. Click **Add File** to generate and show the scrape_config file for the current project. +5. Make a copy of the `scrape_config` file content for later use. + +
+
+ +> **Note:** +> +> For security reasons, TiDB Cloud only shows a newly generated `scrape_config` file once. Ensure that you copy the content before closing the file window. If you forget to do so, you need to delete the `scrape_config` file in TiDB Cloud and generate a new one. To delete a `scrape_config` file, select the file, click **...**, and then click **Delete**. + +### Step 2. Integrate with Prometheus + +1. In the monitoring directory specified by your Prometheus service, locate the Prometheus configuration file. + + For example, `/etc/prometheus/prometheus.yml`. + +2. In the Prometheus configuration file, locate the `scrape_configs` section, and then copy the `scrape_config` file content obtained from TiDB Cloud to the section. + +3. In your Prometheus service, check **Status** > **Targets** to confirm that the new `scrape_config` file has been read. If not, you might need to restart the Prometheus service. + +### Step 3. Use Grafana GUI dashboards to visualize the metrics + +After your Prometheus service is reading metrics from TiDB Cloud, you can use Grafana GUI dashboards to visualize the metrics as follows: + +1. Depending on your [Prometheus integration version](#prometheus-integration-versions), the link to download the Grafana dashboard JSON of TiDB Cloud for Prometheus is different. + + - For cluster-level Prometheus integration, download the Grafana dashboard JSON file [here](https://github.com/pingcap/docs/blob/master/tidb-cloud/monitor-prometheus-and-grafana-integration-tidb-cloud-dynamic-tracker.json). + - For legacy project-level Prometheus integration (Beta), download the Grafana dashboard JSON file [here](https://github.com/pingcap/docs/blob/master/tidb-cloud/monitor-prometheus-and-grafana-integration-grafana-dashboard-UI.json). + +2. [Import this JSON to your own Grafana GUI](https://grafana.com/docs/grafana/v8.5/dashboards/export-import/#import-dashboard) to visualize the metrics. 
+ + > **Note:** + > + > If you are already using Prometheus and Grafana to monitor TiDB Cloud and want to incorporate the newly available metrics, it is recommended that you create a new dashboard instead of directly updating the JSON of the existing one. + +3. (Optional) Customize the dashboard as needed by adding or removing panels, changing data sources, and modifying display options. + +For more information about how to use Grafana, see [Grafana documentation](https://grafana.com/docs/grafana/latest/getting-started/getting-started-prometheus/). + +## Best practice of rotating scrape_config + +To improve data security, it is a general best practice to periodically rotate `scrape_config` file bearer tokens. + +1. Follow [Step 1](#step-1-get-a-scrape_config-file-for-prometheus) to create a new `scrape_config` file for Prometheus. +2. Add the content of the new file to your Prometheus configuration file. +3. Once you have confirmed that your Prometheus service is still able to read from TiDB Cloud, remove the content of the old `scrape_config` file from your Prometheus configuration file. +4. On the **Integrations** page of your project or cluster, delete the corresponding old `scrape_config` file to block anyone else from using it to read from the TiDB Cloud Prometheus endpoint. + +## Metrics available to Prometheus + +Prometheus tracks the following metric data for your TiDB clusters. + +| Metric name | Metric type | Labels | Description | +|:--- |:--- |:--- |:--- | +| tidbcloud_db_queries_total| count | sql_type: `Select\|Insert\|...`
cluster_name: ``
instance: `tidb-0\|tidb-1…`
component: `tidb` | The total number of statements executed | +| tidbcloud_db_failed_queries_total | count | type: `planner:xxx\|executor:2345\|...`
cluster_name: ``
instance: `tidb-0\|tidb-1…`
component: `tidb` | The total number of execution errors | +| tidbcloud_db_connections | gauge | cluster_name: ``
instance: `tidb-0\|tidb-1…`
component: `tidb` | Current number of connections in your TiDB server | +| tidbcloud_db_query_duration_seconds | histogram | sql_type: `Select\|Insert\|...`
cluster_name: ``
instance: `tidb-0\|tidb-1…`
component: `tidb` | The duration histogram of statements | +| tidbcloud_changefeed_latency | gauge | changefeed_id | The data replication latency between the upstream and the downstream of a changefeed | +| tidbcloud_changefeed_checkpoint_ts | gauge | changefeed_id | The checkpoint timestamp of a changefeed, representing the largest TSO (Timestamp Oracle) successfully written to the downstream | +| tidbcloud_changefeed_replica_rows | gauge | changefeed_id | The number of replicated rows that a changefeed writes to the downstream per second | +| tidbcloud_node_storage_used_bytes | gauge | cluster_name: ``
instance: `tikv-0\|tikv-1…\|tiflash-0\|tiflash-1…`
component: `tikv\|tiflash` | The disk usage, in bytes, for TiKV or TiFlash nodes. This metric primarily represents the logical data size in the storage engine, and excludes WAL files and temporary files. To calculate the actual disk usage rate, use `(capacity - available) / capacity` instead. When the storage usage of TiKV exceeds 80%, latency spikes might occur, and higher usage might cause requests to fail. When the storage usage of all TiFlash nodes reaches 80%, any DDL statement that adds a TiFlash replica hangs indefinitely. | +| tidbcloud_node_storage_capacity_bytes | gauge | cluster_name: ``
instance: `tikv-0\|tikv-1…\|tiflash-0\|tiflash-1…`
component: `tikv\|tiflash` | The disk capacity bytes of TiKV/TiFlash nodes | +| tidbcloud_node_cpu_seconds_total | count | cluster_name: ``
instance: `tidb-0\|tidb-1…\|tikv-0…\|tiflash-0…`
component: `tidb\|tikv\|tiflash` | The CPU usage of TiDB/TiKV/TiFlash nodes | +| tidbcloud_node_cpu_capacity_cores | gauge | cluster_name: ``
instance: `tidb-0\|tidb-1…\|tikv-0…\|tiflash-0…`
component: `tidb\|tikv\|tiflash` | The CPU limit cores of TiDB/TiKV/TiFlash nodes | +| tidbcloud_node_memory_used_bytes | gauge | cluster_name: ``
instance: `tidb-0\|tidb-1…\|tikv-0…\|tiflash-0…`
component: `tidb\|tikv\|tiflash` | The used memory bytes of TiDB/TiKV/TiFlash nodes | +| tidbcloud_node_memory_capacity_bytes | gauge | cluster_name: ``
instance: `tidb-0\|tidb-1…\|tikv-0…\|tiflash-0…`
component: `tidb\|tikv\|tiflash` | The memory capacity bytes of TiDB/TiKV/TiFlash nodes | +| tidbcloud_node_storage_available_bytes | gauge | instance: `tidb-0\|tidb-1\|...`
component: `tikv\|tiflash`
cluster_name: `` | The available disk space in bytes for TiKV/TiFlash nodes | +| tidbcloud_disk_read_latency | histogram | instance: `tidb-0\|tidb-1\|...`
component: `tikv\|tiflash`
cluster_name: ``
`device`: `nvme.*\|dm.*` | The read latency in seconds per storage device | +| tidbcloud_disk_write_latency | histogram | instance: `tidb-0\|tidb-1\|...`
component: `tikv\|tiflash`
cluster_name: ``
`device`: `nvme.*\|dm.*` | The write latency in seconds per storage device | +| tidbcloud_kv_request_duration | histogram | instance: `tidb-0\|tidb-1\|...`
component: `tikv`
cluster_name: ``
`type`: `BatchGet\|Commit\|Prewrite\|...` | The duration in seconds of TiKV requests by type | +| tidbcloud_component_uptime | histogram | instance: `tidb-0\|tidb-1\|...`
component: `tidb\|tikv\|tiflash`
cluster_name: `` | The uptime in seconds of TiDB components | +| tidbcloud_ticdc_owner_resolved_ts_lag | gauge | changefeed_id: ``
cluster_name: `` | The resolved timestamp lag in seconds for changefeed owner | +| tidbcloud_changefeed_status | gauge | changefeed_id: ``
cluster_name: `` | Changefeed status:
`-1`: Unknown
`0`: Normal
`1`: Warning
`2`: Failed
`3`: Stopped
`4`: Finished
`6`: Warning
`7`: Other | +| tidbcloud_resource_manager_resource_unit_read_request_unit | gauge | cluster_name: ``
resource_group: `` | The read request units consumed by Resource Manager | +| tidbcloud_resource_manager_resource_unit_write_request_unit | gauge | cluster_name: ``
resource_group: `` | The write request units consumed by Resource Manager | + +For cluster-level Prometheus integration, the following additional metrics are also available: + +| Metric name | Metric type | Labels | Description | +|:--- |:--- |:--- |:--- | +| tidbcloud_dm_task_status | gauge | instance: `instance`
task: `task`
cluster_name: `` | Task state of Data Migration:
0: Invalid
1: New
2: Running
3: Paused
4: Stopped
5: Finished
15: Error | +| tidbcloud_dm_syncer_replication_lag_bucket | gauge | instance: `instance`
cluster_name: `` | Replicate lag (bucket) of Data Migration. | +| tidbcloud_dm_syncer_replication_lag_gauge | gauge | instance: `instance`
task: `task`
cluster_name: `` | Replicate lag (gauge) of Data Migration. | +| tidbcloud_dm_relay_read_error_count | count | instance: `instance`
cluster_name: `` | The number of failed attempts to read binlog from the master. | + +## FAQ + +- Why does the same metric have different values on Grafana and the TiDB Cloud console at the same time? + + The aggregation calculation logic is different between Grafana and TiDB Cloud, so the displayed aggregated values might differ. You can adjust the `mini step` configuration in Grafana to get more fine-grained metric values. From a03071bb1831722505ac2f1d8edff6d18cad15f5 Mon Sep 17 00:00:00 2001 From: huoyao1125 <90880576+huoyao1125@users.noreply.github.com> Date: Mon, 23 Mar 2026 12:00:31 +0800 Subject: [PATCH 02/10] Update prometheus-grafana-integration-premium --- .../prometheus-grafana-integration-premium | 20 ++++--------------- 1 file changed, 4 insertions(+), 16 deletions(-) diff --git a/tidb-cloud/premium/prometheus-grafana-integration-premium b/tidb-cloud/premium/prometheus-grafana-integration-premium index 49c61ddb418c1..d11752a5a44e5 100644 --- a/tidb-cloud/premium/prometheus-grafana-integration-premium +++ b/tidb-cloud/premium/prometheus-grafana-integration-premium @@ -9,34 +9,22 @@ TiDB Cloud provides a [Prometheus](https://prometheus.io/) API endpoint. If you This document describes how to configure your Prometheus service to read key metrics from the TiDB Cloud endpoint and how to view the metrics using [Grafana](https://grafana.com/). -## Prometheus integration versions - -TiDB Cloud has supported the project-level Prometheus integration (Beta) since March 15, 2022. Starting from October 21, 2025, TiDB Cloud introduces the cluster-level Prometheus integration (Preview). Starting from December 2, 2025, the cluster-level Prometheus integration becomes generally available (GA). 
- -- **Cluster-level Prometheus integration**: if no legacy project-level Prometheus integration remains undeleted within your organization by October 21, 2025, TiDB Cloud provides the cluster-level Prometheus integration for your organization to experience the latest enhancements. - -- **Legacy project-level Prometheus integration (Beta)**: if at least one legacy project-level Prometheus integration remains undeleted within your organization by October 21, 2025, TiDB Cloud retains both existing and new integrations at the project level for your organization to avoid affecting current dashboards. - - > **Note** - > - > The legacy project-level Prometheus integrations will be deprecated on January 9, 2026. If your organization is still using these legacy integrations, follow [Migrate Prometheus Integrations](/tidb-cloud/migrate-prometheus-metrics-integrations.md) to migrate to the new cluster-level integrations and minimize disruptions to your metrics-related services. - ## Prerequisites - To integrate TiDB Cloud with Prometheus, you must have a self-hosted or managed Prometheus service. -- To set up third-party metrics integration for TiDB Cloud, you must have the `Organization Owner` or `Project Owner` access in TiDB Cloud. To view the integration page, you need at least the `Project Viewer` role to access the target clusters under your project in TiDB Cloud. +- To set up third-party metrics integration for TiDB Cloud, you must have the `Organization Owner` or `Instance Manager` access in TiDB Cloud. To view the integration page, you need at least the `Instance Viewer` role to access the target Premium Instances under your Organization in TiDB Cloud. ## Limitation -- Prometheus and Grafana integrations now are only available for [TiDB Cloud Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-cloud-dedicated) clusters. -- Prometheus and Grafana integrations are not available when the cluster status is **CREATING**, **RESTORING**, **PAUSED**, or **RESUMING**. 
+- Prometheus and Grafana integrations are now available for [TiDB Cloud Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-cloud-dedicated) clusters, [TiDB Cloud Essential](/tidb-cloud/select-cluster-tier.md#tidb-cloud-essential) instances, and [TiDB Cloud Premium](/tidb-cloud/select-cluster-tier.md#tidb-cloud-premium) instances.
+- Prometheus and Grafana integrations are not available when the cluster/instance status is **CREATING**, **RESTORING**, **PAUSED**, or **RESUMING**.
 
 ## Steps
 
 ### Step 1. Get a scrape_config file for Prometheus
 
-Before configuring your Prometheus service to read metrics of TiDB Cloud, you need to generate a `scrape_config` YAML file in TiDB Cloud first. The `scrape_config` file contains a unique bearer token that allows the Prometheus service to monitor your target clusters.
+Before configuring your Prometheus service to read metrics of TiDB Cloud, you need to generate a `scrape_config` YAML file in TiDB Cloud first. The `scrape_config` file contains a unique bearer token that allows the Prometheus service to monitor your target clusters/instances.
 
 Depending on your [Prometheus integration version](#prometheus-integration-versions), the steps to get the `scrape_config` file for Prometheus and access the integration page are different.
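For reference, the generated file follows the standard Prometheus `scrape_configs` schema, which is why its content can be pasted directly into that section of `prometheus.yml`. The sketch below is illustrative only — the actual job name, metrics path, endpoint host, and bearer token are generated by TiDB Cloud and will differ:

```yaml
# Illustrative sketch of a TiDB Cloud-generated scrape_config entry.
# All values below are placeholders; use the file generated in the console.
- job_name: tidbcloud-<target-id>          # hypothetical job name
  honor_labels: true                       # keep the labels TiDB Cloud attaches
  scheme: https
  metrics_path: /metrics                   # hypothetical path
  authorization:
    credentials: <bearer-token>            # the unique token from the generated file
  static_configs:
    - targets:
        - <tidbcloud-prometheus-endpoint>  # hypothetical endpoint host
```

Because the generated entry carries its own bearer token, each file you generate is independently revocable, which is what makes the token rotation procedure later in this document possible.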
From 7bcfba2704c5166f3a6dee828be64698bb7976e2 Mon Sep 17 00:00:00 2001 From: xixirangrang Date: Mon, 23 Mar 2026 14:01:39 +0800 Subject: [PATCH 03/10] Apply suggestions from code review --- .../prometheus-grafana-integration-premium | 25 ++++++++++--------- 1 file changed, 13 insertions(+), 12 deletions(-) diff --git a/tidb-cloud/premium/prometheus-grafana-integration-premium b/tidb-cloud/premium/prometheus-grafana-integration-premium index d11752a5a44e5..aa7f5377c35a3 100644 --- a/tidb-cloud/premium/prometheus-grafana-integration-premium +++ b/tidb-cloud/premium/prometheus-grafana-integration-premium @@ -22,11 +22,11 @@ This document describes how to configure your Prometheus service to read key met ## Steps -### Step 1. Get a scrape_config file for Prometheus +### Step 1. Get a `scrape_config` file for Prometheus Before configuring your Prometheus service to read metrics of TiDB Cloud, you need to generate a `scrape_config` YAML file in TiDB Cloud first. The `scrape_config` file contains a unique bearer token that allows the Prometheus service to monitor your target clusters/instances. -Depending on your [Prometheus integration version](#prometheus-integration-versions), the steps to get the `scrape_config` file for Prometheus and access the integration page are different. +The steps to get the `scrape_config` file and access the integration page vary depending on your [Prometheus integration version](#prometheus-integration-versions).
@@ -43,7 +43,7 @@ Depending on your [Prometheus integration version](#prometheus-integration-versi 1. In the [TiDB Cloud console](https://tidbcloud.com), switch to your target project using the combo box in the upper-left corner. 2. In the left navigation pane, click **Project Settings** > **Integrations**. 3. On the **Integrations** page, click **Integration to Prometheus (BETA)**. -4. Click **Add File** to generate and show the scrape_config file for the current project. +4. Click **Add File** to generate and show the `scrape_config` file for the current project. 5. Make a copy of the `scrape_config` file content for later use.
@@ -51,7 +51,8 @@ Depending on your [Prometheus integration version](#prometheus-integration-versi > **Note:** > -> For security reasons, TiDB Cloud only shows a newly generated `scrape_config` file once. Ensure that you copy the content before closing the file window. If you forget to do so, you need to delete the `scrape_config` file in TiDB Cloud and generate a new one. To delete a `scrape_config` file, select the file, click **...**, and then click **Delete**. +> - For security reasons, TiDB Cloud only shows a newly generated `scrape_config` file once. Ensure that you copy the content before closing the file window. +> - If you forget, delete the `scrape_config` file in TiDB Cloud and generate a new one. To delete a `scrape_config` file, select the file, click **...**, and then click **Delete**. ### Step 2. Integrate with Prometheus @@ -61,13 +62,13 @@ Depending on your [Prometheus integration version](#prometheus-integration-versi 2. In the Prometheus configuration file, locate the `scrape_configs` section, and then copy the `scrape_config` file content obtained from TiDB Cloud to the section. -3. In your Prometheus service, check **Status** > **Targets** to confirm that the new `scrape_config` file has been read. If not, you might need to restart the Prometheus service. +3. In your Prometheus service, check **Status** > **Targets** to verify that the new `scrape_config` file has been read. If not, you might need to restart the Prometheus service. ### Step 3. Use Grafana GUI dashboards to visualize the metrics -After your Prometheus service is reading metrics from TiDB Cloud, you can use Grafana GUI dashboards to visualize the metrics as follows: +After your Prometheus service reads metrics from TiDB Cloud, you can use Grafana GUI dashboards to visualize the metrics as follows: -1. Depending on your [Prometheus integration version](#prometheus-integration-versions), the link to download the Grafana dashboard JSON of TiDB Cloud for Prometheus is different. +1. 
The link to download the Grafana dashboard JSON of TiDB Cloud for Prometheus differs depending on your [Prometheus integration version](#prometheus-integration-versions). - For cluster-level Prometheus integration, download the Grafana dashboard JSON file [here](https://github.com/pingcap/docs/blob/master/tidb-cloud/monitor-prometheus-and-grafana-integration-tidb-cloud-dynamic-tracker.json). - For legacy project-level Prometheus integration (Beta), download the Grafana dashboard JSON file [here](https://github.com/pingcap/docs/blob/master/tidb-cloud/monitor-prometheus-and-grafana-integration-grafana-dashboard-UI.json). @@ -82,13 +83,13 @@ After your Prometheus service is reading metrics from TiDB Cloud, you can use Gr For more information about how to use Grafana, see [Grafana documentation](https://grafana.com/docs/grafana/latest/getting-started/getting-started-prometheus/). -## Best practice of rotating scrape_config +## Best practice of rotating `scrape_config` -To improve data security, it is a general best practice to periodically rotate `scrape_config` file bearer tokens. +To improve data security, periodically rotate `scrape_config` file bearer tokens. 1. Follow [Step 1](#step-1-get-a-scrape_config-file-for-prometheus) to create a new `scrape_config` file for Prometheus. 2. Add the content of the new file to your Prometheus configuration file. -3. Once you have confirmed that your Prometheus service is still able to read from TiDB Cloud, remove the content of the old `scrape_config` file from your Prometheus configuration file. +3. Once you confirm that your Prometheus service can read from TiDB Cloud, remove the content of the old `scrape_config` file from your Prometheus configuration file. 4. On the **Integrations** page of your project or cluster, delete the corresponding old `scrape_config` file to block anyone else from using it to read from the TiDB Cloud Prometheus endpoint. 
## Metrics available to Prometheus @@ -104,7 +105,7 @@ Prometheus tracks the following metric data for your TiDB clusters. | tidbcloud_changefeed_latency | gauge | changefeed_id | The data replication latency between the upstream and the downstream of a changefeed | | tidbcloud_changefeed_checkpoint_ts | gauge | changefeed_id | The checkpoint timestamp of a changefeed, representing the largest TSO (Timestamp Oracle) successfully written to the downstream | | tidbcloud_changefeed_replica_rows | gauge | changefeed_id | The number of replicated rows that a changefeed writes to the downstream per second | -| tidbcloud_node_storage_used_bytes | gauge | cluster_name: ``
instance: `tikv-0\|tikv-1…\|tiflash-0\|tiflash-1…`
component: `tikv\|tiflash` | The disk usage, in bytes, for TiKV or TiFlash nodes. This metric primarily represents the logical data size in the storage engine, and excludes WAL files and temporary files. To calculate the actual disk usage rate, use `(capacity - available) / capacity` instead. When the storage usage of TiKV exceeds 80%, latency spikes might occur, and higher usage might cause requests to fail. When the storage usage of all TiFlash nodes reaches 80%, any DDL statement that adds a TiFlash replica hangs indefinitely. | +| tidbcloud_node_storage_used_bytes | gauge | cluster_name: ``
instance: `tikv-0\|tikv-1…\|tiflash-0\|tiflash-1…`
component: `tikv\|tiflash` | The disk usage, in bytes, for TiKV or TiFlash nodes. This metric primarily represents the logical data size in the storage engine, and excludes WAL files and temporary files. To calculate the actual disk usage rate, use `(capacity - available) / capacity` instead.
  • When the storage usage of TiKV exceeds 80%, latency spikes might occur, and higher usage might cause requests to fail.
  • When the storage usage of all TiFlash nodes reaches 80%, any DDL statement that adds a TiFlash replica hangs indefinitely.
| | tidbcloud_node_storage_capacity_bytes | gauge | cluster_name: ``
instance: `tikv-0\|tikv-1…\|tiflash-0\|tiflash-1…`
component: `tikv\|tiflash` | The disk capacity bytes of TiKV/TiFlash nodes | | tidbcloud_node_cpu_seconds_total | count | cluster_name: ``
instance: `tidb-0\|tidb-1…\|tikv-0…\|tiflash-0…`
component: `tidb\|tikv\|tiflash` | The CPU usage of TiDB/TiKV/TiFlash nodes | | tidbcloud_node_cpu_capacity_cores | gauge | cluster_name: ``
instance: `tidb-0\|tidb-1…\|tikv-0…\|tiflash-0…`
component: `tidb\|tikv\|tiflash` | The CPU limit cores of TiDB/TiKV/TiFlash nodes | @@ -133,4 +134,4 @@ For cluster-level Prometheus integration, the following additional metrics are a - Why does the same metric have different values on Grafana and the TiDB Cloud console at the same time? - The aggregation calculation logic is different between Grafana and TiDB Cloud, so the displayed aggregated values might differ. You can adjust the `mini step` configuration in Grafana to get more fine-grained metric values. + Grafana and TiDB Cloud use different aggregation calculation logic, so the displayed aggregated values might differ. You can adjust the `mini step` configuration in Grafana to get more fine-grained metric values. From 6e051f76f962b6c518b7083b38f3b77310c6ee30 Mon Sep 17 00:00:00 2001 From: houfaxin Date: Mon, 23 Mar 2026 14:13:21 +0800 Subject: [PATCH 04/10] Update TOC-tidb-cloud-premium.md --- TOC-tidb-cloud-premium.md | 1 + 1 file changed, 1 insertion(+) diff --git a/TOC-tidb-cloud-premium.md b/TOC-tidb-cloud-premium.md index 4710c30e36c8f..74ff5e5089ecb 100644 --- a/TOC-tidb-cloud-premium.md +++ b/TOC-tidb-cloud-premium.md @@ -147,6 +147,7 @@ - Monitor and Alert - [Overview](/tidb-cloud/monitor-tidb-cluster.md) - [Built-in Metrics](/tidb-cloud/premium/built-in-monitoring-premium.md) + - [Integrate TiDB Cloud with Prometheus and Grafana](/tidb-cloud/premium/prometheus-grafana-integration-premium.md) - Tune Performance - [Overview](/tidb-cloud/tidb-cloud-tune-performance-overview.md) - [Analyze Performance](/tidb-cloud/tune-performance.md) From cc5b372ed72bc0165bb65bbef4e217a2a10bdd30 Mon Sep 17 00:00:00 2001 From: houfaxin Date: Mon, 23 Mar 2026 14:22:23 +0800 Subject: [PATCH 05/10] Update TOC-tidb-cloud-essential.md --- TOC-tidb-cloud-essential.md | 1 + 1 file changed, 1 insertion(+) diff --git a/TOC-tidb-cloud-essential.md b/TOC-tidb-cloud-essential.md index d53eb7ec44d85..e6a504ac48840 100644 --- a/TOC-tidb-cloud-essential.md +++ 
b/TOC-tidb-cloud-essential.md @@ -58,6 +58,7 @@ - [Overview](/tidb-cloud/monitor-tidb-cluster.md) - [Built-in Metrics](/tidb-cloud/built-in-monitoring.md) - [Built-in Alerting](/tidb-cloud/monitor-built-in-alerting.md) + - [Integrate TiDB Cloud with Prometheus and Grafana](/tidb-cloud/premium/prometheus-grafana-integration-premium.md) - Subscribe to Alert Notifications - [Subscribe via Email](/tidb-cloud/monitor-alert-email.md) - [Subscribe via Slack](/tidb-cloud/monitor-alert-slack.md) From a10df441ba5fc76009fa905aacad5851236fcde1 Mon Sep 17 00:00:00 2001 From: houfaxin Date: Mon, 23 Mar 2026 14:25:01 +0800 Subject: [PATCH 06/10] add .md --- ...egration-premium => prometheus-grafana-integration-premium.md} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename tidb-cloud/premium/{prometheus-grafana-integration-premium => prometheus-grafana-integration-premium.md} (100%) diff --git a/tidb-cloud/premium/prometheus-grafana-integration-premium b/tidb-cloud/premium/prometheus-grafana-integration-premium.md similarity index 100% rename from tidb-cloud/premium/prometheus-grafana-integration-premium rename to tidb-cloud/premium/prometheus-grafana-integration-premium.md From d145803af18df012d24db2b586b4214df951e102 Mon Sep 17 00:00:00 2001 From: huoyao1125 <90880576+huoyao1125@users.noreply.github.com> Date: Wed, 25 Mar 2026 18:01:42 +0800 Subject: [PATCH 07/10] Update prometheus-grafana-integration-premium.md --- .../prometheus-grafana-integration-premium.md | 48 ++++--------------- 1 file changed, 9 insertions(+), 39 deletions(-) diff --git a/tidb-cloud/premium/prometheus-grafana-integration-premium.md b/tidb-cloud/premium/prometheus-grafana-integration-premium.md index aa7f5377c35a3..63afba5f8ec75 100644 --- a/tidb-cloud/premium/prometheus-grafana-integration-premium.md +++ b/tidb-cloud/premium/prometheus-grafana-integration-premium.md @@ -24,30 +24,12 @@ This document describes how to configure your Prometheus service to read key met ### Step 1. 
Get a `scrape_config` file for Prometheus -Before configuring your Prometheus service to read metrics of TiDB Cloud, you need to generate a `scrape_config` YAML file in TiDB Cloud first. The `scrape_config` file contains a unique bearer token that allows the Prometheus service to monitor your target clusters/instances. +Before configuring your Prometheus service to read metrics of TiDB Cloud, you need to generate a `scrape_config` YAML file in TiDB Cloud first. The `scrape_config` file contains a unique bearer token that allows the Prometheus service to monitor your target Premium Instances. -The steps to get the `scrape_config` file and access the integration page vary depending on your [Prometheus integration version](#prometheus-integration-versions). - - -
- -1. In the [TiDB Cloud console](https://tidbcloud.com/), navigate to the [**Clusters**](https://tidbcloud.com/project/clusters) page of your project, and then click the name of your target cluster to go to its overview page. -2. In the left navigation pane, click **Settings** > **Integrations**. -3. On the **Integrations** page, click **Integration to Prometheus**. -4. Click **Add File** to generate and show the `scrape_config` file for the current cluster. -5. Make a copy of the `scrape_config` file content for later use. - -
-
- -1. In the [TiDB Cloud console](https://tidbcloud.com), switch to your target project using the combo box in the upper-left corner. -2. In the left navigation pane, click **Project Settings** > **Integrations**. -3. On the **Integrations** page, click **Integration to Prometheus (BETA)**. -4. Click **Add File** to generate and show the `scrape_config` file for the current project. -5. Make a copy of the `scrape_config` file content for later use. - -
-
+1. In the [TiDB Cloud console](https://tidbcloud.com/), navigate to the [**My TiDB**](https://tidbcloud.com/tidbs) page, and then click the name of your target instance to go to its overview page.
+2. In the left navigation pane, click **Integrations** > **Integration to Prometheus**.
+3. Click **Add File** to generate and show the `scrape_config` file for the current cluster.
+4. Make a copy of the `scrape_config` file content for later use.
 
 > **Note:**
 >
@@ -68,16 +50,13 @@ The steps to get the `scrape_config` file and access the integration page vary d
 
 After your Prometheus service reads metrics from TiDB Cloud, you can use Grafana GUI dashboards to visualize the metrics as follows:
 
-1. The link to download the Grafana dashboard JSON of TiDB Cloud for Prometheus differs depending on your [Prometheus integration version](#prometheus-integration-versions).
-
-    - For cluster-level Prometheus integration, download the Grafana dashboard JSON file [here](https://github.com/pingcap/docs/blob/master/tidb-cloud/monitor-prometheus-and-grafana-integration-tidb-cloud-dynamic-tracker.json).
-    - For legacy project-level Prometheus integration (Beta), download the Grafana dashboard JSON file [here](https://github.com/pingcap/docs/blob/master/tidb-cloud/monitor-prometheus-and-grafana-integration-grafana-dashboard-UI.json).
+1. Download the Grafana dashboard JSON file of TiDB Cloud for Prometheus [here, link to be updated by dev](https://github.com/pingcap/docs/blob/master/tidb-cloud/monitor-prometheus-and-grafana-integration-tidb-cloud-dynamic-tracker.json).
 
 2. [Import this JSON to your own Grafana GUI](https://grafana.com/docs/grafana/v8.5/dashboards/export-import/#import-dashboard) to visualize the metrics.
 
    > **Note:**
    >
-   > If you are already using Prometheus and Grafana to monitor TiDB Cloud and want to incorporate the newly available metrics, it is recommended that you create a new dashboard instead of directly updating the JSON of the existing one. 
+ > If you are already using Prometheus and Grafana to monitor TiDB Cloud Premium instances and want to incorporate the newly available metrics, it is recommended that you create a new dashboard instead of directly updating the JSON of the existing one. 3. (Optional) Customize the dashboard as needed by adding or removing panels, changing data sources, and modifying display options. @@ -90,11 +69,11 @@ To improve data security, periodically rotate `scrape_config` file bearer tokens 1. Follow [Step 1](#step-1-get-a-scrape_config-file-for-prometheus) to create a new `scrape_config` file for Prometheus. 2. Add the content of the new file to your Prometheus configuration file. 3. Once you confirm that your Prometheus service can read from TiDB Cloud, remove the content of the old `scrape_config` file from your Prometheus configuration file. -4. On the **Integrations** page of your project or cluster, delete the corresponding old `scrape_config` file to block anyone else from using it to read from the TiDB Cloud Prometheus endpoint. +4. On the **Integrations** page of your Premium Instance, delete the corresponding old `scrape_config` file to block anyone else from using it to read from the TiDB Cloud Prometheus endpoint. ## Metrics available to Prometheus -Prometheus tracks the following metric data for your TiDB clusters. +Prometheus tracks the following metric data for your Premium Instance. This module need to be updated by dev @guohu. | Metric name | Metric type | Labels | Description | |:--- |:--- |:--- |:--- | @@ -121,15 +100,6 @@ Prometheus tracks the following metric data for your TiDB clusters. | tidbcloud_resource_manager_resource_unit_read_request_unit | gauge | cluster_name: ``
resource_group: `` | The read request units consumed by Resource Manager | | tidbcloud_resource_manager_resource_unit_write_request_unit | gauge | cluster_name: ``
resource_group: `` | The write request units consumed by Resource Manager | -For cluster-level Prometheus integration, the following additional metrics are also available: - -| Metric name | Metric type | Labels | Description | -|:--- |:--- |:--- |:--- | -| tidbcloud_dm_task_status | gauge | instance: `instance`
task: `task`
cluster_name: `` | Task state of Data Migration:
0: Invalid
1: New
2: Running
3: Paused
4: Stopped
5: Finished
15: Error | -| tidbcloud_dm_syncer_replication_lag_bucket | gauge | instance: `instance`
cluster_name: `` | Replicate lag (bucket) of Data Migration. | -| tidbcloud_dm_syncer_replication_lag_gauge | gauge | instance: `instance`
task: `task`
cluster_name: `` | Replicate lag (gauge) of Data Migration. | -| tidbcloud_dm_relay_read_error_count | count | instance: `instance`
cluster_name: `` | The number of failed attempts to read binlog from the master. | - ## FAQ - Why does the same metric have different values on Grafana and the TiDB Cloud console at the same time? From e343f10b5ce1009395d40efd527ed8bfb2f0b478 Mon Sep 17 00:00:00 2001 From: huoyao1125 <90880576+huoyao1125@users.noreply.github.com> Date: Wed, 25 Mar 2026 18:11:03 +0800 Subject: [PATCH 08/10] Create prometheus-grafana-integration-essential --- .../prometheus-grafana-integration-essential | 107 ++++++++++++++++++ 1 file changed, 107 insertions(+) create mode 100644 tidb-cloud/prometheus-grafana-integration-essential diff --git a/tidb-cloud/prometheus-grafana-integration-essential b/tidb-cloud/prometheus-grafana-integration-essential new file mode 100644 index 0000000000000..c6b8246296a4f --- /dev/null +++ b/tidb-cloud/prometheus-grafana-integration-essential @@ -0,0 +1,107 @@ +--- +title: Integrate TiDB Cloud with Prometheus and Grafana +summary: Learn how to monitor your TiDB Essential Instances with the Prometheus and Grafana integration. +--- + +# Integrate TiDB Cloud with Prometheus and Grafana + +TiDB Cloud provides a [Prometheus](https://prometheus.io/) API endpoint. If you have a Prometheus service, you can monitor key metrics of TiDB Cloud from the endpoint easily. + +This document describes how to configure your Prometheus service to read key metrics from the TiDB Cloud endpoint and how to view the metrics using [Grafana](https://grafana.com/). + +## Prerequisites + +- To integrate TiDB Cloud with Prometheus, you must have a self-hosted or managed Prometheus service. + +- To set up third-party metrics integration for TiDB Cloud, you must have the `Organization Owner` or `Instance Manager` access in TiDB Cloud. To view the integration page, you need at least the `Instance Viewer` role to access the target Essential Instances under your Organization in TiDB Cloud. 
+
+## Limitation
+
+- Prometheus and Grafana integrations are now available for [TiDB Cloud Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-cloud-dedicated) clusters, [TiDB Cloud Essential](/tidb-cloud/select-cluster-tier.md#tidb-cloud-essential) instances, and [TiDB Cloud Premium](/tidb-cloud/select-cluster-tier.md#tidb-cloud-premium) instances.
+- Prometheus and Grafana integrations are not available when the cluster/instance status is **CREATING**, **RESTORING**, **PAUSED**, or **RESUMING**.
+
+## Steps
+
+### Step 1. Get a `scrape_config` file for Prometheus
+
+Before configuring your Prometheus service to read metrics of TiDB Cloud, you need to generate a `scrape_config` YAML file in TiDB Cloud first. The `scrape_config` file contains a unique bearer token that allows the Prometheus service to monitor your target Essential Instances.
+
+1. In the [TiDB Cloud console](https://tidbcloud.com/), navigate to the [**My TiDB**](https://tidbcloud.com/tidbs) page, and then click the name of your target instance to go to its overview page.
+2. In the left navigation pane, click **Integrations** > **Integration to Prometheus**.
+3. Click **Add File** to generate and show the `scrape_config` file for the current instance.
+4. Make a copy of the `scrape_config` file content for later use.
+
+> **Note:**
+>
+> - For security reasons, TiDB Cloud only shows a newly generated `scrape_config` file once. Ensure that you copy the content before closing the file window.
+> - If you forget, delete the `scrape_config` file in TiDB Cloud and generate a new one. To delete a `scrape_config` file, select the file, click **...**, and then click **Delete**.
+
+### Step 2. Integrate with Prometheus
+
+1. In the monitoring directory specified by your Prometheus service, locate the Prometheus configuration file.
+
+    For example, `/etc/prometheus/prometheus.yml`.
+
+2. 
In the Prometheus configuration file, locate the `scrape_configs` section, and then copy the `scrape_config` file content obtained from TiDB Cloud to the section.
+
+3. In your Prometheus service, check **Status** > **Targets** to verify that the new `scrape_config` file has been read. If not, you might need to restart the Prometheus service.
+
+### Step 3. Use Grafana GUI dashboards to visualize the metrics
+
+After your Prometheus service reads metrics from TiDB Cloud, you can use Grafana GUI dashboards to visualize the metrics as follows:
+
+1. Download the Grafana dashboard JSON file of TiDB Cloud for Prometheus [here, link to be updated by dev](https://github.com/pingcap/docs/blob/master/tidb-cloud/monitor-prometheus-and-grafana-integration-tidb-cloud-dynamic-tracker.json).
+
+2. [Import this JSON to your own Grafana GUI](https://grafana.com/docs/grafana/v8.5/dashboards/export-import/#import-dashboard) to visualize the metrics.
+
+    > **Note:**
+    >
+    > If you are already using Prometheus and Grafana to monitor TiDB Cloud Essential instances and want to incorporate the newly available metrics, it is recommended that you create a new dashboard instead of directly updating the JSON of the existing one.
+
+3. (Optional) Customize the dashboard as needed by adding or removing panels, changing data sources, and modifying display options.
+
+For more information about how to use Grafana, see [Grafana documentation](https://grafana.com/docs/grafana/latest/getting-started/getting-started-prometheus/).
+
+## Best practice of rotating `scrape_config`
+
+To improve data security, periodically rotate `scrape_config` file bearer tokens.
+
+1. Follow [Step 1](#step-1-get-a-scrape_config-file-for-prometheus) to create a new `scrape_config` file for Prometheus.
+2. Add the content of the new file to your Prometheus configuration file.
+3. 
Once you confirm that your Prometheus service can read from TiDB Cloud, remove the content of the old `scrape_config` file from your Prometheus configuration file. +4. On the **Integrations** page of your Essential Instance, delete the corresponding old `scrape_config` file to block anyone else from using it to read from the TiDB Cloud Prometheus endpoint. + +## Metrics available to Prometheus + +Prometheus tracks the following metric data for your Essential Instance. This module need to be updated by dev @guohu. + +| Metric name | Metric type | Labels | Description | +|:--- |:--- |:--- |:--- | +| tidbcloud_db_queries_total| count | sql_type: `Select\|Insert\|...`
cluster_name: ``
instance: `tidb-0\|tidb-1…`
component: `tidb` | The total number of statements executed | +| tidbcloud_db_failed_queries_total | count | type: `planner:xxx\|executor:2345\|...`
cluster_name: ``
instance: `tidb-0\|tidb-1…`
component: `tidb` | The total number of execution errors | +| tidbcloud_db_connections | gauge | cluster_name: ``
instance: `tidb-0\|tidb-1…`
component: `tidb` | Current number of connections in your TiDB server | +| tidbcloud_db_query_duration_seconds | histogram | sql_type: `Select\|Insert\|...`
cluster_name: ``
instance: `tidb-0\|tidb-1…`
component: `tidb` | The duration histogram of statements | +| tidbcloud_changefeed_latency | gauge | changefeed_id | The data replication latency between the upstream and the downstream of a changefeed | +| tidbcloud_changefeed_checkpoint_ts | gauge | changefeed_id | The checkpoint timestamp of a changefeed, representing the largest TSO (Timestamp Oracle) successfully written to the downstream | +| tidbcloud_changefeed_replica_rows | gauge | changefeed_id | The number of replicated rows that a changefeed writes to the downstream per second | +| tidbcloud_node_storage_used_bytes | gauge | cluster_name: ``
instance: `tikv-0\|tikv-1…\|tiflash-0\|tiflash-1…`
component: `tikv\|tiflash` | The disk usage, in bytes, for TiKV or TiFlash nodes. This metric primarily represents the logical data size in the storage engine, and excludes WAL files and temporary files. To calculate the actual disk usage rate, use `(capacity - available) / capacity` instead.
  • When the storage usage of TiKV exceeds 80%, latency spikes might occur, and higher usage might cause requests to fail.
  • When the storage usage of all TiFlash nodes reaches 80%, any DDL statement that adds a TiFlash replica hangs indefinitely.
| +| tidbcloud_node_storage_capacity_bytes | gauge | cluster_name: ``
instance: `tikv-0\|tikv-1…\|tiflash-0\|tiflash-1…`
component: `tikv\|tiflash` | The disk capacity bytes of TiKV/TiFlash nodes | +| tidbcloud_node_cpu_seconds_total | count | cluster_name: ``
instance: `tidb-0\|tidb-1…\|tikv-0…\|tiflash-0…`
component: `tidb\|tikv\|tiflash` | The CPU usage of TiDB/TiKV/TiFlash nodes | +| tidbcloud_node_cpu_capacity_cores | gauge | cluster_name: ``
instance: `tidb-0\|tidb-1…\|tikv-0…\|tiflash-0…`
component: `tidb\|tikv\|tiflash` | The CPU limit cores of TiDB/TiKV/TiFlash nodes | +| tidbcloud_node_memory_used_bytes | gauge | cluster_name: ``
instance: `tidb-0\|tidb-1…\|tikv-0…\|tiflash-0…`
component: `tidb\|tikv\|tiflash` | The used memory bytes of TiDB/TiKV/TiFlash nodes | +| tidbcloud_node_memory_capacity_bytes | gauge | cluster_name: ``
instance: `tidb-0\|tidb-1…\|tikv-0…\|tiflash-0…`
component: `tidb\|tikv\|tiflash` | The memory capacity bytes of TiDB/TiKV/TiFlash nodes | +| tidbcloud_node_storage_available_bytes | gauge | instance: `tidb-0\|tidb-1\|...`
component: `tikv\|tiflash`
cluster_name: `` | The available disk space in bytes for TiKV/TiFlash nodes | +| tidbcloud_disk_read_latency | histogram | instance: `tidb-0\|tidb-1\|...`
component: `tikv\|tiflash`
cluster_name: ``
`device`: `nvme.*\|dm.*` | The read latency in seconds per storage device | +| tidbcloud_disk_write_latency | histogram | instance: `tidb-0\|tidb-1\|...`
component: `tikv\|tiflash`
cluster_name: ``
`device`: `nvme.*\|dm.*` | The write latency in seconds per storage device | +| tidbcloud_kv_request_duration | histogram | instance: `tidb-0\|tidb-1\|...`
component: `tikv`
cluster_name: ``
`type`: `BatchGet\|Commit\|Prewrite\|...` | The duration in seconds of TiKV requests by type | +| tidbcloud_component_uptime | histogram | instance: `tidb-0\|tidb-1\|...`
component: `tidb\|tikv\|tiflash`
cluster_name: `` | The uptime in seconds of TiDB components | +| tidbcloud_ticdc_owner_resolved_ts_lag | gauge | changefeed_id: ``
cluster_name: `` | The resolved timestamp lag in seconds for changefeed owner | +| tidbcloud_changefeed_status | gauge | changefeed_id: ``
cluster_name: `` | Changefeed status:
`-1`: Unknown
`0`: Normal
`1`: Warning
`2`: Failed
`3`: Stopped
`4`: Finished
`6`: Warning
`7`: Other | +| tidbcloud_resource_manager_resource_unit_read_request_unit | gauge | cluster_name: ``
resource_group: `` | The read request units consumed by Resource Manager | +| tidbcloud_resource_manager_resource_unit_write_request_unit | gauge | cluster_name: ``
resource_group: `` | The write request units consumed by Resource Manager | + +## FAQ + +- Why does the same metric have different values on Grafana and the TiDB Cloud console at the same time? + + Grafana and TiDB Cloud use different aggregation calculation logic, so the displayed aggregated values might differ. You can adjust the `mini step` configuration in Grafana to get more fine-grained metric values. From 7e7179b010779645b891b4634f01e8cf27d7b189 Mon Sep 17 00:00:00 2001 From: huoyao1125 <90880576+huoyao1125@users.noreply.github.com> Date: Wed, 25 Mar 2026 18:13:54 +0800 Subject: [PATCH 09/10] Update prometheus-grafana-integration-premium.md --- tidb-cloud/premium/prometheus-grafana-integration-premium.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/tidb-cloud/premium/prometheus-grafana-integration-premium.md b/tidb-cloud/premium/prometheus-grafana-integration-premium.md index 63afba5f8ec75..2e00d1ef6c06d 100644 --- a/tidb-cloud/premium/prometheus-grafana-integration-premium.md +++ b/tidb-cloud/premium/prometheus-grafana-integration-premium.md @@ -1,6 +1,6 @@ --- title: Integrate TiDB Cloud with Prometheus and Grafana -summary: Learn how to monitor your TiDB cluster with the Prometheus and Grafana integration. +summary: Learn how to monitor your Premium Instances with the Prometheus and Grafana integration. --- # Integrate TiDB Cloud with Prometheus and Grafana @@ -28,7 +28,7 @@ Before configuring your Prometheus service to read metrics of TiDB Cloud, you ne 1. In the [TiDB Cloud console](https://tidbcloud.com/), navigate to the [**My TiDB**](https://tidbcloud.com/tidbs) page, and then click the name of your target instance to go to its overview page. 2. In the left navigation pane, click **Integrations**>>**Integration to Prometheus**. -3. Click **Add File** to generate and show the `scrape_config` file for the current cluster. +3. Click **Add File** to generate and show the `scrape_config` file for the current Premium Instance. 4. 
Make a copy of the `scrape_config` file content for later use. > **Note:** From 1a484e2ba9a3a7d39efd7676f7e008f9a2394a62 Mon Sep 17 00:00:00 2001 From: Yiwen Chen Date: Wed, 25 Mar 2026 18:28:34 +0800 Subject: [PATCH 10/10] Rename prometheus-grafana-integration-essential to prometheus-grafana-integration-essential.md --- ...tion-essential => prometheus-grafana-integration-essential.md} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename tidb-cloud/{prometheus-grafana-integration-essential => prometheus-grafana-integration-essential.md} (100%) diff --git a/tidb-cloud/prometheus-grafana-integration-essential b/tidb-cloud/prometheus-grafana-integration-essential.md similarity index 100% rename from tidb-cloud/prometheus-grafana-integration-essential rename to tidb-cloud/prometheus-grafana-integration-essential.md
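The `scrape_configs` merge that Step 2 of both new documents describes can be sketched as a minimal `prometheus.yml` fragment. Everything in this sketch is a hypothetical placeholder — the job name, metrics path, target host, and bearer token shown here are not real values; the actual block comes verbatim from the `scrape_config` file that TiDB Cloud generates:

```yaml
global:
  scrape_interval: 30s

scrape_configs:
  # ...any existing jobs stay unchanged...

  # Pasted from the TiDB Cloud-generated scrape_config file.
  # All values below are illustrative placeholders, not real endpoints or tokens.
  - job_name: tidbcloud-example-instance       # hypothetical job name
    scheme: https
    metrics_path: /metrics/example             # hypothetical path
    bearer_token: EXAMPLE_BEARER_TOKEN         # placeholder; rotate per the best-practice section
    static_configs:
      - targets:
          - example.prometheus.tidbcloud.test  # hypothetical host
```

After reloading or restarting Prometheus, the new job should appear under **Status** > **Targets**, which matches the verification step that closes Step 2 in both documents.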