Create prometheus-grafana-integration-premium #22603 (Open)
huoyao1125 wants to merge 11 commits into pingcap:release-8.5 from huoyao1125:patch-38
+216 −0
Changes from all commits (11 commits):
- 5eb1a46 Create prometheus-grafana-integration-premium (huoyao1125)
- a03071b Update prometheus-grafana-integration-premium (huoyao1125)
- 7bcfba2 Apply suggestions from code review (hfxsd)
- 6e051f7 Update TOC-tidb-cloud-premium.md (hfxsd)
- cc5b372 Update TOC-tidb-cloud-essential.md (hfxsd)
- 5f274ba Merge branch 'release-8.5' into pr/22603 (hfxsd)
- a10df44 add .md (hfxsd)
- d145803 Update prometheus-grafana-integration-premium.md (huoyao1125)
- e343f10 Create prometheus-grafana-integration-essential (huoyao1125)
- 7e7179b Update prometheus-grafana-integration-premium.md (huoyao1125)
- 1a484e2 Rename prometheus-grafana-integration-essential to prometheus-grafana… (handlerww)
107 changes: 107 additions & 0 deletions
tidb-cloud/premium/prometheus-grafana-integration-premium.md
---
title: Integrate TiDB Cloud with Prometheus and Grafana
summary: Learn how to monitor your Premium Instances with the Prometheus and Grafana integration.
---

# Integrate TiDB Cloud with Prometheus and Grafana

TiDB Cloud provides a [Prometheus](https://prometheus.io/) API endpoint. If you have a Prometheus service, you can easily monitor key metrics of TiDB Cloud from this endpoint.

This document describes how to configure your Prometheus service to read key metrics from the TiDB Cloud endpoint and how to view the metrics using [Grafana](https://grafana.com/).
## Prerequisites

- To integrate TiDB Cloud with Prometheus, you must have a self-hosted or managed Prometheus service.

- To set up the third-party metrics integration for TiDB Cloud, you must have the `Organization Owner` or `Instance Manager` role in TiDB Cloud. To view the integration page, you need at least the `Instance Viewer` role for the target Premium Instances in your organization.
## Limitations

- Prometheus and Grafana integrations are currently available for [TiDB Cloud Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-cloud-dedicated) clusters, [TiDB Cloud Essential](/tidb-cloud/select-cluster-tier.md#tidb-cloud-essential) instances, and [TiDB Cloud Premium](/tidb-cloud/select-cluster-tier.md#tidb-cloud-premium) instances.
- Prometheus and Grafana integrations are not available when the cluster or instance status is **CREATING**, **RESTORING**, **PAUSED**, or **RESUMING**.
## Steps

### Step 1. Get a `scrape_config` file for Prometheus

Before configuring your Prometheus service to read metrics from TiDB Cloud, you need to generate a `scrape_config` YAML file in TiDB Cloud. The `scrape_config` file contains a unique bearer token that allows the Prometheus service to monitor your target Premium Instances.

1. In the [TiDB Cloud console](https://tidbcloud.com/), navigate to the [**My TiDB**](https://tidbcloud.com/tidbs) page, and then click the name of your target instance to go to its overview page.
2. In the left navigation pane, click **Integrations** > **Integration to Prometheus**.
3. Click **Add File** to generate and show the `scrape_config` file for the current Premium Instance.
4. Make a copy of the `scrape_config` file content for later use.

> **Note:**
>
> - For security reasons, TiDB Cloud shows a newly generated `scrape_config` file only once. Ensure that you copy the content before closing the file window.
> - If you forget to copy it, delete the `scrape_config` file in TiDB Cloud and generate a new one. To delete a `scrape_config` file, select the file, click **...**, and then click **Delete**.
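The generated file follows the standard Prometheus `scrape_config` format. The sketch below is hypothetical: the job name, metrics path, host, and token are all placeholders, and the real values come only from the file that TiDB Cloud generates for you.

```yaml
# Hypothetical sketch only -- use the exact content generated by TiDB Cloud.
scrape_configs:
  - job_name: "tidbcloud-premium-instance"        # placeholder job name
    scheme: https
    metrics_path: "/metrics/<placeholder-path>"   # placeholder path
    authorization:
      credentials: "<bearer-token-from-tidb-cloud>"  # the unique bearer token
    static_configs:
      - targets:
          - "<prometheus-endpoint-host>:443"      # placeholder endpoint
```

Because the bearer token grants read access to your instance metrics, treat this file like any other credential.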
### Step 2. Integrate with Prometheus

1. In the monitoring directory specified by your Prometheus service, locate the Prometheus configuration file, for example, `/etc/prometheus/prometheus.yml`.

2. In the Prometheus configuration file, locate the `scrape_configs` section, and then copy the `scrape_config` file content obtained from TiDB Cloud into the section.

3. In your Prometheus service, check **Status** > **Targets** to verify that the new `scrape_config` file has been read. If not, you might need to restart the Prometheus service.
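If you manage many instances, the copy step above can be scripted. The following is a minimal string-level sketch (the function name and sample contents are illustrative, not part of TiDB Cloud); it assumes `scrape_configs:` is the last top-level key in your `prometheus.yml`, and a YAML library would be more robust for other layouts:

```python
import textwrap

def merge_scrape_config(prometheus_yml: str, tidb_snippet: str) -> str:
    """Append a TiDB Cloud scrape job under an existing `scrape_configs:` section.

    Minimal sketch: assumes `scrape_configs:` is the last top-level key.
    """
    if "scrape_configs:" not in prometheus_yml:
        # No section yet: create one at the end of the file.
        prometheus_yml = prometheus_yml.rstrip() + "\nscrape_configs:\n"
    # Indent the snippet so it nests as list items under `scrape_configs:`.
    indented = textwrap.indent(tidb_snippet.strip() + "\n", "  ")
    return prometheus_yml.rstrip() + "\n" + indented

# Placeholder contents -- the real snippet comes from TiDB Cloud.
base = "global:\n  scrape_interval: 15s\nscrape_configs:\n"
snippet = "- job_name: tidbcloud\n  scheme: https"
merged = merge_scrape_config(base, snippet)
print(merged)
```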
### Step 3. Use Grafana GUI dashboards to visualize the metrics

After your Prometheus service reads metrics from TiDB Cloud, you can use Grafana GUI dashboards to visualize the metrics as follows:

1. Download the Grafana dashboard JSON file of TiDB Cloud for Prometheus: [here, need to be updated by dev](https://github.com/pingcap/docs/blob/master/tidb-cloud/monitor-prometheus-and-grafana-integration-tidb-cloud-dynamic-tracker.json).

2. [Import this JSON file into your own Grafana GUI](https://grafana.com/docs/grafana/v8.5/dashboards/export-import/#import-dashboard) to visualize the metrics.

> **Note:**
>
> If you are already using Prometheus and Grafana to monitor TiDB Cloud Premium Instances and want to incorporate the newly available metrics, it is recommended that you create a new dashboard instead of directly updating the JSON of the existing one.

3. (Optional) Customize the dashboard as needed by adding or removing panels, changing data sources, and modifying display options.

For more information about how to use Grafana, see the [Grafana documentation](https://grafana.com/docs/grafana/latest/getting-started/getting-started-prometheus/).
## Best practices for rotating `scrape_config`

To improve data security, periodically rotate the bearer tokens in your `scrape_config` files:

1. Follow [Step 1](#step-1-get-a-scrape_config-file-for-prometheus) to create a new `scrape_config` file for Prometheus.
2. Add the content of the new file to your Prometheus configuration file.
3. Once you confirm that your Prometheus service can still read from TiDB Cloud, remove the content of the old `scrape_config` file from your Prometheus configuration file.
4. On the **Integrations** page of your Premium Instance, delete the corresponding old `scrape_config` file to prevent anyone else from using it to read from the TiDB Cloud Prometheus endpoint.
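Steps 2 and 3 can be done without restarting Prometheus. A possible command sequence follows, assuming Prometheus runs locally on port 9090 and was started with the `--web.enable-lifecycle` flag (otherwise, restart or send `SIGHUP` to the process instead):

```shell
# Validate the merged configuration first (promtool ships with Prometheus).
promtool check config /etc/prometheus/prometheus.yml

# Ask the running Prometheus server to reload its configuration.
curl -X POST http://localhost:9090/-/reload
```

After the reload, check **Status** > **Targets** to confirm that both the old and new TiDB Cloud jobs are up before removing the old one.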
## Metrics available to Prometheus

Prometheus tracks the following metric data for your Premium Instance. This module needs to be updated by dev @guohu.

| Metric name | Metric type | Labels | Description |
|:--- |:--- |:--- |:--- |
| tidbcloud_db_queries_total | count | sql_type: `Select\|Insert\|...`<br/>cluster_name: `<cluster name>`<br/>instance: `tidb-0\|tidb-1…`<br/>component: `tidb` | The total number of statements executed |
| tidbcloud_db_failed_queries_total | count | type: `planner:xxx\|executor:2345\|...`<br/>cluster_name: `<cluster name>`<br/>instance: `tidb-0\|tidb-1…`<br/>component: `tidb` | The total number of execution errors |
| tidbcloud_db_connections | gauge | cluster_name: `<cluster name>`<br/>instance: `tidb-0\|tidb-1…`<br/>component: `tidb` | The current number of connections in your TiDB server |
| tidbcloud_db_query_duration_seconds | histogram | sql_type: `Select\|Insert\|...`<br/>cluster_name: `<cluster name>`<br/>instance: `tidb-0\|tidb-1…`<br/>component: `tidb` | The duration histogram of statements |
| tidbcloud_changefeed_latency | gauge | changefeed_id | The data replication latency between the upstream and the downstream of a changefeed |
| tidbcloud_changefeed_checkpoint_ts | gauge | changefeed_id | The checkpoint timestamp of a changefeed, representing the largest TSO (Timestamp Oracle) successfully written to the downstream |
| tidbcloud_changefeed_replica_rows | gauge | changefeed_id | The number of replicated rows that a changefeed writes to the downstream per second |
| tidbcloud_node_storage_used_bytes | gauge | cluster_name: `<cluster name>`<br/>instance: `tikv-0\|tikv-1…\|tiflash-0\|tiflash-1…`<br/>component: `tikv\|tiflash` | The disk usage, in bytes, of TiKV or TiFlash nodes. This metric primarily represents the logical data size in the storage engine, and excludes WAL files and temporary files. To calculate the actual disk usage rate, use `(capacity - available) / capacity` instead.<ul><li>When the storage usage of TiKV exceeds 80%, latency spikes might occur, and higher usage might cause requests to fail.</li><li>When the storage usage of all TiFlash nodes reaches 80%, any DDL statement that adds a TiFlash replica hangs indefinitely.</li></ul> |
| tidbcloud_node_storage_capacity_bytes | gauge | cluster_name: `<cluster name>`<br/>instance: `tikv-0\|tikv-1…\|tiflash-0\|tiflash-1…`<br/>component: `tikv\|tiflash` | The disk capacity, in bytes, of TiKV/TiFlash nodes |
| tidbcloud_node_cpu_seconds_total | count | cluster_name: `<cluster name>`<br/>instance: `tidb-0\|tidb-1…\|tikv-0…\|tiflash-0…`<br/>component: `tidb\|tikv\|tiflash` | The CPU usage of TiDB/TiKV/TiFlash nodes |
| tidbcloud_node_cpu_capacity_cores | gauge | cluster_name: `<cluster name>`<br/>instance: `tidb-0\|tidb-1…\|tikv-0…\|tiflash-0…`<br/>component: `tidb\|tikv\|tiflash` | The CPU capacity, in cores, of TiDB/TiKV/TiFlash nodes |
| tidbcloud_node_memory_used_bytes | gauge | cluster_name: `<cluster name>`<br/>instance: `tidb-0\|tidb-1…\|tikv-0…\|tiflash-0…`<br/>component: `tidb\|tikv\|tiflash` | The used memory, in bytes, of TiDB/TiKV/TiFlash nodes |
| tidbcloud_node_memory_capacity_bytes | gauge | cluster_name: `<cluster name>`<br/>instance: `tidb-0\|tidb-1…\|tikv-0…\|tiflash-0…`<br/>component: `tidb\|tikv\|tiflash` | The memory capacity, in bytes, of TiDB/TiKV/TiFlash nodes |
| tidbcloud_node_storage_available_bytes | gauge | instance: `tidb-0\|tidb-1\|...`<br/>component: `tikv\|tiflash`<br/>cluster_name: `<cluster name>` | The available disk space, in bytes, of TiKV/TiFlash nodes |
| tidbcloud_disk_read_latency | histogram | instance: `tidb-0\|tidb-1\|...`<br/>component: `tikv\|tiflash`<br/>cluster_name: `<cluster name>`<br/>`device`: `nvme.*\|dm.*` | The read latency, in seconds, per storage device |
| tidbcloud_disk_write_latency | histogram | instance: `tidb-0\|tidb-1\|...`<br/>component: `tikv\|tiflash`<br/>cluster_name: `<cluster name>`<br/>`device`: `nvme.*\|dm.*` | The write latency, in seconds, per storage device |
| tidbcloud_kv_request_duration | histogram | instance: `tidb-0\|tidb-1\|...`<br/>component: `tikv`<br/>cluster_name: `<cluster name>`<br/>`type`: `BatchGet\|Commit\|Prewrite\|...` | The duration, in seconds, of TiKV requests by type |
| tidbcloud_component_uptime | histogram | instance: `tidb-0\|tidb-1\|...`<br/>component: `tidb\|tikv\|tiflash`<br/>cluster_name: `<cluster name>` | The uptime, in seconds, of TiDB components |
| tidbcloud_ticdc_owner_resolved_ts_lag | gauge | changefeed_id: `<changefeed-id>`<br/>cluster_name: `<cluster name>` | The resolved timestamp lag, in seconds, of the changefeed owner |
| tidbcloud_changefeed_status | gauge | changefeed_id: `<changefeed-id>`<br/>cluster_name: `<cluster name>` | Changefeed status:<br/>`-1`: Unknown<br/>`0`: Normal<br/>`1`: Warning<br/>`2`: Failed<br/>`3`: Stopped<br/>`4`: Finished<br/>`6`: Warning<br/>`7`: Other |
| tidbcloud_resource_manager_resource_unit_read_request_unit | gauge | cluster_name: `<cluster name>`<br/>resource_group: `<group-name>` | The read request units consumed by Resource Manager |
| tidbcloud_resource_manager_resource_unit_write_request_unit | gauge | cluster_name: `<cluster name>`<br/>resource_group: `<group-name>` | The write request units consumed by Resource Manager |
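Once these metrics are scraped, standard PromQL expressions over them can drive Grafana panels. The following examples are sketches based on the table above (they assume the histogram metrics expose the usual `_bucket` series, which is standard Prometheus behavior):

```promql
# Statements per second, broken down by statement type
sum(rate(tidbcloud_db_queries_total[5m])) by (sql_type)

# 99th-percentile statement latency
histogram_quantile(0.99,
  sum(rate(tidbcloud_db_query_duration_seconds_bucket[5m])) by (le))

# Actual TiKV disk usage rate, using the (capacity - available) / capacity
# formula recommended in the tidbcloud_node_storage_used_bytes description
(tidbcloud_node_storage_capacity_bytes{component="tikv"}
  - tidbcloud_node_storage_available_bytes{component="tikv"})
  / tidbcloud_node_storage_capacity_bytes{component="tikv"}
```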
## FAQ

- Why does the same metric have different values on Grafana and the TiDB Cloud console at the same time?

    Grafana and TiDB Cloud use different aggregation calculation logic, so the displayed aggregated values might differ. You can adjust the `Min step` configuration in Grafana to get more fine-grained metric values.
The alert documentation for Premium should have been written as well; let's see how to work it in: https://github.com/pingcap/docs/pull/22546/changes