diff --git a/.github/styles/templates/procedure.md b/.github/styles/templates/procedure.md index 630978383c..afa2c56f52 100644 --- a/.github/styles/templates/procedure.md +++ b/.github/styles/templates/procedure.md @@ -26,8 +26,8 @@ If necessary, a paragraph or two explaining more about how things work. This section shows you how to: -* [Verb \](#verb-what-the-user-will-do) -* [Verb \](#verb-what-the-user-will-do-if-the-procedure-is-cut-into-logical-sections) +* [Verb \][verb-what-the-user-will-do-link] +* [Verb \][verb-what-the-user-will-do-if-the-procedure-is-cut-into-logical-sections-link] ## Prerequisites @@ -104,3 +104,6 @@ You have \. [secure-vpc-aws]: /use-timescale/:currentVersion:/vpc/ [install-linux]: /self-hosted/:currentVersion:/install/installation-linux/ [gdsg]: https://developers.google.com/style/highlights + +[verb-what-the-user-will-do-link]: /.github/:currentVersion:/styles/templates/procedure/#verb-what-the-user-will-do +[verb-what-the-user-will-do-if-the-procedure-is-cut-into-logical-sections-link]: /.github/:currentVersion:/styles/templates/procedure/#verb-what-the-user-will-do-if-the-procedure-is-cut-into-logical-sections diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 4df06986e1..e7b12528b9 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -46,7 +46,7 @@ Each major doc section has a dedicated directory with `.md` files inside, repres Beneath the front matter, describe the error and its solution in regular Markdown. You can also use any other components allowed within the docs site. The entry shows up on the troubleshooting pages for its associated products and topics. If the page doesn't already exist, add an entry for it in the page - index, setting `type` to `placeholder`. See [Navigation tree](#navigation-tree). + index, setting `type` to `placeholder`. See [Navigation tree][navigation-tree-link]. ## Edit the navigation hierarchy @@ -161,4 +161,6 @@ The previous documentation source is in the deprecated repository called [docs.t [use-navigation]: use-timescale/page-index/page-index.js [formatting]: _partials/_formatting_examples.md [variables]: https://www.tigerdata.com/docs/variables-for-contributors/ -[readme]: README.md \ No newline at end of file +[readme]: README.md + +[navigation-tree-link]: /CONTRIBUTING/:currentVersion:/#navigation-tree diff --git a/_partials/_data_model_metadata.md b/_partials/_data_model_metadata.md deleted file mode 100644 index d049666dff..0000000000 --- a/_partials/_data_model_metadata.md +++ /dev/null @@ -1,7 +0,0 @@ - - -You might also notice that the metadata fields are missing. Because this is a -relational database, metadata can be stored in a secondary table and `JOIN`ed at -query time. Learn more about [$TIMESCALE_DB's support for `JOIN`s](#joins-with-relational-data). - - diff --git a/_partials/_livesync-terminal.md b/_partials/_livesync-terminal.md index ef199d46b6..c3cc11dd31 100644 --- a/_partials/_livesync-terminal.md +++ b/_partials/_livesync-terminal.md @@ -152,7 +152,7 @@ specifies the tables to synchronize. ALTER PUBLICATION SET(publish_via_partition_root=true); ``` - To convert partitioned table to hypertable, follow [Convert partitions and tables with time-series data into hypertables](#convert-partitions-and-tables-with-time-series-data-into-hypertables). + To convert a partitioned table to a hypertable, follow [Convert partitions and tables with time-series data into hypertables][convert-partitions-and-tables-with-time-series-data-into-hypertables-link]. 1. 
**Stop syncing a table in the `PUBLICATION`, use `DROP TABLE`** @@ -347,3 +347,5 @@ EOF [caggs]: /use-timescale/:currentVersion:/continuous-aggregates/about-continuous-aggregates/ [join-livesync-on-slack]: https://app.slack.com/client/T4GT3N2JK/C086NU9EZ88 [install-psql]: /integrations/:currentVersion:/psql/ + +[convert-partitions-and-tables-with-time-series-data-into-hypertables-link]: /migrate/:currentVersion:/livesync-for-postgresql/#convert-partitions-and-tables-with-time-series-data-into-hypertables diff --git a/api/administration.md b/api/administration.md index 7ed53c6f98..2c635d0000 100644 --- a/api/administration.md +++ b/api/administration.md @@ -54,12 +54,14 @@ SELECT timescaledb_post_restore(); Perform the required operations so that you can restore the database using `pg_restore`. Specifically, this sets the `timescaledb.restoring` GUC to `on` and stops any background workers which could have been performing tasks. -The background workers are stopped until the [timescaledb_post_restore()](#timescaledb_post_restore) function is run, after the restore operation is complete. +The background workers are stopped until the [timescaledb_post_restore()][timescaledb_post_restore-link] function is run, after the restore operation is complete. For more information, see [Migrate using pg_dump and pg_restore]. -After using `timescaledb_pre_restore()`, you need to run [`timescaledb_post_restore()`](#timescaledb_post_restore) before you can use the database normally. + +After using `timescaledb_pre_restore()`, you need to run [`timescaledb_post_restore()`][timescaledb_post_restore-link] before you can use the database normally. + ### Sample usage @@ -72,3 +74,5 @@ SELECT timescaledb_pre_restore(); [Migrate using pg_dump and pg_restore]: /migrate/:currentVersion:/pg-dump-and-restore/ [telemetry]: /self-hosted/:currentVersion:/configuration/telemetry + +[timescaledb_post_restore-link]: /api/:currentVersion:/administration/#timescaledb_post_restore diff --git a/api/glossary.md b/api/glossary.md index 9a1868b54a..817ef61ede 100644 --- a/api/glossary.md +++ b/api/glossary.md @@ -551,7 +551,7 @@ This glossary defines technical terms, concepts, and terminology used in $COMPAN -**$SERVICE_LONG**: an instance of optimized [$PG][postgres-link] extended with database engine innovations such as [$TIMESCALE_DB][timescaledb-link], in a cloud infrastructure that delivers speed without sacrifice. You can create [free $SERVICE_SHORTs][free-tiger-service-link] and [standard $SERVICE_SHORTs][standard-tiger-service-link]. +**$SERVICE_LONG**: an optimized [$PG][postgres-link] instance extended with database engine innovations such as [$TIMESCALE_DB][timescaledb-link], in a cloud infrastructure that delivers speed without sacrifice. You can create [free $SERVICE_SHORTs][free-tiger-service-link] and [standard $SERVICE_SHORTs][standard-tiger-service-link]. **Time series**: data points indexed and ordered by time, typically representing how values change over time. diff --git a/integrations/telegraf.md b/integrations/telegraf.md index a6cd2ba979..10d23e8616 100644 --- a/integrations/telegraf.md +++ b/integrations/telegraf.md @@ -18,8 +18,8 @@ and output of data. To view metrics gathered by Telegraf and stored in a [hypertable][about-hypertables] in a $SERVICE_LONG. 
-- [Link Telegraf to your $SERVICE_LONG](#link-telegraf-to-your-service): create a Telegraf configuration -- [View the metrics collected by Telegraf](#view-the-metrics-collected-by-telegraf): connect to your $SERVICE_SHORT and +- [Link Telegraf to your $SERVICE_LONG][link-telegraf-to-your-service-link]: create a Telegraf configuration +- [View the metrics collected by Telegraf][view-the-metrics-collected-by-telegraf-link]: connect to your $SERVICE_SHORT and query the metrics table ## Prerequisites @@ -156,3 +156,6 @@ see the [PostgreQL output plugin][output-plugin]. [connect-timescaledb]: /integrations/:currentVersion:/find-connection-details/ [grafana]: /integrations/:currentVersion:/grafana/ [about-hypertables]: /use-timescale/:currentVersion:/hypertables/ + +[link-telegraf-to-your-service-link]: /integrations/:currentVersion:/telegraf/#link-telegraf-to-your-service +[view-the-metrics-collected-by-telegraf-link]: /integrations/:currentVersion:/telegraf/#view-the-metrics-collected-by-telegraf diff --git a/mst/connection-pools.md b/mst/connection-pools.md index 7f813a4522..da1ec29301 100644 --- a/mst/connection-pools.md +++ b/mst/connection-pools.md @@ -102,7 +102,7 @@ connection pooling to. * In the `Database` field, select a database to connect to. Each pool can only connect to one database. * In the `Pool Mode` field, select which - [pool mode](#connection-pooling-modes) to use. + [pool mode][connection-pooling-modes-link] to use. * In the `Pool Size` field, select the maximum number of server connections this pool can use at any one time. * In the `Username` field, select which database username to connect to @@ -121,4 +121,6 @@ to use both pooled and un-pooled connections at the same time. -[mst-login]:https://portal.managed.timescale.com/login \ No newline at end of file +[mst-login]:https://portal.managed.timescale.com/login + +[connection-pooling-modes-link]: /mst/:currentVersion:/connection-pools/#connection-pooling-modes diff --git a/mst/extensions.md b/mst/extensions.md index b8c0057615..131645501b 100644 --- a/mst/extensions.md +++ b/mst/extensions.md @@ -8,7 +8,7 @@ tags: [extensions] # Supported $PG extensions $MST_LONG supports many $PG extensions. See -[available extensions](#available-extensions) for a full list. +[available extensions][available-extensions-link] for a full list. ## Add an extension @@ -134,3 +134,5 @@ language extension is `plpythonu`. You can contact Support directly from $MST_LONG. Click the life-preserver icon in the upper-right corner of your dashboard. + +[available-extensions-link]: /mst/:currentVersion:/extensions/#available-extensions diff --git a/mst/ingest-data.md b/mst/ingest-data.md index 9c1961ef8f..cef1991680 100644 --- a/mst/ingest-data.md +++ b/mst/ingest-data.md @@ -10,12 +10,12 @@ tags: [JDB, ODBC, client driver, Kafka, csv] There are several different ways of ingesting your data into $MST_LONG. This section contains instructions to: -* Bulk upload [from a `.csv` file](#bulk-upload-from-csv-files) +* Bulk upload [from a `.csv` file][bulk-upload-from-csv-files-link] * Insert data - [directly using a client driver](#insert-data-directly-using-a-client-driver), + [directly using a client driver][insert-data-directly-using-a-client-driver-link], such as JDBC, ODBC, or Node.js * Insert data - [directly using a message queue](#insert-data-directly-using-a-message-queue), + [directly using a message queue][insert-data-directly-using-a-message-queue-link], such as Kafka Before you begin, make sure you have @@ -85,8 +85,10 @@ hypertable. 
You need to begin by creating the new table, before you import the data. + Before you begin, make sure you have -[prepared your new database](#procedure-preparing-your-new-database). +[prepared your new database][procedure-preparing-your-new-database-link]. + @@ -159,3 +161,8 @@ See [these instructions][gh-kafkaconnector] for using the Kafka connector. [postgres-odbc]: https://odbc.postgresql.org/ [create-managed-service]: /mst/:currentVersion:/installation-mst/ [mst-login]:https://portal.managed.timescale.com/login + +[bulk-upload-from-csv-files-link]: /mst/:currentVersion:/ingest-data/#bulk-upload-from-csv-files +[insert-data-directly-using-a-client-driver-link]: /mst/:currentVersion:/ingest-data/#insert-data-directly-using-a-client-driver +[insert-data-directly-using-a-message-queue-link]: /mst/:currentVersion:/ingest-data/#insert-data-directly-using-a-message-queue +[procedure-preparing-your-new-database-link]: /mst/:currentVersion:/ingest-data/#preparing-your-new-database diff --git a/self-hosted/install/installation-docker.md b/self-hosted/install/installation-docker.md index 6e2143e032..254305e01e 100644 --- a/self-hosted/install/installation-docker.md +++ b/self-hosted/install/installation-docker.md @@ -17,7 +17,7 @@ time series and demanding workloads that ingest and query high volumes of data. instance on any local system from a pre-built Docker container. This section shows you how to -[Install and configure $TIMESCALE_DB on $PG](#install-and-configure-timescaledb-on-postgresql). +[Install and configure $TIMESCALE_DB on $PG][install-and-configure-timescaledb-on-postgresql-link]. < TestingEnv/> @@ -32,7 +32,7 @@ To run, and connect to a $PG installation on Docker, you need to install: ## Install and configure $TIMESCALE_DB on $PG This section shows you how to install the latest version of $PG and -$TIMESCALE_DB on a [supported platform](#supported-platforms) using containers supplied by $COMPANY. +$TIMESCALE_DB using containers supplied by $COMPANY. @@ -44,5 +44,6 @@ And that is it! You have $TIMESCALE_DB running on a database on a self-hosted in [docker-install]: https://docs.docker.com/get-started/get-docker/ -[install-psql]: https://www.timescale.com/blog/how-to-install-psql-on-mac-ubuntu-debian-windows/ +[install-psql]: /integrations/:currentVersion:/psql/ +[install-and-configure-timescaledb-on-postgresql-link]: /self-hosted/:currentVersion:/install/installation-docker/#install-and-configure-timescaledb-on-postgres diff --git a/self-hosted/install/installation-linux.md b/self-hosted/install/installation-linux.md index 840dd36588..d783b5fa8d 100644 --- a/self-hosted/install/installation-linux.md +++ b/self-hosted/install/installation-linux.md @@ -24,9 +24,9 @@ time series and demanding workloads that ingest and query high volumes of data. This section shows you how to: -* [Install and configure $TIMESCALE_DB on $PG](#install-and-configure-timescaledb-on-postgresql) - set up +* [Install and configure $TIMESCALE_DB on $PG][install-and-configure-timescaledb-on-postgresql-link] - set up a self-hosted $PG instance to efficiently run TimescaleDB. -* [Add the $TIMESCALE_DB extension to your database](#add-the-timescaledb-extension-to-your-database) - enable $TIMESCALE_DB +* [Add the $TIMESCALE_DB extension to your database][add-the-timescaledb-extension-to-your-database-link] - enable $TIMESCALE_DB features and performance improvements on a database. 
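The second of these steps comes down to a single statement once $PG is running with `timescaledb` listed in `shared_preload_libraries`. A minimal sketch, run inside the target database (the catalog query is only an optional check of the installed version):

```sql
-- Enable the TimescaleDB extension on the current database, then confirm
-- which extension version was installed. Assumes `timescaledb` is in
-- shared_preload_libraries; run it from a fresh psql session.
CREATE EXTENSION IF NOT EXISTS timescaledb;
SELECT extname, extversion FROM pg_extension WHERE extname = 'timescaledb';
```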
@@ -35,7 +35,7 @@ This section shows you how to: ## Install and configure $TIMESCALE_DB on $PG This section shows you how to install the latest version of $PG and -$TIMESCALE_DB on a [supported platform](#supported-platforms) using the packages supplied by $COMPANY. +$TIMESCALE_DB on a [supported platform][supported-platforms-link] using the packages supplied by $COMPANY. @@ -113,3 +113,7 @@ You can deploy $TIMESCALE_DB on the following systems: [install-from-source]: /self-hosted/:currentVersion:/install/installation-source/ [supported-platforms-self-hosted]: /about/:currentVersion:/supported-platforms/#supported-systems + +[add-the-timescaledb-extension-to-your-database-link]: /self-hosted/:currentVersion:/install/installation-linux/#add-the-timescaledb-extension-to-your-database +[install-and-configure-timescaledb-on-postgresql-link]: /self-hosted/:currentVersion:/install/installation-linux/#install-and-configure-timescaledb-on-postgres +[supported-platforms-link]: /self-hosted/:currentVersion:/install/installation-linux/#supported-platforms diff --git a/self-hosted/install/installation-macos.md b/self-hosted/install/installation-macos.md index 6461fd01b3..6355df9ab9 100644 --- a/self-hosted/install/installation-macos.md +++ b/self-hosted/install/installation-macos.md @@ -20,9 +20,9 @@ macOS device. This section shows you how to: -* [Install and configure $TIMESCALE_DB on $PG](#install-and-configure-timescaledb-on-postgresql) - set up +* [Install and configure $TIMESCALE_DB on $PG][install-and-configure-timescaledb-on-postgresql-link] - set up a self-hosted $PG instance to efficiently run TimescaleDB. -* [Add the $TIMESCALE_DB extension to your database](#add-the-timescaledb-extension-to-your-database) - enable $TIMESCALE_DB +* [Add the $TIMESCALE_DB extension to your database][add-the-timescaledb-extension-to-your-database-link] - enable $TIMESCALE_DB features and performance improvements on a database. < TestingEnv /> @@ -46,7 +46,7 @@ To keep your current $PG installation, [Install from source][install-from-source ## Install and configure $TIMESCALE_DB on $PG This section shows you how to install the latest version of $PG and -$TIMESCALE_DB on a [supported platform](#supported-platforms) using the packages supplied by $COMPANY. +$TIMESCALE_DB on a [supported platform][supported-platforms-link] using the packages supplied by $COMPANY. @@ -90,3 +90,7 @@ For the latest functionality, install MacOS 14 Sonoma. [macports]: https://guide.macports.org/#installing.macports [install-from-source]: /self-hosted/:currentVersion:/install/installation-source/ [install-postgresql]: https://www.postgresql.org/download/macosx/ + +[add-the-timescaledb-extension-to-your-database-link]: /self-hosted/:currentVersion:/install/installation-macos/#add-the-timescaledb-extension-to-your-database +[install-and-configure-timescaledb-on-postgresql-link]: /self-hosted/:currentVersion:/install/installation-macos/#install-and-configure-timescaledb-on-postgres +[supported-platforms-link]: /self-hosted/:currentVersion:/install/installation-macos/#supported-platforms diff --git a/self-hosted/install/installation-source.md b/self-hosted/install/installation-source.md index 7bba922b0c..ed1840df19 100644 --- a/self-hosted/install/installation-source.md +++ b/self-hosted/install/installation-source.md @@ -18,9 +18,9 @@ instance on any local system, from source. 
This section shows you how to: -* [Install and configure $TIMESCALE_DB on $PG](#install-and-configure-timescaledb-on-postgres) - set up +* [Install and configure $TIMESCALE_DB on $PG][install-and-configure-timescaledb-on-postgres-link] - set up a self-hosted $PG instance to efficiently run TimescaleDB1. -* [Add the $TIMESCALE_DB extension to your database](#add-the-timescaledb-extension-to-your-database) - enable TimescaleDB features and +* [Add the $TIMESCALE_DB extension to your database][add-the-timescaledb-extension-to-your-database-link] - enable TimescaleDB features and performance improvements on a database. < TestingEnv/> @@ -79,3 +79,6 @@ And that is it! You have TimescaleDB running on a database on a self-hosted inst [cmake-download]: https://cmake.org/download/ [compatibility-matrix]: /self-hosted/:currentVersion:/upgrades/upgrade-pg/#plan-your-upgrade-path [postgres-breaking-change]: https://www.postgresql.org/about/news/postgresql-172-166-1510-1415-1318-and-1222-released-2965/ + +[add-the-timescaledb-extension-to-your-database-link]: /self-hosted/:currentVersion:/install/installation-source/#add-the-timescaledb-extension-to-your-database +[install-and-configure-timescaledb-on-postgres-link]: /self-hosted/:currentVersion:/install/installation-source/#install-and-configure-timescaledb-on-postgres diff --git a/self-hosted/install/installation-windows.md b/self-hosted/install/installation-windows.md index 6cd55c85f9..ddecb0230b 100644 --- a/self-hosted/install/installation-windows.md +++ b/self-hosted/install/installation-windows.md @@ -107,6 +107,6 @@ For release information, see the [GitHub releases page][gh-releases] and the [re [release-notes]: https://github.com/timescale/timescaledb/releases [windows-releases]: #windows-releases [install-from-source]: /self-hosted/:currentVersion:/install/installation-source/ -[install-timescaledb]: /self-hosted/:currentVersion:/install/installation-windows/#install-and-configure-timescaledb-on-postgresql -[add-timescledb-extension]: /self-hosted/:currentVersion:/install/installation-windows/#add-the-timescaledb-extension-to-your-database +[install-timescaledb]: /self-hosted/:currentVersion:/install/installation-windows/#install-and-configure-timescaledb-on-postgres +[add-timescledb-extension]: /self-hosted/:currentVersion:/install/installation-windows/#add-the-timescale-db-extension-to-your-database [supported-platforms]: /self-hosted/:currentVersion:/install/installation-windows/#supported-platforms diff --git a/self-hosted/migration/schema-then-data.md b/self-hosted/migration/schema-then-data.md index 677bed97a5..aef2560e48 100644 --- a/self-hosted/migration/schema-then-data.md +++ b/self-hosted/migration/schema-then-data.md @@ -38,14 +38,14 @@ database at once. 
For more information, see the section on The procedure to migrate your database requires these steps: -* [Migrate schema pre-data](#migrate-schema-pre-data) -* [Restore hypertables in Timescale](#restore-hypertables-in-timescale) -* [Copy data from the source database](#copy-data-from-the-source-database) -* [Restore data into Timescale](#restore-data-into-timescale) -* [Migrate schema post-data](#migrate-schema-post-data) -* [Recreate continuous aggregates](#recreate-continuous-aggregates) (optional) -* [Recreate policies](#recreate-policies) (optional) -* [Update table statistics](#update-table-statistics) +* [Migrate schema pre-data][migrate-schema-pre-data-link] +* [Restore hypertables in $TIMESCALE_DB][restore-hypertables-in-timescale-link] +* [Copy data from the source database][copy-data-from-the-source-database-link] +* [Restore data into $TIMESCALE_DB][restore-data-into-timescale-link] +* [Migrate schema post-data][migrate-schema-post-data-link] +* [Recreate continuous aggregates][recreate-continuous-aggregates-link] (optional) +* [Recreate policies][recreate-policies-link] (optional) +* [Update table statistics][update-table-statistics-link] @@ -54,7 +54,7 @@ data can take a very long time. You can continue reading from your source database during this time, though performance could be slower. To avoid this problem, fork your database and migrate your data from the fork. If you write to the tables in your source database during the migration, the new writes might -not be transferred to Timescale. To avoid this problem, see the section on +not be transferred to TimescaleDB. To avoid this problem, see the section on [migrating an active database][migration]. @@ -86,14 +86,14 @@ Before you begin, check that you have: Migrate your pre-data from your source database to $SELF_LONG. This includes table and schema definitions, as well as information on sequences, -owners, and settings. This doesn't include Timescale-specific schemas. +owners, and settings. This doesn't include $TIMESCALE_DB-specific schemas. ### Migrating schema pre-data 1. Dump the schema pre-data from your source database into a `dump_pre_data.bak` file, using - your source database connection details. Exclude Timescale-specific schemas. + your source database connection details. Exclude $TIMESCALE_DB-specific schemas. If you are prompted for a password, use your source database credentials: ```bash @@ -116,7 +116,7 @@ owners, and settings. This doesn't include Timescale-specific schemas. ## Restore hypertables in your $SELF_LONG instance After pre-data migration, your hypertables from your source database become -regular $PG tables in Timescale. Recreate your hypertables in your $SELF_LONG instance to +regular $PG tables in $TIMESCALE_DB. Recreate your hypertables in your $SELF_LONG instance to restore them. @@ -179,11 +179,11 @@ Split each table by time range, and copy each range individually. For example: -## Restore data into Timescale +## Restore data into $TIMESCALE_DB When you have copied your data into `.csv` files, you can restore it to $SELF_LONG by copying from the `.csv` files. There are two methods: using -regular $PG [`COPY`][copy], or using the TimescaleDB +regular $PG [`COPY`][copy], or using the $TIMESCALE_DB [`timescaledb-parallel-copy`][timescaledb-parallel-copy] function. In tests, `timescaledb-parallel-copy` is 16% faster. The `timescaledb-parallel-copy` tool is not included by default. You must install the function. 
@@ -216,3 +216,11 @@ the [compression section](https://www.tigerdata.com/docs/use-timescale/latest/co [upgrading-postgresql-self-hosted]: /self-hosted/:currentVersion:/upgrades/upgrade-pg/ [upgrading-timescaledb]: /self-hosted/:currentVersion:/upgrades/major-upgrade/ [migration]: /migrate/:currentVersion:/ +[migrate-schema-pre-data-link]: /self-hosted/:currentVersion:/migration/schema-then-data/#migrate-schema-pre-data +[copy-data-from-the-source-database-link]: /self-hosted/:currentVersion:/migration/schema-then-data/#copy-data-from-the-source-database +[restore-data-into-timescale-link]: /self-hosted/:currentVersion:/migration/schema-then-data/#restore-data-into-timescaledb +[restore-hypertables-in-timescale-link]: /self-hosted/:currentVersion:/migration/schema-then-data/#restore-hypertables-in-your-self-hosted-timescaledb-instance +[migrate-schema-post-data-link]: /self-hosted/:currentVersion:/migration/schema-then-data/#migrate-schema-post-data +[recreate-continuous-aggregates-link]: /self-hosted/:currentVersion:/migration/schema-then-data/#recreate-continuous-aggregates +[recreate-policies-link]: /self-hosted/:currentVersion:/migration/schema-then-data/#recreate-policies +[update-table-statistics-link]: /self-hosted/:currentVersion:/migration/schema-then-data/#update-table-statistics diff --git a/self-hosted/multinode-timescaledb/multinode-ha.md b/self-hosted/multinode-timescaledb/multinode-ha.md index 19a29691a9..459f6a30ac 100644 --- a/self-hosted/multinode-timescaledb/multinode-ha.md +++ b/self-hosted/multinode-timescaledb/multinode-ha.md @@ -105,7 +105,7 @@ replication factor on an existing distributed hypertable. Note, however, that only new chunks are replicated according to the updated replication factor. Existing chunks need to be re-replicated by copying those chunks to new data nodes (see the [node -failures section](#node-failures) below). +failures section][node-failures-link] below). When native replication is enabled, the replication happens whenever you write data to the table. On every `INSERT` and `COPY` call, each @@ -222,3 +222,5 @@ CALL timescaledb_experimental.cleanup_copy_chunk_operation('ts_copy_1_31'); [single-ha]: /self-hosted/:currentVersion:/replication-and-ha/ [alter_data_node]: /api/:currentVersion:/distributed-hypertables/alter_data_node/ [copy_chunk]:/api/:currentVersion:/distributed-hypertables/copy_chunk_experimental + +[node-failures-link]: /self-hosted/:currentVersion:/multinode-timescaledb/multinode-ha/#node-failures diff --git a/tutorials/blockchain-analyze/analyze-blockchain-query.md b/tutorials/blockchain-analyze/analyze-blockchain-query.md index c5f5704fd8..6dc2dda44a 100644 --- a/tutorials/blockchain-analyze/analyze-blockchain-query.md +++ b/tutorials/blockchain-analyze/analyze-blockchain-query.md @@ -16,12 +16,12 @@ queries that are not possible in standard $PG. 
In this section, you learn how to write queries that answer these questions: -* [Is there any connection between the number of transactions and the transaction fees?](#is-there-any-connection-between-the-number-of-transactions-and-the-transaction-fees) -* [Does the transaction volume affect the BTC-USD rate?](#does-the-transaction-volume-affect-the-btc-usd-rate) -* [Do more transactions in a block mean the block is more expensive to mine?](#do-more-transactions-in-a-block-mean-the-block-is-more-expensive-to-mine) -* [What percentage of the average miner's revenue comes from fees compared to block rewards?](#what-percentage-of-the-average-miners-revenue-comes-from-fees-compared-to-block-rewards) -* [How does block weight affect miner fees?](#how-does-block-weight-affect-miner-fees) -* [What's the average miner revenue per block?](#whats-the-average-miner-revenue-per-block) +* [Is there any connection between the number of transactions and the transaction fees?][is-there-any-connection-between-the-number-of-transactions-and-the-transaction-fees-link] +* [Does the transaction volume affect the BTC-USD rate?][does-the-transaction-volume-affect-the-btc-usd-rate-link] +* [Do more transactions in a block mean the block is more expensive to mine?][do-more-transactions-in-a-block-mean-the-block-is-more-expensive-to-mine-link] +* [What percentage of the average miner's revenue comes from fees compared to block rewards?][what-percentage-of-the-average-miners-revenue-comes-from-fees-compared-to-block-rewards-link] +* [How does block weight affect miner fees?][how-does-block-weight-affect-miner-fees-link] +* [What's the average miner revenue per block?][whats-the-average-miner-revenue-per-block-link] ## Create continuous aggregates @@ -547,3 +547,9 @@ increase the time range. 
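As a flavor of the bucketed queries behind the first question above, a minimal sketch; the `transactions` table and its columns are illustrative stand-ins for the tutorial dataset:

```sql
-- Hourly transaction count alongside the average fee, grouped with time_bucket().
SELECT time_bucket('1 hour', time) AS bucket,
       count(*)                    AS tx_count,
       avg(fee)                    AS avg_fee
FROM transactions
GROUP BY bucket
ORDER BY bucket DESC
LIMIT 24;
```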
[docs-cagg]: /use-timescale/:currentVersion:/continuous-aggregates/ [about-hyperfunctions]: https://www.tigerdata.com/docs/use-timescale/latest/hyperfunctions/about-hyperfunctions/ +[is-there-any-connection-between-the-number-of-transactions-and-the-transaction-fees-link]: /tutorials/:currentVersion:/blockchain-analyze/analyze-blockchain-query/#is-there-any-connection-between-the-number-of-transactions-and-the-transaction-fees +[does-the-transaction-volume-affect-the-btc-usd-rate-link]: /tutorials/:currentVersion:/blockchain-analyze/analyze-blockchain-query/#does-the-transaction-volume-affect-the-btc-usd-rate +[do-more-transactions-in-a-block-mean-the-block-is-more-expensive-to-mine-link]: /tutorials/:currentVersion:/blockchain-analyze/analyze-blockchain-query/#do-more-transactions-in-a-block-mean-the-block-is-more-expensive-to-mine +[what-percentage-of-the-average-miners-revenue-comes-from-fees-compared-to-block-rewards-link]: /tutorials/:currentVersion:/blockchain-analyze/analyze-blockchain-query/#what-percentage-of-the-average-miners-revenue-comes-from-fees-compared-to-block-rewards +[how-does-block-weight-affect-miner-fees-link]: /tutorials/:currentVersion:/blockchain-analyze/analyze-blockchain-query/#how-does-block-weight-affect-miner-fees +[whats-the-average-miner-revenue-per-block-link]: /tutorials/:currentVersion:/blockchain-analyze/analyze-blockchain-query/#whats-the-average-miner-revenue-per-block diff --git a/tutorials/blockchain-query/beginner-blockchain-query.md b/tutorials/blockchain-query/beginner-blockchain-query.md index eb3ae6a919..12772387f4 100644 --- a/tutorials/blockchain-query/beginner-blockchain-query.md +++ b/tutorials/blockchain-query/beginner-blockchain-query.md @@ -13,9 +13,9 @@ When you have your dataset loaded, you can start constructing some queries to discover what your data tells you. In this section, you learn how to write queries that answer these questions: -* [What are the five most recent coinbase transactions?](#what-are-the-five-most-recent-coinbase-transactions) -* [What are the five most recent transactions?](#what-are-the-five-most-recent-transactions) -* [What are the five most recent blocks?](#what-are-the-five-most-recent-blocks?) +* [What are the five most recent coinbase transactions?][what-are-the-five-most-recent-coinbase-transactions-link] +* [What are the five most recent transactions?][what-are-the-five-most-recent-transactions-link] +* [What are the five most recent blocks?][what-are-the-five-most-recent-blocks-link] ## What are the five most recent coinbase transactions? @@ -144,3 +144,6 @@ US Dollars. [coinbase-def]: https://www.pcmag.com/encyclopedia/term/coinbase-transaction +[what-are-the-five-most-recent-coinbase-transactions-link]: /tutorials/:currentVersion:/blockchain-query/beginner-blockchain-query/#what-are-the-five-most-recent-coinbase-transactions +[what-are-the-five-most-recent-transactions-link]: /tutorials/:currentVersion:/blockchain-query/beginner-blockchain-query/#what-are-the-five-most-recent-transactions +[what-are-the-five-most-recent-blocks-link]: /tutorials/:currentVersion:/blockchain-query/beginner-blockchain-query/#what-are-the-five-most-recent-blocks diff --git a/tutorials/energy-data/query-energy.md b/tutorials/energy-data/query-energy.md index cac4f58020..25faeb8bf3 100644 --- a/tutorials/energy-data/query-energy.md +++ b/tutorials/energy-data/query-energy.md @@ -17,9 +17,9 @@ queries that are not possible in standard $PG. 
In this section, you learn how to construct queries, to answer these questions: -* [Energy consumption by hour of day](#what-is-the-energy-consumption-by-the-hour-of-the-day) -* [Energy consumption by weekday](#what-is-the-energy-consumption-by-the-day-of-the-week). -* [Energy consumption by month](#what-is-the-energy-consumption-on-a-monthly-basis). +* [Energy consumption by hour of day][what-is-the-energy-consumption-by-the-hour-of-the-day-link] +* [Energy consumption by weekday][what-is-the-energy-consumption-by-the-day-of-the-week-link]. +* [Energy consumption by month][what-is-the-energy-consumption-on-a-monthly-basis-link]. ## What is the energy consumption by the hour of the day? @@ -213,3 +213,6 @@ You may also want to check the energy consumption that occurs on a monthly basis [about-hyperfunctions]: https://www.tigerdata.com/docs/use-timescale/latest/hyperfunctions/about-hyperfunctions/ +[what-is-the-energy-consumption-by-the-hour-of-the-day-link]: /tutorials/:currentVersion:/energy-data/query-energy/#what-is-the-energy-consumption-by-the-hour-of-the-day +[what-is-the-energy-consumption-by-the-day-of-the-week-link]: /tutorials/:currentVersion:/energy-data/query-energy/#what-is-the-energy-consumption-by-the-day-of-the-week +[what-is-the-energy-consumption-on-a-monthly-basis-link]: /tutorials/:currentVersion:/energy-data/query-energy/#what-is-the-energy-consumption-on-a-monthly-basis diff --git a/tutorials/nyc-taxi-cab/query-nyc.md b/tutorials/nyc-taxi-cab/query-nyc.md index c3ff15e749..469230b8fd 100644 --- a/tutorials/nyc-taxi-cab/query-nyc.md +++ b/tutorials/nyc-taxi-cab/query-nyc.md @@ -14,11 +14,11 @@ When you have your dataset loaded, you can start constructing some queries to discover what your data tells you. In this section, you learn how to write queries that answer these questions: -* [How many rides take place each day?](#how-many-rides-take-place-every-day) -* [What is the average fare amount?](#what-is-the-average-fare-amount) -* [How many rides of each rate type were taken?](#how-many-rides-of-each-rate-type-were-taken) -* [What kind of trips are going to and from airports?](#what-kind-of-trips-are-going-to-and-from-airports) -* [How many rides took place on New Year's Day 2016](#how-many-rides-took-place-on-new-years-day-2016)? +* [How many rides take place each day?][how-many-rides-take-place-every-day-link] +* [What is the average fare amount?][what-is-the-average-fare-amount-link] +* [How many rides of each rate type were taken?][how-many-rides-of-each-rate-type-were-taken-link] +* [What kind of trips are going to and from airports?][what-kind-of-trips-are-going-to-and-from-airports-link] +* [How many rides took place on New Year's Day 2016][how-many-rides-took-place-on-new-years-day-2016-link]? ## How many rides take place every day? @@ -272,3 +272,9 @@ the data into time intervals instead. 
``` + +[how-many-rides-take-place-every-day-link]: /tutorials/:currentVersion:/nyc-taxi-cab/query-nyc/#how-many-rides-take-place-every-day +[what-is-the-average-fare-amount-link]: /tutorials/:currentVersion:/nyc-taxi-cab/query-nyc/#what-is-the-average-fare-amount +[how-many-rides-of-each-rate-type-were-taken-link]: /tutorials/:currentVersion:/nyc-taxi-cab/query-nyc/#how-many-rides-of-each-rate-type-were-taken +[what-kind-of-trips-are-going-to-and-from-airports-link]: /tutorials/:currentVersion:/nyc-taxi-cab/query-nyc/#what-kind-of-trips-are-going-to-and-from-airports +[how-many-rides-took-place-on-new-years-day-2016-link]: /tutorials/:currentVersion:/nyc-taxi-cab/query-nyc/#how-many-rides-took-place-on-new-years-day-2016 diff --git a/tutorials/nyc-taxi-geospatial/plot-nyc.md b/tutorials/nyc-taxi-geospatial/plot-nyc.md index b0582b24e7..37bfd38d61 100644 --- a/tutorials/nyc-taxi-geospatial/plot-nyc.md +++ b/tutorials/nyc-taxi-geospatial/plot-nyc.md @@ -15,8 +15,8 @@ discover what your data tells you. In this section, you learn how to combine the data in the NYC taxi dataset with geospatial data from [PostGIS][postgis], to answer these questions: -* [How many rides on New Year's Day 2016 originated from Times Square?](#how-many-rides-on-new-years-day-2016-originated-from-times-square) -* [Which rides traveled more than 5 miles in Manhattan?](#which-rides-traveled-more-than-5-miles-in-manhattan). +* [How many rides on New Year's Day 2016 originated from Times Square?][how-many-rides-on-new-years-day-2016-originated-from-times-square-link] +* [Which rides traveled more than 5 miles in Manhattan?][which-rides-traveled-more-than-5-miles-in-manhattan-link]. ## Set up your dataset for PostGIS @@ -181,3 +181,5 @@ Grafana can plot the data properly. [postgis]: http://postgis.net/ +[how-many-rides-on-new-years-day-2016-originated-from-times-square-link]: /tutorials/:currentVersion:/nyc-taxi-geospatial/plot-nyc/#how-many-rides-on-new-years-day-2016-originated-from-times-square +[which-rides-traveled-more-than-5-miles-in-manhattan-link]: /tutorials/:currentVersion:/nyc-taxi-geospatial/plot-nyc/#which-rides-traveled-more-than-5-miles-in-manhattan diff --git a/use-timescale/continuous-aggregates/create-a-continuous-aggregate.md b/use-timescale/continuous-aggregates/create-a-continuous-aggregate.md index 6623cf9380..1b1cab78b8 100644 --- a/use-timescale/continuous-aggregates/create-a-continuous-aggregate.md +++ b/use-timescale/continuous-aggregates/create-a-continuous-aggregate.md @@ -18,7 +18,7 @@ Continuous aggregates require a `time_bucket` on the time partitioning column of the hypertable. By default, views are automatically refreshed. You can adjust this by setting -the [WITH NO DATA](#using-the-with-no-data-option) option. Additionally, the +the [WITH NO DATA][using-the-with-no-data-option-link] option. Additionally, the view can not be a [security barrier view][postgres-security-barrier]. Continuous aggregates use hypertables in the background, which means that they @@ -313,4 +313,5 @@ For $TIMESCALE_DB v2.19.3 and below, $CAGGs do not support window functions. 
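A minimal sketch of the `WITH NO DATA` option referenced above, using an illustrative hypertable named `conditions`: the view is created empty and then populated on demand.

```sql
-- Create the continuous aggregate without materializing any data up front.
CREATE MATERIALIZED VIEW conditions_daily
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 day', time) AS bucket,
       avg(temperature) AS avg_temp
FROM conditions
GROUP BY bucket
WITH NO DATA;

-- Populate it once over the full time range; a refresh policy can take over afterwards.
CALL refresh_continuous_aggregate('conditions_daily', NULL, NULL);
```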
To [postgres-rls]: [postgres-security-barrier]: [with-no-data]: /use-timescale/:currentVersion:/continuous-aggregates/create-a-continuous-aggregate/#using-the-with-no-data-option -[create_materialized_view]: /api/:currentVersion:/continuous-aggregates/create_materialized_view/#parameters \ No newline at end of file +[create_materialized_view]: /api/:currentVersion:/continuous-aggregates/create_materialized_view/#parameters +[using-the-with-no-data-option-link]: /use-timescale/:currentVersion:/continuous-aggregates/create-a-continuous-aggregate/#using-the-with-no-data-option \ No newline at end of file diff --git a/use-timescale/data-tiering/querying-tiered-data.md b/use-timescale/data-tiering/querying-tiered-data.md index a33a961a6d..23b8380f06 100644 --- a/use-timescale/data-tiering/querying-tiered-data.md +++ b/use-timescale/data-tiering/querying-tiered-data.md @@ -24,7 +24,7 @@ With tiered reads enabled, you can query your data normally even when it's distr Your hypertable is spread across the tiers, so queries and `JOIN`s work and fetch the same data as usual. By default, tiered data is not accessed by queries. Querying tiered data may slow down query performance -as the data is not stored locally on the high-performance storage tier. See [Performance considerations](#performance-considerations). +as the data is not stored locally on the high-performance storage tier. See [Performance considerations][performance-considerations-link]. @@ -190,5 +190,7 @@ Queries over tiered data are expected to be slower than over local data. However * Text and non-native types (JSON, JSONB, GIS) filtering is slower when querying tiered data. +[performance-considerations-link]: /use-timescale/:currentVersion:/data-tiering/querying-tiered-data/#performance-considerations + diff --git a/use-timescale/hyperfunctions/function-pipelines.md b/use-timescale/hyperfunctions/function-pipelines.md index 69ddbfb398..9bd9036652 100644 --- a/use-timescale/hyperfunctions/function-pipelines.md +++ b/use-timescale/hyperfunctions/function-pipelines.md @@ -224,7 +224,7 @@ timevector('2021-01-01 UTC', 10) -> add(5) -> mul(2) -> add(1) Both of these examples produce `('2020-01-01 00:00:00+00', 31.0)`. If multiple arithmetic operations are needed and precedence is important, -consider using a [Lambda](#lambda-elements) instead. +consider using a [Lambda][lambda-elements-link] instead. ### Unary mathematical functions @@ -786,8 +786,8 @@ SELECT hyperloglog(device_id) -> distinct_count() FROM measurements; You can turn a timevector into a formatted text representation. 
There are two functions for turning a timevector to text: -* [`to_text`](#to-text), which allows you to specify the template -* [`to_plotly`](#to-plotly), which outputs a format suitable for use with the +* [`to_text`][to-text-link], which allows you to specify the template +* [`to_plotly`][to-plotly-link], which outputs a format suitable for use with the [Plotly JSON chart schema][plotly] ### `to_text` @@ -916,3 +916,6 @@ This table lists all function pipeline elements in alphabetical order: |`unnest`|Output|`TABLE (time TIMESTAMPTZ, value DOUBLE PRECISION)`| [plotly]: https://plotly.com/chart-studio-help/json-chart-schema/ +[lambda-elements-link]: /use-timescale/:currentVersion:/hyperfunctions/function-pipelines/#lambda-elements +[to-text-link]: /use-timescale/:currentVersion:/hyperfunctions/function-pipelines/#to_text +[to-plotly-link]: /use-timescale/:currentVersion:/hyperfunctions/function-pipelines/#to_plotly diff --git a/use-timescale/query-data/advanced-analytic-queries.md b/use-timescale/query-data/advanced-analytic-queries.md index 1995aeab34..04e8d818ba 100644 --- a/use-timescale/query-data/advanced-analytic-queries.md +++ b/use-timescale/query-data/advanced-analytic-queries.md @@ -78,7 +78,7 @@ SELECT ## Calculate the rate of change -Like [increase](#calculate-the-increase-in-a-value), rate applies to a situation +Like [increase][calculate-the-increase-in-a-value-link], rate applies to a situation with monotonically increasing counters. If your sample interval is variable or you use different sampling intervals between different series, it is helpful to normalize the values to a common time interval to make the calculated values @@ -418,3 +418,4 @@ use its [SkipScan][skipscan] feature. [skipscan]: /use-timescale/:currentVersion:/query-data/skipscan/ [time_bucket]: /api/:currentVersion:/hyperfunctions/time_bucket [toolkit-approx-percentile]: /api/:currentVersion:/hyperfunctions/percentile-approximation/uddsketch/ +[calculate-the-increase-in-a-value-link]: /use-timescale/:currentVersion:/query-data/advanced-analytic-queries/#calculate-the-increase-in-a-value
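The rate-of-change pattern described in the last hunk can be expressed with plain window functions. A minimal sketch with illustrative table and column names, normalizing each delta by the actual sampling interval:

```sql
-- Per-second rate of change of a monotonically increasing counter.
SELECT time,
       device_id,
       (counter - lag(counter) OVER w)
         / extract(epoch FROM (time - lag(time) OVER w)) AS rate_per_second
FROM metrics
WINDOW w AS (PARTITION BY device_id ORDER BY time);
```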