From fd63cae6f5fe8dd7ca4d74031c0274f9bd4fb4fa Mon Sep 17 00:00:00 2001 From: billy-the-fish Date: Fri, 12 Dec 2025 12:12:39 +0100 Subject: [PATCH 1/5] chore: update glossary internal links. --- api/glossary.md | 102 ++++++++++++++++++++++++++---------------------- 1 file changed, 55 insertions(+), 47 deletions(-) diff --git a/api/glossary.md b/api/glossary.md index 68c7274415..9a1868b54a 100644 --- a/api/glossary.md +++ b/api/glossary.md @@ -15,7 +15,7 @@ This glossary defines technical terms, concepts, and terminology used in $COMPAN **ACID**: a set of properties (atomicity, consistency, isolation, durability) that guarantee database transactions are processed reliably. -**ACID compliance**: a set of database properties—Atomicity, Consistency, Isolation, Durability—ensuring reliable and consistent transactions. Inherited from [$PG](#postgresql). +**ACID compliance**: a set of database properties—Atomicity, Consistency, Isolation, Durability—ensuring reliable and consistent transactions. Inherited from [$PG][postgres-link]. **Adaptive query optimization**: dynamic query plan adjustment based on actual execution statistics and data distribution patterns, improving performance over time. @@ -41,7 +41,7 @@ This glossary defines technical terms, concepts, and terminology used in $COMPAN **Background job**: an automated task that runs in the background without user intervention, typically for maintenance operations like compression or data retention. -**Background worker**: a [$PG](#postgresql) process that runs background tasks independently of client sessions. +**Background worker**: a [$PG][postgres-link] process that runs background tasks independently of client sessions. **Batch processing**: handling data in grouped batches rather than as individual real-time events, often used for historical data processing. @@ -49,13 +49,13 @@ This glossary defines technical terms, concepts, and terminology used in $COMPAN **Backup**: a copy of data stored separately from the original data to protect against data loss, corruption, or system failure. -**Bloom filter**: a probabilistic data structure that tests set membership with possible false positives but no false negatives. [$TIMESCALE_DB](#timescaledb) uses blocked bloom filters to speed up point lookups by eliminating [chunks](#chunk) that don't contain queried values. +**Bloom filter**: a probabilistic data structure that tests set membership with possible false positives but no false negatives. [$TIMESCALE_DB][timescaledb-link] uses blocked bloom filters to speed up point lookups by eliminating [chunks][chunk-link] that don't contain queried values. **Buffer pool**: memory area where frequently accessed data pages are cached to reduce disk I/O operations. -**BRIN (Block Range Index)**: a [$PG](#postgresql) index type that stores summaries about ranges of table blocks, useful for large tables with naturally ordered data. +**BRIN (Block Range Index)**: a [$PG][postgres-link] index type that stores summaries about ranges of table blocks, useful for large tables with naturally ordered data. -**Bytea**: a [$PG](#postgresql) data type for storing binary data as a sequence of bytes. +**Bytea**: a [$PG][postgres-link] data type for storing binary data as a sequence of bytes. ## C @@ -67,7 +67,7 @@ This glossary defines technical terms, concepts, and terminology used in $COMPAN -**Chunk**: a horizontal partition of a [$HYPERTABLE](#hypertable) that contains data for a specific time interval and space partition. See [chunks][use-hypertables-chunks]. 
+**Chunk**: a horizontal partition of a [$HYPERTABLE][hypertable-link] that contains data for a specific time interval and space partition. See [chunks][use-hypertables-chunks]. **Chunk interval**: the time period covered by each chunk in a $HYPERTABLE, which affects query performance and storage efficiency. @@ -81,7 +81,7 @@ This glossary defines technical terms, concepts, and terminology used in $COMPAN **Cloud**: computing services delivered over the internet, including servers, storage, databases, networking, software, analytics, and intelligence. -**Cloud deployment**: the use of public, private, or hybrid cloud infrastructure to host [$TIMESCALE_DB](#timescaledb), enabling elastic scalability and managed services. +**Cloud deployment**: the use of public, private, or hybrid cloud infrastructure to host [$TIMESCALE_DB][timescaledb-link], enabling elastic scalability and managed services. **Cloud-native**: an approach to building applications that leverage cloud infrastructure, scalability, and services like Kubernetes. @@ -89,7 +89,7 @@ This glossary defines technical terms, concepts, and terminology used in $COMPAN **Columnar**: a data storage format that stores data column by column rather than row by row, optimizing for analytical queries. -**Columnstore**: [$TIMESCALE_DB](#timescaledb)'s columnar storage engine optimized for analytical workloads and [compression](#compression). +**Columnstore**: [$TIMESCALE_DB][timescaledb-link]'s columnar storage engine optimized for analytical workloads and [compression][compression-link]. @@ -169,13 +169,13 @@ This glossary defines technical terms, concepts, and terminology used in $COMPAN **Exactly-once**: a message is delivered and processed precisely once. There is no loss and no duplicates. -**Explain**: a [$PG](#postgresql) command that shows the execution plan for a query, useful for performance analysis. +**Explain**: a [$PG][postgres-link] command that shows the execution plan for a query, useful for performance analysis. **Event sourcing**: an architectural pattern storing all changes as a sequence of events, naturally fitting time-series database capabilities. **Event-driven architecture**: a design pattern where components react to events such as sensor readings, requiring real-time data pipelines and storage. -**Extension**: a [$PG](#postgresql) add-on that extends the database's functionality beyond the core features. +**Extension**: a [$PG][postgres-link] add-on that extends the database's functionality beyond the core features. ## F @@ -183,7 +183,7 @@ This glossary defines technical terms, concepts, and terminology used in $COMPAN **Failover**: the automatic switching to a backup system, server, or network upon the failure or abnormal termination of the primary system. -**Financial time-series**: high-volume, timestamped datasets like stock market feeds or trade logs, requiring low-latency, scalable databases like [$TIMESCALE_DB](#timescaledb). +**Financial time-series**: high-volume, timestamped datasets like stock market feeds or trade logs, requiring low-latency, scalable databases like [$TIMESCALE_DB][timescaledb-link]. **Foreign key**: a database constraint that establishes a link between data in two tables by referencing the primary key of another table. @@ -191,7 +191,7 @@ This glossary defines technical terms, concepts, and terminology used in $COMPAN -**Free $SERVICE_SHORT**: a free instance of $CLOUD_LONG with limited resources. You can create up to two free $SERVICE_SHORTs under any $PRICING_PLAN. 
When a free $SERVICE_SHORT reaches the resource limit, it converts to the read-only state. You can convert a free $SERVICE_SHORT to a [standard one](#standard-tiger-service) under paid $PRICING_PLANs. +**Free $SERVICE_SHORT**: a free instance of $CLOUD_LONG with limited resources. You can create up to two free $SERVICE_SHORTs under any $PRICING_PLAN. When a free $SERVICE_SHORT reaches the resource limit, it converts to the read-only state. You can convert a free $SERVICE_SHORT to a [standard one][standard-tiger-service-link] under paid $PRICING_PLANs. **FTP (File Transfer Protocol)**: a standard network protocol used for transferring files between a client and server on a computer network. @@ -199,13 +199,13 @@ This glossary defines technical terms, concepts, and terminology used in $COMPAN **Gap filling**: a technique for handling missing data points in time-series by interpolation or other methods, often implemented with hyperfunctions. -**GIN (Generalized Inverted Index)**: a [$PG](#postgresql) index type designed for indexing composite values and supporting fast searches. +**GIN (Generalized Inverted Index)**: a [$PG][postgres-link] index type designed for indexing composite values and supporting fast searches. -**GiST (Generalized Search Tree)**: a [$PG](#postgresql) index type that provides a framework for implementing custom index types. +**GiST (Generalized Search Tree)**: a [$PG][postgres-link] index type that provides a framework for implementing custom index types. **GP-LTTB**: an advanced downsampling algorithm that extends Largest-Triangle-Three-Buckets with Gaussian Process modeling. -**GUC (Grand Unified Configuration)**: [$PG](#postgresql)'s configuration parameter system that controls various aspects of database behavior. +**GUC (Grand Unified Configuration)**: [$PG][postgres-link]'s configuration parameter system that controls various aspects of database behavior. **GUID (Globally Unique Identifier)**: a unique identifier used in software applications, typically represented as a 128-bit value. @@ -231,17 +231,17 @@ This glossary defines technical terms, concepts, and terminology used in $COMPAN **Hot storage**: a tier of data storage for frequently accessed data that provides the fastest access times but at higher cost. -**Hypercore**: [$TIMESCALE_DB](#timescaledb)'s hybrid storage engine that seamlessly combines row and column storage for optimal performance. See [Hypercore][use-hypercore]. +**Hypercore**: [$TIMESCALE_DB][timescaledb-link]'s hybrid storage engine that seamlessly combines row and column storage for optimal performance. See [Hypercore][use-hypercore]. -**Hyperfunction**: an SQL function in [$TIMESCALE_DB](#timescaledb) designed for time-series analysis, statistics, and specialized computations. See [Hyperfunctions][use-hyperfunctions]. +**Hyperfunction**: an SQL function in [$TIMESCALE_DB][timescaledb-link] designed for time-series analysis, statistics, and specialized computations. See [Hyperfunctions][use-hyperfunctions]. **HyperLogLog**: a probabilistic data structure used for estimating the cardinality of large datasets with minimal memory usage. -**Hypershift**: a migration tool and strategy for moving data to [$TIMESCALE_DB](#timescaledb) with minimal downtime. +**Hypershift**: a migration tool and strategy for moving data to [$TIMESCALE_DB][timescaledb-link] with minimal downtime. -**Hypertable**: [$TIMESCALE_DB](#timescaledb)'s core abstraction that automatically partitions time-series data for scalability. See [Hypertables][use-hypertables]. 
+**Hypertable**: [$TIMESCALE_DB][timescaledb-link]'s core abstraction that automatically partitions time-series data for scalability. See [Hypertables][use-hypertables]. ## I @@ -271,7 +271,7 @@ This glossary defines technical terms, concepts, and terminology used in $COMPAN **Job execution**: the process of running scheduled background tasks or automated procedures. -**JIT (Just-In-Time) compilation**: [$PG](#postgresql) feature that compiles frequently executed query parts for improved performance, available in [$TIMESCALE_DB](#timescaledb). +**JIT (Just-In-Time) compilation**: [$PG][postgres-link] feature that compiles frequently executed query parts for improved performance, available in [$TIMESCALE_DB][timescaledb-link]. **Job history**: a record of past job executions, including their status, duration, and any errors encountered. @@ -289,7 +289,7 @@ This glossary defines technical terms, concepts, and terminology used in $COMPAN **Load balancer**: a service distributing traffic across servers or database nodes to optimize resource use and avoid single points of failure. -**Log-Structured Merge (LSM) Tree**: a data structure optimized for write-heavy workloads, though [$TIMESCALE_DB](#timescaledb) primarily uses B-tree indexes for balanced read/write performance. +**Log-Structured Merge (LSM) Tree**: a data structure optimized for write-heavy workloads, though [$TIMESCALE_DB][timescaledb-link] primarily uses B-tree indexes for balanced read/write performance. **LlamaIndex**: a framework for building applications with large language models, providing tools for data ingestion and querying. @@ -297,7 +297,7 @@ This glossary defines technical terms, concepts, and terminology used in $COMPAN **Logical backup**: a backup method that exports data in a human-readable format, allowing for selective restoration. -**Logical replication**: a [$PG](#postgresql) feature that replicates data changes at the logical level rather than the physical level. +**Logical replication**: a [$PG][postgres-link] feature that replicates data changes at the logical level rather than the physical level. **Logging**: the process of recording events, errors, and system activities for monitoring and troubleshooting purposes. @@ -329,7 +329,7 @@ This glossary defines technical terms, concepts, and terminology used in $COMPAN **MQTT (Message Queuing Telemetry Transport)**: a lightweight messaging protocol designed for small sensors and mobile devices. -**MST (Managed Service for TimescaleDB)**: a fully managed [$TIMESCALE_DB](#timescaledb) service that handles infrastructure and maintenance tasks. +**MST (Managed Service for TimescaleDB)**: a fully managed [$TIMESCALE_DB][timescaledb-link] service that handles infrastructure and maintenance tasks. ## N @@ -341,7 +341,7 @@ This glossary defines technical terms, concepts, and terminology used in $COMPAN **Not null**: a database constraint that ensures a column cannot contain empty values. -**Numeric**: a [$PG](#postgresql) data type for storing exact numeric values with user-defined precision. +**Numeric**: a [$PG][postgres-link] data type for storing exact numeric values with user-defined precision. ## O @@ -367,7 +367,7 @@ This glossary defines technical terms, concepts, and terminology used in $COMPAN **Parallel copy**: a technique for copying large amounts of data using multiple concurrent processes to improve performance. 
-**Parallel Query Execution**: a [$PG](#postgresql) feature that uses multiple CPU cores to execute single queries faster, inherited by [$TIMESCALE_DB](#timescaledb). +**Parallel Query Execution**: a [$PG][postgres-link] feature that uses multiple CPU cores to execute single queries faster, inherited by [$TIMESCALE_DB][timescaledb-link]. **Partitioning**: the practice of dividing large tables into smaller, more manageable pieces based on certain criteria. @@ -375,19 +375,19 @@ This glossary defines technical terms, concepts, and terminology used in $COMPAN **Performance**: a measure of how efficiently a system operates, often quantified by metrics like throughput, latency, and resource utilization. -**pg_basebackup**: a [$PG](#postgresql) utility for taking base backups of a running [$PG](#postgresql) cluster. +**pg_basebackup**: a [$PG][postgres-link] utility for taking base backups of a running [$PG][postgres-link] cluster. -**pg_dump**: a [$PG](#postgresql) utility for backing up database objects and data in various formats. +**pg_dump**: a [$PG][postgres-link] utility for backing up database objects and data in various formats. -**pg_restore**: a [$PG](#postgresql) utility for restoring databases from backup files created by `pg_dump`. +**pg_restore**: a [$PG][postgres-link] utility for restoring databases from backup files created by `pg_dump`. -**pgVector**: a [$PG](#postgresql) extension that adds vector similarity search capabilities for AI and machine learning applications. See [pgvector][ai-pgvector]. +**pgVector**: a [$PG][postgres-link] extension that adds vector similarity search capabilities for AI and machine learning applications. See [pgvector][ai-pgvector]. -**pgai on $CLOUD_LONG**: a cloud solution for building search, RAG, and AI agents with [$PG](#postgresql). Enables calling AI embedding and generation models directly from the database using SQL. See [pgai][ai-pgai]. +**pgai on $CLOUD_LONG**: a cloud solution for building search, RAG, and AI agents with [$PG][postgres-link]. Enables calling AI embedding and generation models directly from the database using SQL. See [pgai][ai-pgai]. **pgvectorscale**: a performance enhancement for pgvector featuring StreamingDiskANN indexing, binary quantization compression, and label-based filtering. See [pgvectorscale][ai-pgvectorscale]. -**pgvectorizer**: a [$TIMESCALE_DB](#timescaledb) tool for automatically vectorizing and indexing data for similarity search. +**pgvectorizer**: a [$TIMESCALE_DB][timescaledb-link] tool for automatically vectorizing and indexing data for similarity search. **Physical backup**: a backup method that copies the actual database files at the storage level. @@ -401,11 +401,11 @@ This glossary defines technical terms, concepts, and terminology used in $COMPAN **$PG**: an open-source object-relational database system known for its reliability, robustness, and performance. -**PostGIS**: a [$PG](#postgresql) extension that adds support for geographic objects and spatial queries. +**PostGIS**: a [$PG][postgres-link] extension that adds support for geographic objects and spatial queries. **Primary key**: a database constraint that uniquely identifies each row in a table. -**psql**: an interactive terminal-based front-end to [$PG](#postgresql) that allows users to type queries interactively. +**psql**: an interactive terminal-based front-end to [$PG][postgres-link] that allows users to type queries interactively. 
## Q @@ -435,7 +435,7 @@ This glossary defines technical terms, concepts, and terminology used in $COMPAN **Real-time analytics**: the immediate analysis of incoming data streams, crucial for observability, trading platforms, and IoT monitoring. -**Real**: a [$PG](#postgresql) data type for storing single-precision floating-point numbers. +**Real**: a [$PG][postgres-link] data type for storing single-precision floating-point numbers. **Real-time aggregate**: a continuous aggregate that includes both materialized historical data and real-time calculations on recent data. @@ -481,11 +481,11 @@ This glossary defines technical terms, concepts, and terminology used in $COMPAN **Service discovery**: mechanisms allowing applications to dynamically locate services like database endpoints, often used in distributed environments. -**Segmentwise recompression**: a [$TIMESCALE_DB](#timescaledb) [compression](#compression) technique that recompresses data segments to improve [compression](#compression) ratios. +**Segmentwise recompression**: a [$TIMESCALE_DB][timescaledb-link] [compression][compression-link] technique that recompresses data segments to improve [compression][compression-link] ratios. **Serializable**: the highest isolation level that ensures transactions appear to run serially even when executed concurrently. -**Service**: see [$SERVICE_LONG](#tiger-service). +**Service**: see [$SERVICE_LONG][tiger-service-link]. **Sharding**: horizontal partitioning of data across multiple database instances, distributing load and enabling linear scalability. @@ -507,7 +507,7 @@ This glossary defines technical terms, concepts, and terminology used in $COMPAN **Snapshot**: a point-in-time copy of data that can be used for backup and recovery purposes. -**SP-GiST (Space-Partitioned Generalized Search Tree)**: a [$PG](#postgresql) index type for data structures that naturally partition search spaces. +**SP-GiST (Space-Partitioned Generalized Search Tree)**: a [$PG][postgres-link] index type for data structures that naturally partition search spaces. **Storage optimization**: techniques for reducing storage costs and improving performance through compression, tiering, and efficient data organization. @@ -521,9 +521,9 @@ This glossary defines technical terms, concepts, and terminology used in $COMPAN -**Standard $SERVICE_SHORT**: a regular [$SERVICE_LONG](#tiger-service) that includes the resources and features according to the pricing plan. You can create standard $SERVICE_SHORTs under any of the paid plans. +**Standard $SERVICE_SHORT**: a regular [$SERVICE_LONG][tiger-service-link] that includes the resources and features according to the pricing plan. You can create standard $SERVICE_SHORTs under any of the paid plans. -**Streaming replication**: a [$PG](#postgresql) replication method that continuously sends write-ahead log records to standby servers. +**Streaming replication**: a [$PG][postgres-link] replication method that continuously sends write-ahead log records to standby servers. **Synthetic monitoring**: simulated transactions or probes used to test system health, generating time-series metrics for performance analysis. @@ -531,7 +531,7 @@ This glossary defines technical terms, concepts, and terminology used in $COMPAN **Table**: a database object that stores data in rows and columns, similar to a spreadsheet. -**Tablespace**: a [$PG](#postgresql) storage structure that defines where database objects are physically stored on disk. 
+**Tablespace**: a [$PG][postgres-link] storage structure that defines where database objects are physically stored on disk. **TCP (Transmission Control Protocol)**: a connection-oriented protocol that ensures reliable data transmission between applications. @@ -539,19 +539,19 @@ This glossary defines technical terms, concepts, and terminology used in $COMPAN **Telemetry**: the collection of real-time data from systems or devices for monitoring and analysis. -**Text**: a [$PG](#postgresql) data type for storing variable-length character strings. +**Text**: a [$PG][postgres-link] data type for storing variable-length character strings. **Throughput**: a measure of system performance indicating the amount of work performed or data processed per unit of time. **Tiered storage**: a storage strategy that automatically moves data between different storage classes based on access patterns and age. -**$CLOUD_LONG**: $COMPANY's managed cloud platform that provides [$TIMESCALE_DB](#timescaledb) as a fully managed solution with additional features. +**$CLOUD_LONG**: $COMPANY's managed cloud platform that provides [$TIMESCALE_DB][timescaledb-link] as a fully managed solution with additional features. **Tiger Lake**: $COMPANY's service for integrating operational databases with data lake architectures. -**$SERVICE_LONG**: an instance of optimized [$PG](#postgresql) extended with database engine innovations such as [$TIMESCALE_DB](#timescaledb), in a cloud infrastructure that delivers speed without sacrifice. You can create [free $SERVICE_SHORTs](#free-tiger-service) and [standard $SERVICE_SHORTs](#standard-tiger-service). +**$SERVICE_LONG**: an instance of optimized [$PG][postgres-link] extended with database engine innovations such as [$TIMESCALE_DB][timescaledb-link], in a cloud infrastructure that delivers speed without sacrifice. You can create [free $SERVICE_SHORTs][free-tiger-service-link] and [standard $SERVICE_SHORTs][standard-tiger-service-link]. **Time series**: data points indexed and ordered by time, typically representing how values change over time. @@ -563,11 +563,11 @@ This glossary defines technical terms, concepts, and terminology used in $COMPAN -**$TIMESCALE_DB**: an open-source [$PG](#postgresql) extension for real-time analytics that provides scalability and performance optimizations. +**$TIMESCALE_DB**: an open-source [$PG][postgres-link] extension for real-time analytics that provides scalability and performance optimizations. **Timestamp**: a data type that stores date and time information without timezone data. -**Timestamptz**: a [$PG](#postgresql) data type that stores timestamp with timezone information. +**Timestamptz**: a [$PG][postgres-link] data type that stores timestamp with timezone information. **TLS (Transport Layer Security)**: a cryptographic protocol that provides security for communication over networks. @@ -595,7 +595,7 @@ This glossary defines technical terms, concepts, and terminology used in $COMPAN ## V -**Vacuum**: a [$PG](#postgresql) maintenance operation that reclaims storage and updates database statistics. +**Vacuum**: a [$PG][postgres-link] maintenance operation that reclaims storage and updates database statistics. **Varchar**: a variable-length character data type that can store strings up to a specified maximum length. 
@@ -613,7 +613,7 @@ This glossary defines technical terms, concepts, and terminology used in $COMPAN ## W -**WAL (Write-Ahead Log)**: [$PG](#postgresql)'s method for ensuring data integrity by writing changes to a log before applying them to data files. +**WAL (Write-Ahead Log)**: [$PG][postgres-link]'s method for ensuring data integrity by writing changes to a log before applying them to data files. **Warm storage**: a storage tier that balances access speed and cost, suitable for data accessed occasionally. @@ -658,3 +658,11 @@ This glossary defines technical terms, concepts, and terminology used in $COMPAN [hyperfunctions-asap-smooth]: /use-timescale/:currentVersion:/hyperfunctions/gapfilling-interpolation/ [hyperfunctions-candlestick-agg]: /use-timescale/:currentVersion:/hyperfunctions/stats-aggs/ [hyperfunctions-stats-agg]: /use-timescale/:currentVersion:/hyperfunctions/stats-aggs/ +[postgres-link]: /api/:currentVersion:/glossary/#postgresql +[timescaledb-link]: /api/:currentVersion:/glossary/#timescaledb +[chunk-link]: /api/:currentVersion:/glossary/#chunk +[hypertable-link]: /api/:currentVersion:/glossary/#hypertable +[compression-link]: /api/:currentVersion:/glossary/#compression +[tiger-service-link]: /api/:currentVersion:/glossary/#tiger-service +[free-tiger-service-link]: /api/:currentVersion:/glossary/#free-tiger-service +[standard-tiger-service-link]: /api/:currentVersion:/glossary/#standard-tiger-service From b09d392cc708f4b721db15a4234cfeee2aac094e Mon Sep 17 00:00:00 2001 From: billy-the-fish Date: Fri, 12 Dec 2025 16:11:50 +0100 Subject: [PATCH 2/5] chore: update (# links --- .github/styles/templates/procedure.md | 7 ++++-- CONTRIBUTING.md | 6 +++-- _partials/_data_model_metadata.md | 4 +++- _partials/_livesync-terminal.md | 4 +++- api/administration.md | 8 +++++-- api/glossary.md | 2 +- integrations/telegraf.md | 7 ++++-- mst/connection-pools.md | 6 +++-- mst/extensions.md | 4 +++- mst/ingest-data.md | 13 ++++++---- self-hosted/install/installation-docker.md | 7 +++--- self-hosted/install/installation-linux.md | 10 +++++--- self-hosted/install/installation-macos.md | 10 +++++--- self-hosted/install/installation-source.md | 7 ++++-- self-hosted/migration/schema-then-data.md | 24 ++++++++++++------- .../multinode-timescaledb/multinode-ha.md | 4 +++- .../analyze-blockchain-query.md | 18 +++++++++----- .../beginner-blockchain-query.md | 9 ++++--- tutorials/energy-data/query-energy.md | 9 ++++--- tutorials/nyc-taxi-cab/query-nyc.md | 16 +++++++++---- tutorials/nyc-taxi-geospatial/plot-nyc.md | 6 +++-- .../create-a-continuous-aggregate.md | 5 ++-- .../data-tiering/querying-tiered-data.md | 4 +++- .../hyperfunctions/function-pipelines.md | 9 ++++--- .../query-data/advanced-analytic-queries.md | 3 ++- 25 files changed, 138 insertions(+), 64 deletions(-) diff --git a/.github/styles/templates/procedure.md b/.github/styles/templates/procedure.md index 630978383c..afa2c56f52 100644 --- a/.github/styles/templates/procedure.md +++ b/.github/styles/templates/procedure.md @@ -26,8 +26,8 @@ If necessary, a paragraph or two explaining more about how things work. This section shows you how to: -* [Verb \](#verb-what-the-user-will-do) -* [Verb \](#verb-what-the-user-will-do-if-the-procedure-is-cut-into-logical-sections) +* [Verb \][verb-what-the-user-will-do-link] +* [Verb \][verb-what-the-user-will-do-if-the-procedure-is-cut-into-logical-sections-link] ## Prerequisites @@ -104,3 +104,6 @@ You have \. 
[secure-vpc-aws]: /use-timescale/:currentVersion:/vpc/ [install-linux]: /self-hosted/:currentVersion:/install/installation-linux/ [gdsg]: https://developers.google.com/style/highlights + +[verb-what-the-user-will-do-link]: /.github/:currentVersion:/styles/templates/procedure/#verb-what-the-user-will-do +[verb-what-the-user-will-do-if-the-procedure-is-cut-into-logical-sections-link]: /.github/:currentVersion:/styles/templates/procedure/#verb-what-the-user-will-do-if-the-procedure-is-cut-into-logical-sections diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 4df06986e1..e7b12528b9 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -46,7 +46,7 @@ Each major doc section has a dedicated directory with `.md` files inside, repres Beneath the front matter, describe the error and its solution in regular Markdown. You can also use any other components allowed within the docs site. The entry shows up on the troubleshooting pages for its associated products and topics. If the page doesn't already exist, add an entry for it in the page - index, setting `type` to `placeholder`. See [Navigation tree](#navigation-tree). + index, setting `type` to `placeholder`. See [Navigation tree][navigation-tree-link]. ## Edit the navigation hierarchy @@ -161,4 +161,6 @@ The previous documentation source is in the deprecated repository called [docs.t [use-navigation]: use-timescale/page-index/page-index.js [formatting]: _partials/_formatting_examples.md [variables]: https://www.tigerdata.com/docs/variables-for-contributors/ -[readme]: README.md \ No newline at end of file +[readme]: README.md + +[navigation-tree-link]: /CONTRIBUTING/:currentVersion:/#navigation-tree diff --git a/_partials/_data_model_metadata.md b/_partials/_data_model_metadata.md index d049666dff..a62f7cd5dc 100644 --- a/_partials/_data_model_metadata.md +++ b/_partials/_data_model_metadata.md @@ -2,6 +2,8 @@ You might also notice that the metadata fields are missing. Because this is a relational database, metadata can be stored in a secondary table and `JOIN`ed at -query time. Learn more about [$TIMESCALE_DB's support for `JOIN`s](#joins-with-relational-data). +query time. Learn more about [$TIMESCALE_DB's support for `JOIN`s][joins-with-relational-data-link]. + +[joins-with-relational-data-link]: /_partials/:currentVersion:/_data_model_metadata/#joins-with-relational-data diff --git a/_partials/_livesync-terminal.md b/_partials/_livesync-terminal.md index a0aaf95edf..199b7a2eae 100644 --- a/_partials/_livesync-terminal.md +++ b/_partials/_livesync-terminal.md @@ -152,7 +152,7 @@ specifies the tables to synchronize. ALTER PUBLICATION SET(publish_via_partition_root=true); ``` - To convert partitioned table to hypertable, follow [Convert partitions and tables with time-series data into hypertables](#convert-partitions-and-tables-with-time-series-data-into-hypertables). + To convert partitioned table to hypertable, follow [Convert partitions and tables with time-series data into hypertables][convert-partitions-and-tables-with-time-series-data-into-hypertables-link]. 1. 
**Stop syncing a table in the `PUBLICATION`, use `DROP TABLE`** @@ -342,3 +342,5 @@ EOF [caggs]: /use-timescale/:currentVersion:/continuous-aggregates/about-continuous-aggregates/ [join-livesync-on-slack]: https://app.slack.com/client/T4GT3N2JK/C086NU9EZ88 [install-psql]: /integrations/:currentVersion:/psql/ + +[convert-partitions-and-tables-with-time-series-data-into-hypertables-link]: /migrate/:currentVersion:/livesync-for-postgresql/#convert-partitions-and-tables-with-time-series-data-into-hypertables diff --git a/api/administration.md b/api/administration.md index 7ed53c6f98..2c7503d6fe 100644 --- a/api/administration.md +++ b/api/administration.md @@ -54,12 +54,14 @@ SELECT timescaledb_post_restore(); Perform the required operations so that you can restore the database using `pg_restore`. Specifically, this sets the `timescaledb.restoring` GUC to `on` and stops any background workers which could have been performing tasks. -The background workers are stopped until the [timescaledb_post_restore()](#timescaledb_post_restore) function is run, after the restore operation is complete. +The background workers are stopped until the [timescaledb_post_restore()][timescaledb_post_restore-link] function is run, after the restore operation is complete. For more information, see [Migrate using pg_dump and pg_restore]. -After using `timescaledb_pre_restore()`, you need to run [`timescaledb_post_restore()`](#timescaledb_post_restore) before you can use the database normally. + +After using `timescaledb_pre_restore()`, you need to run [`timescaledb_post_restore()`][timescaledb_post_restore-link] before you can use the database normally. + ### Sample usage @@ -72,3 +74,5 @@ SELECT timescaledb_pre_restore(); [Migrate using pg_dump and pg_restore]: /migrate/:currentVersion:/pg-dump-and-restore/ [telemetry]: /self-hosted/:currentVersion:/configuration/telemetry + +[timescaledb_post_restore-link]: /api/:currentVersion:/administration/#timescaledb-post-restore diff --git a/api/glossary.md b/api/glossary.md index 9a1868b54a..817ef61ede 100644 --- a/api/glossary.md +++ b/api/glossary.md @@ -551,7 +551,7 @@ This glossary defines technical terms, concepts, and terminology used in $COMPAN -**$SERVICE_LONG**: an instance of optimized [$PG][postgres-link] extended with database engine innovations such as [$TIMESCALE_DB][timescaledb-link], in a cloud infrastructure that delivers speed without sacrifice. You can create [free $SERVICE_SHORTs][free-tiger-service-link] and [standard $SERVICE_SHORTs][standard-tiger-service-link]. +**$SERVICE_LONG**: an optimized [$PG][postgres-link] instance extended with database engine innovations such as [$TIMESCALE_DB][timescaledb-link], in a cloud infrastructure that delivers speed without sacrifice. You can create [free $SERVICE_SHORTs][free-tiger-service-link] and [standard $SERVICE_SHORTs][standard-tiger-service-link]. **Time series**: data points indexed and ordered by time, typically representing how values change over time. diff --git a/integrations/telegraf.md b/integrations/telegraf.md index a6cd2ba979..76fc77c81f 100644 --- a/integrations/telegraf.md +++ b/integrations/telegraf.md @@ -18,8 +18,8 @@ and output of data. To view metrics gathered by Telegraf and stored in a [hypertable][about-hypertables] in a $SERVICE_LONG. 
-- [Link Telegraf to your $SERVICE_LONG](#link-telegraf-to-your-service): create a Telegraf configuration -- [View the metrics collected by Telegraf](#view-the-metrics-collected-by-telegraf): connect to your $SERVICE_SHORT and +- [Link Telegraf to your $SERVICE_LONG][link-telegraf-to-your-service-link]: create a Telegraf configuration +- [View the metrics collected by Telegraf][view-the-metrics-collected-by-telegraf-link]: connect to your $SERVICE_SHORT and query the metrics table ## Prerequisites @@ -156,3 +156,6 @@ see the [PostgreQL output plugin][output-plugin]. [connect-timescaledb]: /integrations/:currentVersion:/find-connection-details/ [grafana]: /integrations/:currentVersion:/grafana/ [about-hypertables]: /use-timescale/:currentVersion:/hypertables/ + +[link-telegraf-to-your-service-link]: /integrations/:currentVersion:/telegraf/#link-telegraf-to-your-service-short +[view-the-metrics-collected-by-telegraf-link]: /integrations/:currentVersion:/telegraf/#view-the-metrics-collected-by-telegraf diff --git a/mst/connection-pools.md b/mst/connection-pools.md index 7f813a4522..da1ec29301 100644 --- a/mst/connection-pools.md +++ b/mst/connection-pools.md @@ -102,7 +102,7 @@ connection pooling to. * In the `Database` field, select a database to connect to. Each pool can only connect to one database. * In the `Pool Mode` field, select which - [pool mode](#connection-pooling-modes) to use. + [pool mode][connection-pooling-modes-link] to use. * In the `Pool Size` field, select the maximum number of server connections this pool can use at any one time. * In the `Username` field, select which database username to connect to @@ -121,4 +121,6 @@ to use both pooled and un-pooled connections at the same time. -[mst-login]:https://portal.managed.timescale.com/login \ No newline at end of file +[mst-login]:https://portal.managed.timescale.com/login + +[connection-pooling-modes-link]: /mst/:currentVersion:/connection-pools/#connection-pooling-modes diff --git a/mst/extensions.md b/mst/extensions.md index b8c0057615..131645501b 100644 --- a/mst/extensions.md +++ b/mst/extensions.md @@ -8,7 +8,7 @@ tags: [extensions] # Supported $PG extensions $MST_LONG supports many $PG extensions. See -[available extensions](#available-extensions) for a full list. +[available extensions][available-extensions-link] for a full list. ## Add an extension @@ -134,3 +134,5 @@ language extension is `plpythonu`. You can contact Support directly from $MST_LONG. Click the life-preserver icon in the upper-right corner of your dashboard. + +[available-extensions-link]: /mst/:currentVersion:/extensions/#available-extensions diff --git a/mst/ingest-data.md b/mst/ingest-data.md index 9c1961ef8f..f1afbb2f26 100644 --- a/mst/ingest-data.md +++ b/mst/ingest-data.md @@ -10,12 +10,12 @@ tags: [JDB, ODBC, client driver, Kafka, csv] There are several different ways of ingesting your data into $MST_LONG. 
This section contains instructions to: -* Bulk upload [from a `.csv` file](#bulk-upload-from-csv-files) +* Bulk upload [from a `.csv` file][bulk-upload-from-csv-files-link] * Insert data - [directly using a client driver](#insert-data-directly-using-a-client-driver), + [directly using a client driver][insert-data-directly-using-a-client-driver-link], such as JDBC, ODBC, or Node.js * Insert data - [directly using a message queue](#insert-data-directly-using-a-message-queue), + [directly using a message queue][insert-data-directly-using-a-message-queue-link], such as Kafka Before you begin, make sure you have @@ -86,7 +86,7 @@ import the data. Before you begin, make sure you have -[prepared your new database](#procedure-preparing-your-new-database). +[prepared your new database][procedure-preparing-your-new-database-link]. @@ -159,3 +159,8 @@ See [these instructions][gh-kafkaconnector] for using the Kafka connector. [postgres-odbc]: https://odbc.postgresql.org/ [create-managed-service]: /mst/:currentVersion:/installation-mst/ [mst-login]:https://portal.managed.timescale.com/login + +[bulk-upload-from-csv-files-link]: /mst/:currentVersion:/ingest-data/#bulk-upload-from-csv-files +[insert-data-directly-using-a-client-driver-link]: /mst/:currentVersion:/ingest-data/#insert-data-directly-using-a-client-driver +[insert-data-directly-using-a-message-queue-link]: /mst/:currentVersion:/ingest-data/#insert-data-directly-using-a-message-queue +[procedure-preparing-your-new-database-link]: /mst/:currentVersion:/ingest-data/#preparing-your-new-database diff --git a/self-hosted/install/installation-docker.md b/self-hosted/install/installation-docker.md index 6e2143e032..9050e71b26 100644 --- a/self-hosted/install/installation-docker.md +++ b/self-hosted/install/installation-docker.md @@ -17,7 +17,7 @@ time series and demanding workloads that ingest and query high volumes of data. instance on any local system from a pre-built Docker container. This section shows you how to -[Install and configure $TIMESCALE_DB on $PG](#install-and-configure-timescaledb-on-postgresql). +[Install and configure $TIMESCALE_DB on $PG][install-and-configure-timescaledb-on-postgresql-link]. < TestingEnv/> @@ -32,7 +32,7 @@ To run, and connect to a $PG installation on Docker, you need to install: ## Install and configure $TIMESCALE_DB on $PG This section shows you how to install the latest version of $PG and -$TIMESCALE_DB on a [supported platform](#supported-platforms) using containers supplied by $COMPANY. +$TIMESCALE_DB using containers supplied by $COMPANY. @@ -44,5 +44,6 @@ And that is it! You have $TIMESCALE_DB running on a database on a self-hosted in [docker-install]: https://docs.docker.com/get-started/get-docker/ -[install-psql]: https://www.timescale.com/blog/how-to-install-psql-on-mac-ubuntu-debian-windows/ +[install-psql]: /integrations/:currentVersion:/psql/ +[install-and-configure-timescaledb-on-postgresql-link]: /self-hosted/:currentVersion:/install/installation-docker/#install-and-configure-timescale-db-on-pg diff --git a/self-hosted/install/installation-linux.md b/self-hosted/install/installation-linux.md index 840dd36588..78e0a4a66c 100644 --- a/self-hosted/install/installation-linux.md +++ b/self-hosted/install/installation-linux.md @@ -24,9 +24,9 @@ time series and demanding workloads that ingest and query high volumes of data. 
This section shows you how to: -* [Install and configure $TIMESCALE_DB on $PG](#install-and-configure-timescaledb-on-postgresql) - set up +* [Install and configure $TIMESCALE_DB on $PG][install-and-configure-timescaledb-on-postgresql-link] - set up a self-hosted $PG instance to efficiently run TimescaleDB. -* [Add the $TIMESCALE_DB extension to your database](#add-the-timescaledb-extension-to-your-database) - enable $TIMESCALE_DB +* [Add the $TIMESCALE_DB extension to your database][add-the-timescaledb-extension-to-your-database-link] - enable $TIMESCALE_DB features and performance improvements on a database. @@ -35,7 +35,7 @@ This section shows you how to: ## Install and configure $TIMESCALE_DB on $PG This section shows you how to install the latest version of $PG and -$TIMESCALE_DB on a [supported platform](#supported-platforms) using the packages supplied by $COMPANY. +$TIMESCALE_DB on a [supported platform][supported-platforms-link] using the packages supplied by $COMPANY. @@ -113,3 +113,7 @@ You can deploy $TIMESCALE_DB on the following systems: [install-from-source]: /self-hosted/:currentVersion:/install/installation-source/ [supported-platforms-self-hosted]: /about/:currentVersion:/supported-platforms/#supported-systems + +[add-the-timescaledb-extension-to-your-database-link]: /self-hosted/:currentVersion:/install/installation-linux/#add-the-timescale-db-extension-to-your-database +[install-and-configure-timescaledb-on-postgresql-link]: /self-hosted/:currentVersion:/install/installation-linux/#install-and-configure-timescale-db-on-pg +[supported-platforms-link]: /self-hosted/:currentVersion:/install/installation-linux/#supported-platforms diff --git a/self-hosted/install/installation-macos.md b/self-hosted/install/installation-macos.md index 6461fd01b3..886f79d0a3 100644 --- a/self-hosted/install/installation-macos.md +++ b/self-hosted/install/installation-macos.md @@ -20,9 +20,9 @@ macOS device. This section shows you how to: -* [Install and configure $TIMESCALE_DB on $PG](#install-and-configure-timescaledb-on-postgresql) - set up +* [Install and configure $TIMESCALE_DB on $PG][install-and-configure-timescaledb-on-postgresql-link] - set up a self-hosted $PG instance to efficiently run TimescaleDB. -* [Add the $TIMESCALE_DB extension to your database](#add-the-timescaledb-extension-to-your-database) - enable $TIMESCALE_DB +* [Add the $TIMESCALE_DB extension to your database][add-the-timescaledb-extension-to-your-database-link] - enable $TIMESCALE_DB features and performance improvements on a database. < TestingEnv /> @@ -46,7 +46,7 @@ To keep your current $PG installation, [Install from source][install-from-source ## Install and configure $TIMESCALE_DB on $PG This section shows you how to install the latest version of $PG and -$TIMESCALE_DB on a [supported platform](#supported-platforms) using the packages supplied by $COMPANY. +$TIMESCALE_DB on a [supported platform][supported-platforms-link] using the packages supplied by $COMPANY. @@ -90,3 +90,7 @@ For the latest functionality, install MacOS 14 Sonoma. 
[macports]: https://guide.macports.org/#installing.macports [install-from-source]: /self-hosted/:currentVersion:/install/installation-source/ [install-postgresql]: https://www.postgresql.org/download/macosx/ + +[add-the-timescaledb-extension-to-your-database-link]: /self-hosted/:currentVersion:/install/installation-macos/#add-the-timescale-db-extension-to-your-database +[install-and-configure-timescaledb-on-postgresql-link]: /self-hosted/:currentVersion:/install/installation-macos/#install-and-configure-timescale-db-on-pg +[supported-platforms-link]: /self-hosted/:currentVersion:/install/installation-macos/#supported-platforms diff --git a/self-hosted/install/installation-source.md b/self-hosted/install/installation-source.md index 7bba922b0c..ea7dd02b29 100644 --- a/self-hosted/install/installation-source.md +++ b/self-hosted/install/installation-source.md @@ -18,9 +18,9 @@ instance on any local system, from source. This section shows you how to: -* [Install and configure $TIMESCALE_DB on $PG](#install-and-configure-timescaledb-on-postgres) - set up +* [Install and configure $TIMESCALE_DB on $PG][install-and-configure-timescaledb-on-postgres-link] - set up a self-hosted $PG instance to efficiently run TimescaleDB1. -* [Add the $TIMESCALE_DB extension to your database](#add-the-timescaledb-extension-to-your-database) - enable TimescaleDB features and +* [Add the $TIMESCALE_DB extension to your database][add-the-timescaledb-extension-to-your-database-link] - enable TimescaleDB features and performance improvements on a database. < TestingEnv/> @@ -79,3 +79,6 @@ And that is it! You have TimescaleDB running on a database on a self-hosted inst [cmake-download]: https://cmake.org/download/ [compatibility-matrix]: /self-hosted/:currentVersion:/upgrades/upgrade-pg/#plan-your-upgrade-path [postgres-breaking-change]: https://www.postgresql.org/about/news/postgresql-172-166-1510-1415-1318-and-1222-released-2965/ + +[add-the-timescaledb-extension-to-your-database-link]: /self-hosted/:currentVersion:/install/installation-source/#add-the-timescale-db-extension-to-your-database +[install-and-configure-timescaledb-on-postgres-link]: /self-hosted/:currentVersion:/install/installation-source/#install-and-configure-timescale-db-on-pg diff --git a/self-hosted/migration/schema-then-data.md b/self-hosted/migration/schema-then-data.md index 677bed97a5..33483cb8db 100644 --- a/self-hosted/migration/schema-then-data.md +++ b/self-hosted/migration/schema-then-data.md @@ -38,14 +38,14 @@ database at once. 
For more information, see the section on The procedure to migrate your database requires these steps: -* [Migrate schema pre-data](#migrate-schema-pre-data) -* [Restore hypertables in Timescale](#restore-hypertables-in-timescale) -* [Copy data from the source database](#copy-data-from-the-source-database) -* [Restore data into Timescale](#restore-data-into-timescale) -* [Migrate schema post-data](#migrate-schema-post-data) -* [Recreate continuous aggregates](#recreate-continuous-aggregates) (optional) -* [Recreate policies](#recreate-policies) (optional) -* [Update table statistics](#update-table-statistics) +* [Migrate schema pre-data][migrate-schema-pre-data-link] +* [Restore hypertables in Timescale][restore-hypertables-in-timescale-link] +* [Copy data from the source database][copy-data-from-the-source-database-link] +* [Restore data into Timescale][restore-data-into-timescale-link] +* [Migrate schema post-data][migrate-schema-post-data-link] +* [Recreate continuous aggregates][recreate-continuous-aggregates-link] (optional) +* [Recreate policies][recreate-policies-link] (optional) +* [Update table statistics][update-table-statistics-link] @@ -216,3 +216,11 @@ the [compression section](https://www.tigerdata.com/docs/use-timescale/latest/co [upgrading-postgresql-self-hosted]: /self-hosted/:currentVersion:/upgrades/upgrade-pg/ [upgrading-timescaledb]: /self-hosted/:currentVersion:/upgrades/major-upgrade/ [migration]: /migrate/:currentVersion:/ +[migrate-schema-pre-data-link]: /self-hosted/:currentVersion:/migration/schema-then-data/#migrate-schema-pre-data +[copy-data-from-the-source-database-link]: /self-hosted/:currentVersion:/migration/schema-then-data/#copy-data-from-the-source-database +[restore-data-into-timescale-link]: /self-hosted/:currentVersion:/migration/schema-then-data/#restore-data-into-timescale +[restore-hypertables-in-timescale-link]: /self-hosted/:currentVersion:/migration/schema-then-data/#restore-hypertables-in-your-self-long-instance +[migrate-schema-post-data-link]: /self-hosted/:currentVersion:/migration/schema-then-data/#migrate-schema-post-data +[recreate-continuous-aggregates-link]: /self-hosted/:currentVersion:/migration/schema-then-data/#recreate-continuous-aggregates +[recreate-policies-link]: /self-hosted/:currentVersion:/migration/schema-then-data/#recreate-policies +[update-table-statistics-link]: /self-hosted/:currentVersion:/migration/schema-then-data/#update-table-statistics diff --git a/self-hosted/multinode-timescaledb/multinode-ha.md b/self-hosted/multinode-timescaledb/multinode-ha.md index 19a29691a9..459f6a30ac 100644 --- a/self-hosted/multinode-timescaledb/multinode-ha.md +++ b/self-hosted/multinode-timescaledb/multinode-ha.md @@ -105,7 +105,7 @@ replication factor on an existing distributed hypertable. Note, however, that only new chunks are replicated according to the updated replication factor. Existing chunks need to be re-replicated by copying those chunks to new data nodes (see the [node -failures section](#node-failures) below). +failures section][node-failures-link] below). When native replication is enabled, the replication happens whenever you write data to the table. 
On every `INSERT` and `COPY` call, each @@ -222,3 +222,5 @@ CALL timescaledb_experimental.cleanup_copy_chunk_operation('ts_copy_1_31'); [single-ha]: /self-hosted/:currentVersion:/replication-and-ha/ [alter_data_node]: /api/:currentVersion:/distributed-hypertables/alter_data_node/ [copy_chunk]:/api/:currentVersion:/distributed-hypertables/copy_chunk_experimental + +[node-failures-link]: /self-hosted/:currentVersion:/multinode-timescaledb/multinode-ha/#node-failures diff --git a/tutorials/blockchain-analyze/analyze-blockchain-query.md b/tutorials/blockchain-analyze/analyze-blockchain-query.md index c5f5704fd8..6dc2dda44a 100644 --- a/tutorials/blockchain-analyze/analyze-blockchain-query.md +++ b/tutorials/blockchain-analyze/analyze-blockchain-query.md @@ -16,12 +16,12 @@ queries that are not possible in standard $PG. In this section, you learn how to write queries that answer these questions: -* [Is there any connection between the number of transactions and the transaction fees?](#is-there-any-connection-between-the-number-of-transactions-and-the-transaction-fees) -* [Does the transaction volume affect the BTC-USD rate?](#does-the-transaction-volume-affect-the-btc-usd-rate) -* [Do more transactions in a block mean the block is more expensive to mine?](#do-more-transactions-in-a-block-mean-the-block-is-more-expensive-to-mine) -* [What percentage of the average miner's revenue comes from fees compared to block rewards?](#what-percentage-of-the-average-miners-revenue-comes-from-fees-compared-to-block-rewards) -* [How does block weight affect miner fees?](#how-does-block-weight-affect-miner-fees) -* [What's the average miner revenue per block?](#whats-the-average-miner-revenue-per-block) +* [Is there any connection between the number of transactions and the transaction fees?][is-there-any-connection-between-the-number-of-transactions-and-the-transaction-fees-link] +* [Does the transaction volume affect the BTC-USD rate?][does-the-transaction-volume-affect-the-btc-usd-rate-link] +* [Do more transactions in a block mean the block is more expensive to mine?][do-more-transactions-in-a-block-mean-the-block-is-more-expensive-to-mine-link] +* [What percentage of the average miner's revenue comes from fees compared to block rewards?][what-percentage-of-the-average-miners-revenue-comes-from-fees-compared-to-block-rewards-link] +* [How does block weight affect miner fees?][how-does-block-weight-affect-miner-fees-link] +* [What's the average miner revenue per block?][whats-the-average-miner-revenue-per-block-link] ## Create continuous aggregates @@ -547,3 +547,9 @@ increase the time range. 
[docs-cagg]: /use-timescale/:currentVersion:/continuous-aggregates/ [about-hyperfunctions]: https://www.tigerdata.com/docs/use-timescale/latest/hyperfunctions/about-hyperfunctions/ +[is-there-any-connection-between-the-number-of-transactions-and-the-transaction-fees-link]: /tutorials/:currentVersion:/blockchain-analyze/analyze-blockchain-query/#is-there-any-connection-between-the-number-of-transactions-and-the-transaction-fees +[does-the-transaction-volume-affect-the-btc-usd-rate-link]: /tutorials/:currentVersion:/blockchain-analyze/analyze-blockchain-query/#does-the-transaction-volume-affect-the-btc-usd-rate +[do-more-transactions-in-a-block-mean-the-block-is-more-expensive-to-mine-link]: /tutorials/:currentVersion:/blockchain-analyze/analyze-blockchain-query/#do-more-transactions-in-a-block-mean-the-block-is-more-expensive-to-mine +[what-percentage-of-the-average-miners-revenue-comes-from-fees-compared-to-block-rewards-link]: /tutorials/:currentVersion:/blockchain-analyze/analyze-blockchain-query/#what-percentage-of-the-average-miners-revenue-comes-from-fees-compared-to-block-rewards +[how-does-block-weight-affect-miner-fees-link]: /tutorials/:currentVersion:/blockchain-analyze/analyze-blockchain-query/#how-does-block-weight-affect-miner-fees +[whats-the-average-miner-revenue-per-block-link]: /tutorials/:currentVersion:/blockchain-analyze/analyze-blockchain-query/#whats-the-average-miner-revenue-per-block diff --git a/tutorials/blockchain-query/beginner-blockchain-query.md b/tutorials/blockchain-query/beginner-blockchain-query.md index eb3ae6a919..12772387f4 100644 --- a/tutorials/blockchain-query/beginner-blockchain-query.md +++ b/tutorials/blockchain-query/beginner-blockchain-query.md @@ -13,9 +13,9 @@ When you have your dataset loaded, you can start constructing some queries to discover what your data tells you. In this section, you learn how to write queries that answer these questions: -* [What are the five most recent coinbase transactions?](#what-are-the-five-most-recent-coinbase-transactions) -* [What are the five most recent transactions?](#what-are-the-five-most-recent-transactions) -* [What are the five most recent blocks?](#what-are-the-five-most-recent-blocks?) +* [What are the five most recent coinbase transactions?][what-are-the-five-most-recent-coinbase-transactions-link] +* [What are the five most recent transactions?][what-are-the-five-most-recent-transactions-link] +* [What are the five most recent blocks?][what-are-the-five-most-recent-blocks-link] ## What are the five most recent coinbase transactions? @@ -144,3 +144,6 @@ US Dollars. [coinbase-def]: https://www.pcmag.com/encyclopedia/term/coinbase-transaction +[what-are-the-five-most-recent-coinbase-transactions-link]: /tutorials/:currentVersion:/blockchain-query/beginner-blockchain-query/#what-are-the-five-most-recent-coinbase-transactions +[what-are-the-five-most-recent-transactions-link]: /tutorials/:currentVersion:/blockchain-query/beginner-blockchain-query/#what-are-the-five-most-recent-transactions +[what-are-the-five-most-recent-blocks-link]: /tutorials/:currentVersion:/blockchain-query/beginner-blockchain-query/#what-are-the-five-most-recent-blocks diff --git a/tutorials/energy-data/query-energy.md b/tutorials/energy-data/query-energy.md index cac4f58020..25faeb8bf3 100644 --- a/tutorials/energy-data/query-energy.md +++ b/tutorials/energy-data/query-energy.md @@ -17,9 +17,9 @@ queries that are not possible in standard $PG. 
In this section, you learn how to construct queries, to answer these questions: -* [Energy consumption by hour of day](#what-is-the-energy-consumption-by-the-hour-of-the-day) -* [Energy consumption by weekday](#what-is-the-energy-consumption-by-the-day-of-the-week). -* [Energy consumption by month](#what-is-the-energy-consumption-on-a-monthly-basis). +* [Energy consumption by hour of day][what-is-the-energy-consumption-by-the-hour-of-the-day-link] +* [Energy consumption by weekday][what-is-the-energy-consumption-by-the-day-of-the-week-link]. +* [Energy consumption by month][what-is-the-energy-consumption-on-a-monthly-basis-link]. ## What is the energy consumption by the hour of the day? @@ -213,3 +213,6 @@ You may also want to check the energy consumption that occurs on a monthly basis [about-hyperfunctions]: https://www.tigerdata.com/docs/use-timescale/latest/hyperfunctions/about-hyperfunctions/ +[what-is-the-energy-consumption-by-the-hour-of-the-day-link]: /tutorials/:currentVersion:/energy-data/query-energy/#what-is-the-energy-consumption-by-the-hour-of-the-day +[what-is-the-energy-consumption-by-the-day-of-the-week-link]: /tutorials/:currentVersion:/energy-data/query-energy/#what-is-the-energy-consumption-by-the-day-of-the-week +[what-is-the-energy-consumption-on-a-monthly-basis-link]: /tutorials/:currentVersion:/energy-data/query-energy/#what-is-the-energy-consumption-on-a-monthly-basis diff --git a/tutorials/nyc-taxi-cab/query-nyc.md b/tutorials/nyc-taxi-cab/query-nyc.md index c3ff15e749..469230b8fd 100644 --- a/tutorials/nyc-taxi-cab/query-nyc.md +++ b/tutorials/nyc-taxi-cab/query-nyc.md @@ -14,11 +14,11 @@ When you have your dataset loaded, you can start constructing some queries to discover what your data tells you. In this section, you learn how to write queries that answer these questions: -* [How many rides take place each day?](#how-many-rides-take-place-every-day) -* [What is the average fare amount?](#what-is-the-average-fare-amount) -* [How many rides of each rate type were taken?](#how-many-rides-of-each-rate-type-were-taken) -* [What kind of trips are going to and from airports?](#what-kind-of-trips-are-going-to-and-from-airports) -* [How many rides took place on New Year's Day 2016](#how-many-rides-took-place-on-new-years-day-2016)? +* [How many rides take place each day?][how-many-rides-take-place-every-day-link] +* [What is the average fare amount?][what-is-the-average-fare-amount-link] +* [How many rides of each rate type were taken?][how-many-rides-of-each-rate-type-were-taken-link] +* [What kind of trips are going to and from airports?][what-kind-of-trips-are-going-to-and-from-airports-link] +* [How many rides took place on New Year's Day 2016][how-many-rides-took-place-on-new-years-day-2016-link]? ## How many rides take place every day? @@ -272,3 +272,9 @@ the data into time intervals instead. 
``` + +[how-many-rides-take-place-every-day-link]: /tutorials/:currentVersion:/nyc-taxi-cab/query-nyc/#how-many-rides-take-place-every-day +[what-is-the-average-fare-amount-link]: /tutorials/:currentVersion:/nyc-taxi-cab/query-nyc/#what-is-the-average-fare-amount +[how-many-rides-of-each-rate-type-were-taken-link]: /tutorials/:currentVersion:/nyc-taxi-cab/query-nyc/#how-many-rides-of-each-rate-type-were-taken +[what-kind-of-trips-are-going-to-and-from-airports-link]: /tutorials/:currentVersion:/nyc-taxi-cab/query-nyc/#what-kind-of-trips-are-going-to-and-from-airports +[how-many-rides-took-place-on-new-years-day-2016-link]: /tutorials/:currentVersion:/nyc-taxi-cab/query-nyc/#how-many-rides-took-place-on-new-years-day-2016 diff --git a/tutorials/nyc-taxi-geospatial/plot-nyc.md b/tutorials/nyc-taxi-geospatial/plot-nyc.md index b0582b24e7..37bfd38d61 100644 --- a/tutorials/nyc-taxi-geospatial/plot-nyc.md +++ b/tutorials/nyc-taxi-geospatial/plot-nyc.md @@ -15,8 +15,8 @@ discover what your data tells you. In this section, you learn how to combine the data in the NYC taxi dataset with geospatial data from [PostGIS][postgis], to answer these questions: -* [How many rides on New Year's Day 2016 originated from Times Square?](#how-many-rides-on-new-years-day-2016-originated-from-times-square) -* [Which rides traveled more than 5 miles in Manhattan?](#which-rides-traveled-more-than-5-miles-in-manhattan). +* [How many rides on New Year's Day 2016 originated from Times Square?][how-many-rides-on-new-years-day-2016-originated-from-times-square-link] +* [Which rides traveled more than 5 miles in Manhattan?][which-rides-traveled-more-than-5-miles-in-manhattan-link]. ## Set up your dataset for PostGIS @@ -181,3 +181,5 @@ Grafana can plot the data properly. [postgis]: http://postgis.net/ +[how-many-rides-on-new-years-day-2016-originated-from-times-square-link]: /tutorials/:currentVersion:/nyc-taxi-geospatial/plot-nyc/#how-many-rides-on-new-years-day-2016-originated-from-times-square +[which-rides-traveled-more-than-5-miles-in-manhattan-link]: /tutorials/:currentVersion:/nyc-taxi-geospatial/plot-nyc/#which-rides-traveled-more-than-5-miles-in-manhattan diff --git a/use-timescale/continuous-aggregates/create-a-continuous-aggregate.md b/use-timescale/continuous-aggregates/create-a-continuous-aggregate.md index 6623cf9380..1b1cab78b8 100644 --- a/use-timescale/continuous-aggregates/create-a-continuous-aggregate.md +++ b/use-timescale/continuous-aggregates/create-a-continuous-aggregate.md @@ -18,7 +18,7 @@ Continuous aggregates require a `time_bucket` on the time partitioning column of the hypertable. By default, views are automatically refreshed. You can adjust this by setting -the [WITH NO DATA](#using-the-with-no-data-option) option. Additionally, the +the [WITH NO DATA][using-the-with-no-data-option-link] option. Additionally, the view can not be a [security barrier view][postgres-security-barrier]. Continuous aggregates use hypertables in the background, which means that they @@ -313,4 +313,5 @@ For $TIMESCALE_DB v2.19.3 and below, $CAGGs do not support window functions. 
To [postgres-rls]: [postgres-security-barrier]: [with-no-data]: /use-timescale/:currentVersion:/continuous-aggregates/create-a-continuous-aggregate/#using-the-with-no-data-option -[create_materialized_view]: /api/:currentVersion:/continuous-aggregates/create_materialized_view/#parameters \ No newline at end of file +[create_materialized_view]: /api/:currentVersion:/continuous-aggregates/create_materialized_view/#parameters +[using-the-with-no-data-option-link]: /use-timescale/:currentVersion:/continuous-aggregates/create-a-continuous-aggregate/#using-the-with-no-data-option \ No newline at end of file diff --git a/use-timescale/data-tiering/querying-tiered-data.md b/use-timescale/data-tiering/querying-tiered-data.md index a33a961a6d..23b8380f06 100644 --- a/use-timescale/data-tiering/querying-tiered-data.md +++ b/use-timescale/data-tiering/querying-tiered-data.md @@ -24,7 +24,7 @@ With tiered reads enabled, you can query your data normally even when it's distr Your hypertable is spread across the tiers, so queries and `JOIN`s work and fetch the same data as usual. By default, tiered data is not accessed by queries. Querying tiered data may slow down query performance -as the data is not stored locally on the high-performance storage tier. See [Performance considerations](#performance-considerations). +as the data is not stored locally on the high-performance storage tier. See [Performance considerations][performance-considerations-link]. @@ -190,5 +190,7 @@ Queries over tiered data are expected to be slower than over local data. However * Text and non-native types (JSON, JSONB, GIS) filtering is slower when querying tiered data. +[performance-considerations-link]: /use-timescale/:currentVersion:/data-tiering/querying-tiered-data/#performance-considerations + diff --git a/use-timescale/hyperfunctions/function-pipelines.md b/use-timescale/hyperfunctions/function-pipelines.md index 69ddbfb398..de874f9137 100644 --- a/use-timescale/hyperfunctions/function-pipelines.md +++ b/use-timescale/hyperfunctions/function-pipelines.md @@ -224,7 +224,7 @@ timevector('2021-01-01 UTC', 10) -> add(5) -> mul(2) -> add(1) Both of these examples produce `('2020-01-01 00:00:00+00', 31.0)`. If multiple arithmetic operations are needed and precedence is important, -consider using a [Lambda](#lambda-elements) instead. +consider using a [Lambda][lambda-elements-link] instead. ### Unary mathematical functions @@ -786,8 +786,8 @@ SELECT hyperloglog(device_id) -> distinct_count() FROM measurements; You can turn a timevector into a formatted text representation. 
There are two functions for turning a timevector to text: -* [`to_text`](#to-text), which allows you to specify the template -* [`to_plotly`](#to-plotly), which outputs a format suitable for use with the +* [`to_text`][to-text-link], which allows you to specify the template +* [`to_plotly`][to-plotly-link], which outputs a format suitable for use with the [Plotly JSON chart schema][plotly] ### `to_text` @@ -916,3 +916,6 @@ This table lists all function pipeline elements in alphabetical order: |`unnest`|Output|`TABLE (time TIMESTAMPTZ, value DOUBLE PRECISION)`| [plotly]: https://plotly.com/chart-studio-help/json-chart-schema/ +[lambda-elements-link]: /use-timescale/:currentVersion:/hyperfunctions/function-pipelines/#lambda-elements +[to-text-link]: /use-timescale/:currentVersion:/hyperfunctions/function-pipelines/#to-text +[to-plotly-link]: /use-timescale/:currentVersion:/hyperfunctions/function-pipelines/#to-plotly diff --git a/use-timescale/query-data/advanced-analytic-queries.md b/use-timescale/query-data/advanced-analytic-queries.md index 1995aeab34..04e8d818ba 100644 --- a/use-timescale/query-data/advanced-analytic-queries.md +++ b/use-timescale/query-data/advanced-analytic-queries.md @@ -78,7 +78,7 @@ SELECT ## Calculate the rate of change -Like [increase](#calculate-the-increase-in-a-value), rate applies to a situation +Like [increase][calculate-the-increase-in-a-value-link], rate applies to a situation with monotonically increasing counters. If your sample interval is variable or you use different sampling intervals between different series, it is helpful to normalize the values to a common time interval to make the calculated values @@ -418,3 +418,4 @@ use its [SkipScan][skipscan] feature. [skipscan]: /use-timescale/:currentVersion:/query-data/skipscan/ [time_bucket]: /api/:currentVersion:/hyperfunctions/time_bucket [toolkit-approx-percentile]: /api/:currentVersion:/hyperfunctions/percentile-approximation/uddsketch/ +[calculate-the-increase-in-a-value-link]: /use-timescale/:currentVersion:/query-data/advanced-analytic-queries/#calculate-the-increase-in-a-value From f288e2c84482efbf60b47330b1bd456de2a1e246 Mon Sep 17 00:00:00 2001 From: billy-the-fish Date: Tue, 16 Dec 2025 10:14:41 +0100 Subject: [PATCH 3/5] chore: cleanup --- mst/ingest-data.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/mst/ingest-data.md b/mst/ingest-data.md index f1afbb2f26..cef1991680 100644 --- a/mst/ingest-data.md +++ b/mst/ingest-data.md @@ -85,8 +85,10 @@ hypertable. You need to begin by creating the new table, before you import the data. + Before you begin, make sure you have [prepared your new database][procedure-preparing-your-new-database-link]. 
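As a rough illustration of the step this hunk refers to — creating the new table and making it a hypertable before you import the data — the following sketch uses a hypothetical `sensor_data` table. The table and column names are not taken from the tutorial; only `create_hypertable` itself is standard $TIMESCALE_DB API.

```sql
-- Create a regular PostgreSQL table whose columns match the CSV you plan to import.
CREATE TABLE sensor_data (
    time        TIMESTAMPTZ       NOT NULL,
    sensor_id   INTEGER           NOT NULL,
    temperature DOUBLE PRECISION
);

-- Convert it into a hypertable partitioned on the time column, then load the data.
SELECT create_hypertable('sensor_data', 'time');
```

Creating the hypertable first means imported rows are routed into time-based chunks as they arrive, rather than having to be moved out of a plain table afterwards.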
+ From 8d5e0c8d5639ece9a1fe7e6848f6195a017e8a5f Mon Sep 17 00:00:00 2001 From: atovpeko Date: Tue, 16 Dec 2025 13:50:03 +0200 Subject: [PATCH 4/5] fix links --- api/administration.md | 2 +- integrations/telegraf.md | 2 +- self-hosted/install/installation-docker.md | 2 +- self-hosted/install/installation-linux.md | 4 ++-- self-hosted/install/installation-macos.md | 4 ++-- self-hosted/install/installation-source.md | 4 ++-- self-hosted/install/installation-windows.md | 4 ++-- self-hosted/migration/schema-then-data.md | 20 +++++++++---------- .../hyperfunctions/function-pipelines.md | 4 ++-- 9 files changed, 23 insertions(+), 23 deletions(-) diff --git a/api/administration.md b/api/administration.md index 2c7503d6fe..2c635d0000 100644 --- a/api/administration.md +++ b/api/administration.md @@ -75,4 +75,4 @@ SELECT timescaledb_pre_restore(); [Migrate using pg_dump and pg_restore]: /migrate/:currentVersion:/pg-dump-and-restore/ [telemetry]: /self-hosted/:currentVersion:/configuration/telemetry -[timescaledb_post_restore-link]: /api/:currentVersion:/administration/#timescaledb-post-restore +[timescaledb_post_restore-link]: /api/:currentVersion:/administration/#timescaledb_post_restore diff --git a/integrations/telegraf.md b/integrations/telegraf.md index 76fc77c81f..10d23e8616 100644 --- a/integrations/telegraf.md +++ b/integrations/telegraf.md @@ -157,5 +157,5 @@ see the [PostgreQL output plugin][output-plugin]. [grafana]: /integrations/:currentVersion:/grafana/ [about-hypertables]: /use-timescale/:currentVersion:/hypertables/ -[link-telegraf-to-your-service-link]: /integrations/:currentVersion:/telegraf/#link-telegraf-to-your-service-short +[link-telegraf-to-your-service-link]: /integrations/:currentVersion:/telegraf/#link-telegraf-to-your-service [view-the-metrics-collected-by-telegraf-link]: /integrations/:currentVersion:/telegraf/#view-the-metrics-collected-by-telegraf diff --git a/self-hosted/install/installation-docker.md b/self-hosted/install/installation-docker.md index 9050e71b26..254305e01e 100644 --- a/self-hosted/install/installation-docker.md +++ b/self-hosted/install/installation-docker.md @@ -46,4 +46,4 @@ And that is it! 
You have $TIMESCALE_DB running on a database on a self-hosted in [docker-install]: https://docs.docker.com/get-started/get-docker/ [install-psql]: /integrations/:currentVersion:/psql/ -[install-and-configure-timescaledb-on-postgresql-link]: /self-hosted/:currentVersion:/install/installation-docker/#install-and-configure-timescale-db-on-pg +[install-and-configure-timescaledb-on-postgresql-link]: /self-hosted/:currentVersion:/install/installation-docker/#install-and-configure-timescaledb-on-postgres diff --git a/self-hosted/install/installation-linux.md b/self-hosted/install/installation-linux.md index 78e0a4a66c..d783b5fa8d 100644 --- a/self-hosted/install/installation-linux.md +++ b/self-hosted/install/installation-linux.md @@ -114,6 +114,6 @@ You can deploy $TIMESCALE_DB on the following systems: [install-from-source]: /self-hosted/:currentVersion:/install/installation-source/ [supported-platforms-self-hosted]: /about/:currentVersion:/supported-platforms/#supported-systems -[add-the-timescaledb-extension-to-your-database-link]: /self-hosted/:currentVersion:/install/installation-linux/#add-the-timescale-db-extension-to-your-database -[install-and-configure-timescaledb-on-postgresql-link]: /self-hosted/:currentVersion:/install/installation-linux/#install-and-configure-timescale-db-on-pg +[add-the-timescaledb-extension-to-your-database-link]: /self-hosted/:currentVersion:/install/installation-linux/#add-the-timescaledb-extension-to-your-database +[install-and-configure-timescaledb-on-postgresql-link]: /self-hosted/:currentVersion:/install/installation-linux/#install-and-configure-timescaledb-on-postgres [supported-platforms-link]: /self-hosted/:currentVersion:/install/installation-linux/#supported-platforms diff --git a/self-hosted/install/installation-macos.md b/self-hosted/install/installation-macos.md index 886f79d0a3..6355df9ab9 100644 --- a/self-hosted/install/installation-macos.md +++ b/self-hosted/install/installation-macos.md @@ -91,6 +91,6 @@ For the latest functionality, install MacOS 14 Sonoma. [install-from-source]: /self-hosted/:currentVersion:/install/installation-source/ [install-postgresql]: https://www.postgresql.org/download/macosx/ -[add-the-timescaledb-extension-to-your-database-link]: /self-hosted/:currentVersion:/install/installation-macos/#add-the-timescale-db-extension-to-your-database -[install-and-configure-timescaledb-on-postgresql-link]: /self-hosted/:currentVersion:/install/installation-macos/#install-and-configure-timescale-db-on-pg +[add-the-timescaledb-extension-to-your-database-link]: /self-hosted/:currentVersion:/install/installation-macos/#add-the-timescaledb-extension-to-your-database +[install-and-configure-timescaledb-on-postgresql-link]: /self-hosted/:currentVersion:/install/installation-macos/#install-and-configure-timescaledb-on-postgres [supported-platforms-link]: /self-hosted/:currentVersion:/install/installation-macos/#supported-platforms diff --git a/self-hosted/install/installation-source.md b/self-hosted/install/installation-source.md index ea7dd02b29..ed1840df19 100644 --- a/self-hosted/install/installation-source.md +++ b/self-hosted/install/installation-source.md @@ -80,5 +80,5 @@ And that is it! 
You have TimescaleDB running on a database on a self-hosted inst [compatibility-matrix]: /self-hosted/:currentVersion:/upgrades/upgrade-pg/#plan-your-upgrade-path [postgres-breaking-change]: https://www.postgresql.org/about/news/postgresql-172-166-1510-1415-1318-and-1222-released-2965/ -[add-the-timescaledb-extension-to-your-database-link]: /self-hosted/:currentVersion:/install/installation-source/#add-the-timescale-db-extension-to-your-database -[install-and-configure-timescaledb-on-postgres-link]: /self-hosted/:currentVersion:/install/installation-source/#install-and-configure-timescale-db-on-pg +[add-the-timescaledb-extension-to-your-database-link]: /self-hosted/:currentVersion:/install/installation-source/#add-the-timescaledb-extension-to-your-database +[install-and-configure-timescaledb-on-postgres-link]: /self-hosted/:currentVersion:/install/installation-source/#install-and-configure-timescaledb-on-postgres diff --git a/self-hosted/install/installation-windows.md b/self-hosted/install/installation-windows.md index 6cd55c85f9..ddecb0230b 100644 --- a/self-hosted/install/installation-windows.md +++ b/self-hosted/install/installation-windows.md @@ -107,6 +107,6 @@ For release information, see the [GitHub releases page][gh-releases] and the [re [release-notes]: https://github.com/timescale/timescaledb/releases [windows-releases]: #windows-releases [install-from-source]: /self-hosted/:currentVersion:/install/installation-source/ -[install-timescaledb]: /self-hosted/:currentVersion:/install/installation-windows/#install-and-configure-timescaledb-on-postgresql -[add-timescledb-extension]: /self-hosted/:currentVersion:/install/installation-windows/#add-the-timescaledb-extension-to-your-database +[install-timescaledb]: /self-hosted/:currentVersion:/install/installation-windows/#install-and-configure-timescaledb-on-postgres +[add-timescledb-extension]: /self-hosted/:currentVersion:/install/installation-windows/#add-the-timescale-db-extension-to-your-database [supported-platforms]: /self-hosted/:currentVersion:/install/installation-windows/#supported-platforms diff --git a/self-hosted/migration/schema-then-data.md b/self-hosted/migration/schema-then-data.md index 33483cb8db..aef2560e48 100644 --- a/self-hosted/migration/schema-then-data.md +++ b/self-hosted/migration/schema-then-data.md @@ -39,9 +39,9 @@ database at once. For more information, see the section on The procedure to migrate your database requires these steps: * [Migrate schema pre-data][migrate-schema-pre-data-link] -* [Restore hypertables in Timescale][restore-hypertables-in-timescale-link] +* [Restore hypertables in $TIMESCALE_DB][restore-hypertables-in-timescale-link] * [Copy data from the source database][copy-data-from-the-source-database-link] -* [Restore data into Timescale][restore-data-into-timescale-link] +* [Restore data into $TIMESCALE_DB][restore-data-into-timescale-link] * [Migrate schema post-data][migrate-schema-post-data-link] * [Recreate continuous aggregates][recreate-continuous-aggregates-link] (optional) * [Recreate policies][recreate-policies-link] (optional) @@ -54,7 +54,7 @@ data can take a very long time. You can continue reading from your source database during this time, though performance could be slower. To avoid this problem, fork your database and migrate your data from the fork. If you write to the tables in your source database during the migration, the new writes might -not be transferred to Timescale. To avoid this problem, see the section on +not be transferred to TimescaleDB. 
To avoid this problem, see the section on [migrating an active database][migration]. @@ -86,14 +86,14 @@ Before you begin, check that you have: Migrate your pre-data from your source database to $SELF_LONG. This includes table and schema definitions, as well as information on sequences, -owners, and settings. This doesn't include Timescale-specific schemas. +owners, and settings. This doesn't include $TIMESCALE_DB-specific schemas. ### Migrating schema pre-data 1. Dump the schema pre-data from your source database into a `dump_pre_data.bak` file, using - your source database connection details. Exclude Timescale-specific schemas. + your source database connection details. Exclude $TIMESCALE_DB-specific schemas. If you are prompted for a password, use your source database credentials: ```bash @@ -116,7 +116,7 @@ owners, and settings. This doesn't include Timescale-specific schemas. ## Restore hypertables in your $SELF_LONG instance After pre-data migration, your hypertables from your source database become -regular $PG tables in Timescale. Recreate your hypertables in your $SELF_LONG instance to +regular $PG tables in $TIMESCALE_DB. Recreate your hypertables in your $SELF_LONG instance to restore them. @@ -179,11 +179,11 @@ Split each table by time range, and copy each range individually. For example: -## Restore data into Timescale +## Restore data into $TIMESCALE_DB When you have copied your data into `.csv` files, you can restore it to $SELF_LONG by copying from the `.csv` files. There are two methods: using -regular $PG [`COPY`][copy], or using the TimescaleDB +regular $PG [`COPY`][copy], or using the $TIMESCALE_DB [`timescaledb-parallel-copy`][timescaledb-parallel-copy] function. In tests, `timescaledb-parallel-copy` is 16% faster. The `timescaledb-parallel-copy` tool is not included by default. You must install the function. 
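To make the two restore paths mentioned above concrete, here is a minimal sketch. The table name, file name, and connection string are placeholders, and the exact `timescaledb-parallel-copy` flags can vary between versions of the tool, so treat this as an assumption-laden example rather than the documented procedure.

```bash
# Option 1: plain PostgreSQL COPY through psql
psql "$TARGET_CONNECTION_STRING" \
  -c "\COPY sensor_data FROM 'sensor_data.csv' WITH CSV"

# Option 2: timescaledb-parallel-copy (install the tool first, for example via go install)
timescaledb-parallel-copy \
  --connection "$TARGET_CONNECTION_STRING" \
  --table sensor_data \
  --file sensor_data.csv \
  --workers 4 \
  --copy-options "CSV"
```

The parallel tool batches rows and loads them across several workers, which is where its speedup over a single-threaded `COPY` comes from.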
@@ -218,8 +218,8 @@ the [compression section](https://www.tigerdata.com/docs/use-timescale/latest/co [migration]: /migrate/:currentVersion:/ [migrate-schema-pre-data-link]: /self-hosted/:currentVersion:/migration/schema-then-data/#migrate-schema-pre-data [copy-data-from-the-source-database-link]: /self-hosted/:currentVersion:/migration/schema-then-data/#copy-data-from-the-source-database -[restore-data-into-timescale-link]: /self-hosted/:currentVersion:/migration/schema-then-data/#restore-data-into-timescale -[restore-hypertables-in-timescale-link]: /self-hosted/:currentVersion:/migration/schema-then-data/#restore-hypertables-in-your-self-long-instance +[restore-data-into-timescale-link]: /self-hosted/:currentVersion:/migration/schema-then-data/#restore-data-into-timescaledb +[restore-hypertables-in-timescale-link]: /self-hosted/:currentVersion:/migration/schema-then-data/#restore-hypertables-in-your-self-hosted-timescaledb-instance [migrate-schema-post-data-link]: /self-hosted/:currentVersion:/migration/schema-then-data/#migrate-schema-post-data [recreate-continuous-aggregates-link]: /self-hosted/:currentVersion:/migration/schema-then-data/#recreate-continuous-aggregates [recreate-policies-link]: /self-hosted/:currentVersion:/migration/schema-then-data/#recreate-policies diff --git a/use-timescale/hyperfunctions/function-pipelines.md b/use-timescale/hyperfunctions/function-pipelines.md index de874f9137..9bd9036652 100644 --- a/use-timescale/hyperfunctions/function-pipelines.md +++ b/use-timescale/hyperfunctions/function-pipelines.md @@ -917,5 +917,5 @@ This table lists all function pipeline elements in alphabetical order: [plotly]: https://plotly.com/chart-studio-help/json-chart-schema/ [lambda-elements-link]: /use-timescale/:currentVersion:/hyperfunctions/function-pipelines/#lambda-elements -[to-text-link]: /use-timescale/:currentVersion:/hyperfunctions/function-pipelines/#to-text -[to-plotly-link]: /use-timescale/:currentVersion:/hyperfunctions/function-pipelines/#to-plotly +[to-text-link]: /use-timescale/:currentVersion:/hyperfunctions/function-pipelines/#to_text +[to-plotly-link]: /use-timescale/:currentVersion:/hyperfunctions/function-pipelines/#to_plotly From 33bb929e48e777927621e6d93caa1aed5bac1a35 Mon Sep 17 00:00:00 2001 From: billy-the-fish Date: Tue, 16 Dec 2025 13:07:49 +0100 Subject: [PATCH 5/5] chore: update on review --- _partials/_data_model_metadata.md | 9 --------- _partials/_livesync-terminal.md | 2 +- 2 files changed, 1 insertion(+), 10 deletions(-) delete mode 100644 _partials/_data_model_metadata.md diff --git a/_partials/_data_model_metadata.md b/_partials/_data_model_metadata.md deleted file mode 100644 index a62f7cd5dc..0000000000 --- a/_partials/_data_model_metadata.md +++ /dev/null @@ -1,9 +0,0 @@ - - -You might also notice that the metadata fields are missing. Because this is a -relational database, metadata can be stored in a secondary table and `JOIN`ed at -query time. Learn more about [$TIMESCALE_DB's support for `JOIN`s][joins-with-relational-data-link]. - - - -[joins-with-relational-data-link]: /_partials/:currentVersion:/_data_model_metadata/#joins-with-relational-data diff --git a/_partials/_livesync-terminal.md b/_partials/_livesync-terminal.md index e4d2e8d306..c3cc11dd31 100644 --- a/_partials/_livesync-terminal.md +++ b/_partials/_livesync-terminal.md @@ -152,7 +152,7 @@ specifies the tables to synchronize. 
ALTER PUBLICATION SET(publish_via_partition_root=true); ``` - To convert partitioned table to hypertable, follow [Convert partitions and tables with time-series data into hypertables][convert-partitions-and-tables-with-time-series-data-into-hypertables-link]. + To convert a partitioned table to a hypertable, follow [Convert partitions and tables with time-series data into hypertables][convert-partitions-and-tables-with-time-series-data-into-hypertables-link]. 1. **Stop syncing a table in the `PUBLICATION`, use `DROP TABLE`**