Why TimescaleDB?
TimescaleDB extends PostgreSQL with automatic time-based partitioning, native compression, and continuous aggregates. Get the reliability of PostgreSQL with purpose-built time-series optimizations.
Prerequisites
Minimum Requirements
- 2 CPU cores
- 4 GB RAM
- 20 GB SSD storage
- PostgreSQL 13+
- Ubuntu 20.04+ / Debian 11+
Recommended for Production
- 4+ CPU cores
- 16+ GB RAM
- 200+ GB NVMe SSD
- PostgreSQL 15+
- Dedicated data disk
Installation
Add TimescaleDB Repository
Add the official TimescaleDB APT repository:
# Install prerequisites
sudo apt update
sudo apt install -y gnupg postgresql-common apt-transport-https lsb-release wget
# Add TimescaleDB repository
echo "deb https://packagecloud.io/timescale/timescaledb/ubuntu/ $(lsb_release -c -s) main" | \
sudo tee /etc/apt/sources.list.d/timescaledb.list
# Add GPG key
wget --quiet -O - https://packagecloud.io/timescale/timescaledb/gpgkey | \
sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/timescaledb.gpg
# Update package list
sudo apt update

Install PostgreSQL with TimescaleDB
Install PostgreSQL 16 with TimescaleDB extension:
# Install PostgreSQL 16 with TimescaleDB
sudo apt install -y timescaledb-2-postgresql-16
# Run TimescaleDB tuning script
sudo timescaledb-tune --yes
# Restart PostgreSQL
sudo systemctl restart postgresql
# Verify installation
sudo -u postgres psql -c "SELECT version();"

Enable TimescaleDB Extension
Enable the extension in your database:
-- Connect to PostgreSQL
sudo -u postgres psql
-- Create a database for time-series data
CREATE DATABASE tsdb;
-- Connect to the database
\c tsdb
-- Enable TimescaleDB extension
CREATE EXTENSION IF NOT EXISTS timescaledb;
-- Verify extension
\dx timescaledb

                                     List of installed extensions
    Name     | Version | Schema |                            Description
-------------+---------+--------+-------------------------------------------------------------------
 timescaledb | 2.14.2  | public | Enables scalable inserts and complex queries for time-series data

Docker Installation
Deploy with Docker
Run TimescaleDB in a Docker container:
# Create data directory
mkdir -p ~/timescaledb/data
# Run TimescaleDB container
docker run -d \
--name timescaledb \
-p 5432:5432 \
-v ~/timescaledb/data:/var/lib/postgresql/data \
-e POSTGRES_PASSWORD=YourSecurePassword123! \
-e POSTGRES_DB=tsdb \
timescale/timescaledb:latest-pg16
# Verify container is running
docker logs timescaledb
# Connect to database
docker exec -it timescaledb psql -U postgres -d tsdb

Hypertables
Create a Hypertable
Convert regular tables to hypertables for automatic partitioning:
-- Create a regular table first
CREATE TABLE metrics (
time TIMESTAMPTZ NOT NULL,
device_id TEXT NOT NULL,
temperature DOUBLE PRECISION,
humidity DOUBLE PRECISION,
cpu_usage DOUBLE PRECISION
);
-- Convert to hypertable (partitioned by time)
SELECT create_hypertable('metrics', 'time');
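The default chunk interval is 7 days, and choosing it well matters: TimescaleDB's guidance is that a chunk plus its indexes should fit in roughly 25% of available memory. A back-of-envelope sizing sketch (the row size and ingest rate below are illustrative assumptions, not measured values):

```python
# Rough chunk-interval sizing helper, following the rule of thumb that
# one chunk (with indexes) should stay near 25% of RAM.
def suggest_chunk_interval_hours(ram_gb: float,
                                 rows_per_second: float,
                                 bytes_per_row: int = 100) -> float:
    """Return a chunk interval, in hours, keeping one chunk near 25% of RAM."""
    budget_bytes = ram_gb * 1024**3 * 0.25
    bytes_per_hour = rows_per_second * 3600 * bytes_per_row
    return budget_bytes / bytes_per_hour

# Example: 16 GB RAM, 10k rows/s at ~100 bytes/row
print(round(suggest_chunk_interval_hours(16, 10_000), 1))  # ~1.2 hours
```

At low ingest rates the same arithmetic pushes the answer toward days or weeks, which is why 7 days is a reasonable default for many workloads.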
-- Alternatively, set the chunk interval explicitly at creation time
-- (run this INSTEAD of the call above; a table can only be converted once)
SELECT create_hypertable(
    'metrics',
    'time',
    chunk_time_interval => INTERVAL '7 days'
);

Insert Time-Series Data
Insert data into hypertables like regular PostgreSQL tables:
-- Insert single row
INSERT INTO metrics (time, device_id, temperature, humidity, cpu_usage)
VALUES (NOW(), 'device-001', 72.5, 45.2, 23.5);
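For high-rate ingest, sending one statement per row is slow; rows should be grouped into batches (multi-row VALUES, `executemany`, or COPY). A tiny driver-agnostic batching helper, sketched for illustration (the function name and row shapes are hypothetical):

```python
from typing import Iterable, Iterator, List

def batched(rows: Iterable, size: int) -> Iterator[List]:
    """Yield rows in lists of at most `size`, e.g. for executemany() or COPY."""
    batch: List = []
    for row in rows:
        batch.append(row)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:  # flush the final partial batch
        yield batch

rows = [("device-001", 72.5), ("device-001", 71.2), ("device-002", 68.5)]
print(list(batched(rows, 2)))
```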
-- Insert multiple rows
INSERT INTO metrics (time, device_id, temperature, humidity, cpu_usage)
VALUES
(NOW() - INTERVAL '1 hour', 'device-001', 71.2, 44.8, 21.3),
(NOW() - INTERVAL '2 hours', 'device-001', 70.8, 46.1, 25.7),
(NOW() - INTERVAL '3 hours', 'device-002', 68.5, 52.3, 18.9);

Query Time-Series Data
Use TimescaleDB time-series functions:
-- Time bucket aggregation (hourly averages)
SELECT
time_bucket('1 hour', time) AS bucket,
device_id,
AVG(temperature) AS avg_temp,
AVG(humidity) AS avg_humidity
FROM metrics
WHERE time > NOW() - INTERVAL '24 hours'
GROUP BY bucket, device_id
ORDER BY bucket DESC;
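time_bucket truncates each timestamp down to the start of its fixed-width bucket. The same floor operation, sketched in Python for intuition (not an exact re-implementation of the SQL function, which has extra rules for weeks, months, and custom origins):

```python
from datetime import datetime, timedelta, timezone

def time_bucket(width: timedelta, ts: datetime) -> datetime:
    """Floor ts to the start of its fixed-width, epoch-aligned bucket."""
    epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
    seconds = (ts - epoch).total_seconds()
    floored = seconds - (seconds % width.total_seconds())
    return epoch + timedelta(seconds=floored)

ts = datetime(2024, 1, 1, 10, 37, 12, tzinfo=timezone.utc)
print(time_bucket(timedelta(hours=1), ts))  # 2024-01-01 10:00:00+00:00
```

Grouping by this floored value is what turns raw samples into one row per device per hour.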
-- First/Last aggregates
SELECT
device_id,
first(temperature, time) AS first_temp,
last(temperature, time) AS last_temp
FROM metrics
WHERE time > NOW() - INTERVAL '1 day'
GROUP BY device_id;

Compression
Enable Compression
Configure native compression for hypertables:
-- Enable compression on hypertable
ALTER TABLE metrics SET (
timescaledb.compress,
timescaledb.compress_segmentby = 'device_id',
timescaledb.compress_orderby = 'time DESC'
);
-- Manually compress specific chunks (compress_chunk takes a regclass,
-- so schema-qualify the chunk name)
SELECT compress_chunk(format('%I.%I', chunk_schema, chunk_name)::regclass)
FROM timescaledb_information.chunks
WHERE hypertable_name = 'metrics'
  AND NOT is_compressed
  AND range_end < NOW() - INTERVAL '7 days';

Automatic Compression Policy
Set up automatic compression for old data:
-- Add compression policy (compress data older than 7 days)
SELECT add_compression_policy('metrics', INTERVAL '7 days');
-- View compression policies
SELECT * FROM timescaledb_information.jobs
WHERE proc_name = 'policy_compression';
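The before/after byte counts reported by the compression statistics make the savings easy to quantify; a trivial helper, for illustration:

```python
def compression_ratio(before_bytes: int, after_bytes: int) -> float:
    """Return the compression ratio; e.g. 20.0 means 20x smaller on disk."""
    if after_bytes <= 0:
        raise ValueError("after_bytes must be positive")
    return before_bytes / after_bytes

# Example: 1 GB of raw chunks compressed down to 50 MB
print(compression_ratio(1_000_000_000, 50_000_000))  # 20.0
```

Ratios of 10-20x are common for well-chosen segmentby/orderby settings, but the actual figure depends entirely on the data.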
-- Check compression status and savings
-- (the byte counts live in chunk_compression_stats(), not the chunks view)
SELECT
    chunk_name,
    compression_status,
    pg_size_pretty(before_compression_total_bytes) AS before,
    pg_size_pretty(after_compression_total_bytes) AS after
FROM chunk_compression_stats('metrics');

Continuous Aggregates
Create Continuous Aggregate
Create materialized views that auto-update:
-- Create hourly aggregates
CREATE MATERIALIZED VIEW metrics_hourly
WITH (timescaledb.continuous) AS
SELECT
time_bucket('1 hour', time) AS bucket,
device_id,
AVG(temperature) AS avg_temp,
MIN(temperature) AS min_temp,
MAX(temperature) AS max_temp,
AVG(humidity) AS avg_humidity,
COUNT(*) AS sample_count
FROM metrics
GROUP BY bucket, device_id
WITH NO DATA;
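The refresh policy added next recomputes buckets inside the window [now − start_offset, now − end_offset]; keeping end_offset at least one bucket wide avoids repeatedly re-materializing the still-changing current bucket. The window arithmetic, sketched:

```python
from datetime import datetime, timedelta, timezone

def refresh_window(now: datetime,
                   start_offset: timedelta,
                   end_offset: timedelta) -> tuple:
    """Time range a continuous-aggregate refresh job materializes."""
    return (now - start_offset, now - end_offset)

now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
lo, hi = refresh_window(now, timedelta(hours=3), timedelta(hours=1))
print(lo, hi)  # (now - 3h, now - 1h)
```

With a 1-hour schedule_interval, consecutive runs overlap, so late-arriving rows inside the window still get folded in.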
-- Add refresh policy
SELECT add_continuous_aggregate_policy('metrics_hourly',
start_offset => INTERVAL '3 hours',
end_offset => INTERVAL '1 hour',
schedule_interval => INTERVAL '1 hour'
);

Data Retention
Retention Policies
Configure automatic data retention:
-- Drop chunks older than 30 days (much faster than DELETE)
SELECT drop_chunks('metrics', INTERVAL '30 days');
-- Add automatic retention policy (keep 90 days of data)
SELECT add_retention_policy('metrics', INTERVAL '90 days');
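Both drop_chunks and the retention policy work at chunk granularity: a chunk is dropped only once its entire time range falls before the cutoff, which is why dropping is a cheap file unlink rather than a row-by-row DELETE. The selection logic, sketched with hypothetical chunk tuples:

```python
from datetime import datetime, timedelta, timezone

def droppable(chunks, now: datetime, keep: timedelta):
    """Return names of chunks whose whole range lies before now - keep."""
    cutoff = now - keep
    return [name for name, range_start, range_end in chunks
            if range_end <= cutoff]

now = datetime(2024, 4, 10, tzinfo=timezone.utc)
chunks = [
    ("_hyper_1_1_chunk", datetime(2023, 12, 1, tzinfo=timezone.utc),
     datetime(2023, 12, 8, tzinfo=timezone.utc)),
    ("_hyper_1_9_chunk", datetime(2024, 4, 5, tzinfo=timezone.utc),
     datetime(2024, 4, 12, tzinfo=timezone.utc)),
]
print(droppable(chunks, now, timedelta(days=90)))  # ['_hyper_1_1_chunk']
```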
-- View retention policies
SELECT * FROM timescaledb_information.jobs
WHERE proc_name = 'policy_retention';
-- Remove retention policy
SELECT remove_retention_policy('metrics');

Security
User Management
Create dedicated users with appropriate permissions:
-- Create read-only user
CREATE USER grafana_reader WITH PASSWORD 'SecurePassword123!';
GRANT CONNECT ON DATABASE tsdb TO grafana_reader;
GRANT USAGE ON SCHEMA public TO grafana_reader;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO grafana_reader;
-- Create write user for applications
CREATE USER app_writer WITH PASSWORD 'SecurePassword456!';
GRANT CONNECT ON DATABASE tsdb TO app_writer;
GRANT USAGE ON SCHEMA public TO app_writer;
GRANT SELECT, INSERT ON metrics TO app_writer;

Enable SSL/TLS
Configure encrypted connections:
# Generate self-signed certificate
sudo mkdir -p /etc/postgresql/ssl
cd /etc/postgresql/ssl
sudo openssl req -new -x509 -days 365 -nodes \
-out server.crt -keyout server.key \
-subj "/CN=timescaledb.example.com"
sudo chown postgres:postgres server.*
sudo chmod 600 server.key

Then enable SSL in postgresql.conf:
# SSL Configuration
ssl = on
ssl_cert_file = '/etc/postgresql/ssl/server.crt'
ssl_key_file = '/etc/postgresql/ssl/server.key'
ssl_min_protocol_version = 'TLSv1.2'

Backup & Recovery
Logical Backup
Create logical backups with pg_dump:
# Full database backup
pg_dump -U postgres -d tsdb -F c -f /backup/tsdb_$(date +%Y%m%d).dump
# Backup specific hypertable
pg_dump -U postgres -d tsdb -t metrics -F c -f /backup/metrics_$(date +%Y%m%d).dump
# Backup with maximum compression (custom format is already compressed; -Z sets the level)
pg_dump -U postgres -d tsdb -F c -Z 9 -f /backup/tsdb_$(date +%Y%m%d).dump

Restore from Backup
Restore database from backup:
# Restore to existing database
dropdb -U postgres tsdb
createdb -U postgres tsdb
pg_restore -U postgres -d tsdb /backup/tsdb_20240101.dump
# Restore with TimescaleDB pre/post restore hooks (recommended)
psql -U postgres -d tsdb -c "CREATE EXTENSION IF NOT EXISTS timescaledb;"
psql -U postgres -d tsdb -c "SELECT timescaledb_pre_restore();"
pg_restore -U postgres -d tsdb /backup/tsdb_20240101.dump
psql -U postgres -d tsdb -c "SELECT timescaledb_post_restore();"

Performance Tuning
PostgreSQL Configuration
Optimize PostgreSQL settings:
# Memory settings (adjust based on available RAM)
shared_buffers = 4GB # 25% of RAM
effective_cache_size = 12GB # 75% of RAM
work_mem = 256MB
maintenance_work_mem = 1GB
# WAL settings
wal_buffers = 64MB
max_wal_size = 4GB
min_wal_size = 1GB
# Parallelism
max_parallel_workers_per_gather = 4
max_parallel_workers = 8
# TimescaleDB specific
timescaledb.max_background_workers = 8

Indexing Strategies
Create efficient indexes:
-- Composite index for common queries
CREATE INDEX idx_metrics_device_time
ON metrics (device_id, time DESC);
-- Partial index for a hot subset (the predicate must be immutable,
-- so use a fixed timestamp rather than NOW(); recreate it periodically)
CREATE INDEX idx_metrics_recent
ON metrics (device_id, time DESC)
WHERE time > '2024-01-01 00:00:00+00';
-- Analyze table for query planner
ANALYZE metrics;

Troubleshooting
Slow Queries
-- Check query plan
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT * FROM metrics
WHERE time > NOW() - INTERVAL '1 day';
-- View chunk exclusion
EXPLAIN SELECT * FROM metrics
WHERE time > NOW() - INTERVAL '1 hour';
-- The plan should scan only matching chunks
-- (e.g. "Chunks excluded during startup: N")

High Disk Usage
-- Check hypertable sizes
SELECT
hypertable_name,
pg_size_pretty(hypertable_size(format('%I.%I',
hypertable_schema, hypertable_name)::regclass))
FROM timescaledb_information.hypertables;
-- Force compression on old data (schema-qualify the chunk name)
SELECT compress_chunk(format('%I.%I', chunk_schema, chunk_name)::regclass)
FROM timescaledb_information.chunks
WHERE NOT is_compressed
  AND range_end < NOW() - INTERVAL '3 days';

Background Jobs Failing
-- View all scheduled jobs
SELECT * FROM timescaledb_information.jobs;
-- Check job run history
SELECT * FROM timescaledb_information.job_stats
ORDER BY last_run_started_at DESC;
-- View job errors
SELECT * FROM timescaledb_information.job_errors
ORDER BY finish_time DESC LIMIT 20;

Quick Reference
Default Ports
- PostgreSQL: 5432
Important Paths
- Config: /etc/postgresql/16/main/
- Data: /var/lib/postgresql/16/main/
- Logs: /var/log/postgresql/
Key Functions
- create_hypertable()
- time_bucket()
- add_compression_policy()
- add_retention_policy()
Useful Views
- timescaledb_information.hypertables
- timescaledb_information.chunks
- timescaledb_information.jobs
