Postgres promises both simplicity and power, a combination that can seem too good to be true. To get the most out of your database, you must manage its 300+ configuration knobs carefully.
Getting started with Postgres is remarkably easy. On most Linux distributions you can have a database up and running with a one-line command. This is great for developers, who can get started quickly and build applications rapidly. However, this simplicity belies the true complexity of running an advanced open source database.
Look under the hood and you will find over 300 configuration knobs that determine whether Postgres will run your application reliably and efficiently. Setting these knobs requires substantial expertise, especially in environments that demand high availability or stringent security, or in applications operating at significant scale.
The power of Postgres arises directly from its powerful abstractions: the relational model and the SQL programming language. The relational model provides tables, a convenient way of organizing data that hides, or abstracts away, the process of laying out the data in memory or storage. SQL provides a powerful tool for combining data from multiple tables, and bears a closer resemblance to human languages than most programming languages. SQL is also unique among popular languages because it lets the database fill in the algorithmic details of a computation. Programmers describe the result they want computed, but they do not tell Postgres how to compute it.
Does Postgres promise something too good to be true? It claims to do all sorts of hard work for programmers: recording data reliably, organizing it for efficient access, producing efficient algorithms for combining it, managing caching, ensuring quick recovery when system faults occur—the list goes on.
The caveat is the configuration. Postgres ships with a formidable set of capabilities, including a sophisticated cost-based query optimizer, state-of-the-art concurrency control, and robust logging and recovery mechanisms. In undemanding scenarios these mechanisms take care of everything automatically. In demanding production environments, by contrast, they are more like tools to help you get the job done.
In the remainder of this post, we explore the 300+ configuration knobs in Postgres. This is one of the best ways to learn about its capabilities.
The size of the Postgres configuration has grown over time. From 146 knobs in version 8.0, released in 2005, it leaped to 270 in version 10.0, released in 2017. By version 16.0, released in 2023, the count had reached 362.
We can dive deeper into this configuration to see what the knobs actually do. Postgres exposes all of its configuration settings via the `pg_settings` view and helpfully provides a description of each parameter. Here are the categories of settings in Postgres version 15.6:
```sql
postgres=> select substring(category, '^([^/]*)/?.*') major_category, count(*) ct
from pg_settings group by major_category order by ct desc;
```
| major_category | ct |
|---|---|
| Query Tuning | 48 |
| Client Connection Defaults | 42 |
| Reporting and Logging | 41 |
| Write-Ahead Log | 38 |
| Resource Usage | 34 |
| Connections and Authentication | 25 |
| Developer Options | 21 |
| Replication | 21 |
| Preset Options | 20 |
| Autovacuum | 13 |
| Statistics | 12 |
| Version and Platform Compatibility | 8 |
| Lock Management | 5 |
| Error Handling | 4 |
What do all 332 of these settings do?
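The one-line descriptions in `pg_settings` are a good place to start. As an illustrative sketch, a query like this lists the knobs in a single category along with their current values:

```sql
-- Peek at the knobs in one category, with their short descriptions
-- and current values.
SELECT name, setting, short_desc
FROM pg_settings
WHERE category LIKE 'Autovacuum%'
ORDER BY name;
```

Let's walk through the categories.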
Start with the largest category, Query Tuning. There are often many ways to execute a SQL statement. All produce equivalent results, but they may have very different performance characteristics, requiring different amounts of memory, CPU time, and disk I/O to complete. Postgres uses a sophisticated cost-based query optimizer to pick the best plan: it scores alternative execution plans according to a cost model, then chooses the cheapest. The power to pick through alternatives is fabulous, but it comes with limitations, not least the requirement to set weights for all of the cost parameters. We may have some intuition for what `cpu_index_tuple_cost`, `cpu_operator_cost`, and `cpu_tuple_cost` mean, but how do we set them well? And how should they compare to `random_page_cost` or `seq_page_cost`?
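One place to start is to inspect the cost constants and compare them to your hardware. A minimal sketch, assuming SSD storage (the 1.1 value is a common rule of thumb, not a universal recommendation):

```sql
-- List the cost-related planner parameters and their descriptions.
SELECT name, setting, short_desc
FROM pg_settings
WHERE name LIKE '%_cost';

-- The default random_page_cost (4.0) assumes spinning disks; on SSDs
-- random reads are nearly as cheap as sequential ones.
ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();
```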
Quite a few of the Client Connection Defaults relate to locale and formatting, serving as defaults for the application's configuration. Examples include character encoding (`client_encoding`), date and time formatting (`lc_time`), and monetary formatting (`lc_monetary`). Setting these aside, there are still 29 parameters that affect how SQL statements are processed, including `transaction_isolation`, `transaction_deferrable`, and `lock_timeout`.
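A brief sketch of a session overriding these defaults, assuming the application would rather fail fast than queue behind lock contention:

```sql
-- Abort any statement that waits more than 2 seconds for a lock,
-- rather than blocking indefinitely behind another session.
SET lock_timeout = '2s';

-- Run one transaction at a stricter isolation level than the
-- default READ COMMITTED.
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT count(*) FROM pg_class;  -- placeholder workload
COMMIT;
```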
The Connections and Authentication category contains settings for managing database connections and authenticating users. The most commonly tuned knob in any database is `max_connections`, which specifies how many clients can connect simultaneously. Setting it too high can exhaust system resources and invoke the OOM killer; setting it too low limits the amount of work the database can do. The `ssl` parameter enables SSL encryption for connections. Today this should always be on, and Postgres shows its age with a default of `off`. However, setting up SSL requires generating and storing certificates, so turning it on is not as easy as flipping a switch.
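A hedged sketch of how these might be set (the certificate file names are illustrative and must already exist in the data directory):

```sql
-- Takes effect only after a server restart.
ALTER SYSTEM SET max_connections = 200;

-- Enable TLS; assumes server.crt and server.key are already in place.
ALTER SYSTEM SET ssl = on;
ALTER SYSTEM SET ssl_cert_file = 'server.crt';
ALTER SYSTEM SET ssl_key_file = 'server.key';
SELECT pg_reload_conf();  -- applies ssl; max_connections still needs a restart
```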
Developer Options are settings designed for developers of Postgres itself. They are interesting to understand, but you will not need to tune them in a production system. Examples include `jit_dump_bitcode`, which directs the Just-In-Time (JIT) compiler to dump LLVM bitcode; `wal_consistency_checking`, which adds extra checks of the write-ahead log; and `ignore_checksum_failure`, which allows the system to continue operating even if data corruption is detected.
Replication is crucial for databases that must meet high availability standards, and several settings control it. For example, `max_slot_wal_keep_size` determines the maximum amount of write-ahead log data to keep for replication slots. A higher setting retains more data, consuming disk space but allowing a replica to catch up after a longer disconnection. The `hot_standby_feedback` parameter informs the primary node about long-running queries on read replicas. This can ensure that vacuum garbage collection on the primary does not remove records those queries still need, but doing so risks table bloat.
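A sketch of capping slot retention and then monitoring it, assuming Postgres 13 or later (where the `wal_status` and `safe_wal_size` columns exist):

```sql
-- Keep at most 10 GB of WAL for disconnected replication slots so a
-- stalled replica cannot fill the primary's disk.
ALTER SYSTEM SET max_slot_wal_keep_size = '10GB';
SELECT pg_reload_conf();

-- Check whether any slot is close to losing WAL it still needs.
SELECT slot_name, wal_status, safe_wal_size FROM pg_replication_slots;
```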
Preset Options are usually fixed at database initialization and rarely changed afterward. `data_checksums` enhances data integrity by enabling checksums on data pages, which helps detect corruption but introduces a performance overhead. `debug_assertions` enables internal sanity checks, providing additional error detection that is generally needed only for troubleshooting bugs in the Postgres code itself.
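Preset options can be read from SQL but not changed there; `data_checksums`, for instance, is fixed when the cluster is created with `initdb --data-checksums`:

```sql
-- Read-only at runtime; these report how the cluster was built.
SHOW data_checksums;
SHOW debug_assertions;
```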
Autovacuum settings control the automated background process that cleans up and optimizes the database. `autovacuum_naptime` sets the minimum delay between autovacuum runs, affecting how quickly the system responds to accumulating garbage. `autovacuum_vacuum_cost_limit` determines how heavily the autovacuum processes are throttled. Throttling them protects foreground application processing, but risks greater garbage accumulation and table bloat. In the worst case, autovacuum falls ever further behind, eventually forcing the database to be paused for a full vacuum.
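A sketch of tuning autovacuum globally and per table (the `orders` table is hypothetical):

```sql
-- Wake the autovacuum launcher every 30 seconds instead of the
-- default one minute.
ALTER SYSTEM SET autovacuum_naptime = '30s';
SELECT pg_reload_conf();

-- Let vacuum work harder on a hypothetical high-churn table so it
-- does not fall behind.
ALTER TABLE orders SET (autovacuum_vacuum_cost_limit = 2000);
```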
Statistics settings affect the collection of performance and operational metrics. `log_parser_stats` and `track_io_timing` enable detailed logging of query-parsing statistics and the tracking of I/O timing, respectively. Collecting more statistics can provide valuable performance insight, but it may generate large amounts of log data and impact the performance of the database.
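For example, enabling `track_io_timing` makes I/O time visible in query plans. A minimal sketch:

```sql
-- Measure time spent in data file reads and writes (adds a small
-- clock-reading overhead on some platforms).
ALTER SYSTEM SET track_io_timing = on;
SELECT pg_reload_conf();

-- With track_io_timing on, EXPLAIN reports I/O timings per query.
EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM pg_class;
```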
Version and Platform Compatibility settings ensure backwards compatibility with older versions of Postgres and with other platforms. For example, `transform_null_equals` alters the behavior of the `=` operator with `NULL` values to conform to legacy behavior, and `backslash_quote` controls how backslash characters are treated in string literals, preserving compatibility with certain applications.
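The effect of `transform_null_equals` is easy to demonstrate with a throwaway table:

```sql
CREATE TEMP TABLE t (v int);
INSERT INTO t VALUES (1), (NULL);

-- Standard SQL: "= NULL" is never true, so this returns no rows.
SELECT * FROM t WHERE v = NULL;

-- Legacy mode: the parser rewrites "= NULL" to "IS NULL", so the
-- NULL row is now returned.
SET transform_null_equals = on;
SELECT * FROM t WHERE v = NULL;
```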
Lock Management settings fine-tune the behavior of locks. They become important under high concurrency, and setting them incorrectly can lead to crippling performance bottlenecks. For example, `max_locks_per_transaction`, which is misleadingly named, actually sizes the shared lock table and consequently determines the average number of locks each transaction can hold. `deadlock_timeout` determines how long the database waits on a lock before checking for a deadlock, affecting responsiveness in situations where deadlocks occur.
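The lock table holds roughly `max_locks_per_transaction * (max_connections + max_prepared_transactions)` object locks in total, which is why the per-transaction name is misleading. A sketch of adjusting both knobs:

```sql
-- Enlarge the shared lock table (takes effect at restart); useful for
-- workloads that touch many tables or partitions per transaction.
ALTER SYSTEM SET max_locks_per_transaction = 128;

-- Check for deadlocks after 200 ms instead of the default 1 s: faster
-- detection, but more wasted work if lock waits are common.
ALTER SYSTEM SET deadlock_timeout = '200ms';
SELECT pg_reload_conf();
```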
Error Handling settings dictate how the database responds to errors. `data_sync_retry` allows the database to retry synchronizing data to disk after a failure, which enhances resilience but can hide underlying disk issues, possibly masking corruption. `restart_after_crash` controls whether the database should automatically restart after a crash, which makes sense in some deployments but not when a failover configuration is available.
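For instance, when an external failover manager is responsible for recovering from crashes, it may be preferable to disable the built-in restart (a sketch; whether this is right depends on your HA tooling):

```sql
-- Let the cluster manager, not Postgres itself, decide what happens
-- after a backend crash.
ALTER SYSTEM SET restart_after_crash = off;
SELECT pg_reload_conf();
```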
Digging through the Postgres configuration is a little bit like taking a tour of the internals of the database. The developers of Postgres have put a configuration knob on many of the interesting database capabilities and mechanisms, so you learn something about each one of them when you explore the configuration.
This exploration also reinforces what many of us already know from experience: configuring Postgres so that it runs your workload optimally requires significant expertise. Many developers do not even know where to start when faced with the 300+ configuration knobs. Even experts need to spend significant time analyzing a system to understand and address the root causes of performance bottlenecks.
The complexity of the Postgres configuration undermines the benefits of the powerful abstractions: the relational model and the SQL language. We have been promised a database that takes care of all of the complex details of storing information reliably and transforming it efficiently. Postgres delivers on this promise to a great extent—as much as any database does—but while it simplifies code, it shifts much complexity to the configuration.