PostgreSQL’s Shared Buffers and OS Page Cache: What They Are and How to Configure Them?

September 24, 2025 | by dbsnoop

Your PostgreSQL server has a generous amount of RAM, but its memory behavior is confusing. Tools like htop or free -g show that “free” memory is dangerously low, while “cached/buffered” memory is extremely high. At the same time, the application’s performance is good, but you worry that the system is on the verge of collapsing due to a lack of RAM. This scenario is not a sign of a problem; it’s a sign that PostgreSQL, in conjunction with the Linux operating system, is working exactly as designed.

Unlike other RDBMSs that try to manage memory almost exclusively, PostgreSQL takes a collaborative approach. It relies on two levels of data caching: its own internal cache, shared_buffers, and the operating system’s cache, the OS Page Cache. Understanding the dynamic between these two caches is the key to effective memory tuning in PostgreSQL and to avoiding the most common misconfiguration of all: allocating too much memory to the wrong place.

The Dual-Cache Architecture: A Silent Partnership

To optimize PostgreSQL, you need to think like it does. Every data read from the disk goes through a two-layer partnership.

1. Shared Buffers: PostgreSQL’s Private Workspace

shared_buffers is an area of RAM managed directly by PostgreSQL. It is its primary working cache. When PostgreSQL modifies data (an UPDATE or INSERT), it must bring the data page into shared_buffers before altering it. It is, essentially, the workshop where all the dirty work is done.

  • Configuration Parameter: shared_buffers in the postgresql.conf file.
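If you want to see what is actually occupying this workspace, the pg_buffercache extension (shipped in contrib, so it may need to be installed on your distribution) exposes one row per buffer. A minimal sketch, assuming the extension is available:

-- Requires the contrib extension pg_buffercache.
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Top 10 relations by the number of pages currently held in shared_buffers.
SELECT
    c.relname,
    count(*) AS buffers,
    pg_size_pretty(count(*) * current_setting('block_size')::int) AS size_in_cache
FROM
    pg_buffercache b
    JOIN pg_class c ON c.relfilenode = b.relfilenode
GROUP BY
    c.relname
ORDER BY
    buffers DESC
LIMIT 10;

This is purely diagnostic; it does not change any setting, but it shows which tables and indexes are living in PostgreSQL's private cache right now.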

2. OS Page Cache: The Operating System’s Giant Warehouse

The operating system (especially Linux) is extremely aggressive and efficient at using all “free” memory as a file cache, the Page Cache. When any application reads a file from the disk, the OS keeps a copy of that block in RAM. The next time the same block is requested, it is served directly from memory without touching the disk. PostgreSQL stores its data in files, so it benefits enormously from this mechanism.

  • Configuration Parameter: None. It is managed automatically by the operating system.
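There is no knob to size the Page Cache itself, but PostgreSQL does want to know roughly how big it is: the effective_cache_size parameter tells the query planner how much data it can expect to find in memory (shared_buffers plus the OS cache combined). It changes cost estimates only, never memory allocation, and a commonly cited starting point is 50-75% of total RAM. A sketch for the 64GB server used later in this article, assuming roughly 75% of RAM ends up available for caching:

-- effective_cache_size does not allocate memory; it only informs the planner
-- how much data it can reasonably expect to find cached in RAM
-- (shared_buffers + OS Page Cache). Illustrative value for a 64GB server:
ALTER SYSTEM SET effective_cache_size = '48GB';
SELECT pg_reload_conf();  -- this parameter does not require a restart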

How They Work Together (The Read Flow)

  1. A query needs a data page.
  2. PostgreSQL first looks in shared_buffers.
    • Cache Hit (Level 1): If the page is there, it is used immediately. Maximum performance.
    • Cache Miss (Level 1): If not, PostgreSQL requests the page from the operating system.
  3. The OS first looks in its OS Page Cache.
    • Cache Hit (Level 2): If the page is in the OS cache, it is copied to shared_buffers (a fast RAM-to-RAM operation) and then used. Very good performance.
    • Cache Miss (Level 2): If the page is in neither cache, the OS reads it from the physical disk (a slow operation), places it in the OS Page Cache, and delivers it to PostgreSQL, which then places it in shared_buffers. Lowest performance.
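You can observe level-1 hits and misses for a single query with EXPLAIN (ANALYZE, BUFFERS). In its output, "shared hit" counts pages found in shared_buffers, while "read" counts pages requested from the operating system (which may still have been served from the Page Cache rather than the disk; PostgreSQL cannot tell the difference from here). A minimal sketch against a hypothetical orders table:

-- "shared hit" = pages found in shared_buffers (level 1);
-- "read"       = pages fetched from the OS, i.e. Page Cache or disk (level 2/3).
-- The table name is illustrative only.
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM orders WHERE created_at > now() - interval '1 day';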

The Correct Configuration: Less is (Often) More

The most common mistake that administrators coming from other RDBMSs make is to allocate a huge percentage of the server’s RAM (70-80%) to shared_buffers. In PostgreSQL, this can be disastrous for performance.

By over-sizing shared_buffers, you “steal” memory that the operating system could have used for its efficient Page Cache. This leads to a “double caching” effect, where the same data can end up existing in both shared_buffers and the Page Cache, which is a waste of RAM.

The Golden Rule (Starting Point):
For a dedicated database server, the standard recommendation for shared_buffers is 25% of the system’s total RAM.

  • Servers with a lot of RAM (>32GB): This can be increased, but it rarely makes sense to go beyond 40%. For workloads that fit entirely in RAM, the value can be higher.
  • The Key is Balance: The goal is to give PostgreSQL enough memory for its write operations and its “hottest” data, while leaving the majority of the RAM for the OS Page Cache, which is extremely efficient at managing reads for the overall “working set.”

Code 1: Checking and Configuring shared_buffers

-- 1. Check the current setting (connected to psql)
SHOW shared_buffers;

To change it, edit the postgresql.conf file:

# Example for a server with 64GB of RAM
# 25% of 64GB = 16GB
shared_buffers = 16GB

Important: This is a change that requires a restart of the PostgreSQL service to take effect.
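If you prefer not to edit postgresql.conf by hand, the same change can be made from psql with ALTER SYSTEM, which writes the value to postgresql.auto.conf; the restart is still mandatory:

-- Equivalent to editing postgresql.conf; the value goes to postgresql.auto.conf.
ALTER SYSTEM SET shared_buffers = '16GB';
-- shared_buffers still requires a restart, e.g. via pg_ctl restart
-- or your service manager (systemctl, etc.).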

Validation: Is Your Configuration Effective?

You need to measure the efficiency of your shared_buffers. The metric for this is the Cache Hit Rate.

Code 2: Calculating the Cache Hit Rate

-- This query calculates the shared_buffers hit rate for the current database.
-- NULLIF avoids a division-by-zero error on a freshly started or idle database.
SELECT
    'shared_buffers_hit_rate' AS metric,
    round(sum(blks_hit) * 100.0 / nullif(sum(blks_hit) + sum(blks_read), 0), 2) AS value
FROM
    pg_stat_database
WHERE
    datname = current_database();

What to look for: A hit rate consistently above 99% is the target. It indicates that almost all page requests reaching shared_buffers are served from it, so it is doing its job as an efficient “workspace” for the most active data. A persistently low hit rate may be a sign that shared_buffers is, in fact, too small for your hottest data, the pages your queries and writes touch repeatedly.
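The database-wide number can hide problems in individual tables. As a complementary sketch, the pg_statio_user_tables view breaks the same hit/read counters down per table, which helps identify which relations are driving the misses:

-- Per-table hit rate, worst offenders first (tables never read are excluded).
SELECT
    relname,
    heap_blks_hit,
    heap_blks_read,
    round(heap_blks_hit * 100.0 / nullif(heap_blks_hit + heap_blks_read, 0), 2) AS hit_rate
FROM
    pg_statio_user_tables
WHERE
    heap_blks_hit + heap_blks_read > 0
ORDER BY
    hit_rate ASC
LIMIT 10;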

From Static Configuration to Continuous Optimization with dbsnOOp

The “25%” rule is an excellent starting point, but it’s not a universal law. The ideal setting for your environment might be 30% or 20%. Finding this optimal balance requires continuous analysis.

dbsnOOp elevates this process from a static configuration to dynamic optimization.

  • Historical Hit Rate Monitoring: dbsnOOp tracks the efficiency of your shared_buffers over time. You can correlate drops in the hit rate with new deployments or changes in workload, understanding the real impact on your cache.
  • I/O and Query Performance Analysis: The platform correlates cache metrics with query latency and system I/O activity. This allows you to see if a larger or smaller shared_buffers results in better overall performance, enabling fine-tuning based on real data, not just rules of thumb.

Stop treating PostgreSQL’s memory like a black box. Understand the partnership between shared_buffers and the OS Page Cache to unlock your environment’s true performance.

Schedule a demo here.

Learn more about dbsnOOp!

Learn about database monitoring with advanced tools here.

Visit our YouTube channel to learn about the platform and watch tutorials.

Recommended Reading

  • PostgreSQL Fine-Tuning: This is the most direct and essential complement to the article’s theme. The configuration of shared_buffers is the foundation, and this guide explores other optimization techniques and strategies for PostgreSQL.
  • AI Database Tuning: Discover how Artificial Intelligence can analyze the complex interaction between shared_buffers, the OS Page Cache, and your workload to recommend an optimal configuration, going beyond rules of thumb.
  • Cloud Monitoring and Observability: The Essential Guide for Your Database: Memory management in the cloud has a direct impact on costs. This article explores the challenges of ensuring performance and resource efficiency on providers like AWS and Azure.