What No One Told You About WiredTiger’s Configuration (and How It’s Devouring Your RAM)

September 19, 2025 | by dbsnoop

Your team looks at the monitoring dashboard, and the metric is alarming: the mongod process is consuming 80%, 90%, or even more of the server's total RAM. The immediate instinct is to panic. The first suspicion falls on a memory leak in the application or on inefficient queries loading too much data. In extreme cases, the fear of the Linux OOM (Out-of-Memory) Killer, the kernel mechanism that terminates processes to keep the server from running out of memory, becomes a real and imminent concern. But in the vast majority of cases, the cause isn't a bug or an error. It's a misunderstood feature.

What no one told you is that your MongoDB is, by default, designed to be greedy with memory. The storage engine behind it, WiredTiger, aggressively tries to use a significant portion of the available RAM for its internal cache. The goal is noble: to keep as much data and as many indexes in memory as possible so that reads are absurdly fast, avoiding slow disk access. The problem is that the default configuration for this cache is a double-edged sword. In many scenarios, it can be the silent catalyst for your environment’s instability, suffocating the operating system and other essential processes.

The Dangerous Default: The “Black Box” Cache Logic

By default, starting with MongoDB 3.4, WiredTiger allocates for its cache the larger of:

  • 50% of (total RAM - 1 GB), or
  • 256 MB.

On a server with 64 GB of RAM, for example, the WiredTiger cache will try to reserve 0.5 * (64 - 1) = 31.5 GB for itself. This means that roughly half of your server's memory is, by default, dedicated to a single function. On a server dedicated exclusively to MongoDB, this might be acceptable. But in a real-world environment, running in containers, sharing resources with other applications, or where the operating system itself needs memory for its own buffers and page cache, this default configuration can be disastrous.
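
A quick way to sanity-check that math for your own host is to apply the documented formula directly. A minimal mongosh sketch, where totalRamGB is a value you fill in yourself (64 GB is a hypothetical example):

// Rough sketch of the documented default: the larger of
// 50% of (RAM - 1 GB) and 256 MB (0.25 GB).
// 'totalRamGB' is a value you supply for your own server.
const totalRamGB = 64;
const defaultCacheGB = Math.max(0.5 * (totalRamGB - 1), 0.25);
print(`Default WiredTiger cache: ~${defaultCacheGB} GB`);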

Practical Diagnosis: See the Size of Your Cache’s Appetite

Before changing anything, you need data. Fortunately, MongoDB offers a clear view of the WiredTiger cache’s state.

Code: Inspecting the WiredTiger Cache

Connect to your deployment with mongosh and run the following command to get a detailed status of the storage engine:

// This command returns a giant document. We are specifically
// interested in the 'wiredTiger.cache' section.
db.serverStatus().wiredTiger.cache

What to look for in the output:

  • “maximum bytes configured”: Shows the maximum size, in bytes, that the cache has been configured to use. This is your upper limit.
  • “bytes currently in the cache”: The actual size of the cache at the moment. This tells you how much of that limit is being effectively used.
  • “pages read into cache” and “pages written from cache”: If “pages read into cache” is growing very rapidly, the cache is too small for your working set and MongoDB is constantly reading from disk (cache misses), which drives up I/O latency. A fast-growing “pages written from cache” points to heavy eviction of dirty pages under write pressure.

This quick diagnosis gives you a clear picture: Is MongoDB using the amount of memory you expected? Or has the default configuration created a RAM-hungry giant inside your server?
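
To turn that snapshot into concrete numbers, here is a minimal mongosh sketch that pulls just the fields listed above and computes how full the cache is. The field names are the standard WiredTiger statistics; the fill percentage is a simple ratio, not an official metric:

// Pull the WiredTiger cache section and compute a fill percentage.
const cache = db.serverStatus().wiredTiger.cache;
const maxBytes = cache["maximum bytes configured"];
const usedBytes = cache["bytes currently in the cache"];

print(`Configured maximum: ${(maxBytes / 1024 ** 3).toFixed(2)} GB`);
print(`Currently in cache: ${(usedBytes / 1024 ** 3).toFixed(2)} GB`);
print(`Cache fill: ${((usedBytes / maxBytes) * 100).toFixed(1)}%`);
print(`Pages read into cache (cumulative): ${cache["pages read into cache"]}`);
print(`Pages written from cache (cumulative): ${cache["pages written from cache"]}`);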

The Configuration Dilemma: The Sweet Spot

An incorrect cache configuration leads to two distinct failure scenarios:

  1. Cache Too Large (The Choking Risk): If the WiredTiger cache is too large, it leaves little memory for the operating system and any other processes running on the machine. The OS may start using swap memory on the disk, which is catastrophically slow and negates all the benefits of having a cache. In the worst-case scenario, the Linux OOM Killer will identify mongod as the culprit for memory consumption and terminate it abruptly, causing a complete outage of your database.
  2. Cache Too Small (The Slowness Risk): Fearing high RAM consumption, many teams set the cache too small. The result is constant “cache churn”: MongoDB reads data from disk, places it in the cache, and almost immediately evicts it to make room for new data. This manifests as low RAM utilization but high latency, elevated disk I/O, and poor overall performance (a quick way to spot this churn is sketched right after this list).
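
If you don't have continuous monitoring in place, a simple way to check for churn is to sample the cumulative “pages read into cache” counter twice and look at the rate. A rough mongosh sketch; the 10-second window is an arbitrary choice:

// Sample "pages read into cache" twice and estimate the miss rate.
// A persistently high pages/second suggests the cache is too small
// for the working set and reads keep going to disk.
const before = db.serverStatus().wiredTiger.cache["pages read into cache"];
sleep(10 * 1000); // mongosh helper: pause for 10 seconds
const after = db.serverStatus().wiredTiger.cache["pages read into cache"];
print(`Pages read into cache: ~${((after - before) / 10).toFixed(1)} pages/sec`);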

Code: Taking Control of the Configuration

The solution is to manually set a cache size that makes sense for your workload and environment. This is done in the MongoDB configuration file (mongod.conf).

# Example configuration in your mongod.conf file
# Sets a hard limit of 16GB for the WiredTiger cache.

storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 16

After applying this configuration and restarting the mongod process, you will have full control over your database’s memory appetite.
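
If a restart is not convenient, the cache limit can also be adjusted at runtime through the wiredTigerEngineRuntimeConfig server parameter. A sketch assuming the same 16 GB target; note that a runtime change does not persist across restarts, so it should still be mirrored in mongod.conf:

// Adjust the WiredTiger cache at runtime (requires admin privileges).
// Keep this value in sync with what you persist in mongod.conf.
db.adminCommand({
  setParameter: 1,
  wiredTigerEngineRuntimeConfig: "cache_size=16G"
});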

The Engineering Solution: From Guesswork to Observability with dbsnOOp

Setting cacheSizeGB solves the problem of uncontrolled consumption, but it creates a new question: what is the right number? 16GB? 24GB? 10GB? The correct answer isn’t a fixed number, but a balancing point that depends on your workload, and that workload changes over time.

This is where manual guesswork ends and performance engineering begins.

  • Cache Hit Ratio Analysis: dbsnOOp continuously monitors the efficiency of your cache. It calculates the cache hit ratio, showing the percentage of reads served from memory versus from disk. A consistently high hit ratio (above 99%) indicates a healthy cache. A drop in this metric after a deploy can point to a new query that is pushing useful data out of the cache (a rough manual approximation of this ratio is sketched after this list).
  • Correlation with Latency: The platform correlates cache metrics (like pages read from disk) with the latency of your queries. This allows you to see the direct impact of an undersized cache on the end-user experience.
  • Historical Visibility: Instead of looking at a momentary snapshot with serverStatus, dbsnOOp provides a complete history. This allows you to make informed decisions, such as increasing the cache during seasonal peaks or reducing it during periods of low activity to save costs in the cloud.
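
Until you have that kind of continuous history, you can approximate the hit ratio by hand from the same serverStatus counters. A rough mongosh sketch, assuming the standard WiredTiger statistic names; the counters are cumulative since process start, so this is a lifetime average rather than a windowed value:

// Approximate the cache hit ratio from cumulative WiredTiger counters.
// "pages requested from the cache" = total page lookups;
// "pages read into cache"          = lookups that had to go to disk.
const cache = db.serverStatus().wiredTiger.cache;
const requested = cache["pages requested from the cache"];
const readIn = cache["pages read into cache"];
const hitRatio = requested > 0 ? 1 - readIn / requested : 1;
print(`Approximate cache hit ratio: ${(hitRatio * 100).toFixed(2)}%`);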

Don’t let a misunderstood default configuration dictate the stability of your MongoDB environment.

Take control of your memory consumption based on data, not fear. Schedule a meeting with our specialist or watch a live demo!

Schedule a demo here.

Learn more about dbsnOOp!

Learn about database monitoring with advanced tools here.

Visit our YouTube channel to learn about the platform and watch tutorials.

Recommended Reading

  • MongoDB Fine-Tuning: This is the most direct and essential complement to the article’s theme. Deepen your knowledge of other MongoDB-specific optimization techniques and strategies, going beyond the cache to optimize indexes, queries, and schema.
  • Cloud Monitoring and Observability: The Essential Guide for Your Database: Managing resources like RAM is critical in cloud environments where every gigabyte has a cost. This article explores the challenges of ensuring performance and cost control on platforms like MongoDB Atlas.
  • AI Database Tuning: Discover how Artificial Intelligence can analyze complex patterns of memory and I/O usage to recommend optimal configurations, transforming the tuning guessing game into a data-driven discipline.