How to Find and Remove Unused Indexes to Speed Up Writes

November 21, 2025 | by dbsnoop

In the world of database optimization, indexes are celebrated as performance heroes. They are the first tool we reach for to fix a slow SELECT query, and for good reason. However, there is a darker, rarely discussed side: every index you create imposes a hidden, perpetual tax on your system’s write performance. An index that never speeds up a read but must still be meticulously maintained on every INSERT, UPDATE, and DELETE is not a performance asset; it is disguised “dead weight.” It is a technical debt that accumulates silently, making your transactions slower, increasing lock contention, and inflating your storage costs.

The habit of indexing “just in case,” or forgetting indexes created for queries that no longer exist, leads to a phenomenon of “index sprawl”: a proliferation of useless indexes that actively harms the health of your database. This article is a practical, high-impact guide to identifying, validating, and safely removing these unused indexes: a “quick win” that can unlock write performance you didn’t even know you were missing.

Why Useless Indexes Are So Expensive

To understand the impact, it is crucial to remember that an index is not a passive entity. It is a living data structure that must be kept in perfect sync with the main table. This maintenance has a real and measurable cost.

1. Write Amplification

This is the most direct cost. A single write operation on your table is amplified into multiple write operations on the disk.

  • Practical Example: Imagine an audit_logs table with 8 indexes to support various types of queries. When your application executes a single INSERT into this table, the database needs to perform, at a minimum, nine distinct write operations: one to write the row to the table itself (in the heap) and eight more to insert the new entry into each of the eight B-Tree index structures.
  • The Impact: In a system with a high data ingestion rate, this write amplification becomes a massive bottleneck. It drastically increases the demand for IOPS (Input/Output Operations Per Second) on your disk, which translates directly into a higher cloud bill for high-performance storage.
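
To see where this amplification is concentrated, a minimal sketch like the one below (PostgreSQL, querying the pg_indexes catalog view) lists the tables carrying the most indexes. Only the system schemas are excluded here; your own schema and table names will of course differ.

-- Sketch: tables with the most indexes, i.e. where each write is amplified the most.
SELECT
    schemaname,
    tablename,
    COUNT(*) AS index_count
FROM
    pg_indexes
WHERE
    schemaname NOT IN ('pg_catalog', 'information_schema')
GROUP BY
    schemaname, tablename
ORDER BY
    index_count DESC
LIMIT 20;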

2. Increased Lock Contention

The maintenance of indexes does not happen in a vacuum; it participates in the same concurrency control system as your transactions.

  • Practical Example: When a row is updated (UPDATE), the database needs to acquire locks not only on the table row but also on the corresponding entries in the indexes. More indexes mean more resources to be locked. An UPDATE that modifies an indexed column usually results in a “delete + insert” operation in the index tree, a process that can require locks on multiple index pages.
  • The Impact: In a high-concurrency environment, this need to lock more resources for a longer time drastically increases the probability of lock contention and even deadlocks. Other transactions are forced to wait, resulting in a drop in the system’s total throughput.
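
To check whether this is happening in your environment right now, a quick sketch like the one below (PostgreSQL 9.6 or later, using pg_stat_activity) lists the sessions currently blocked waiting for a lock; SQL Server exposes similar information through sys.dm_exec_requests and its wait and blocking columns.

-- Sketch: sessions currently waiting on locks (a symptom of the contention described above).
SELECT
    pid,
    wait_event_type,
    wait_event,
    state,
    query
FROM
    pg_stat_activity
WHERE
    wait_event_type = 'Lock';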

3. Storage and Maintenance Costs

Indexes are not just metadata; they are data structures that consume disk space.

  • Practical Example: On wide tables with many composite indexes, it is not uncommon for the total size of the indexes to exceed the size of the table itself.
  • The Impact: This not only increases your direct storage costs but also prolongs critical maintenance operations. Backups take longer to complete and restore. VACUUM operations (in PostgreSQL) or statistics maintenance have more work to do, consuming more system resources.
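
A quick way to spot these cases in PostgreSQL is to compare the heap size of each table against the combined size of its indexes. The sketch below assumes your tables live in the public schema; adjust the schema filter to match your environment.

-- Sketch: tables whose indexes consume the most space, shown next to the table size itself.
SELECT
    relname AS table_name,
    pg_size_pretty(pg_table_size(oid)) AS table_size,
    pg_size_pretty(pg_indexes_size(oid)) AS total_index_size
FROM
    pg_class
WHERE
    relkind = 'r'
    AND relnamespace = 'public'::regnamespace
ORDER BY
    pg_indexes_size(oid) DESC
LIMIT 20;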

The Step-by-Step Guide to Identifying Unused Indexes

Identifying these “write-only” indexes requires an analysis of their actual use in production. Fortunately, the main relational databases collect statistics that allow us to do just that.

For PostgreSQL

PostgreSQL tracks index usage in the pg_stat_user_indexes system view. The key column here is idx_scan, which counts how many times the index has been used for a scan.

The Query:

SELECT
    s.schemaname,
    s.relname AS table_name,
    s.indexrelname AS index_name,
    pg_size_pretty(pg_relation_size(s.indexrelid)) AS index_size,
    s.idx_scan AS times_used
FROM
    pg_stat_user_indexes s
JOIN
    pg_indexes i ON s.indexrelname = i.indexname
                AND s.schemaname = i.schemaname
WHERE
    s.idx_scan = 0
    AND i.indexdef NOT LIKE '%UNIQUE%' -- Excludes UNIQUE constraint indexes
ORDER BY
    pg_relation_size(s.indexrelid) DESC;

This query lists all indexes that have never been used for a read (idx_scan = 0), ordered by their size. The largest ones at the top are your main candidates for removal.
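
One extra safety check before treating a zero-scan index as disposable: confirm that it is not enforcing a constraint. The indexdef filter above catches most unique indexes, but a more direct check is to look the index up in the pg_constraint catalog. In this sketch, idx_candidate is a hypothetical name to replace with the index you are evaluating.

-- Sketch: constraints (PRIMARY KEY, UNIQUE, EXCLUSION, or a FOREIGN KEY that relies on it)
-- depending on the candidate index. No rows returned means no constraint uses it.
SELECT
    conname AS constraint_name,
    contype AS constraint_type
FROM
    pg_constraint
WHERE
    conindid = 'idx_candidate'::regclass;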

For SQL Server

SQL Server uses a Dynamic Management View (DMV) called sys.dm_db_index_usage_stats. This DMV is even richer, as it tracks reads (user_seeks, user_scans, user_lookups) and writes (user_updates) separately.

The Query:

SELECT
    OBJECT_NAME(s.object_id) AS table_name,
    i.name AS index_name,
    i.type_desc,
    s.user_updates AS total_writes,
    s.user_seeks + s.user_scans + s.user_lookups AS total_reads,
    (SELECT SUM(p.rows) FROM sys.partitions p
     WHERE p.object_id = s.object_id AND p.index_id = s.index_id) AS number_of_rows
FROM
    sys.dm_db_index_usage_stats AS s
JOIN
    sys.indexes AS i ON s.object_id = i.object_id AND s.index_id = i.index_id
WHERE
    OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
    AND s.database_id = DB_ID()
    AND s.user_seeks + s.user_scans + s.user_lookups = 0 -- Key condition: no reads
    AND s.user_updates > 0 -- At least one write
    AND i.is_unique = 0 -- Excludes UNIQUE constraint indexes
ORDER BY
    s.user_updates DESC;

This query is extremely powerful. It finds all indexes that have had no read operations but have had at least one write operation. These are, by definition, the “dead weight” indexes.

Critical Caveats of the Manual Approach

These queries are a great starting point, but they come with an important warning: index usage statistics are reset every time the database service is restarted. If you run the query a day after a restart, the results will not be reliable. It is crucial that you collect this data over a complete business cycle (at least a month, or a quarter if there are quarterly reports) to be sure that you are not discarding an index that is rarely used but critical.
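
A quick way to know how much history you are actually looking at is to check when the statistics were last reset, or when the instance last started. A minimal sketch for both engines:

-- PostgreSQL: server start time and the last statistics reset for the current database.
SELECT
    pg_postmaster_start_time() AS server_started,
    stats_reset
FROM
    pg_stat_database
WHERE
    datname = current_database();

-- SQL Server: the instance start time (the usage DMVs are cleared at every restart).
SELECT sqlserver_start_time FROM sys.dm_os_sys_info;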

Continuous Analysis with Observability

The risk and manual nature of the above approach are exactly the problems that an observability platform like dbsnOOp was designed to solve.

  • Persistent Monitoring: dbsnOOp collects and stores index usage statistics over time, regardless of server restarts. It builds an accurate and long-term picture of what is and is not used in your environment.
  • Contextualization: The platform doesn’t just tell you that an index is unused; it correlates each index’s usage with your system’s most critical queries. This gives you the confidence that removing an index will not negatively impact an important query.
  • Safe and Actionable Recommendations: Based on its historical and continuous analysis, dbsnOOp can generate a high-confidence list of unused indexes that are safe candidates for removal. This transforms a risky investigation project into a routine, data-driven optimization process.

The Safe Removal Process: A Checklist for SREs

Once you have identified a candidate index for removal, never drop it immediately in production. Follow a safe process:

  1. Identify and Question: Use the query or the dbsnOOp report to find a candidate. Question its origin: does it belong to an old feature? Was it created for a query that was refactored? Is there a quarterly BI job that might use it?
  2. Disable or Drop (Staging/QA Environments): Test the impact of the removal in a pre-production environment. Run your regression and performance tests to ensure no essential query has been harmed.
  3. Disable Before Dropping in Production: Instead of an immediate DROP INDEX, prefer a reversible step. In SQL Server, ALTER INDEX ... DISABLE keeps the index metadata but stops the optimizer from using it (ALTER INDEX ... REBUILD brings it back). PostgreSQL has no disable command, so save the index definition with pg_get_indexdef() before dropping it, which lets you recreate it quickly if needed (see the sketch after this checklist). Note that simply renaming an index does not hide it from the optimizer.
  4. Monitor Closely: After disabling (or dropping) the index, closely monitor your performance dashboards and error logs for a period (e.g., one or two weeks). If no new slow queries or errors appear, you can proceed with confidence.
  5. Drop: Finally, schedule a maintenance window to execute the DROP INDEX command.
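
Below is a minimal sketch of steps 3 and 5, using hypothetical object names (an idx_old index on an audit_logs table); adapt the names and schemas to your environment.

-- SQL Server: disable first (reversible with ALTER INDEX idx_old ON dbo.audit_logs REBUILD)...
ALTER INDEX idx_old ON dbo.audit_logs DISABLE;
-- ...then, after the observation window, drop it for good:
DROP INDEX idx_old ON dbo.audit_logs;

-- PostgreSQL: save the definition so the index can be recreated if a regression appears...
SELECT pg_get_indexdef('idx_old'::regclass);
-- ...then drop it without blocking concurrent writes:
DROP INDEX CONCURRENTLY idx_old;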

The Continuous Hygiene of Indexing

Index management is not a one-time creation event; it is a continuous lifecycle of creation, monitoring, and removal. Unused indexes are an insidious form of technical debt that imposes a hidden tax on every write transaction your system processes. By adopting a proactive approach to finding and removing this dead weight, you are not just cleaning up your database; you are unlocking write performance, reducing contention, and lowering your operational costs. In a world of data-driven engineering, there is no longer an excuse to pay the inefficiency tax.

Want to find and eliminate the “dead weight” that is slowing down your writes? Schedule a meeting with our specialist or watch a live demo!

To schedule a conversation with one of our specialists, visit our website. If you prefer to see the tool in action, watch a free demo. Stay up to date with our tips and news by following our YouTube channel and our LinkedIn page.
