How to Fix “Out of Shared Memory” Errors in PostgreSQL

If you’ve seen the error ERROR: out of shared memory, you might have checked your server’s RAM and noticed plenty of free space. This is because the error doesn't refer to your total system memory, but rather a specific, pre-allocated slice of RAM that PostgreSQL uses for its internal "bookkeeping."

What is Shared Memory in Postgres?

PostgreSQL is a process-based system. Every time a client connects, a new process is spawned. To ensure these processes don't work in total isolation, Postgres allocates a Shared Memory area at startup.

This area acts as a communal whiteboard where Postgres tracks:

  • Shared Buffers: Cached data pages from the disk.
  • Lock Table: A record of every single lock held by every transaction (this is the most common culprit).
  • Background Worker States: Information for parallel queries and maintenance.
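
If you want to see what is currently written on that whiteboard, the pg_locks system view exposes the lock table. A minimal inspection query:

SELECT locktype, count(*)
FROM pg_locks
GROUP BY locktype
ORDER BY count(*) DESC;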

Why Does the Error Arise?

The "Out of Shared Memory" error almost always triggers when the Lock Table is full. Postgres needs to track every object involved in a transaction. If you are running a query that touches 50,000 partitions, or if you have hundreds of concurrent users locking different rows, the "whiteboard" runs out of space to write down who owns what lock.

Solution 1: Tuning the Configuration (The Root Fix)

The most direct fix is to allocate more space for locks in your postgresql.conf. This requires a restart.

Open postgresql.conf with the command below on Linux (substitute your PostgreSQL version for 17):

sudo nano /etc/postgresql/17/main/postgresql.conf 
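
If your installation keeps the configuration file elsewhere, the running server can tell you its location; from psql, run:

SHOW config_file;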

Listed below are the four critical parameters that define your shared memory footprint and can be tuned to avoid the out of shared memory error:

max_locks_per_transaction

  • The default value is 64, but if you are hitting the shared memory error, you can increase it to 128 or 256.
  • This is the primary multiplier. Postgres sizes the lock table using this formula: max_locks_per_transaction * (max_connections + max_prepared_transactions).

Note: This does not strictly limit a single transaction to 64 locks; it is just used to calculate the total pool size. Increasing this effectively expands the lock table.
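
As an alternative to editing the file by hand, the same change can be made with ALTER SYSTEM, which writes the setting to postgresql.auto.conf. The value 256 here is only an example, and a restart is still required because this parameter sizes shared memory at startup:

ALTER SYSTEM SET max_locks_per_transaction = 256;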

max_pred_locks_per_transaction

  • The default value is 64; if you hit shared memory issues while using Serializable isolation, increasing it to 128 is a sensible starting point.
  • If your application uses ISOLATION LEVEL SERIALIZABLE, Postgres tracks "predicate locks" (locks on data ranges or potential gaps). These consume their own space in shared memory. If you don't use Serializable transactions, this is less important, but matching it to your standard lock limit is a safe practice.
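
For illustration, predicate locks appear in pg_locks with the mode SIReadLock. This sketch uses a hypothetical accounts table:

BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT * FROM accounts WHERE balance > 0;  -- takes SIRead predicate locks
SELECT count(*) FROM pg_locks WHERE mode = 'SIReadLock';
COMMIT;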

max_prepared_transactions

  • If you use prepared transactions, increasing this parameter to around 100 can help (it only matters with Two-Phase Commit). If the value is already 0, you are probably not using prepared transactions at all.
  • This enables Two-Phase Commits (2PC). If you set this to a non-zero value, Postgres must reserve shared memory slots (and lock table space) for these "suspended" transactions, ensuring they survive a crash.

Warning: If you don't need 2PC, keep this at 0 to save memory. If you do need it, ensure it is at least as large as max_connections so every user can theoretically prepare a transaction.
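
For context, this is what a prepared ("suspended") transaction looks like; each one holds a shared memory slot, and its locks, until it is resolved. The accounts table and the 'demo_tx' label are hypothetical, and max_prepared_transactions must be greater than 0:

BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
PREPARE TRANSACTION 'demo_tx';  -- the transaction is now "suspended"

-- Later, possibly from another session or after a crash:
COMMIT PREPARED 'demo_tx';      -- or ROLLBACK PREPARED 'demo_tx'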

shared_buffers

  • A typical value for shared_buffers is 4GB (or roughly 25% of total RAM).
  • While this defines the data cache rather than the lock table, it is the largest consumer of the shared memory segment. If you increase the lock parameters significantly, you slightly increase the total shared memory requirement. Ensure your OS kernel limits are large enough to accommodate your shared_buffers and the increased lock tables.
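
Putting it together, an excerpt of postgresql.conf might look like the following; treat these values as illustrative starting points, not prescriptions:

max_locks_per_transaction = 128       # default is 64
max_pred_locks_per_transaction = 128  # only matters under SERIALIZABLE
max_prepared_transactions = 0         # keep at 0 unless you use 2PC
shared_buffers = 4GB                  # roughly 25% of total RAM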

Once these parameters are changed in the postgresql.conf file, you must restart PostgreSQL. You can do so with the command below:

sudo systemctl restart postgresql
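
After the restart, you can confirm the new values took effect from psql:

SHOW max_locks_per_transaction;
SHOW shared_buffers;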

Solution 2: Application-Side Batching (The "Chunk and Commit")

Instead of forcing the database to track a million changes in one go, modify your application logic to break the work into smaller pieces if possible. This releases locks back to the shared memory pool after every commit.

For example, a query like the one below might lock millions of records at once:

DELETE FROM logs WHERE created_at < '2024-01-01';

Instead, use a loop or similar logic that performs the deletion in batches, following these steps (a SQL sketch of one iteration comes after the list):

  1. Select 5,000 IDs to delete.
  2. Delete those 5,000 IDs.
  3. Commit the transaction and this releases the locks back to the shared memory pool.
  4. Repeat until no rows remain.
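
A minimal SQL sketch of one iteration, assuming the logs table has an id primary key; the application repeats this statement (each run autocommits and releases its locks) until it reports DELETE 0:

WITH batch AS (
    SELECT id FROM logs
    WHERE created_at < '2024-01-01'
    LIMIT 5000
)
DELETE FROM logs
WHERE id IN (SELECT id FROM batch);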

Solution 3: Use Low-Lock Function Designs

When writing logic inside the database, use Stored Procedures instead of Functions for heavy lifting. Functions are atomic; they must finish entirely before a commit happens, meaning they hold onto locks for their entire duration. Procedures (added in Postgres 11) allow you to call COMMIT inside a loop.

Here is an example of committing inside a loop using a stored procedure:

CREATE OR REPLACE PROCEDURE purge_old_data()
LANGUAGE plpgsql AS $$
BEGIN
    LOOP
        -- Delete one batch of already-processed rows.
        DELETE FROM events WHERE id IN (
            SELECT id FROM events WHERE processed = true LIMIT 5000
        );
        EXIT WHEN NOT FOUND; -- stop once a batch deletes nothing
        COMMIT; -- This specific line clears the shared memory locks!
    END LOOP;
END;
$$;
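
Invoke it with CALL. Because the procedure commits internally, it must be called outside an explicit transaction block, or the COMMIT will fail with "invalid transaction termination":

CALL purge_old_data();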

Solution 4: Optimize Partitioned Tables

If you have a table with thousands of partitions, a simple SELECT * FROM parent_table can attempt to lock every single child partition, instantly exhausting shared memory.

  • Partition Pruning: Ensure enable_partition_pruning = on (it is on by default).
  • Query Filtering: Always include the partition key in your WHERE clause so Postgres only locks the specific sub-table it needs.
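
A quick sketch of both points, using a hypothetical measurements table partitioned by month on logdate:

SHOW enable_partition_pruning;  -- should report 'on' (the default)

-- With the partition key in the WHERE clause, Postgres locks only the
-- January partition instead of every child table:
SELECT * FROM measurements
WHERE logdate >= DATE '2024-01-01'
  AND logdate <  DATE '2024-02-01';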

Running into an out-of-shared-memory error can be alarming, but it is rarely a sign that your server is physically out of RAM. Instead, it serves as a critical signal that your database's coordination limits, specifically the lock table, are being exceeded by heavy transactions or schema complexity. While increasing parameters like max_locks_per_transaction can provide immediate breathing room, true stability comes from combining these configuration tweaks with smarter, batched application logic that respects shared memory as a finite resource.
