Migrating an Odoo database is not just an application-level operation. The real bottleneck often lies in PostgreSQL. When the database grows beyond 50–100 GB, even a single migration step can take several days if PostgreSQL is not tuned properly.
After performing multiple migrations for large Odoo databases and working at the PostgreSQL internals level, I have found that certain tuning methods consistently deliver dramatic improvements. This article explains how to accelerate Odoo database migration by optimizing PostgreSQL restore operations, I/O, memory usage, WAL configuration, index management, and autovacuum strategies.
1. Using All CPU Cores During Restore with Parallel pg_restore
The first and most critical improvement is enabling parallelism during the restore operation. By default, pg_restore uses a single thread. This means only one COPY command runs at a time. For large databases, this is extremely slow.
Without parallelism
pg_restore -p 5433 -d odoo_14 backup.dump
Only one COPY runs at a time. CPU utilization remains very low.
You can check the currently active queries in pg_stat_activity:
SELECT * FROM pg_stat_activity WHERE state = 'active';
With full CPU utilization
pg_restore -p 5433 -d odoo_14 -j $(nproc) backup.dump
- -p - port of the PostgreSQL instance
- -d - name of the target database
- -j $(nproc) - runs the restore in parallel, using all available cores
- backup.dump - the dump file to restore
You can check the number of cores on your system with this Linux command:
nproc
If the system has 16 cores, then 16 COPY operations execute simultaneously. In pg_stat_activity you will observe multiple active COPY commands working together.
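Note that parallel restore only works with custom-format (-Fc) or directory-format (-Fd) archives; a plain SQL dump cannot be restored with -j. The directory format can also be dumped in parallel. A sketch, reusing the port and database names from the examples above:

```shell
# Directory-format dump taken in parallel (only -Fd supports parallel dump)
pg_dump -p 5433 -Fd -j "$(nproc)" -f backup.dir odoo_14

# Restore it with the same degree of parallelism
pg_restore -p 5433 -d odoo_14 -j "$(nproc)" backup.dir
```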
Why it matters
COPY operations for large Odoo tables such as account_move_line, stock_move, mail_message, and ir_attachment are the real bottlenecks. Parallel restore is the fastest and safest way to reduce migration time.
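To see where the restore time will actually go, you can list the largest tables on the source database before migrating, using the standard catalog size functions:

```sql
-- Top 10 tables by total size (table + indexes + TOAST)
SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;
```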
2. Removing Unwanted Data Before Migration
Before exporting the source database, it is essential to remove data that does not need to travel to the next Odoo version.
Examples include:
DELETE FROM mail_tracking_value;
DELETE FROM mail_mail;
DELETE FROM ir_attachment WHERE store_fname IS NULL;
DELETE FROM bus_bus;
This reduces dump size and significantly reduces database migration time.
3. Removing Unused or Duplicate Indexes Before Restore
Indexes on large tables slow down restore operations because each index requires multiple passes through data.
You can list indexes ordered by size using the query below:
SELECT
i.relname AS index_name,
t.relname AS table_name,
pg_size_pretty(pg_relation_size(i.oid)) AS index_size,
pg_relation_size(i.oid) AS index_size_bytes
FROM
pg_class t
JOIN
pg_index ix ON t.oid = ix.indrelid
JOIN
pg_class i ON ix.indexrelid = i.oid
WHERE
t.relkind = 'r' -- only tables
ORDER BY
pg_relation_size(i.oid) DESC;
Before taking the dump, identify unused indexes:
SELECT indexrelid::regclass, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0;
Drop unnecessary indexes:
DROP INDEX index_name;
After the restore and final data transformations, create only the indexes that are truly needed for Odoo queries.
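Before dropping anything, it is worth saving the definitions of the indexes you remove so they can be recreated after the migration; pg_get_indexdef returns the full CREATE INDEX statement. A sketch (the index name here is illustrative):

```sql
-- Save the definition somewhere safe, then drop the index
SELECT pg_get_indexdef('account_move_line_partner_id_index'::regclass);
DROP INDEX account_move_line_partner_id_index;
```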
4. PostgreSQL Configuration Tuning for Faster Migrations
PostgreSQL defaults are not designed for large Odoo databases. Tuning memory, I/O, WAL, and parallel settings has a significant impact.
Below are essential settings for a system with 32 GB RAM and 16 cores.
Memory Tuning (Linux, 32 GB RAM)
shared_buffers = 8GB (25% of RAM)
work_mem = 256MB
maintenance_work_mem = 4GB
effective_cache_size = 24GB (75% of RAM)
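The 25%/75% rules above can be expressed as a small helper. This is a sketch of the rule of thumb used in this article, not an official PostgreSQL formula:

```python
def migration_memory_settings(ram_gb: int) -> dict:
    """Rule-of-thumb PostgreSQL memory settings for a migration server."""
    return {
        "shared_buffers": f"{ram_gb // 4}GB",            # 25% of RAM
        "effective_cache_size": f"{ram_gb * 3 // 4}GB",  # 75% of RAM
        "maintenance_work_mem": f"{min(ram_gb // 8, 4)}GB",  # capped at 4GB
        "work_mem": "256MB",                             # per-sort budget
    }

print(migration_memory_settings(32))
```

For a 32 GB machine this reproduces the values listed above.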
These values ensure high-speed sorting, index creation, and large query operations.
WAL and Checkpoint Settings
wal_buffers = 512MB
max_wal_size = 64GB
min_wal_size = 4GB
checkpoint_timeout = 30min
checkpoint_completion_target = 0.9
Larger WAL buffers allow heavy writes without interruptions.
Longer checkpoint intervals reduce flushing overhead.
Parallel Worker Configuration
max_parallel_workers = 16
max_worker_processes = 16
max_parallel_workers_per_gather = 4
max_parallel_maintenance_workers = 4
This improves the performance of large index builds, VACUUM FULL operations, and ANALYZE.
Performance Tweaks
synchronous_commit = off
wal_compression = on
full_page_writes = on
random_page_cost = 1.1
seq_page_cost = 1.0
Disabling synchronous_commit speeds up all writes, at the cost of losing the most recently committed transactions if the server crashes; this is usually acceptable for a migration that can simply be re-run. WAL compression saves I/O during heavy migrations.
PostgreSQL 18 I/O Settings
io_method = worker
effective_io_concurrency = 200
io_workers = 16
Async I/O mode significantly improves restore and index rebuild speeds.
5. Temporarily Disable Autovacuum During Migration
Autovacuum workers compete for I/O and CPU during restoration and slow down COPY operations.
Before migration:
ALTER SYSTEM SET autovacuum = off;
After migration and cleanup:
ALTER SYSTEM SET autovacuum = on;
This prevents unnecessary background workers from competing with restore operations.
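Keep in mind that ALTER SYSTEM only writes the value to postgresql.auto.conf; the autovacuum setting takes effect after a configuration reload:

```sql
ALTER SYSTEM SET autovacuum = off;
SELECT pg_reload_conf();  -- apply the change without restarting the server
```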
6. Performing VACUUM FULL and ANALYZE After Migration
After the restore completes, PostgreSQL lacks fresh statistics. Without ANALYZE, Odoo will respond slowly because the planner selects poor execution plans.
VACUUM FULL;
ANALYZE;
This rebuilds table storage and updates statistics.
For very large databases, run ANALYZE first on selective tables:
ANALYZE account_move_line;
ANALYZE stock_move;
ANALYZE sale_order_line;
Then perform VACUUM FULL if needed.
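You can confirm that statistics are fresh by checking pg_stat_user_tables:

```sql
-- Tables analyzed most recently appear first
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY last_analyze DESC NULLS LAST;
```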
7. Set Fillfactor to Reduce Page Splitting
Odoo models often perform updates that change record length. When pages are full, updates result in dead tuples and bloated tables.
Before migration, set fillfactor:
ALTER TABLE table_name SET (fillfactor = 80);
VACUUM FULL table_name;
ANALYZE table_name;
This leaves free space in each page, reducing dead tuples and improving future update performance.
8. Warm Up Caches Using pg_prewarm
After restore, loading frequently accessed tables into memory improves initial migration speed:
Create the pg_prewarm extension (use PostgreSQL 18 to also benefit from its asynchronous I/O support):
CREATE EXTENSION pg_prewarm;
Check that the extension is available and installed (installed_version should be non-null):
SELECT * FROM pg_available_extensions WHERE name = 'pg_prewarm';
You can explore the objects provided by pg_prewarm like this:
\dx+ pg_prewarm
The output looks like this:
Objects in extension "pg_prewarm"
Object description
-------------------------------------------------------
function autoprewarm_dump_now()
function autoprewarm_start_worker()
function pg_prewarm(regclass,text,text,bigint,bigint)
(3 rows)
You can preload the hot tables using the examples below:
SELECT pg_prewarm('account_move_line');
SELECT pg_prewarm('res_partner');
SELECT pg_prewarm('sale_order_line');
This reduces disk reads during the first migration steps.
9. Odoo Worker Settings to Utilize CPU
In odoo.conf:
workers = number_of_cores / 2
For a 16-core system:
workers = 8
This ensures that Odoo does not overload PostgreSQL while still delivering parallelism.
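A minimal odoo.conf sketch for the 16-core example; the memory limits are illustrative and should be sized to your RAM:

```ini
[options]
workers = 8                      ; number_of_cores / 2
max_cron_threads = 2             ; keep background cron load modest
db_maxconn = 64                  ; cap PostgreSQL connections per worker
limit_memory_soft = 2147483648   ; 2 GB soft limit per worker (illustrative)
limit_memory_hard = 2684354560   ; 2.5 GB hard limit per worker (illustrative)
```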
10. Identify Slow Queries and Add Indexes After Migration
During migration, monitor long-running queries:
SELECT query, state, wait_event
FROM pg_stat_activity
WHERE state = 'active';
Once slow queries are identified, build indexes for them.
Adding only essential indexes prevents unnecessary overhead.
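When a slow query surfaces, the index can be added without blocking writes on a live system; the index name and column here are illustrative:

```sql
-- CONCURRENTLY avoids taking an exclusive lock on the table
CREATE INDEX CONCURRENTLY idx_account_move_line_date
    ON account_move_line (date);
```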
Final Summary
Speeding up Odoo database migration for large installations requires a combination of PostgreSQL tuning, parallel restore, cache warming, index optimization, and proper WAL configuration.
The most impactful steps are:
- Use pg_restore with full CPU parallelism
- Remove unnecessary data and indexes before export
- Tune PostgreSQL memory, I/O, WAL, and parallel operations
- Disable autovacuum temporarily
- Run VACUUM FULL and ANALYZE after restore
- Apply fillfactor and pg_prewarm for optimal performance
- Add indexes based on slow queries only after data loads
- Configure appropriate Odoo workers for load balance
Following these steps consistently reduces Odoo migration time from days to hours and dramatically improves performance during the upgrade process.