Your Odoo instance is running. Orders are going through, invoices are getting generated, and everyone seems happy. But under the surface, something is slowly rotting. Performance problems in Odoo rarely announce themselves. They creep in over weeks and months, and by the time someone says "the system feels slow," you are already deep in trouble.
We have been building and maintaining Odoo deployments for years, and we keep running into the same problems over and over. These are the seven things that silently wreck Odoo performance, and more importantly, how to spot them before they become full-blown crises.
1. Slow Queries Nobody Knows About
This is the big one. Somewhere in your system, a SQL query is taking 3 seconds to run. Maybe 5. Maybe 12. And nobody has any idea it is happening because Odoo does not tell you about it by default.
These slow queries hide in custom reports, computed fields, large search operations, and API calls. They do not crash anything. They just make everything a little slower, every single time they run. Multiply that by 50 concurrent users and you have a system that feels sluggish for everyone, yet nobody can point to why.
The tricky part is that slow queries often start fast. A report that ran in 200ms when you had 10,000 sale orders starts crawling when you hit 500,000. By then, the developer who wrote it has moved on to another project.
What to look for: Any query consistently taking more than 1 second. Pay special attention to queries that do full table scans (no WHERE clause on indexed columns) or queries using complex subqueries and CTEs.
If you want to see what is actually going on in your database right now, here is how. First, check if `pg_stat_statements` is enabled on your server:
```sql
SELECT * FROM pg_extension WHERE extname = 'pg_stat_statements';
```
If it returns nothing, you need to enable it. Add this to your `postgresql.conf` and restart PostgreSQL:
```
shared_preload_libraries = 'pg_stat_statements'
```
Then create the extension:
```sql
CREATE EXTENSION pg_stat_statements;
```
Once it is running, this query will show you the top 10 slowest queries in your system:
```sql
SELECT query,
       calls,
       ROUND((total_exec_time / 1000)::numeric, 2) AS total_seconds,
       ROUND((mean_exec_time / 1000)::numeric, 3) AS avg_seconds,
       ROUND((max_exec_time / 1000)::numeric, 3) AS max_seconds
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
```
(On PostgreSQL 12 and older, the columns are named `total_time`, `mean_time`, and `max_time` instead.)
If you see queries averaging more than 1 second, those are your bottlenecks. Run `EXPLAIN ANALYZE` on the worst ones to see what PostgreSQL is actually doing under the hood.
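For example, here is what that looks like on a hypothetical slow search over sale orders; swap in the actual query text you pulled from `pg_stat_statements`:

```sql
-- The ANALYZE option actually executes the query, so be careful
-- with statements that modify data; BUFFERS shows cache behavior
EXPLAIN (ANALYZE, BUFFERS)
SELECT id FROM sale_order
WHERE state = 'sale'
ORDER BY date_order DESC
LIMIT 80;
```

In the plan output, watch for `Seq Scan` nodes on large tables and for row estimates that are wildly off from the actual counts; both usually point to a missing index or stale statistics.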
The thing is, this gives you a snapshot of today. But slow queries drift. A query that is fine today can become a problem next month when the table grows by another 200,000 rows. What you really need is something watching this continuously, not just when you remember to check.
2. Table Bloat Eating Your Database Alive
PostgreSQL has a dirty secret that catches a lot of Odoo admins off guard. When you update or delete a row, PostgreSQL does not actually remove the old data. It marks it as "dead" and leaves it sitting there, taking up space and slowing down queries.
This is called table bloat, and in a busy Odoo instance that processes hundreds of transactions a day, it piles up fast. We have seen databases where 40% of the actual disk usage was dead rows. Tables that should have been 2 GB were sitting at 5 GB, and every query on those tables was scanning through all that garbage.
PostgreSQL has an autovacuum process that is supposed to clean this up, but the default settings are tuned for small databases. An active Odoo instance with millions of rows in `stock_move`, `account_move_line`, or `mail_message` can easily overwhelm the default autovacuum configuration.
What to look for: Check your biggest tables for the ratio of dead rows to live rows. If a table has more than 10% dead tuples, it needs attention. If it is above 20%, you have a real problem.
Here is a quick way to see how bad it is on your system. Run this query and look at the dead_pct column:
```sql
SELECT relname, n_live_tup, n_dead_tup,
       ROUND(n_dead_tup::numeric / GREATEST(n_live_tup, 1) * 100, 1) AS dead_pct
FROM pg_stat_user_tables
WHERE n_dead_tup > 1000
ORDER BY n_dead_tup DESC;
```
If the numbers look bad, a manual `VACUUM ANALYZE` on the worst offenders will help immediately:

```sql
-- Replace with your actual table name from the query above
VACUUM ANALYZE stock_move_line;
```
For the long term, you want to tune autovacuum for your busiest tables. These settings in `postgresql.conf` are a good starting point for an active Odoo database:

```
autovacuum_vacuum_scale_factor = 0.01     # default is 0.2, way too lazy
autovacuum_analyze_scale_factor = 0.005
autovacuum_vacuum_cost_delay = 2ms        # default 20ms, let it work faster
```
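If only a handful of tables are the problem, you can also override autovacuum per table rather than globally. A sketch for `stock_move_line`; swap in whichever tables topped your dead-tuple query:

```sql
-- Vacuum once dead tuples exceed ~1% of the table,
-- instead of the 20% global default
ALTER TABLE stock_move_line
SET (autovacuum_vacuum_scale_factor = 0.01,
     autovacuum_analyze_scale_factor = 0.005);
```

Per-table settings take effect without a restart and avoid making autovacuum more aggressive on every small table in the database.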
You can also check when autovacuum last ran on each table:
```sql
SELECT relname, last_autovacuum, last_autoanalyze, n_dead_tup
FROM pg_stat_user_tables
WHERE n_dead_tup > 5000
ORDER BY n_dead_tup DESC;
```
If last_autovacuum is NULL or more than a day old on a busy table, autovacuum is not keeping up. Running these queries once will clean things up, sure. But bloat comes back. Unless someone or something is watching these numbers regularly, you will be back in the same situation in a few weeks.
3. Connection Pool Exhaustion
Every time a user opens an Odoo page, clicks a button, or triggers an action, that request needs a database connection. PostgreSQL has a hard limit on how many connections it allows (100 by default), and Odoo workers share this pool.
When the pool gets close to its limit, new requests start queuing. Users see spinning icons. Pages take 10 seconds to load. And if connections start getting stuck in "idle in transaction" state (usually from a badly written custom module or a transaction that never committed), it gets worse fast.
The really dangerous thing about connection pool exhaustion is that it looks like a server hardware problem. The instinct is to throw more RAM or CPU at it, when the real fix is usually a PostgreSQL config change or finding the module that is leaking connections.
What to look for: Check pg_stat_activity regularly. A healthy system has mostly idle connections with a few active ones. If you see connections piling up in "idle in transaction" or "active" states, something is holding them open too long.
You can get a quick read on your connection pool health with this:
```sql
SELECT state, COUNT(*) AS count
FROM pg_stat_activity
WHERE datname = current_database()
GROUP BY state
ORDER BY count DESC;
```
A healthy result looks something like: 5-10 idle, 1-3 active, 0 idle in transaction. If you see more than 2-3 connections stuck in `idle in transaction`, something is not committing or rolling back properly.
To see how close you are to the ceiling:
```sql
SELECT max_conn, used, max_conn - used AS available
FROM (
    SELECT setting::int AS max_conn FROM pg_settings WHERE name = 'max_connections'
) AS mc
CROSS JOIN (
    SELECT COUNT(*) AS used FROM pg_stat_activity
) AS uc;
```
If the available number is below 20, you are in the danger zone. And if you spot a stuck connection, this will show you exactly what query is holding it:
```sql
SELECT pid, state, age(clock_timestamp(), state_change) AS duration,
       query
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY state_change ASC;
```
Anything sitting in `idle in transaction` for more than a few minutes is a problem. But here is the catch with connection pool issues: they tend to spike during peak hours and look perfectly fine when you check at off-peak times. A snapshot at 11 PM tells you nothing about what happened at 2 PM when 80 users were online. You need to track utilization over time to see the real pattern.
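In the meantime, you can clear a stuck session by hand and add a server-side guardrail so they cannot pile up indefinitely. A sketch, where the pid is a placeholder from `pg_stat_activity` and the 10-minute timeout is an assumption you should tune to your workload; note that terminating a backend rolls back whatever its open transaction had done:

```sql
-- Kill one stuck backend, using the pid from pg_stat_activity
SELECT pg_terminate_backend(12345);

-- Automatically kill sessions idle in a transaction for more
-- than 10 minutes (available since PostgreSQL 9.6)
ALTER SYSTEM SET idle_in_transaction_session_timeout = '10min';
SELECT pg_reload_conf();
```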
4. Cron Jobs That Silently Fail
Odoo relies heavily on scheduled actions (cron jobs) for all sorts of background work: sending emails, computing inventory, syncing with external systems, processing queues. When a cron job fails, Odoo does something surprising. It just... moves on. No alert, no notification, no angry red banner.
Even worse, after a certain number of failures, Odoo automatically deactivates the cron job. So not only did it fail, but it also stopped trying. And unless someone is manually checking the scheduled actions list, nobody will notice until the downstream effects become obvious. Like when a customer asks why they never got their shipping confirmation email.
We have worked with companies that had critical cron jobs silently disabled for weeks. Inventory sync, email queues, automated invoicing, all just stopped without anyone knowing.
What to look for: Check your scheduled actions regularly. Look for any that are deactivated but should not be. Check the "Last Execution Date" field and make sure it is recent for jobs that should be running frequently.
This one is easy to check. Run this against your Odoo database and see if anything looks familiar:
```sql
SELECT c.cron_name, c.active, c.interval_number, c.interval_type,
       c.lastcall, c.nextcall
FROM ir_cron c
WHERE c.active = false
ORDER BY c.lastcall DESC;
```
If you see crons in that list that you recognize (email queue, inventory scheduler, etc.), they were probably deactivated because of repeated failures. Odoo does not tell you when this happens. It just stops running them.
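If one of them should be running, you can turn it back on from Settings → Technical → Scheduled Actions, or directly in SQL. A sketch using a placeholder name (the `cron_name` column exists in recent Odoo versions; and unless you fix whatever made the cron fail, it will just get disabled again):

```sql
-- Reactivate a cron by name (replace with a name from the list above)
UPDATE ir_cron
SET active = true
WHERE cron_name = 'Mail: Email Queue Manager';
```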
This next one is also worth running. It finds crons that are technically active but have not run in a suspiciously long time:
```sql
SELECT c.cron_name, c.interval_number || ' ' || c.interval_type AS frequency,
c.lastcall,
AGE(NOW(), c.lastcall) AS time_since_last_run
FROM ir_cron c
WHERE c.active = true
AND c.lastcall < NOW() - INTERVAL '2 days'
ORDER BY c.lastcall ASC;
```
A cron that is supposed to run every hour but has not run in 3 days is either stuck or something is blocking it. Either way, you want to know about it now, not when a customer complains.
The problem with running these checks manually is that crons can fail at any time. The email queue might be fine right now and break tomorrow because of a bad recipient address that causes an unhandled exception. Without something watching every cron execution and alerting on failures, you are always finding out too late.
5. Orphaned Files in the Filestore
Every time someone uploads an attachment, Odoo stores the file on disk (unless you have configured S3 or similar). Product images, PDF invoices, email attachments, report outputs, all of it goes into the filestore directory.
The problem is that when records get deleted, the associated files do not always get cleaned up. Over time, you end up with thousands of files on disk that no database record points to anymore. We call these ghost files, and they can silently eat through your disk space.
On a medium-sized Odoo instance that has been running for 3-4 years, we have seen 20-30 GB of orphaned files just sitting there, contributing nothing except higher hosting bills and slower backups.
What to look for: Compare the number of files in your filestore directory with the number of `ir.attachment` records in the database. If the filestore has significantly more files, you have ghosts.
If you have SSH access to your server, you can get a rough idea of the damage in about 30 seconds. First, check how big your filestore actually is:
```bash
# Replace with your actual filestore path
du -sh /var/lib/odoo/.local/share/Odoo/filestore/your_database_name/
```
Then count the actual files on disk:
```bash
find /var/lib/odoo/.local/share/Odoo/filestore/your_database_name/ -type f | wc -l
```
Now compare that to the number of file-based attachments in your database:
```sql
SELECT COUNT(*) AS db_attachments
FROM ir_attachment
WHERE store_fname IS NOT NULL;
```
If the file count on disk is significantly higher than the database count, the difference is ghost files. For example, if you have 85,000 files on disk but only 60,000 attachment records, you have roughly 25,000 orphaned files taking up space.
While you are at it, this query will show you what kinds of files are eating the most space:
```sql
SELECT mimetype,
COUNT(*) AS count,
pg_size_pretty(SUM(file_size)) AS total_size
FROM ir_attachment
WHERE store_fname IS NOT NULL
GROUP BY mimetype
ORDER BY SUM(file_size) DESC
LIMIT 10;
```
You might be surprised how much of it is auto-generated report PDFs or old email attachments that nobody needs. The tricky part is not finding the orphans. It is finding them safely. You do not want to delete a file that some custom module references through a non-standard mechanism. A proper ghost file scanner needs to cross-reference checksums, not just filenames, and that is not something you want to do with a shell one-liner.
6. Index Bloat and Missing Indexes
This one is related to table bloat but separate enough to deserve its own spot. PostgreSQL indexes can also become bloated over time, especially indexes on tables with heavy write activity. A bloated index is larger than it needs to be and takes longer to scan.
But there is another side to this: unused indexes. Every index you have on a table adds overhead to every INSERT and UPDATE operation on that table. If an index exists but is never used by any query, it is pure drag on write performance.
And then there are duplicate indexes: two or more indexes that cover the same columns. This is more common than you would think, especially when multiple developers have worked on custom modules over the years. Each one adds "just one more index" without checking what already exists.
What to look for: Three things. First, indexes that have never been scanned (idx_scan = 0 in `pg_stat_user_indexes`). Second, indexes that are disproportionately large compared to their table. Third, pairs of indexes that cover the same column combination.
Here is how you can find the dead weight. This query shows every non-unique, non-primary index that has never been used since the last stats reset:
```sql
SELECT s.relname AS table_name,
s.indexrelname AS index_name,
s.idx_scan AS times_used,
pg_size_pretty(pg_relation_size(s.indexrelid)) AS index_size
FROM pg_stat_user_indexes s
JOIN pg_index i ON s.indexrelid = i.indexrelid
WHERE s.idx_scan = 0
AND NOT i.indisunique -- keep unique constraint indexes
AND NOT i.indisprimary -- keep primary keys obviously
ORDER BY pg_relation_size(s.indexrelid) DESC;
```
The big ones at the top are wasting the most resources. If you see indexes that are 50 MB or 100 MB with zero scans, that is pure overhead on every write operation to that table.
And this one is useful for spotting bloated indexes, where the index has grown way beyond its reasonable size:
```sql
SELECT s.schemaname, s.indexrelname AS index_name,
       pg_size_pretty(pg_relation_size(s.indexrelid)) AS index_size,
       pg_size_pretty(pg_relation_size(i.indrelid)) AS table_size
FROM pg_stat_user_indexes s
JOIN pg_index i ON i.indexrelid = s.indexrelid
WHERE pg_relation_size(s.indexrelid) > 10 * 1024 * 1024 -- bigger than 10 MB
ORDER BY pg_relation_size(s.indexrelid) DESC;
```
If an index is nearly as large as its table, or larger, it is almost certainly bloated and would benefit from a `REINDEX`:
```sql
-- Rebuild a specific bloated index; CONCURRENTLY (PostgreSQL 12+)
-- avoids blocking writes during the rebuild
REINDEX INDEX CONCURRENTLY your_index_name;
```
Be careful about dropping indexes without understanding why they were created. Some might be needed for unique constraints or foreign key lookups that happen rarely. But if an index has zero scans and is not a unique or primary key constraint, it is probably safe to drop.
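Once you have verified an index really is dead weight, you can drop it without taking a write lock on the table. A sketch with a placeholder name from the unused-index query above:

```sql
-- CONCURRENTLY avoids blocking writes while the index is dropped;
-- note it cannot run inside a transaction block
DROP INDEX CONCURRENTLY IF EXISTS your_unused_index_name;
```

Save the original `CREATE INDEX` statement somewhere first, so you can recreate the index cheaply if some rarely-run report turns out to need it after all.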
The challenge with index health is that it changes as your data and usage patterns change. A new custom report can suddenly make a previously unused index critical. Or a module update can create duplicate indexes without you realizing. Checking once is useful, but you need periodic reviews to keep things clean.
7. Memory Leaks and Worker Overload
Odoo runs with a fixed number of worker processes (or in threaded mode for smaller setups). Each worker handles one request at a time, and each one consumes memory. Over time, Odoo workers tend to grow in memory usage. A worker that starts at 150 MB can balloon to 800 MB or more after handling thousands of requests, especially if there are poorly optimized custom modules in the mix.
When workers get too large, the operating system starts swapping to disk, and everything grinds to a halt. Even before swapping kicks in, having oversized workers means fewer effective workers for your user base, because the server runs out of RAM to allocate new ones.
The worst case is when a single heavy operation (like a massive report or a bulk data import) causes a worker to grab a huge amount of memory and never release it. That worker is now permanently bloated for its entire lifetime, handling simple requests with 2 GB of RAM allocated.
What to look for: Track per-worker memory usage over time. If workers consistently grow without stabilizing, you have a leak. Also check if the total memory consumed by all Odoo processes exceeds what your server can comfortably handle (you want to leave at least 25-30% of RAM free for PostgreSQL and the OS).
If you are on Linux, you can check the current memory situation pretty quickly:
```bash
# Shows each Odoo worker, its PID, and memory usage in MB
ps aux | grep '[o]doo' | awk '{printf "PID: %s RAM: %.0f MB %s\n", $2, $6/1024, $11}'
```
And to see the total across all workers:
```bash
# Total memory consumed by all Odoo processes
ps aux | grep '[o]doo' | awk '{sum += $6} END {printf "Total Odoo RAM: %.0f MB\n", sum/1024}'
```
Compare that to your total server RAM:
```bash
# Linux
free -h
# Or read it directly (works in minimal containers too)
grep MemTotal /proc/meminfo
```
If Odoo is consuming more than 60-70% of your total RAM, you are leaving too little for PostgreSQL and the OS.
Also worth checking what limits you currently have set:
```bash
grep -E 'limit_memory|workers' /etc/odoo/odoo.conf
```
A reasonable starting point for an 8 GB server with 4 workers:
```
workers = 4
limit_memory_soft = 681574400 # 650 MB
limit_memory_hard = 805306368 # 768 MB
```
The soft limit triggers a graceful recycle after the current request finishes. The hard limit kills the worker immediately. Set the hard limit about 15-20% above the soft limit so workers get recycled before they cause real trouble.
The limitation of checking memory this way is that you only see a point-in-time number. Memory leaks are about trends. A worker using 400 MB right now might be fine, or it might have been at 200 MB two hours ago and climbing steadily. You need historical data to tell the difference, and `ps aux` does not give you that.
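Until proper monitoring is in place, a crude stopgap is to sample the numbers yourself on a schedule. A sketch, assuming the workers show up as `odoo` processes; the script name and log path are placeholders:

```shell
# Append one timestamped sample of total Odoo process memory.
# Run it from cron every few minutes, e.g.:
#   */5 * * * * /usr/local/bin/odoo-mem-sample.sh
LOGFILE="odoo-memory.log"   # point this somewhere persistent, e.g. under /var/log
ps aux | grep '[o]doo' \
  | awk -v ts="$(date -Is)" \
      '{sum += $6; n += 1} END {printf "%s total_mb=%.0f procs=%d\n", ts, sum/1024, n}' \
  >> "$LOGFILE"
```

A few days of these samples, graphed or even just eyeballed, will show whether worker memory is stable or steadily climbing.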
The Common Thread
You will notice a pattern here. Every single one of these problems shares two characteristics:
1. They do not trigger any alert or notification by default. Odoo is great at many things, but proactive performance monitoring is not one of them out of the box.
2. They get worse over time. None of these are situations that just appear overnight. They accumulate gradually, which makes them easy to ignore until they are not ignorable anymore.
The fix is not complicated, but it does require intentional effort. You need monitoring that runs continuously, tracks trends over time, and tells you when something crosses a threshold before your users notice.
If you are running Odoo in production and you do not have visibility into these metrics, you are essentially hoping that nothing goes wrong. And hope is not a great operations strategy.
Running into performance issues with your Odoo instance? We help companies diagnose and fix exactly these kinds of problems. Reach out to us at support@odexalabs.com or visit odexalabs.com to learn more about our Odoo services.