Configuration parameter work_mem in PostgreSQL on Linux

I posted your query plan on explain.depesz.com; have a look.

The query planner’s estimates are terribly wrong in some places.
Have you run ANALYZE recently?
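
If you're not sure, a quick check against pg_stat_user_tables (using the diplomas table from the example below) shows when statistics were last gathered:

-- When were statistics last collected for the table?
SELECT relname, last_analyze, last_autoanalyze
FROM   pg_stat_user_tables
WHERE  relname = 'diplomas';

-- Collect fresh statistics if they are stale:
ANALYZE diplomas;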

Read the chapters in the manual on Statistics Used by the Planner and Planner Cost Constants. Pay special attention to the settings random_page_cost and default_statistics_target.
You might try:

ALTER TABLE diplomas ALTER COLUMN number SET STATISTICS 1000;
ANALYZE diplomas;

Or go even higher for a table with 10M rows. It depends on the data distribution and your actual queries. Experiment. The default is 100, the maximum is 10000.
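
To verify what target is actually in effect, something along these lines works (depending on your Postgres version, a column using the default shows attstattarget as -1 or NULL):

SHOW default_statistics_target;   -- 100 unless you changed it

-- Per-column override from the ALTER TABLE above:
SELECT attname, attstattarget
FROM   pg_attribute
WHERE  attrelid = 'diplomas'::regclass
AND    attname  = 'number';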

For a database of that size, a mere 1 or 5 MB of work_mem is generally not enough. Read the Postgres Wiki page on Tuning Postgres that @aleroot linked to.
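
You can check what your session is currently running with:

SHOW work_mem;
-- or equivalently:
SELECT current_setting('work_mem');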

Since your query needs 430104kB of memory on disk according to the EXPLAIN output, you have to set work_mem to something like 500MB or more to allow in-memory sorting. The in-memory representation of data needs somewhat more space than the on-disk representation. You may be interested in what Tom Lane posted on that matter recently.
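
After raising work_mem you can verify the effect with EXPLAIN (ANALYZE): look for the sort node switching from an external merge on disk to an in-memory quicksort. A hypothetical check (the query is just a stand-in for yours):

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM diplomas ORDER BY number;
-- Before: "Sort Method: external merge  Disk: 430104kB"
-- After:  "Sort Method: quicksort  Memory: ..."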

Increasing work_mem by just a little, like you tried, won't help much and can even slow things down. Setting it too high globally can actually hurt, especially with concurrent access: multiple sessions may starve one another for resources, since allocating more memory for one purpose takes it away from others when the total is limited. The best setup depends on the complete situation.
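
To put a rough number on the concurrency concern: with, say, 100 connections each allowed work_mem = '500MB', simultaneous sorts could in theory claim on the order of 100 × 500 MB ≈ 50 GB, and a single complex query can use several multiples of work_mem, once per sort or hash node.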

To avoid side effects, only set it high enough locally in your session, and temporarily for the query:

SET work_mem = '500MB';

Reset it to your default afterwards:

RESET work_mem;

Or use SET LOCAL to set it just for the current transaction to begin with.
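
A minimal sketch of that pattern, using the diplomas table from above as a stand-in for your actual query:

BEGIN;
SET LOCAL work_mem = '500MB';            -- effective only until COMMIT / ROLLBACK
SELECT * FROM diplomas ORDER BY number;  -- your expensive sorting query goes here
COMMIT;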
