pg_dump on Database throwing error 'out of shared memory'

I solved this problem by backing up each schema individually, because as a database grows (whether in number of schemas or number of tables) it becomes hard to take a backup with a single pg_dump run.

I made the following changes to the script to take schema-wise backups:

  1. Before running pg_dump, list all database schemas into a file, so that we can iterate over them and back up each schema separately.

    Here is the command to list all schemas into a file:

    psql <db_name> -o <output_file> < <sql_to_list_schema>

    Here, sql_to_list_schema contains:

    SELECT n.nspname FROM pg_catalog.pg_namespace n WHERE n.nspname !~ '^pg_' AND n.nspname <> 'information_schema';

  2. Now read output_file line by line and take a backup of each schema (see the combined script sketch after this list):

    pg_dump <db_name> -f <backup_file> -i -x -O -R -n <schema_name_read_from_file>


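For reference, here is a minimal shell sketch that ties both steps together. The names mydb, schemas.txt, and backup_*.sql are placeholders, and the obsolete -i and -R switches from the command above are left out; adjust everything to your environment.

    #!/bin/bash
    # Step 1: list all non-system schemas into schemas.txt (unaligned, tuples only).
    psql mydb -At -o schemas.txt -c "SELECT n.nspname FROM pg_catalog.pg_namespace n WHERE n.nspname !~ '^pg_' AND n.nspname <> 'information_schema';"

    # Step 2: dump each schema into its own file, without privileges or ownership.
    while read -r schema; do
        pg_dump mydb -x -O -n "$schema" -f "backup_${schema}.sql"
    done < schemas.txt
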
You might also need to increase max_locks_per_transaction even further; check the documentation for details on what this parameter means. If in doubt, count how many tables you have and set max_locks_per_transaction to at least that number, and you should be OK.
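As a rough sketch (mydb and the value 256 are only illustrative; pick a value based on your own table count), you could check the table count and then raise the setting, which only takes effect after a server restart:

    # Count ordinary tables in the database.
    psql mydb -At -c "SELECT count(*) FROM pg_class WHERE relkind = 'r';"

    # On PostgreSQL 9.4+ a superuser can raise the limit with ALTER SYSTEM
    # (takes effect after a restart); on older versions, edit postgresql.conf instead.
    psql mydb -c "ALTER SYSTEM SET max_locks_per_transaction = 256;"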