How to secure an open PostgreSQL port
Solution 1:
Require SSL, keep SELinux turned on, monitor the logs, and use a current PostgreSQL version.
Server side
Require SSL
In postgresql.conf, set ssl = on and make sure your key file and certificate file are installed appropriately (see the docs and the comments in postgresql.conf).
You might need to buy a certificate from a CA if you want to have it trusted by clients without special setup on the client.
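A minimal sketch of the relevant settings (ssl_cert_file and ssl_key_file exist on 9.2 and later; the file names shown are the defaults):

ssl = on
ssl_cert_file = 'server.crt'    # server certificate
ssl_key_file = 'server.key'     # private key; keep it readable only by the postgres user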
In pg_hba.conf, use something like:
hostssl theuser thedatabase 1.2.3.4/32 md5
... possibly with "all" for user and/or database, and possibly with a wider source IP address filter.
Limit users who can log in, deny remote superuser login
Don't allow "all" for users if possible; you don't want to permit superuser logins remotely if you can avoid the need for it.
Limit rights of users
Restrict the rights of the user(s) that can log in. Don't give them CREATEDB or CREATEUSER rights.
REVOKE the CONNECT right from PUBLIC on all your databases, then give it back to only the users/roles that should be able to access each database. (Group users into roles and grant rights to roles, rather than directly to individual users.)
Make sure users with remote access can only connect to the DBs they need, and only have rights to the schemas, tables, and columns within that they actually need. This is good practice for local users too, it's just sensible security.
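A sketch of that in SQL, using the hypothetical role name app_users together with the user and database names from the pg_hba.conf example above:

-- only members of app_users may connect to thedatabase
REVOKE CONNECT ON DATABASE thedatabase FROM PUBLIC;
CREATE ROLE app_users NOLOGIN;
GRANT CONNECT ON DATABASE thedatabase TO app_users;
GRANT app_users TO theuser;

-- grant only the object rights actually needed (schema and table names are examples)
GRANT USAGE ON SCHEMA app TO app_users;
GRANT SELECT, INSERT, UPDATE ON app.orders TO app_users;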
Client setup
In PgJDBC, pass the parameter ssl=true:
To instruct the JDBC driver to try and establish a SSL connection you must add the connection URL parameter ssl=true.
... and install the server certificate in the client's truststore, or use a server certificate that's trusted by one of the CAs in Java's built-in truststore if you don't want the user to have to install the cert.
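A sketch of what that looks like in practice (host, database, and file names here are hypothetical):

# JDBC connection URL with SSL enabled
jdbc:postgresql://db.example.com:5432/thedatabase?ssl=true

# import the server certificate into a truststore, then point Java at it via JSSE properties
keytool -importcert -alias pgserver -file server.crt -keystore truststore.jks
java -Djavax.net.ssl.trustStore=truststore.jks MyApp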
Ongoing action
Now make sure you keep PostgreSQL up to date. PostgreSQL has only had a couple of pre-auth security holes, but that's more than zero, so stay up to date. You should anyway; bug fixes are nice things to have.
Add a firewall in front if there are large netblocks/regions you know you don't ever need access from.
Log connections and disconnections (see postgresql.conf). Log queries if practical. Run an intrusion detection system or fail2ban or similar in front if practical. For fail2ban with Postgres, there is a convenient how-to here.
Monitor the log files.
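The relevant postgresql.conf settings look roughly like this (logging every statement is optional and can be voluminous):

log_connections = on
log_disconnections = on
# optionally log all statements, or use log_min_duration_statement to log only slow ones
log_statement = 'all'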
Bonus paranoia
Extra steps to think about...
Require client certificates
If you want, you can also use pg_hba.conf to require that the client present an X.509 client certificate trusted by the server. It doesn't need to use the same CA as the server cert; you can do this with a homebrew openssl CA. A JDBC user needs to import the client certificate into their Java Keystore with keytool and possibly configure some JSSE system properties to point Java at their keystore, so it's not totally transparent.
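A sketch of the server side (file names are hypothetical; on PostgreSQL 12+ the option is spelled clientcert=verify-ca or clientcert=verify-full rather than clientcert=1):

# postgresql.conf: CA that client certificates must chain to
ssl_ca_file = 'root.crt'

# pg_hba.conf: require a trusted client certificate in addition to the password
hostssl thedatabase theuser 1.2.3.4/32 md5 clientcert=1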
Quarantine the instance
If you want to be really paranoid, run the instance for the client in a separate container / VM, or at least under a different user account, with just the database(s) they require.
That way if they compromise the PostgreSQL instance they won't get any further.
Use SELinux
I shouldn't have to say this, but...
Run a machine with SELinux support like RHEL 6 or 7, and don't turn SELinux off or set it to permissive mode. Keep it in enforcing mode.
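On RHEL-family systems you can verify that, and make it persistent, with the standard tooling:

getenforce                    # should print "Enforcing"
# and in /etc/selinux/config, so it survives reboots:
SELINUX=enforcing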
Use a non-default port
Security through obscurity alone is no security at all, but a little obscurity layered on top of the sensible measures above probably won't hurt.
Run Pg on a non-default port to make life a little harder for automated attackers.
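e.g. in postgresql.conf (5433 is just an arbitrary example):

port = 5433    # anything other than the default 5432; clients must now specify it explicitly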
Put a proxy in front
You can also run PgBouncer or PgPool-II in front of PostgreSQL, acting as a connection pool and proxy. That way you can let the proxy handle SSL, not the real database host. The proxy can be on a separate VM or machine.
Use of connection pooling proxies is generally a good idea with PostgreSQL anyway, unless the client app already has a built-in pool. Most Java application servers, Rails, etc. have built-in pooling. Even then, a server-side pooling proxy is at worst harmless.
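A minimal PgBouncer sketch, assuming a PgBouncer version with built-in TLS support (1.7+); host, database, and file paths are hypothetical:

[databases]
thedatabase = host=10.0.0.5 port=5432

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 5432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; terminate TLS at the proxy instead of on the database host
client_tls_sslmode = require
client_tls_key_file = /etc/pgbouncer/server.key
client_tls_cert_file = /etc/pgbouncer/server.crt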
Solution 2:
A simple extension to Craig's impressive action plan:
Your user probably connects from only a relatively small set of networks (for example, their mobile network provider while moving, their cable network at home, and their workplace's outgoing IP at work).
Most network providers have many IPs, but not really many subnets. So you can add an iptables filter that limits PostgreSQL access to the network segments your customer actually uses. This greatly reduces the attack surface exposed to randomly-selected trouble sources on the net.
A simple support scenario:
- Your customer calls you: "I can't log in."
- You find out where he is coming from with a tcpdump -i eth0 -p tcp port 5432 command.
- With a whois 1.2.3.4 you can find the network block that IP belongs to. For example, it might be 1.2.3.0/24.
- With an iptables -A INPUT -s 1.2.3.0/24 -p tcp --dport 5432 -j ACCEPT (or something similar) you allow TCP connections from his new subnet (put together as in the sketch after this list).
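Taken together, the whitelist might look like this sketch (subnets are examples; it assumes these rules are evaluated before any broader ACCEPT rule):

# accept the customer's known subnets, drop everything else aimed at PostgreSQL
iptables -A INPUT -s 1.2.3.0/24 -p tcp --dport 5432 -j ACCEPT
iptables -A INPUT -s 5.6.7.0/24 -p tcp --dport 5432 -j ACCEPT
iptables -A INPUT -p tcp --dport 5432 -j DROP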
There is a very good Perl script named uif which provides permanent, intuitively declarable iptables rulesets. (Google for "uif iptables".)
Solution 3:
Here's a fairly simple Fail2ban configuration for PostgreSQL based on the HOWTO linked above but fine-tuned to actually work with Ubuntu packages, catch another error condition and skip over various debug messages to make it quicker:
/etc/fail2ban/filter.d/local-postgresql.conf:
[Definition]
failregex = <HOST>\(\d+\) FATAL:  password authentication failed for .+$
            <HOST>\(\d+\) FATAL:  no pg_hba.conf entry for host .+$
ignoreregex = duration:
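Note that these patterns assume a log_line_prefix that puts the client host and backend PID at the start of each log line; if yours differs, something like this in postgresql.conf makes the logs match:

log_line_prefix = '%h(%p) '    # %h = client host, %p = backend PID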
/etc/fail2ban/jail.d/local-postgresql.conf:
[local-postgresql]
enabled = true
filter = local-postgresql
action = iptables[name=PostgreSQL, port=5432, protocol=tcp]
         sendmail-whois[name=PostgreSQL, dest=root@localhost]
logpath = /var/log/postgresql/postgresql-9.3-main.log
maxretry = 3