Is it safe to allow inbound on EC2 security group?

Solution 1:

Security is not binary. Your instances are never simply "safe".

There are hundreds or thousands of attack vectors, and you make cost-benefit decisions about which of them to defend against. It's prohibitively expensive to be fully defended against all of them.

In your situation, your system can have a vulnerability in any service/app that listens on the network interface, for example one that causes a data leak.

You've opened all TCP and UDP ports. Opening only TCP/22 is enough if you want to log in with that *.pem key, plus whatever other ports you know you need.

Even OpenSSH can have a vulnerability. Hence, yes, it's better to allow only your home network's IP range.
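As a sketch, restricting SSH to your home range with the AWS CLI looks something like this (the group ID and the 203.0.113.0/24 range are placeholders for your own values):

```
# Allow SSH only from your home network range (placeholder values)
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 \
    --cidr 203.0.113.0/24
```

Remember to also revoke the existing all-traffic rule, or the new rule adds nothing.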

Solution 2:

Security is like an onion - it's all about layers, stinky ogre-like layers.

By allowing SSH connections from everywhere you've removed one layer of protection and are now depending solely on the SSH key. Key-based authentication is thought to be secure at this time, but a flaw could be discovered in the future that reduces or removes that layer.

And when there are no more layers, you have nothing left.

A quick extra layer is to install fail2ban or similar. These daemons monitor your auth.log file, and when SSH connections fail, the offending IPs are added to an iptables chain for a while. This limits the number of connection attempts a client can make per hour/day. I end up blacklisting bad sources indefinitely - but hosts that have to leave SSH listening promiscuously might still see 3000 failed root login attempts a day. Most are from China, with Eastern Europe and Russia close behind.
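A minimal fail2ban sketch looks something like this - the thresholds below are illustrative, not recommendations, so tune them to taste:

```
# /etc/fail2ban/jail.local -- values here are illustrative, tune to taste
[sshd]
enabled  = true
port     = ssh
# Ban a source after 5 failed attempts within 10 minutes
maxretry = 5
findtime = 10m
# Keep the ban in place for 24 hours
bantime  = 24h
```

Restart the fail2ban service after editing, and check `fail2ban-client status sshd` to see the bans accumulate.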

If you have static source IPs then including them in your Security Group policy is good, as it means the rest of the world can't connect. The downside: what if you can't connect from an authorised IP for some reason, e.g. your ISP assigns dynamic addresses or your link is down?
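To make the matching concrete: a security group CIDR rule admits a packet only if its source IP falls inside one of the allowed blocks. A small Python sketch of that check, using the standard `ipaddress` module (the addresses below are documentation-range placeholders):

```python
import ipaddress

def allowed(source_ip: str, cidrs: list) -> bool:
    """Return True if source_ip falls inside any of the allowed CIDR blocks."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in ipaddress.ip_network(cidr) for cidr in cidrs)

# Security-group-style rule set: SSH only from a home /32 and an office /24
allowed_cidrs = ["203.0.113.7/32", "198.51.100.0/24"]

print(allowed("203.0.113.7", allowed_cidrs))    # home IP: True
print(allowed("198.51.100.42", allowed_cidrs))  # inside the office /24: True
print(allowed("192.0.2.1", allowed_cidrs))      # anyone else: False
```

A /32 admits exactly one address, which is why a static home IP makes such a clean rule - and why a dynamic one silently breaks it.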

A reasonable solution is to run a VPN server on your instance, listening to all source IPs, and then, once the tunnel is up, connect over the tunnel via SSH. Sure, it's not perfect protection, but it's one more layer in your shield of ablative armour. OpenVPN is a good candidate.
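A bare-bones server.conf sketch for that setup (the certificate/key paths are placeholders you'd generate yourself, e.g. with easy-rsa):

```
# Minimal OpenVPN server sketch -- cert/key paths are placeholders
port 1194
proto udp
dev tun
ca   ca.crt
cert server.crt
key  server.key
dh   dh.pem
# Hand out tunnel addresses from a private range
server 10.8.0.0 255.255.255.0
keepalive 10 120
persist-key
persist-tun
```

With this in place, the security group only needs UDP/1194 open to the world; SSH stays reachable solely via the 10.8.0.x tunnel addresses.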

You can also leverage AWS's "Client VPN" solution, which is a managed OpenVPN service providing access to your VPC. I have no personal experience with it, sorry.

Another (admittedly thin) layer is to move SSH to a different port. This doesn't really do much other than reducing the script-kiddie probes that default to port 22/tcp. Anyone trying hard will scan all ports and find your SSH server on 2222/tcp or 31337/tcp or whatever.
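If you do try it, it's a one-line change (2222 is an arbitrary example port):

```
# /etc/ssh/sshd_config -- 2222 is an arbitrary example
Port 2222
```

Allow the new port in your security group and reload sshd before closing 22, or you'll lock yourself out.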

If possible, you can investigate SSH over IPv6 only; again, it merely limits the exposure without adding any real security. The number of unsolicited SSH connections over IPv6 is currently far lower than over IPv4, but still non-zero.

Solution 3:

If software were perfect you could leave your server completely open to the internet as you have, but in practice there are bugs and other ways to compromise a server.

Best practice is to open specific ports to only the minimum IPs to achieve your goals. For example:

  • Open up port 22 (SSH) to only the IPs that require it, such as your home or work IPs.
  • Open ports 80 and 443 to the world if you want to serve web traffic. However, if you want additional protection you can use a CDN / WAF such as CloudFront or CloudFlare (which has a free tier) and only open ports 80 / 443 to CloudFlare's IPs.
  • Open database ports to specific IPs, and only if required. If you do this your database has to be configured to accept those connections, which RDS doesn't do by default.

You should only open other ports if absolutely required, and to the minimum number of IPs that will achieve what you need.
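As a sketch, the rules above could be expressed as a CloudFormation security group like this (the VPC ID and the home CIDR are placeholders, and the resource name is made up for the example):

```
# Sketch of a least-privilege security group -- IDs/CIDRs are placeholders
WebAndSshSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: SSH from home only, HTTPS from anywhere
    VpcId: vpc-0123456789abcdef0
    SecurityGroupIngress:
      - IpProtocol: tcp          # SSH only from a trusted range
        FromPort: 22
        ToPort: 22
        CidrIp: 203.0.113.0/24
      - IpProtocol: tcp          # public web traffic
        FromPort: 443
        ToPort: 443
        CidrIp: 0.0.0.0/0
```

Anything not listed is denied by default, which is exactly the behaviour you want.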

Solution 4:

The more restrictive you can be with your rules, the better.

Worth noting: some home ISPs use dynamic addresses. If you find yourself unable to connect to your instance at some point, check that first.