How to create a user with limited RAM usage?
ulimit is made for this. You can set up defaults for ulimit on a per-user or per-group basis in /etc/security/limits.conf.
ulimit -v KBYTES sets the maximum virtual memory size. I don't think you can give a max amount of swap; it's just a limit on the amount of virtual memory the user can use.
So your limits.conf would have the line (for a maximum of 4 GB of memory):

luser hard as 4000000
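To see what that limit actually does, here is a minimal sketch (assuming Linux and CPython) using resource.setrlimit on RLIMIT_AS, which is the same address-space limit that ulimit -v adjusts (ulimit takes KiB; setrlimit takes bytes). With the cap in place, an oversized allocation fails:

```python
import resource

# Assumption: Linux. RLIMIT_AS is the limit behind `ulimit -v`.
soft, hard = resource.getrlimit(resource.RLIMIT_AS)

# Cap this process's virtual address space at 1 GiB -- well above what
# the interpreter already uses, well below the 2 GiB we try to allocate.
resource.setrlimit(resource.RLIMIT_AS, (1 << 30, hard))

try:
    big = bytearray(2 << 30)   # a 2 GiB allocation must now fail
    result = "allocated"
except MemoryError:
    result = "MemoryError"

print(result)
```

Note this caps one process, not the user's total, which is exactly the shortcoming the update below addresses.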
UPDATE - CGroups
The limits imposed by ulimit and limits.conf are per process. I definitely wasn't clear on that point. If you want to limit the total amount of memory a user uses (which is what you asked), you want to use cgroups.
In /etc/cgconfig.conf:

group memlimit {
    memory {
        memory.limit_in_bytes = 4294967296;
    }
}
This creates a cgroup that has a max memory limit of 4 GiB.
In /etc/cgrules.conf:

luser memory memlimit/
This will cause all processes run by luser to be run inside the memlimit cgroup created in cgconfig.conf.
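To check that the classification took effect, you can inspect a process's cgroup membership. A small sketch, assuming Linux, where /proc/&lt;pid&gt;/cgroup lists one id:controllers:path entry per line; a process started by luser should show the memlimit path:

```python
# Read this process's cgroup membership from /proc.
with open("/proc/self/cgroup") as f:
    entries = [line.strip() for line in f if line.strip()]

for entry in entries:
    print(entry)
```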
cgroups are the right way to do this, as other answers have pointed out. Unfortunately there is no perfect solution to the problem, as we'll get into below. There are a bunch of different ways to set cgroup memory usage limits. How one goes about making a user's login session automatically part of a cgroup varies from system to system. Red Hat has some tools, and so does systemd.
memory.memsw.limit_in_bytes and memory.limit_in_bytes set limits including and not including swap, respectively. The downside of memory.limit_in_bytes is that it counts files cached by the kernel on behalf of processes in the cgroup against the group's quota. Less caching means more disk access, so you're potentially giving up some performance if the system otherwise had some memory available.
On the other hand, memory.soft_limit_in_bytes
allows the cgroup to go over-quota, but if the kernel OOM killer gets invoked then those cgroups which are over their quotas get killed first, logically. The downside of that, however, is that there are situations where some memory is needed immediately and there isn't time for the OOM killer to look around for processes to kill, in which case something might fail before the over-quota user's processes are killed.
ulimit, however, is absolutely the wrong tool for this. ulimit places limits on virtual memory usage, which is almost certainly not what you want. Many real-world applications use far more virtual memory than physical memory. Most garbage-collected runtimes (Java, Go) work this way to avoid fragmentation. A trivial "hello world" program in C, if compiled with AddressSanitizer, can use 20 TB of virtual memory. Allocators which do not rely on sbrk, such as jemalloc (formerly the default allocator for Rust) or tcmalloc, will also have virtual memory usage substantially in excess of their physical usage. For efficiency, many tools will mmap files, which increases virtual usage but not necessarily physical usage. All of my Chrome processes are using 2 TB of virtual memory each. I'm on a laptop with 8 GB of physical memory. Any way one tried to set up virtual memory quotas here would either break Chrome, force Chrome to disable some security features which rely on allocating (but not using) large amounts of virtual memory, or be completely ineffective at preventing a user from abusing the system.
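The virtual-versus-physical gap is easy to demonstrate. A minimal sketch, assuming Linux and /proc: reserve 2 GiB of anonymous virtual memory without touching it, and watch the virtual size (VmSize) balloon while the resident set (VmRSS) barely moves; this is why an address-space quota says little about real memory pressure:

```python
import mmap

# Assumption: Linux with /proc available.
def vm_kib(field):
    """Read one VmSize/VmRSS-style field (in KiB) from /proc/self/status."""
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith(field + ":"):
                return int(line.split()[1])

before = vm_kib("VmSize")
# Map 2 GiB of private anonymous memory; no pages are faulted in
# because we never read or write the region.
m = mmap.mmap(-1, 2 << 30, flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS)
after = vm_kib("VmSize")
rss = vm_kib("VmRSS")

print("virtual added:", (after - before) // (1 << 20), "GiB")
print("resident set:", rss // 1024, "MiB")
m.close()
```

The process's virtual size jumps by 2 GiB while its resident set stays in the tens of megabytes.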
You cannot cap memory usage at the user level; ulimit can cap it, but only for a single process. Even with per-user limits in /etc/security/limits.conf, a user can use all memory by running multiple processes.
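To make that concrete, here is a small sketch (assuming Linux and /proc; the helper name is my own) that totals resident memory across every process owned by one user, which is the figure a per-process ulimit never sees:

```python
import os

def user_rss_kib(uid):
    """Sum VmRSS (KiB) over all /proc entries owned by the given UID."""
    total = 0
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        try:
            if os.stat("/proc/" + pid).st_uid != uid:
                continue
            with open("/proc/" + pid + "/status") as f:
                for line in f:
                    if line.startswith("VmRSS:"):
                        total += int(line.split()[1])
                        break
        except (FileNotFoundError, PermissionError, ProcessLookupError):
            continue   # process exited or is inaccessible; skip it
    return total

print(user_rss_kib(os.getuid()), "KiB resident for this user")
```

Ten processes each sitting just under a per-process cap still add up to ten times that cap in this total.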
Should you really want to cap resources, you need to use a resource management tool, like rcapd used by projects and zones under Solaris.
There is something that seems to provide similar features on Linux that you might investigate: cgroups.