How to generate a core dump in Linux on a segmentation fault?
This depends on what shell you are using. If you are using bash, then the ulimit command controls several settings relating to program execution, such as whether programs may dump core. If you type
ulimit -c unlimited
then that will tell bash that its programs can dump cores of any size. You can specify a numeric size limit (bash measures it in blocks) instead of unlimited if you want, but in practice this shouldn't be necessary, since the size of core files is rarely an issue.
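As a minimal sketch, a bash session that enables core dumps and then runs the program you want to debug (./myprogram here is only a stand-in for your own binary) might look like this:
# allow core files of any size in this shell and its children
ulimit -c unlimited
# confirm the new limit; this should print "unlimited"
ulimit -c
# run the crashing program from the same shell so it inherits the limit
./myprogram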
In tcsh, you'd type
limit coredumpsize unlimited
As explained above, the real question being asked here is how to enable core dumps on a system where they are not enabled. That question is answered in the other answers here.
If you've come here hoping to learn how to generate a core dump for a hung process, the answer is
gcore <pid>
If gcore is not available on your system, then
kill -ABRT <pid>
Don't use kill -SEGV, as that will often invoke a signal handler, making it harder to diagnose the stuck process.
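As a rough sketch, assuming the stuck process has PID 1234 (substitute the real PID of your own process):
# preferred: snapshot the running process without killing it
# (gcore ships with gdb; with -o /tmp/hung it writes /tmp/hung.1234)
gcore -o /tmp/hung 1234
# fallback when gcore is unavailable: SIGABRT terminates the process
# and, with core dumps enabled, writes a core file
kill -ABRT 1234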
To check where the core dumps are generated, run:
sysctl kernel.core_pattern
or:
cat /proc/sys/kernel/core_pattern
where %e is the executable name and %t the time of the dump (seconds since the Epoch). You can change the pattern in /etc/sysctl.conf and reload it with sysctl -p.
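For example, to collect cores in a dedicated directory (the path /var/coredumps below is only an illustration), something along these lines, run as root, should work:
# show the current pattern
sysctl kernel.core_pattern
# create the target directory and make the new pattern persistent
mkdir -p /var/coredumps
echo 'kernel.core_pattern = /var/coredumps/core.%e.%t' >> /etc/sysctl.conf
# reload the settings from /etc/sysctl.conf
sysctl -p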
If the core files are not generated (test it with sleep 10 & followed by killall -SIGSEGV sleep), check the limits with ulimit -a.
If your core file size is limited, run:
ulimit -c unlimited
to make it unlimited.
Then test again; if the core dump is successful, you will see "(core dumped)" after the segmentation fault indication, as shown below:
Segmentation fault: 11 (core dumped)
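Putting the whole test together (where the core file ends up depends on your kernel.core_pattern):
# make sure the shell allows core files
ulimit -c unlimited
# start a throwaway process and crash it with SIGSEGV
sleep 10 &
killall -SIGSEGV sleep
# wait for the job status line, which should include "(core dumped)",
# then look for the core file in the current directory or in the
# location named by kernel.core_pattern
ls -l core*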
See also: core dumped - but core file is not in current directory?
Ubuntu
In Ubuntu, core dumps are handled by Apport and can be found in /var/crash/. However, Apport is disabled by default in stable releases.
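A quick way to check whether Apport is intercepting crashes on your machine, and whether it has collected anything so far:
# on Ubuntu with Apport active, this prints a pipe to the apport binary
cat /proc/sys/kernel/core_pattern
# crash reports collected by Apport are stored here
ls -l /var/crash/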
For more details, please check: Where do I find the core dump in Ubuntu?.
macOS
For macOS, see: How to generate core dumps in Mac OS X?