How to find out the number of CPUs using Python

len(os.sched_getaffinity(0)) is what you usually want

https://docs.python.org/3/library/os.html#os.sched_getaffinity

os.sched_getaffinity(0) (added in Python 3) returns the set of CPUs available considering the sched_setaffinity Linux system call, which limits which CPUs a process and its children can run on.

0 means to get the value for the current process. The function returns a set() of allowed CPUs, thus the need for len().
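
For example, on a hypothetical 4-core machine with no affinity restriction (the exact CPU numbers shown are just illustrative):

>>> import os
>>> os.sched_getaffinity(0)
{0, 1, 2, 3}
>>> len(os.sched_getaffinity(0))
4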

multiprocessing.cpu_count() and os.cpu_count(), on the other hand, just return the total number of logical CPUs in the system, regardless of any affinity restriction.

The difference is especially important because certain cluster management systems such as Platform LSF limit job CPU usage with sched_setaffinity.

Therefore, if you use multiprocessing.cpu_count(), your script might try to use way more cores than it has available, which may lead to overload and timeouts.
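
For example, here is a minimal sketch of sizing a worker pool from the affinity-aware count rather than the total count (the work function and task list are just placeholders):

import multiprocessing
import os

def work(x):
    # Placeholder task.
    return x * x

if __name__ == '__main__':
    # Size the pool by the CPUs this process may actually run on,
    # not by the total CPU count of the machine.
    n_workers = len(os.sched_getaffinity(0))
    with multiprocessing.Pool(n_workers) as pool:
        print(pool.map(work, range(8)))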

We can see the difference concretely by restricting the affinity of a process with the taskset utility.

Minimal taskset example

For example, if I restrict Python to just 1 core (core 0) in my 16 core system:

taskset -c 0 ./main.py

with the test script:

main.py

#!/usr/bin/env python3

import multiprocessing
import os

print(multiprocessing.cpu_count())
print(os.cpu_count())
print(len(os.sched_getaffinity(0)))

then the output is:

16
16
1

Vs nproc

nproc does respect the affinity by default and:

taskset -c 0 nproc

outputs:

1

and man nproc makes that quite explicit:

print the number of processing units available

Therefore, len(os.sched_getaffinity(0)) behaves like nproc by default.

nproc has the --all flag for the less common case that you want to get the total CPU count without considering taskset:

taskset -c 0 nproc --all
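
If you want to double-check the equivalence from within Python itself, here is a small sketch (assuming nproc is installed, as it is on most Linux systems):

import os
import subprocess

# Under e.g. taskset -c 0, both of these should print 1.
print(len(os.sched_getaffinity(0)))
print(int(subprocess.check_output(['nproc'])))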

os.cpu_count documentation

The documentation of os.cpu_count also briefly mentions this: https://docs.python.org/3.8/library/os.html#os.cpu_count

This number is not equivalent to the number of CPUs the current process can use. The number of usable CPUs can be obtained with len(os.sched_getaffinity(0))

The same comment is also copied on the documentation of multiprocessing.cpu_count: https://docs.python.org/3/library/multiprocessing.html#multiprocessing.cpu_count

From the 3.8 source under Lib/multiprocessing/context.py we also see that multiprocessing.cpu_count just forwards to os.cpu_count, except that the multiprocessing one throws an exception instead of returning None if os.cpu_count fails:

    def cpu_count(self):
        '''Returns the number of CPUs in the system'''
        num = os.cpu_count()
        if num is None:
            raise NotImplementedError('cannot determine number of cpus')
        else:
            return num

3.8 availability: systems with a native sched_getaffinity function

The only downside of os.sched_getaffinity is that it appears to be UNIX-only as of Python 3.8.

CPython 3.8 seems to just try to compile a small C program with a sched_setaffinity function call at configure time; if that fails, HAVE_SCHED_SETAFFINITY is not set and the function will likely be missing:

  • https://github.com/python/cpython/blob/v3.8.5/configure#L11523
  • https://github.com/python/cpython/blob/v3.8.5/Modules/posixmodule.c#L6457
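
If you need something that also runs where sched_getaffinity is missing (e.g. Windows), one possible fallback sketch is:

import os

def usable_cpu_count():
    # Prefer the affinity-aware count on systems that expose sched_getaffinity.
    try:
        return len(os.sched_getaffinity(0))
    except AttributeError:
        # Platforms where Python does not provide sched_getaffinity:
        # fall back to the total CPU count (or 1 if even that is unknown).
        return os.cpu_count() or 1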

psutil.Process().cpu_affinity(): third-party version with a Windows port

The third-party psutil package (pip install psutil) had been mentioned at https://stackoverflow.com/a/14840102/895245, but not its cpu_affinity function: https://psutil.readthedocs.io/en/latest/#psutil.Process.cpu_affinity

Usage:

import psutil
print(len(psutil.Process().cpu_affinity()))

This function does the same as the standard library os.sched_getaffinity on Linux, but psutil has also implemented it for Windows by calling the GetProcessAffinityMask Windows API function:

  • https://github.com/giampaolo/psutil/blob/ee60bad610822a7f630c52922b4918e684ba7695/psutil/_psutil_windows.c#L1112
  • https://docs.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-getprocessaffinitymask
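
An analogous portable sketch with psutil could look like the following; note that I am assuming cpu_affinity is not implemented on every platform psutil supports (e.g. macOS), in which case accessing it raises AttributeError:

import psutil

def affinity_aware_cpu_count():
    try:
        # Linux / Windows / FreeBSD: affinity-aware count.
        return len(psutil.Process().cpu_affinity())
    except AttributeError:
        # Assumed: platforms where psutil does not implement cpu_affinity.
        return psutil.cpu_count() or 1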

So in other words: those Windows users have to stop being lazy and send a patch to the upstream stdlib :-)

Tested in Ubuntu 16.04, Python 3.5.2.


If you're interested in the number of processors available to your current process, you have to check cpuset first. Otherwise (or if cpuset is not in use), multiprocessing.cpu_count() is the way to go in Python 2.6 and newer. The following method falls back to a number of alternative methods in older versions of Python:

import os
import re
import subprocess


def available_cpu_count():
    """ Number of available virtual or physical CPUs on this system, i.e.
    user/real as output by time(1) when called with an optimally scaling
    userspace-only program"""

    # cpuset
    # cpuset may restrict the number of *available* processors
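    # Cpus_allowed in /proc/self/status is a comma-separated hex bitmask;
    # the number of set bits is the number of CPUs this process may use.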
    try:
        m = re.search(r'(?m)^Cpus_allowed:\s*(.*)$',
                      open('/proc/self/status').read())
        if m:
            res = bin(int(m.group(1).replace(',', ''), 16)).count('1')
            if res > 0:
                return res
    except IOError:
        pass

    # Python 2.6+
    try:
        import multiprocessing
        return multiprocessing.cpu_count()
    except (ImportError, NotImplementedError):
        pass

    # https://github.com/giampaolo/psutil
    try:
        import psutil
        return psutil.cpu_count()   # psutil.NUM_CPUS on old versions
    except (ImportError, AttributeError):
        pass

    # POSIX
    try:
        res = int(os.sysconf('SC_NPROCESSORS_ONLN'))

        if res > 0:
            return res
    except (AttributeError, ValueError):
        pass

    # Windows
    try:
        res = int(os.environ['NUMBER_OF_PROCESSORS'])

        if res > 0:
            return res
    except (KeyError, ValueError):
        pass

    # jython
    try:
        from java.lang import Runtime
        runtime = Runtime.getRuntime()
        res = runtime.availableProcessors()
        if res > 0:
            return res
    except ImportError:
        pass

    # BSD
    try:
        sysctl = subprocess.Popen(['sysctl', '-n', 'hw.ncpu'],
                                  stdout=subprocess.PIPE)
        scStdout = sysctl.communicate()[0]
        res = int(scStdout)

        if res > 0:
            return res
    except (OSError, ValueError):
        pass

    # Linux
    try:
        res = open('/proc/cpuinfo').read().count('processor\t:')

        if res > 0:
            return res
    except IOError:
        pass

    # Solaris
    try:
        pseudoDevices = os.listdir('/devices/pseudo/')
        res = 0
        for pd in pseudoDevices:
            if re.match(r'^cpuid@[0-9]+$', pd):
                res += 1

        if res > 0:
            return res
    except OSError:
        pass

    # Other UNIXes (heuristic)
    try:
        try:
            dmesg = open('/var/run/dmesg.boot').read()
        except IOError:
            dmesgProcess = subprocess.Popen(['dmesg'], stdout=subprocess.PIPE)
            dmesg = dmesgProcess.communicate()[0].decode()  # decode bytes on Python 3

        res = 0
        while '\ncpu' + str(res) + ':' in dmesg:
            res += 1

        if res > 0:
            return res
    except OSError:
        pass

    raise Exception('Can not determine number of CPUs on this system')
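
Usage is then simply:

print(available_cpu_count())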

If you have Python version 2.6 or newer, you can simply use

import multiprocessing

multiprocessing.cpu_count()

http://docs.python.org/library/multiprocessing.html#multiprocessing.cpu_count


Another option is to use the psutil library, which always turns out to be useful in these situations:

>>> import psutil
>>> psutil.cpu_count()
2

This should work on any platform supported by psutil (Unix and Windows).

Note that on some occasions multiprocessing.cpu_count may raise a NotImplementedError while psutil will still be able to obtain the number of CPUs. This is simply because psutil first tries the same techniques used by multiprocessing and, if those fail, falls back to other techniques.
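
If you want to handle that case explicitly, one possible sketch is:

import multiprocessing

try:
    n_cpus = multiprocessing.cpu_count()
except NotImplementedError:
    # Fall back to psutil, which tries additional techniques.
    import psutil
    n_cpus = psutil.cpu_count()

print(n_cpus)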