
#HOW TO GET CORE COUNT IN WINDOWS#
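Before diving in, here is a minimal sketch (my own illustration, not from the original answers) of the standard-library calls compared throughout this article:

```python
import multiprocessing
import os

# Total number of CPUs the OS reports (may be None if undetermined):
print(os.cpu_count())

# Same value, but raises NotImplementedError instead of returning None:
print(multiprocessing.cpu_count())

# On Linux, the number of CPUs the *current process* is allowed to use:
if hasattr(os, "sched_getaffinity"):
    print(len(os.sched_getaffinity(0)))
```

On an unrestricted machine all three numbers match; under taskset or a cluster scheduler the last one can be smaller.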
If you're interested in the number of processors available to your current process, you have to check cpuset first. Otherwise (or if cpuset is not in use), multiprocessing.cpu_count() is the way to go in Python 2.6 and newer. The following method falls back to a couple of alternative methods in older versions of Python:

```python
import os
import re
import subprocess


def available_cpu_count():
    """Number of available virtual or physical CPUs on this system, i.e.
    user/real as output by time(1) when called with an optimally scaling
    userspace-only program."""

    # cpuset may restrict the number of *available* processors
    try:
        m = re.search(r'(?m)^Cpus_allowed:\s*(.*)$',
                      open('/proc/self/status').read())
        if m:
            res = bin(int(m.group(1).replace(',', ''), 16)).count('1')
            if res > 0:
                return res
    except IOError:
        pass

    # Python 2.6+
    try:
        import multiprocessing
        return multiprocessing.cpu_count()
    except (ImportError, NotImplementedError):
        pass

    # https://github.com/giampaolo/psutil
    try:
        import psutil
        return psutil.cpu_count()  # psutil.NUM_CPUS on old versions
    except (ImportError, AttributeError):
        pass

    # POSIX
    try:
        res = int(os.sysconf('SC_NPROCESSORS_ONLN'))
        if res > 0:
            return res
    except (AttributeError, ValueError):
        pass

    # Linux
    try:
        res = open('/proc/cpuinfo').read().count('processor\t:')
        if res > 0:
            return res
    except IOError:
        pass

    # Solaris
    try:
        pseudoDevices = os.listdir('/devices/pseudo/')
        res = len([pd for pd in pseudoDevices
                   if re.match(r'^cpuid@[0-9]+$', pd)])
        if res > 0:
            return res
    except OSError:
        pass

    # Other UNIXes (heuristic): count cpuN lines in the boot messages
    try:
        try:
            dmesg = open('/var/run/dmesg.boot').read()
        except IOError:
            dmesgProcess = subprocess.Popen(['dmesg'],
                                            stdout=subprocess.PIPE)
            dmesg = dmesgProcess.communicate()[0].decode()
        res = 0
        while '\ncpu' + str(res) + ':' in dmesg:
            res += 1
        if res > 0:
            return res
    except OSError:
        pass

    raise Exception('Can not determine number of CPUs on this system')
```

len(os.sched_getaffinity(0)) is what you usually want. os.sched_getaffinity(0) (added in Python 3) returns the set of CPUs available considering the sched_setaffinity Linux system call, which limits which CPUs a process and its children can run on. 0 means to get the value for the current process. The function returns a set() of allowed CPUs, thus the need for len().

multiprocessing.cpu_count() and os.cpu_count(), on the other hand, just return the total number of CPUs in the system. The difference is especially important because certain cluster management systems, such as Platform LSF, limit job CPU usage with sched_getaffinity. Therefore, if you use multiprocessing.cpu_count(), your script might try to use way more cores than it has available, which may lead to overload and timeouts.

We can see the difference concretely by restricting the affinity with the taskset utility, which allows us to control the affinity of a process: for example, by prefixing a command with taskset -c 0, we can restrict Python to just 1 core (core 0) on a 16-core system. nproc does respect the affinity by default, so

```
taskset -c 0 nproc
```

prints 1; nproc's documented purpose is to "print the number of processing units available". nproc has the --all flag for the less common case that you want the physical CPU count without considering taskset:

```
taskset -c 0 nproc --all
```

Therefore, len(os.sched_getaffinity(0)) behaves like nproc by default.

The documentation of os.cpu_count also briefly mentions this: "This number is not equivalent to the number of CPUs the current process can use. The number of usable CPUs can be obtained with len(os.sched_getaffinity(0))." The same comment is also copied in the documentation of multiprocessing.cpu_count.

From the 3.8 source under Lib/multiprocessing/context.py we also see that multiprocessing.cpu_count just forwards to os.cpu_count, except that the multiprocessing one throws an exception instead of returning None if os.cpu_count fails:

```python
def cpu_count(self):
    '''Returns the number of CPUs in the system'''
    num = os.cpu_count()
    if num is None:
        raise NotImplementedError('cannot determine number of cpus')
    else:
        return num
```

3.8 availability: systems with a native sched_getaffinity function. The only downside of os.sched_getaffinity is that it appears to be UNIX-only as of Python 3.8. CPython 3.8 seems to just try to compile a small C "hello world" with a sched_setaffinity function call at configuration time, and if it is not present, HAVE_SCHED_SETAFFINITY is not set and the function will likely be missing.

psutil.Process().cpu_affinity() is a third-party version with a Windows port.
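Putting the approaches together: a small helper (a sketch of my own, using only the standard library) that prefers the affinity-aware count where it exists and falls back to the system-wide count elsewhere, e.g. on Windows or macOS:

```python
import os


def usable_cpu_count():
    """CPUs the current process may actually use, with a portable fallback."""
    # os.sched_getaffinity is UNIX-only as of Python 3.8.
    if hasattr(os, "sched_getaffinity"):
        return len(os.sched_getaffinity(0))
    # os.cpu_count() may return None if the count cannot be determined.
    return os.cpu_count() or 1


print(usable_cpu_count())
```

Unlike multiprocessing.cpu_count(), this respects taskset and cluster schedulers on Linux while still working on platforms without sched_getaffinity.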

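If you need a single cross-platform answer that includes Windows, the third-party psutil package mentioned above exposes both views. A sketch assuming psutil is installed (pip install psutil); note that Process().cpu_affinity() is not available on every platform (e.g. macOS):

```python
import psutil  # third-party: pip install psutil

# CPUs the current process may run on (Linux, Windows, FreeBSD):
print(len(psutil.Process().cpu_affinity()))

# System-wide counts:
print(psutil.cpu_count(logical=True))   # logical CPUs
print(psutil.cpu_count(logical=False))  # physical cores (None if unknown)
```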