Change default GPU in TensorFlow
Just to be clear regarding the use of the environment variable CUDA_VISIBLE_DEVICES:
To run a script my_script.py on GPU 1 only, use the following command in a Linux terminal:

username@server:/scratch/coding/src$ CUDA_VISIBLE_DEVICES=1 python my_script.py
More examples illustrating the syntax:

Environment Variable Syntax     Results
CUDA_VISIBLE_DEVICES=1          Only device 1 will be seen
CUDA_VISIBLE_DEVICES=0,1        Devices 0 and 1 will be visible
CUDA_VISIBLE_DEVICES="0,1"      Same as above; quotation marks are optional
CUDA_VISIBLE_DEVICES=0,2,3      Devices 0, 2, and 3 will be visible; device 1 is masked
CUDA_VISIBLE_DEVICES=""         No GPU will be visible
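If you'd rather not prefix the shell command, the same restriction can be applied from inside the script itself. A minimal sketch (the key caveat is that the variable must be set before TensorFlow is imported, otherwise the CUDA runtime has already enumerated the devices and the setting is ignored):

```python
import os

# Equivalent to the shell prefix shown above: restrict this process to GPU 1.
# This MUST run before TensorFlow is imported, or it has no effect.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# import tensorflow as tf  # from here on, only device 1 would be visible
```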
FYI:
- CUDA Environment Variables
- Forcing TensorFlow-GPU to use the CPU from the command line
Suever's answer correctly shows how to pin your operations to a particular GPU. However, if you are running multiple TensorFlow programs on the same machine, it is recommended that you set the CUDA_VISIBLE_DEVICES environment variable to expose different GPUs before starting each process. Otherwise, TensorFlow will attempt to allocate almost all of the memory on every visible GPU, which prevents other processes from using those GPUs (even if the current process isn't actively using them).
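A sketch of that pattern: one process per GPU, so each grabs memory only on its own device. Here my_script.py is replaced by a tiny stand-in that just reports which GPUs it can see; substitute your real training script.

```shell
# Stand-in for the TensorFlow program from the question; it only prints
# which devices the process is allowed to see.
cat > my_script.py <<'EOF'
import os
print("visible:", os.environ.get("CUDA_VISIBLE_DEVICES", "all"))
EOF

# Launch two independent runs concurrently, each pinned to its own GPU.
CUDA_VISIBLE_DEVICES=0 python my_script.py &   # this process sees only GPU 0
CUDA_VISIBLE_DEVICES=1 python my_script.py &   # this process sees only GPU 1
wait  # block until both background jobs finish
```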
Note that if you use CUDA_VISIBLE_DEVICES, the device names "/gpu:0", "/gpu:1", etc. refer to the 0th, 1st, etc. visible devices in the current process.
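To make that renumbering concrete, here is a small illustration (the mapping shown is an assumption about a machine with at least four GPUs; the dictionary is built by hand to mirror how TensorFlow renumbers the visible devices, not queried from TensorFlow itself):

```python
import os

# With physical GPUs 2 and 3 exposed, TensorFlow renumbers them:
# "/gpu:0" is physical GPU 2 and "/gpu:1" is physical GPU 3.
os.environ["CUDA_VISIBLE_DEVICES"] = "2,3"  # must be set before TF initializes CUDA

visible = os.environ["CUDA_VISIBLE_DEVICES"].split(",")
# Logical-to-physical mapping as this process will see it:
mapping = {f"/gpu:{i}": f"physical GPU {phys}" for i, phys in enumerate(visible)}
print(mapping)  # {'/gpu:0': 'physical GPU 2', '/gpu:1': 'physical GPU 3'}
```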