How to get the currently available GPUs in TensorFlow?
You can list all devices visible to TensorFlow using the following code:
from tensorflow.python.client import device_lib

# Returns a list of DeviceAttributes protos, one per visible device
device_lib.list_local_devices()
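For example, to print just the name and type of each device instead of the full protos (a minimal sketch using the same undocumented module):

from tensorflow.python.client import device_lib

# Print a compact summary: device name and device type (e.g. CPU or GPU)
for d in device_lib.list_local_devices():
    print(d.name, d.device_type)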
There is also a method in the test utilities, so all that has to be done is:
tf.test.is_gpu_available()
and/or
tf.test.gpu_device_name()
Look up the TensorFlow docs for their arguments.
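A minimal sketch combining the two calls (note that tf.test.is_gpu_available() is deprecated in recent TF 2.x releases in favor of tf.config.list_physical_devices('GPU'), shown in the next answer):

import tensorflow as tf

# Check whether a GPU is available and, if so, report its device name
if tf.test.is_gpu_available():
    print("GPU found at:", tf.test.gpu_device_name())
else:
    print("No GPU available")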
Since TensorFlow 2.1, you can use tf.config.list_physical_devices('GPU'):
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
for gpu in gpus:
    print("Name:", gpu.name, " Type:", gpu.device_type)
If you have two GPUs installed, it outputs this:
Name: /physical_device:GPU:0 Type: GPU
Name: /physical_device:GPU:1 Type: GPU
In TF 2.0, you must use the experimental namespace:
gpus = tf.config.experimental.list_physical_devices('GPU')
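Once you have the physical device list, you can also use it to control which GPUs TensorFlow sees. A minimal sketch, assuming a TF 2.1+ runtime (tf.config.set_visible_devices must be called before any GPU has been initialized):

import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
if gpus:
    # Restrict TensorFlow to the first GPU only; this must run
    # before the GPUs have been initialized
    tf.config.set_visible_devices(gpus[0], 'GPU')
    logical_gpus = tf.config.list_logical_devices('GPU')
    print(len(gpus), "physical GPUs,", len(logical_gpus), "logical GPU visible")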
See:
- Guide pages
- Current API
There is an undocumented method called device_lib.list_local_devices() that enables you to list the devices available in the local process. (N.B. As an undocumented method, this is subject to backwards-incompatible changes.) The function returns a list of DeviceAttributes protocol buffer objects. You can extract a list of string device names for the GPU devices as follows:
from tensorflow.python.client import device_lib

def get_available_gpus():
    # Keep only devices whose type is 'GPU' and return their string names
    local_device_protos = device_lib.list_local_devices()
    return [x.name for x in local_device_protos if x.device_type == 'GPU']
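Usage is then a single call; the exact names depend on your machine:

print(get_available_gpus())
# e.g. ['/device:GPU:0', '/device:GPU:1'] on a two-GPU machine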
Note that (at least up to TensorFlow 1.4), calling device_lib.list_local_devices() will run some initialization code that, by default, allocates all of the GPU memory on all of the devices (GitHub issue). To avoid this, first create a session with an explicitly small per_process_gpu_memory_fraction, or with allow_growth=True, to prevent all of the memory from being allocated. See this question for more details.
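A minimal TF 1.x sketch of such a session (in TF 2.x the same options live under tf.compat.v1):

import tensorflow as tf

# Allocate GPU memory on demand instead of grabbing it all up front
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
# Alternatively, cap the fraction of memory the process may claim:
# config.gpu_options.per_process_gpu_memory_fraction = 0.1
sess = tf.Session(config=config)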