How do I check if keras is using gpu version of tensorflow?
To find out which devices your operations and tensors are assigned to, create the session with the log_device_placement configuration option set to True.
# Creates a graph.
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print(sess.run(c))
You should see the following output:
Device mapping:
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: Tesla K40c, pci bus
id: 0000:05:00.0
b: /job:localhost/replica:0/task:0/device:GPU:0
a: /job:localhost/replica:0/task:0/device:GPU:0
MatMul: /job:localhost/replica:0/task:0/device:GPU:0
[[ 22. 28.]
[ 49. 64.]]
For more details, please refer to the link Using GPU with tensorflow.
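If you are on TensorFlow 2.x there are no sessions anymore; a rough equivalent (a minimal sketch, assuming TF 2.x eager execution) uses tf.debugging.set_log_device_placement:
import tensorflow as tf

# Log the device each op is placed on (TF 2.x replacement for log_device_placement)
tf.debugging.set_log_device_placement(True)

a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
print(tf.matmul(a, b))  # placement messages are printed to the console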
You are using the GPU version. You can list the available tensorflow devices with the following (also check this question):
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices()) # list of DeviceAttributes
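If you only want the GPU entries, you can filter that list by device_type (a small sketch; get_available_gpus is just a hypothetical helper name):
from tensorflow.python.client import device_lib

def get_available_gpus():
    # Keep only devices whose type is 'GPU'
    return [d.name for d in device_lib.list_local_devices() if d.device_type == 'GPU']

print(get_available_gpus())  # e.g. ['/device:GPU:0'] on a GPU build, [] on a CPU-only build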
EDIT:
With tensorflow >= 1.4 you can run the following function:
import tensorflow as tf
tf.test.is_gpu_available() # True/False
# Or only check for GPUs with CUDA support
tf.test.is_gpu_available(cuda_only=True)
EDIT 2:
The above function is deprecated in tensorflow > 2.1. Instead you should use the following function:
import tensorflow as tf
tf.config.list_physical_devices('GPU')
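For example, you can count and print the detected GPUs (a minimal sketch, assuming TF >= 2.1):
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
print("Num GPUs available:", len(gpus))  # 0 means TensorFlow does not see a GPU
for gpu in gpus:
    print(gpu.name, gpu.device_type)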
NOTE:
In your case both the CPU and GPU are available; if you were using the CPU version of tensorflow, the GPU would not be listed. Without explicitly setting a tensorflow device (with tf.device("..")), tensorflow will automatically pick your GPU!
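If you ever want to override that automatic placement, you can pin ops to a device explicitly; a minimal sketch with TF 2.x syntax:
import tensorflow as tf

# Force this op onto the CPU even when a GPU is present
with tf.device('/CPU:0'):
    c = tf.matmul(tf.random.normal([2, 3]), tf.random.normal([3, 2]))

# Without a tf.device block, TensorFlow places ops on GPU:0 automatically when one is available
d = tf.matmul(tf.random.normal([2, 3]), tf.random.normal([3, 2]))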
In addition, your sudo pip3 list clearly shows you are using tensorflow-gpu. If you had the tensorflow CPU version, the name would be something like tensorflow (1.1.0).
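You can also confirm from inside Python whether the installed build was compiled with CUDA support (a quick sketch):
import tensorflow as tf

print(tf.__version__)                # installed TensorFlow version
print(tf.test.is_built_with_cuda())  # True only for a CUDA-enabled (GPU) build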
Check this issue for information about the warnings.
A lot of things have to go right in order for Keras to use the GPU. Put this near the top of your Jupyter notebook:
# confirm TensorFlow sees the GPU
from tensorflow.python.client import device_lib
assert 'GPU' in str(device_lib.list_local_devices())
# confirm Keras sees the GPU (for TensorFlow 1.X + Keras)
from keras import backend
assert len(backend.tensorflow_backend._get_available_gpus()) > 0
# confirm PyTorch sees the GPU
from torch import cuda
assert cuda.is_available()
assert cuda.device_count() > 0
print(cuda.get_device_name(cuda.current_device()))
NOTE: With the release of TensorFlow 2.0, Keras is now included as part of the TF API.
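So for TensorFlow 2.x you can collapse the TensorFlow/Keras checks above into a single assert (a sketch using tf.config.list_physical_devices):
# confirm TensorFlow 2.x (and therefore tf.keras) sees the GPU
import tensorflow as tf
assert len(tf.config.list_physical_devices('GPU')) > 0
print(tf.config.list_physical_devices('GPU'))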