Caffe | Check failed: error == cudaSuccess (2 vs. 0) out of memory

The error you get is indeed an out-of-memory error, but it refers to GPU memory rather than host RAM (note that the error comes from CUDA).
Usually, when Caffe runs out of memory, the first thing to do is reduce the batch size (at the cost of noisier gradients), but since you are already at batch size = 1...
Are you sure the batch size is 1 for both the TRAIN and TEST phases?
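The batch size is set per phase in the data layers of the network prototxt, so it is worth checking both. A minimal sketch of what that looks like (layer names, paths, and the LMDB backend here are placeholders, not taken from your setup):

```
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TRAIN }
  data_param {
    source: "train_lmdb"   # placeholder path
    batch_size: 1          # TRAIN batch size
    backend: LMDB
  }
}
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TEST }
  data_param {
    source: "val_lmdb"     # placeholder path
    batch_size: 1          # TEST batch size; if this is larger, the TEST pass can still run out of memory
    backend: LMDB
  }
}
```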


Caffe can use multiple GPUs, but this is only supported in the C++ interface (the command-line caffe tool), not in the Python one. You could also build Caffe with cuDNN for a lower memory footprint.

https://github.com/BVLC/caffe/blob/master/docs/multigpu.md
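For reference, multi-GPU training is selected with the --gpu flag of the caffe tool (the solver path and GPU IDs below are just examples), and cuDNN is enabled at build time by setting USE_CUDNN := 1 in Makefile.config before compiling:

```
# Data-parallel training on GPUs 0 and 1 via the C++ tool
# (solver path and GPU IDs are placeholders for your own setup)
./build/tools/caffe train --solver=solver.prototxt --gpu=0,1

# or use all available GPUs
./build/tools/caffe train --solver=solver.prototxt --gpu=all
```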