Force GPU memory limit in PyTorch

Update (04-MAR-2021): this feature is now available in the stable 1.8.0 release of PyTorch, and it is covered in the official docs.

Original answer follows.


This feature request has been merged into the PyTorch master branch, but it has not yet been included in a stable release.

It was introduced as set_per_process_memory_fraction:

Set memory fraction for a process. The fraction is used to limit the caching allocator to a portion of the memory on a CUDA device. The allowed value equals the total visible memory multiplied by the fraction. If a process tries to allocate more than the allowed value, the allocator raises an out-of-memory error.
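In other words, the cap is simply total device memory times the fraction. A minimal sketch of that arithmetic (assuming a CUDA device is available at index 0):

import torch

device = 0
fraction = 0.5
props = torch.cuda.get_device_properties(device)
# the caching allocator will refuse requests beyond this many bytes
allowed_bytes = int(props.total_memory * fraction)
print(f"Allocations on device {device} are capped at {allowed_bytes / 1024**3:.2f} GiB")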

You can check the tests for usage examples.


Update PyTorch to 1.8.0 (pip install --upgrade torch==1.8.0)

function: torch.cuda.set_per_process_memory_fraction(fraction, device=None)

params:

fraction (float) – Range: 0~1. Allowed memory equals total_memory * fraction.

device (torch.device or int, optional) – the selected device. If it is None, the default CUDA device is used.
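Note that the device argument accepts either a plain index or a torch.device; a quick sketch (assuming device 0 exists):

import torch

# both calls cap device 0 at 50% of its total memory
torch.cuda.set_per_process_memory_fraction(0.5, 0)
torch.cuda.set_per_process_memory_fraction(0.5, torch.device('cuda:0'))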

Example:

import torch
torch.cuda.set_per_process_memory_fraction(0.5, 0)
torch.cuda.empty_cache()
total_memory = torch.cuda.get_device_properties(0).total_memory
# allocating just under half of total memory succeeds:
tmp_tensor = torch.empty(int(total_memory * 0.499), dtype=torch.int8, device='cuda')
del tmp_tensor
torch.cuda.empty_cache()
# this allocation will raise an OOM error:
torch.empty(total_memory // 2, dtype=torch.int8, device='cuda')

"""
It raises an error as follows: 
RuntimeError: CUDA out of memory. Tried to allocate 5.59 GiB (GPU 0; 11.17 GiB total capacity; 0 bytes already allocated; 10.91 GiB free; 5.59 GiB allowed; 0 bytes reserved in total by PyTorch)
"""
