import tensorflow as tf
import psutil

# Allocate GPU memory on demand instead of reserving it all at startup.
gpu_devices = tf.config.list_physical_devices('GPU')
if gpu_devices:
    print('Using GPU')
    for gpu in gpu_devices[0:2]:
        tf.config.experimental.set_memory_growth(gpu, True)
else:
    print('Using CPU')

tf.config.optimizer.set_jit(True)  # enable XLA JIT compilation globally
# Report host RAM usage.
print('used: {}% free: {:.2f}GB'.format(
    psutil.virtual_memory().percent,
    psutil.virtual_memory().free / 1024**3))
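
# A related option, sketched here as an alternative to growth-on-demand: put a
# hard cap on how much memory TensorFlow may take on a GPU with a logical
# device configuration. The 2048 MB limit is an arbitrary example value.
if gpu_devices:
    try:
        tf.config.set_logical_device_configuration(
            gpu_devices[0],
            [tf.config.LogicalDeviceConfiguration(memory_limit=2048)])
    except RuntimeError as e:
        # Like memory growth, this must be set before the GPU is initialized.
        print(e)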
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    # Restrict TensorFlow to only use the first two GPUs.
    try:
        tf.config.experimental.set_visible_devices(gpus[0:2], 'GPU')
        logical_gpus = tf.config.experimental.list_logical_devices('GPU')
        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
    except RuntimeError as e:
        # Visible devices must be set at program startup,
        # before the GPUs have been initialized.
        print(e)
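
# Alternative sketch: since visible devices cannot be changed after TensorFlow
# initializes the GPUs, it is often simpler to hide devices with the standard
# CUDA_VISIBLE_DEVICES environment variable before tensorflow is imported
# ('0,1' below is an example device list):
#
#   import os
#   os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'
#   import tensorflow as tf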
# If you develop a model with PyTorch, you will probably hit "CUDA out of memory" at least once.
# If you understand how the GPU allocator works you can resolve it quickly; otherwise it is an easy error to panic over.
# In my experience, while training a Policy Gradient-based reinforcement learning agent, GPU memory kept growing as training progressed.
# There are various solutions, but in my case the fix was torch.cuda.empty_cache()!
import torch

def policy_update():
    reward = torch.tensor(xxx)  # xxx: placeholders for your own data
    state = torch.tensor(xxx)
    action = torch.tensor(xxx)
    ...  # loss computation and optimizer step
    del reward, state, action  # drop the Python references to the tensors
    torch.cuda.empty_cache()   # release cached GPU memory back to the driver
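
# What empty_cache() actually does: it returns cached-but-unallocated blocks to
# the driver; tensors that are still referenced are not freed. A minimal sketch
# using PyTorch's allocator counters (the tensor size is an arbitrary example):
if torch.cuda.is_available():
    x = torch.randn(1024, 1024, device='cuda')
    del x
    print('allocated: {:.1f}MB reserved: {:.1f}MB'.format(
        torch.cuda.memory_allocated() / 1024**2,
        torch.cuda.memory_reserved() / 1024**2))
    torch.cuda.empty_cache()
    print('allocated: {:.1f}MB reserved: {:.1f}MB'.format(
        torch.cuda.memory_allocated() / 1024**2,
        torch.cuda.memory_reserved() / 1024**2))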