Google Colaboratory: misleading information about its GPU (only 5% RAM available to some users)

So, to head off another dozen answers suggesting !kill -9 -1 (which is not valid in the context of this thread), let's close this thread: the answer is simple. As of this writing, Google simply gives only 5% of the GPU's RAM to some of us, and 100% to others. Period. Dec 2019 update: the problem still exists … Read more
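For anyone who wants to see which bucket they landed in, here is a minimal sketch (assuming a Colab runtime with a GPU accelerator and PyTorch installed) that reports how much GPU RAM the session can actually see:

    # Sketch: report free vs. total GPU RAM visible to this session.
    # Assumes a CUDA-enabled runtime with PyTorch available.
    import torch

    free_bytes, total_bytes = torch.cuda.mem_get_info()
    print(f"Free GPU RAM:  {free_bytes / 1024**3:.2f} GiB")
    print(f"Total GPU RAM: {total_bytes / 1024**3:.2f} GiB")

If the free figure is a small fraction of the total, the session is in the restricted group described above.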

AMD equivalent to NvOptimusEnablement

According to https://community.amd.com/thread/169965:

    extern "C" {
        __declspec(dllexport) int AmdPowerXpressRequestHighPerformance = 1;
    }

This will select the high-performance GPU as long as no profile exists that assigns the application to another GPU. Please make sure to use a 13.35 or newer driver; older drivers do not support this.

How to make Jupyter Notebook run on GPU?

I am answering my own question. The easiest way is to connect to a Local Runtime (https://research.google.com/colaboratory/local-runtimes.html) and then select GPU as the hardware accelerator, as shown in https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d.
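Once the runtime is connected, a quick sanity check from a notebook cell confirms that the GPU is actually visible; this is only a sketch and assumes TensorFlow is installed in that runtime:

    import tensorflow as tf

    # List the GPUs the runtime exposes; an empty list means no accelerator was picked up.
    print(tf.config.list_physical_devices("GPU"))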

How do I check if PyTorch is using the GPU?

These functions should help:

    >>> import torch
    >>> torch.cuda.is_available()
    True
    >>> torch.cuda.device_count()
    1
    >>> torch.cuda.current_device()
    0
    >>> torch.cuda.device(0)
    <torch.cuda.device at 0x7efce0b03be0>
    >>> torch.cuda.get_device_name(0)
    'GeForce GTX 950M'

This tells us: CUDA is available and can be used by one device. Device 0 refers to the GPU GeForce GTX 950M, and it is currently chosen by PyTorch.
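Beyond checking availability, a common pattern is to pick the device once and move the model and data onto it. A minimal sketch (the model and tensor here are hypothetical placeholders, only to illustrate the pattern):

    import torch
    import torch.nn as nn

    # Use the GPU if one is available, otherwise fall back to the CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Hypothetical model and input, moved onto the chosen device.
    model = nn.Linear(10, 2).to(device)
    x = torch.randn(4, 10, device=device)
    print(model(x).device)  # prints "cuda:0" when the GPU is being used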

multi-GPU basic usage

Since CUDA 4.0 was released, multi-GPU computations of the type you are asking about are relatively easy. Prior to that, you would have needed to use a multi-threaded host application with one host thread per GPU and some sort of inter-thread communication system in order to use multiple GPUs inside the same host application. Now … Read more
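The answer continues in the original thread and concerns the CUDA runtime API; as a rough Python analogue of the same idea (one host process driving several GPUs, rather than the CUDA C API itself), here is a minimal PyTorch sketch, assuming at least two CUDA devices are visible:

    import torch

    # Sketch: issue work to every visible GPU from a single process,
    # mirroring the "one host thread, many devices" style that CUDA 4.0 enabled.
    results = []
    for dev_id in range(torch.cuda.device_count()):
        device = torch.device(f"cuda:{dev_id}")
        a = torch.randn(1024, 1024, device=device)
        b = torch.randn(1024, 1024, device=device)
        results.append(a @ b)  # each matmul runs on its own device

    for dev_id in range(torch.cuda.device_count()):
        torch.cuda.synchronize(dev_id)  # wait for each device to finish

    print([r.device for r in results])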