The error occurs because your GPU ran out of memory. One common way to resolve it is to reduce the batch size until your code runs without the error.
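As a minimal sketch of that approach, the loop below halves a hypothetical `batch_size` each time a CUDA out-of-memory `RuntimeError` is raised, until an epoch completes (the dataset here is synthetic and the training step is a placeholder):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-in for your dataset.
dataset = TensorDataset(torch.randn(256, 10), torch.randint(0, 2, (256,)))

batch_size = 128
while batch_size >= 1:
    try:
        loader = DataLoader(dataset, batch_size=batch_size)
        for xb, yb in loader:
            # Your forward/backward pass would go here.
            pass
        break  # epoch finished without running out of memory
    except RuntimeError as e:
        if "out of memory" in str(e):
            batch_size //= 2              # try a smaller batch
            torch.cuda.empty_cache()      # release cached blocks
        else:
            raise
```

In practice you would fix the batch size once you have found a value that fits, rather than retrying every run; halving is just a simple search strategy.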