PyTorch Binary Classification – same network structure, ‘simpler’ data, but worse performance?

TL;DR: Your input data is not normalized. Use x_data = (x_data - x_data.mean()) / x_data.std() and increase the learning rate: optimizer = torch.optim.Adam(model.parameters(), lr=0.01). You'll get convergence in only 1000 iterations. More details: the key difference between the two examples you have is that the data x in the first example is centered around (0, 0) … Read more
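
For context, here is a minimal sketch of the two fixes together. The tensor x_data and the small classifier model are hypothetical stand-ins for the question's setup; only the normalization and optimizer lines reflect the actual fix:

    import torch

    # Hypothetical stand-ins for the question's data and network.
    x_data = torch.randn(100, 2) * 5 + 3                  # unnormalized toy inputs
    model = torch.nn.Sequential(
        torch.nn.Linear(2, 8),
        torch.nn.ReLU(),
        torch.nn.Linear(8, 1),
    )

    # Fix 1: center and scale the inputs to zero mean and unit variance.
    x_data = (x_data - x_data.mean()) / x_data.std()

    # Fix 2: a larger learning rate, which converges quickly on normalized data.
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01)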

Pytorch ValueError: optimizer got an empty parameter list

Your NetActor does not directly store any nn.Parameter. Moreover, all the other layers it eventually uses in forward are stored as a simple list in self.nn_layers. If you want self.actor_nn.parameters() to know that the items stored in the list self.nn_layers may contain trainable parameters, you should work with containers. Specifically, making self.nn_layers an nn.ModuleList … Read more
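
A minimal sketch of that fix, with hypothetical layer sizes: storing the layers in an nn.ModuleList instead of a plain Python list registers them as submodules, so parameters() can find their weights:

    import torch.nn as nn

    class NetActor(nn.Module):
        def __init__(self):
            super().__init__()
            # nn.ModuleList (unlike a plain list) registers each layer,
            # so self.parameters() can see their weights.
            self.nn_layers = nn.ModuleList([
                nn.Linear(4, 64),   # hypothetical sizes
                nn.ReLU(),
                nn.Linear(64, 2),
            ])

        def forward(self, x):
            for layer in self.nn_layers:
                x = layer(x)
            return x

    actor = NetActor()
    assert len(list(actor.parameters())) > 0   # no longer an empty parameter list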

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! when resuming training

There might be an issue with the device the parameters are on. If you need to move a model to GPU via .cuda(), please do so before constructing optimizers for it. Parameters of a model after .cuda() will be different objects from those before the call. In general, you should make sure that optimized parameters … Read more
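
A sketch of the recommended ordering when resuming, assuming model stands in for your network and "checkpoint.pth" (with "model" and "optimizer" keys) is a hypothetical checkpoint layout:

    import torch

    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

    model = torch.nn.Linear(10, 2)   # placeholder for your network
    model.to(device)                 # move the model FIRST ...

    # ... THEN construct the optimizer, so it holds the moved parameters.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    # When resuming, load the checkpoint onto the same device so the restored
    # optimizer state matches the parameters it will update.
    state = torch.load("checkpoint.pth", map_location=device)
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optimizer"])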

How to construct a network with two inputs in PyTorch

By “combine them” I assume you mean to concatenate the two inputs. Assuming you concat along the second dimension:

    import torch
    from torch import nn

    class TwoInputsNet(nn.Module):
        def __init__(self):
            super(TwoInputsNet, self).__init__()
            self.conv = nn.Conv2d( … )   # set up your layer here
            self.fc1 = nn.Linear( … )    # set up first FC layer
            self.fc2 = … Read more
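
To make the truncated excerpt concrete, here is a runnable sketch that fills in the elided layer sizes with hypothetical values (a 3-channel 32×32 image branch plus a 16-dimensional vector input) and performs the concatenation with torch.cat along dim 1:

    import torch
    from torch import nn

    class TwoInputsNet(nn.Module):
        def __init__(self):
            super(TwoInputsNet, self).__init__()
            self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
            self.fc1 = nn.Linear(8 * 32 * 32 + 16, 64)  # conv features + extra input
            self.fc2 = nn.Linear(64, 10)

        def forward(self, image, extra):
            c = torch.relu(self.conv(image))            # (N, 8, 32, 32)
            c = c.flatten(start_dim=1)                  # (N, 8*32*32)
            combined = torch.cat((c, extra), dim=1)     # concat along dim 1
            h = torch.relu(self.fc1(combined))
            return self.fc2(h)

    net = TwoInputsNet()
    out = net(torch.randn(4, 3, 32, 32), torch.randn(4, 16))
    print(out.shape)  # torch.Size([4, 10])

The same pattern works for any pair of inputs: flatten each branch to (N, features), concatenate along dim 1, and size the first fully connected layer to the combined feature count.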