PyTorch Binary Classification – same network structure, ‘simpler’ data, but worse performance?

TL;DR: Your input data is not normalized; use x_data = (x_data - x_data.mean()) / x_data.std(). Also increase the learning rate: optimizer = torch.optim.Adam(model.parameters(), lr=0.01). You'll get convergence in only 1000 iterations. More details: the key difference between the two examples you have is that the data x in the first example is centered around (0, 0) … Read more
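A runnable sketch of both fixes, using a small synthetic dataset and a placeholder model in place of the question's code (which is not reproduced in the excerpt):

```python
import torch
import torch.nn as nn

# Synthetic binary-classification data, deliberately far from the origin,
# standing in for the question's x_data/y_data.
torch.manual_seed(0)
x_data = torch.randn(200, 2) * 10 + 50
y_data = (x_data[:, 0] > x_data[:, 1]).float().unsqueeze(1)

# Fix 1: normalize the inputs so they are centered and unit-scaled.
x_data = (x_data - x_data.mean()) / x_data.std()

# Placeholder architecture; the question's network is not shown here.
model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

# Fix 2: a larger learning rate than Adam's default 1e-3.
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
criterion = nn.BCELoss()

for step in range(1000):  # ~1000 iterations should be enough to converge
    optimizer.zero_grad()
    loss = criterion(model(x_data), y_data)
    loss.backward()
    optimizer.step()

print(f"final loss: {loss.item():.4f}")
```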

Pytorch ValueError: optimizer got an empty parameter list

Your NetActor does not directly store any nn.Parameter. Moreover, all other layers it eventually uses in forward are stored as a simple list in self.nn_layers. If you want self.actor_nn.parameters() to know that the items stored in the list self.nn_layers may contain trainable parameters, you should work with containers. Specifically, making self.nn_layers an nn.ModuleList … Read more
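A minimal sketch of that fix, with a hypothetical NetActor layout (the actual class from the question is not shown in the excerpt):

```python
import torch
import torch.nn as nn

class NetActor(nn.Module):
    def __init__(self, sizes):
        super().__init__()
        # A plain Python list would hide these layers from .parameters();
        # nn.ModuleList registers each one as a submodule, so the optimizer
        # no longer sees an empty parameter list.
        self.nn_layers = nn.ModuleList(
            nn.Linear(n_in, n_out) for n_in, n_out in zip(sizes[:-1], sizes[1:])
        )

    def forward(self, x):
        for layer in self.nn_layers[:-1]:
            x = torch.relu(layer(x))
        return self.nn_layers[-1](x)

actor = NetActor([4, 64, 64, 2])
# This line would raise "ValueError: optimizer got an empty parameter list"
# if self.nn_layers were a plain Python list.
optimizer = torch.optim.Adam(actor.parameters(), lr=1e-3)
```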

How do you read Tensorboard files programmatically?

You can use TensorBoard’s Python classes or script to extract the data (see "How can I export data from TensorBoard?"): If you’d like to export data to visualize elsewhere (e.g. an IPython notebook), that’s possible too. You can directly depend on the underlying classes that TensorBoard uses for loading data: python/summary/event_accumulator.py (for loading data from a single … Read more
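A short sketch of reading scalars with the EventAccumulator class mentioned above; note that it has moved between packages over the years, and in current TensorBoard releases it lives under tensorboard.backend.event_processing (the log directory and tag name below are placeholders):

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

log_dir = "runs/my_experiment"   # placeholder: directory containing event files
ea = EventAccumulator(log_dir)
ea.Reload()                      # actually read the event files from disk

print(ea.Tags())                 # available tags, grouped by type (scalars, images, ...)

# "loss" is an assumed scalar tag name; replace with one listed by Tags().
for event in ea.Scalars("loss"):
    print(event.step, event.wall_time, event.value)
```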

Google Colaboratory: misleading information about its GPU (only 5% RAM available to some users)

So, to prevent another dozen answers suggesting the !kill -9 -1 workaround, which is invalid in the context of this thread, let's close this thread: the answer is simple. As of this writing, Google simply gives only 5% of the GPU RAM to some of us, whereas 100% to the others. Period. Dec 2019 update: the problem still exists … Read more
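If you want to check which bucket your session fell into, one quick way is to query nvidia-smi from the notebook (a sketch; the exact memory figures you see will depend on the GPU you were allocated):

```python
import subprocess

# Ask nvidia-smi (available on Colab GPU runtimes) how much GPU memory the
# session actually has; a "5%" allocation shows up as a very small memory.free.
result = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,memory.total,memory.free", "--format=csv,noheader"],
    capture_output=True, text=True,
)
print(result.stdout)
```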