pro_gan_pytorch: I hit this error when running your CIFAR-10 training code.

Hi, @joebradly @joshalbrecht @akanimax @SaoYan

I hit the following error when running your CIFAR-10 training code.

Files already downloaded and verified
Starting the training process ...

Currently working on Depth: 0
Current resolution: 4 x 4

Epoch: 1
Traceback (most recent call last):
  File "depth4.py", line 61, in <module>
    batch_sizes=batch_sizes
  File "/home/oem/pro_gan_pytorch/pro_gan_pytorch/PRO_GAN.py", line 1046, in train
    labels, current_depth, alpha)
  File "/home/oem/pro_gan_pytorch/pro_gan_pytorch/PRO_GAN.py", line 865, in optimize_discriminator
    labels, depth, alpha)
  File "/home/oem/pro_gan_pytorch/pro_gan_pytorch/Losses.py", line 345, in dis_loss
    fake_out = self.dis(fake_samps, labels, height, alpha)
  File "/home/oem/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/oem/.local/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 123, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/home/oem/.local/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 133, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/home/oem/.local/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 77, in parallel_apply
    raise output
  File "/home/oem/.local/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 53, in _worker
    output = module(*input, **kwargs)
  File "/home/oem/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/oem/pro_gan_pytorch/pro_gan_pytorch/PRO_GAN.py", line 305, in forward
    out = self.final_block(y, labels)
  File "/home/oem/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/oem/pro_gan_pytorch/pro_gan_pytorch/CustomLayers.py", line 445, in forward
    labels = self.label_embedder(labels)  # [B x C]
  File "/home/oem/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/oem/.local/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 110, in forward
    self.norm_type, self.scale_grad_by_freq, self.sparse)
  File "/home/oem/.local/lib/python3.6/site-packages/torch/nn/functional.py", line 1110, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: torch/csrc/autograd/variable.cpp:166: get_grad_fn: Assertion output_nr_ == 0 failed.

What am I doing wrong?

Thanks.

About this issue

  • State: closed
  • Created 5 years ago
  • Comments: 16 (6 by maintainers)

Most upvoted comments

Well, the example code in its current form definitely leads to mode collapse pretty much straight away. When I removed max_norm=1 from nn.Embedding (which is also what BigGAN does) and changed the default loss to hinge loss, it seems to stay stable.
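To make the two changes concrete, here is a minimal sketch. The sizes (num_classes, embed_dim) and the helper names dis_hinge_loss / gen_hinge_loss are hypothetical, chosen for illustration; this is the standard BigGAN/SAGAN-style hinge loss, not the library's actual code:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    num_classes, embed_dim = 10, 128  # hypothetical sizes for CIFAR-10-like data

    # Change 1: drop max_norm=1 from the label embedder. The library's
    # default reportedly uses nn.Embedding(..., max_norm=1); omitting it
    # skips the in-place weight renormalization on every forward pass.
    label_embedder = nn.Embedding(num_classes, embed_dim)

    # Change 2: hinge loss instead of the default GAN loss.
    def dis_hinge_loss(real_out, fake_out):
        # Discriminator: push real scores above +1 and fake scores below -1.
        return F.relu(1.0 - real_out).mean() + F.relu(1.0 + fake_out).mean()

    def gen_hinge_loss(fake_out):
        # Generator: maximize the discriminator's score on fake samples.
        return -fake_out.mean()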

@bemoregt, your code looks correct to me. Please try the following to update to the latest pro-gan-pth package and the latest torch:

    pip install -U torch torchvision pro-gan-pth

Let me know if this solves the problem 👍

Hi @bemoregt, sorry for the late reply. This seems to be a PyTorch bug (link here). Could you please post the full code of your depth4.py file, and also try updating to the latest PyTorch version? Cheers 🍻! @akanimax
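For context, a minimal sketch (not from the thread) of how the suspected bug could be reproduced in isolation, assuming the culprit is the in-place max_norm renormalization inside nn.Embedding interacting badly with nn.DataParallel's module replication; the Toy module and all sizes here are hypothetical, and the exact affected PyTorch versions are not confirmed:

    import torch
    import torch.nn as nn

    class Toy(nn.Module):
        def __init__(self):
            super().__init__()
            # max_norm=1 mirrors the library's label embedder; it renormalizes
            # the embedding weight in place on every forward pass.
            self.emb = nn.Embedding(10, 4, max_norm=1)

        def forward(self, labels):
            return self.emb(labels)

    if torch.cuda.device_count() > 1:
        model = nn.DataParallel(Toy().cuda())
        labels = torch.randint(0, 10, (8,), device="cuda")
        out = model(labels)
        # On affected PyTorch versions this backward pass may fail with
        # "get_grad_fn: Assertion output_nr_ == 0 failed", as in the
        # traceback above; on fixed versions it runs cleanly.
        out.sum().backward()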