Total3DUnderstanding: MGNet pretraining goes wrong
Hi Yinyu:
I tried to pretrain MGNet with `python main.py configs/mgnet.yaml --mode train` and test it with `python main.py configs/mgnet.yaml --mode test`.
However, after 50 epochs of training, the learning rate quickly dropped to a seemingly unreasonable 1e-08, and the best chamfer_loss has been stuck at 5.67 since the 6th epoch: log.txt
Also, the test results of the best checkpoint look like this: log.txt
Is there anything I missed?
Hi,
Boundary loss only applies to points on open boundaries, which only exist in the second stage (tmn_subnetworks=2). So it will be 0 if tmn_subnetworks=1. The first stage performs shape deformation and the second stage performs topology modification.
Edge loss is a regularization term that penalizes overly long edges. It will not change much during training.
Face loss classifies whether a point sampled on edges/faces should be removed.
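In case it helps to see the idea concretely, below is a minimal PyTorch sketch of an edge-length regularizer of the kind described above; the mean-squared-length formulation and tensor shapes are assumptions for illustration, not necessarily what this repo implements.

```python
import torch

def edge_length_regularizer(vertices, edges):
    """Penalize overly long mesh edges (assumed formulation, for illustration).

    vertices: (N, 3) float tensor of vertex positions.
    edges:    (E, 2) long tensor of vertex-index pairs.
    Returns the mean squared edge length, which grows quickly for long edges.
    """
    v0 = vertices[edges[:, 0]]  # (E, 3) edge start points
    v1 = vertices[edges[:, 1]]  # (E, 3) edge end points
    return ((v1 - v0) ** 2).sum(dim=1).mean()

# Toy usage on a single triangle
verts = torch.tensor([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
edges = torch.tensor([[0, 1], [1, 2], [2, 0]])
loss = edge_length_regularizer(verts, edges)  # stays small and fairly flat during training
```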
We will update our README with more details after our deadline ends. Here is our training strategy; you can also follow the strategy in this work:
We first set `tmn_subnetworks=1` and turn off the edge classifier by setting `with_edge_classifier=False` in config.yaml for training (this is equivalent to AtlasNet). After it converges, set `with_edge_classifier=True` to train the edge classifier. These are the modules of the first stage.
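If it helps, here is a rough sketch of how one might script the two first-stage runs by deriving variant configs from configs/mgnet.yaml; the flat key names below are a guess, and the real config may nest `tmn_subnetworks` and `with_edge_classifier` under a sub-section.

```python
import copy
import yaml

with open('configs/mgnet.yaml') as f:
    base_cfg = yaml.safe_load(f)

# Stage 1a: pure shape deformation (AtlasNet-like), edge classifier off.
stage1a = copy.deepcopy(base_cfg)
stage1a['tmn_subnetworks'] = 1          # NOTE: key locations are assumed; adjust
stage1a['with_edge_classifier'] = False  # to where they live in the real config
with open('configs/mgnet_stage1a.yaml', 'w') as f:
    yaml.safe_dump(stage1a, f)

# Stage 1b: same decoder, now also training the edge classifier.
stage1b = copy.deepcopy(base_cfg)
stage1b['tmn_subnetworks'] = 1
stage1b['with_edge_classifier'] = True
with open('configs/mgnet_stage1b.yaml', 'w') as f:
    yaml.safe_dump(stage1b, f)

# Then run, e.g.:
#   python main.py configs/mgnet_stage1a.yaml --mode train
#   python main.py configs/mgnet_stage1b.yaml --mode train   (resuming from the stage-1a checkpoint)
```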
After that, we fix the above modules and train the second-stage decoder using this function. You can add the line
`self.mesh_reconstruction.module.freeze_by_stage(2, ['decoder'])` at this place, and remember to turn on `with_edge_classifier=True` and `tmn_subnetworks=2`.
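For intuition, freezing the first-stage modules usually amounts to disabling their gradients so the optimizer leaves them untouched. Below is a minimal sketch of such a helper; it is not the repo's actual freeze_by_stage implementation, and the module names are hypothetical.

```python
import torch.nn as nn

def freeze_modules(model: nn.Module, module_names):
    """Disable gradients for the named child modules so only the remaining
    (second-stage) parameters are updated during training.

    module_names: e.g. ['decoder'] -- hypothetical names; the actual
    sub-module names inside MGNet may differ.
    """
    for name in module_names:
        sub = getattr(model, name, None)
        if sub is None:
            continue
        for param in sub.parameters():
            param.requires_grad = False
        sub.eval()  # also freeze e.g. batch-norm running statistics

# Hypothetical usage, mirroring the call mentioned above:
# freeze_modules(self.mesh_reconstruction.module, ['decoder'])
```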