Deep3DFaceRecon_pytorch: high reconstruction error on NoW challenge

Thanks for the work, it is very helpful, but I ran into a problem when testing on the NoW challenge. I tested the pretrained model you supply on the NoW challenge and got a much higher error on the validation part.

(deep3d_pytroch_official_nexp - median: 1.286424, mean: 1.864963, std: 2.361429)

  • I use MTCNN to detect 5 landmarks for every face in the NoW challenge.

  • Align the face to the template mesh as done in test.py:

from PIL import Image
import numpy as np
import torch
from util.preprocess import align_img

def read_data(im_path, lm_path, lm3d_std, to_tensor=True):
    # load image as RGB
    im = Image.open(im_path).convert('RGB')
    W, H = im.size
    lm = np.loadtxt(lm_path).astype(np.float32)
    lm = lm.reshape([-1, 2])
    # flip y: landmark files use a top-left origin, align_img expects bottom-left
    lm[:, -1] = H - 1 - lm[:, -1]
    _, im, lm, _ = align_img(im, lm, lm3d_std)
    if to_tensor:
        im = torch.tensor(np.array(im)/255., dtype=torch.float32).permute(2, 0, 1).unsqueeze(0)
        lm = torch.tensor(lm).unsqueeze(0)
    return im, lm
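The y-flip inside read_data is easy to get wrong: MTCNN returns landmarks in image coordinates (origin at the top-left corner, y increasing downwards), while align_img expects a bottom-left origin with y increasing upwards. A minimal numpy-only sketch of just that conversion, using made-up landmark values rather than real detections:

```python
import numpy as np

# Hypothetical 5-point MTCNN landmarks in (x, y) image coordinates,
# origin at the top-left corner, y increasing downwards.
H = 256  # image height in pixels
lm = np.array([
    [90.0,  110.0],   # left eye
    [165.0, 112.0],   # right eye
    [128.0, 150.0],   # nose tip
    [100.0, 190.0],   # left mouth corner
    [158.0, 192.0],   # right mouth corner
], dtype=np.float32)

# Flip the y axis so the origin moves to the bottom-left corner
# (y increasing upwards), the convention align_img expects.
lm_flipped = lm.copy()
lm_flipped[:, 1] = H - 1 - lm_flipped[:, 1]

print(lm_flipped[0])  # left eye -> [ 90. 145.]
```

Applying the same flip twice recovers the original coordinates, which is a quick sanity check that the convention is being applied exactly once.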
  • Also, I set exp and angle to 0 to produce a neutral face mesh for every picture.
    coef_dict = self.split_coeff(coeffs)
    ## kj add: zero expression and pose to get a neutral, front-facing mesh
    coef_dict['exp'][:] = 0.
    coef_dict['angle'][:] = 0.
    face_shape = self.compute_shape(coef_dict['id'], coef_dict['exp'])
    rotation = self.compute_rotation(coef_dict['angle'])
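The neutralization step can be checked in isolation. Deep3DFaceRecon_pytorch predicts a 257-dim coefficient vector that split_coeff slices into id (80), exp (64), tex (80), angle (3), gamma (27), and trans (3). A standalone numpy sketch with toy random coefficients in place of a real network prediction:

```python
import numpy as np

def split_coeff(coeffs):
    # Layout of the 257-dim coefficient vector used by Deep3DFaceRecon_pytorch:
    # id (80) | exp (64) | tex (80) | angle (3) | gamma (27) | trans (3)
    return {
        'id':    coeffs[:, :80],
        'exp':   coeffs[:, 80:144],
        'tex':   coeffs[:, 144:224],
        'angle': coeffs[:, 224:227],
        'gamma': coeffs[:, 227:254],
        'trans': coeffs[:, 254:],
    }

coeffs = np.random.randn(1, 257).astype(np.float32)  # toy prediction
coef_dict = split_coeff(coeffs)

# Neutralize expression and pose so only identity shapes the mesh.
coef_dict['exp'][:] = 0.
coef_dict['angle'][:] = 0.
```

Note that the slices are views into coeffs, so the in-place `[:] = 0.` also zeroes the corresponding entries of the flat vector; this matters if the flat vector is reused downstream.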
  • As for the 7 3D landmarks mentioned in the NoW challenge, I selected points like this:
    recon_shape = self.pred_vertex  # reconstructed shape in camera space
    # recon_shape[..., -1] = 10 - recon_shape[..., -1]  # camera space -> world space;
    # the mesh-saving code has already applied this transform
    recon_shape = recon_shape.cpu().numpy()[0]
    lm_3d_indx = self.facemodel.keypoints.cpu().numpy()[[36, 39, 42, 45, 33, 48, 54]]
    lm_3d = recon_shape[lm_3d_indx, :]
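The indices [36, 39, 42, 45, 33, 48, 54] follow the standard 68-point landmark convention: eye corners (36/39 left, 42/45 right), a point on the nose (33), and the mouth corners (48/54). A small numpy sketch of that two-level indexing with dummy data, where the random keypoints table stands in for self.facemodel.keypoints:

```python
import numpy as np

# Dummy reconstructed mesh: N vertices with (x, y, z) coordinates.
n_vertices = 35709  # vertex count of the BFM front region in Deep3DFaceRecon_pytorch
recon_shape = np.random.rand(n_vertices, 3).astype(np.float32)

# Dummy 68-point keypoint table: for each of the 68 landmarks, the index
# of the corresponding mesh vertex (random here; the real table comes
# from self.facemodel.keypoints).
keypoints = np.random.randint(0, n_vertices, size=68)

# The 7 NoW evaluation landmarks in the 68-point convention:
# 36/39 left eye corners, 42/45 right eye corners,
# 33 nose, 48/54 mouth corners.
now_ids = [36, 39, 42, 45, 33, 48, 54]
lm_3d_indx = keypoints[now_ids]       # mesh-vertex indices of those 7 points
lm_3d = recon_shape[lm_3d_indx, :]    # their 3D coordinates, shape (7, 3)
```

The key point is the indirection: the 68-point indices select rows of the keypoint table, which in turn selects vertices of the mesh; indexing recon_shape directly with [36, 39, ...] would pick arbitrary vertices instead of landmarks.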

I hope you could point out which step is wrong or what I missed; I would really appreciate it. It would be great if you could provide the evaluation code for the NoW challenge.

About this issue

  • Original URL
  • State: open
  • Created 2 years ago
  • Reactions: 5
  • Comments: 15

Most upvoted comments

Hi, for NoW challenge, we use the full head region of BFM (which contains ears and neck) instead of the cropped face region. This is a key factor for reaching lower reconstruction error.

  • Thanks for your reply, I got a much lower error on the validation part of the NoW challenge using the full head region of BFM. (deep3d_pytorch_official_full_region - median: 1.093577, mean: 1.372326, std: 1.158287). (deca_nexp - median: 1.174798, mean: 1.459256, std: 1.244139).

  • It’s better than DECA on the validation part but worse than DECA on the test part of the NoW challenge. Is there any reasonable explanation for this? Is it caused by some attribute of the test dataset?

  • It seems that the NoW challenge cannot comprehensively evaluate the quality of face reconstruction. What evaluation methods do you think are most important for face reconstruction?

Did you test the pretrained model on the validation part, or did you train a new model? @1180800817