DexiNed: Curious: Why do _DenseLayer and _DenseBlock extend nn.Sequential instead of nn.Module?

Is there a benefit to extending nn.Sequential instead of, e.g.:

class _DenseBlock(nn.Module):
    def __init__(self, num_layers, input_features, out_features):
        super(_DenseBlock, self).__init__()
        for i in range(num_layers):
            layer = _DenseLayer(input_features, out_features)  # the conv/bn/relu/conv/bn stack
            self.add_module('denselayer%d' % (i + 1), layer)
            input_features = out_features
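
For context, the one concrete difference I can see: subclassing nn.Sequential inherits a forward() that simply calls each child in registration order, whereas an nn.Module subclass (like the one above) would still need a hand-written forward. A minimal runnable sketch of the nn.Sequential route; the _DenseLayer body here is my guess at a conv/bn/relu/conv/bn stack, and _SequentialDenseBlock is my own name, not DexiNed's actual code:

import torch
import torch.nn as nn

# Hypothetical stand-in for DexiNed's _DenseLayer: a conv/bn/relu/conv/bn stack.
class _DenseLayer(nn.Sequential):
    def __init__(self, in_ch, out_ch):
        super(_DenseLayer, self).__init__(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
        )

# Subclassing nn.Sequential: forward() comes for free and runs the
# children in registration order, so only __init__ is needed.
class _SequentialDenseBlock(nn.Sequential):
    def __init__(self, num_layers, input_features, out_features):
        super(_SequentialDenseBlock, self).__init__()
        for i in range(num_layers):
            self.add_module('denselayer%d' % (i + 1),
                            _DenseLayer(input_features, out_features))
            input_features = out_features

x = torch.randn(1, 3, 32, 32)
print(_SequentialDenseBlock(2, 3, 16)(x).shape)  # torch.Size([1, 16, 32, 32])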

About this issue

  • Original URL
  • State: open
  • Created 4 years ago
  • Comments: 23 (23 by maintainers)

Most upvoted comments

Thanks for the update, the fixes, and the training. I look forward to testing the final model on new images.

I’ll be here for a bit. 😃

As long as this remains the best edge detection method, I’ll be here.

When the updated version of DexiNed is released, I’ll update the TF to match it.

Hi, and good to know. About the update: after checking the model I realized I forgot to change the settings of two conv layers, so I need to re-train and I am delayed 😦. As for DexiNed-TF2, I am training it right now; once the quantitative evaluation is done I will update the repo. Thanks for the model, by the way.

Cheers

I took a week away from this so I could return with a fresh perspective.

Does the paper mention why DexiNed includes img = img.transpose((2,0,1)) in testDataset.transform and BipedMyDataset.transform?

Hi, good to have you back 😃 The paper is based on the DexiNed TensorFlow version. By the way, this is not a big deal: TF works with BxHxWxC tensors and PyTorch with BxCxHxW, which is why the PyTorch version uses img.transpose.
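
In other words, the transpose just converts the image array from TF's channels-last layout to PyTorch's channels-first layout. A minimal illustration (the shape is made up for the example):

import numpy as np

img = np.zeros((480, 640, 3), dtype=np.float32)  # H x W x C, TF-style channels-last
img_chw = img.transpose((2, 0, 1))               # C x H x W, PyTorch-style channels-first
print(img_chw.shape)                             # (3, 480, 640)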

A PR will definitely be submitted with the Keras version when it’s complete.

Translating block_cat is giving me some trouble at the moment, but the obstacle doesn't seem insurmountable. (torch.cat vs. tf.concat)
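
For anyone following along, the mapping itself is straightforward; the wrinkle is that the channel axis moves with the layout. A quick sketch (shapes are illustrative, not DexiNed's actual block_cat tensors):

import torch

a = torch.randn(1, 8, 16, 16)          # B x C x H x W
b = torch.randn(1, 8, 16, 16)
block_cat = torch.cat([a, b], dim=1)   # concatenate on the channel axis
print(block_cat.shape)                 # torch.Size([1, 16, 16, 16])
# TF equivalent on B x H x W x C tensors: tf.concat([a, b], axis=-1)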

Will do. Thanks!

(Finishing up the Keras version now.)