DexiNed: Curious: Why do _DenseLayer and _DenseBlock extend nn.Sequential instead of nn.Module?
Is there a benefit to extending nn.Sequential instead of, e.g.:
```python
import torch.nn as nn

class _DenseBlock(nn.Module):
    def __init__(self, num_layers, input_features, out_features):
        super(_DenseBlock, self).__init__()
        for i in range(num_layers):
            layer = nn.Sequential(  # conv/bn/relu/conv/bn stands in for _DenseLayer
                nn.Conv2d(input_features, out_features, 3, padding=1),
                nn.BatchNorm2d(out_features), nn.ReLU(inplace=True),
                nn.Conv2d(out_features, out_features, 3, padding=1),
                nn.BatchNorm2d(out_features))
            self.add_module('denselayer%d' % (i + 1), layer)
            input_features = out_features
```
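For comparison, here is a minimal sketch of the nn.Sequential-subclass alternative (the layer contents and channel sizes are illustrative, not DexiNed's actual settings). Subclassing nn.Sequential means forward() is inherited and simply chains the child modules, so no explicit forward is needed as long as the block is purely sequential:

```python
import torch
import torch.nn as nn

class _DenseBlockSeq(nn.Sequential):
    """Illustrative nn.Sequential-based variant (not DexiNed's actual code)."""
    def __init__(self, num_layers, input_features, out_features):
        super(_DenseBlockSeq, self).__init__()
        for i in range(num_layers):
            layer = nn.Sequential(
                nn.Conv2d(input_features, out_features, 3, padding=1),
                nn.BatchNorm2d(out_features), nn.ReLU(inplace=True),
                nn.Conv2d(out_features, out_features, 3, padding=1),
                nn.BatchNorm2d(out_features))
            self.add_module('denselayer%d' % (i + 1), layer)
            input_features = out_features

# forward() is inherited from nn.Sequential, so the block is directly callable:
y = _DenseBlockSeq(2, 64, 64)(torch.randn(1, 64, 32, 32))  # -> (1, 64, 32, 32)
```

One caveat: if a block has to do anything non-sequential with its inputs (e.g. fuse skip connections), it must override forward anyway, and then the plain nn.Module version is the natural fit.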
About this issue
- State: open
- Created 4 years ago
- Comments: 23 (23 by maintainers)
Thanks for the update, the fixes, and the training. I look forward to testing the final model on new images.
Hi, and good to know that. About the update: after checking the model I realized I had forgotten to change the settings of two conv layers, so I need to re-train and I am delayed 😦. As for DexiNed-TF2, I am training it right now; once the quantitative evaluation is done I will update the repo. Thanks for the model, by the way.
Cheers
Hi, good to have you back 😃 The paper is based on the DexiNed TensorFlow version. By the way, this is not a big deal: TF works with BxHxWxC tensors while PyTorch works with BxCxHxW, which is why we call img.transpose in the PyTorch version.
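As a rough, self-contained illustration of that layout difference (the array names and image size here are hypothetical, not taken from the DexiNed code):

```python
import numpy as np
import torch

# Example image in H x W x C order, as most image libraries load it.
img = np.random.rand(352, 352, 3).astype(np.float32)

# TensorFlow/Keras convs default to channels-last (B x H x W x C):
tf_batch = img[None, ...]                                             # (1, 352, 352, 3)

# PyTorch convs expect channels-first (B x C x H x W), hence the transpose:
torch_batch = torch.from_numpy(img.transpose(2, 0, 1)).unsqueeze(0)   # (1, 3, 352, 352)
```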
A PR will definitely be submitted with the Keras version when it’s complete.
Translating block_cat is giving me some trouble at the moment, but the obstacle doesn't seem insurmountable (torch.cat vs. tf.concat). Will do. Thanks!
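If it helps with the translation, a minimal sketch of the correspondence (shapes made up for illustration): torch.cat takes a dim argument, and channels sit at dim=1 in PyTorch's BxCxHxW layout, while tf.concat takes axis, and channels are the last axis in TF's BxHxWxC layout.

```python
import torch

# Two feature maps in PyTorch layout (B x C x H x W).
a = torch.randn(1, 16, 64, 64)
b = torch.randn(1, 16, 64, 64)

# torch.cat concatenates along `dim`; channels are dim=1 in B x C x H x W.
fused = torch.cat([a, b], dim=1)        # (1, 32, 64, 64)

# The TensorFlow counterpart uses `axis`; channels are the last axis in B x H x W x C:
#   fused = tf.concat([a, b], axis=-1)  # (1, 64, 64, 32)
```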
(Finishing up the Keras version now.)