ANTs: antsAI unexpected behavior

Describe the problem

Greetings! I’m trying to use ANTs for pairwise registration. My objects are 3D NMR images of plant seeds, which can appear in very different orientations, and I understand that antsRegistration is not designed to cope with drastic affine differences on its own. So I tried antsAI to produce a rough initial affine transformation, but it doesn’t seem to work as expected. I created a test pair in which the second object is rotated 60 degrees around one of the 3D axes. antsRegistration manages to find a rigid transformation (for larger rotations it fails), but antsAI gives a weird transformation: when I apply it with antsApplyTransforms, the rotation is wrong (the result is fine with the matrix from antsRegistration when the rotation is small). I am not sure how to debug this or what I am missing; please advise. I spent a lot of time combing through the documentation, but I am still a beginner at comprehending complex registration techniques and terms, so I tried to follow the existing examples scattered across discussions and manuals.

To Reproduce

There are two files in the attachment nmr.zip: the source file and the rotated file (I used monai.transforms.Rotate in Python to do the rotation). The two commands used to produce the transformation matrices:

antsAI --dimensionality 3 \
        --output Rigid_antsAI.mat \
        --transform Rigid[0.1] \
        --metric MI[seed_zero.nii.gz,seed_60.nii.gz,32,Regular,0.25] \
        --convergence 1000 \
        --search-factor 90 \
        --verbose
antsRegistration --dimensionality 3 \
        --output [antsReg,antsRegWarped.nii.gz] \
        --interpolation Linear \
        --winsorize-image-intensities [0.005,0.995] \
        --initial-moving-transform [seed_zero.nii.gz,seed_60.nii.gz,1] \
        --transform Rigid[0.1] \
        --metric MI[seed_zero.nii.gz,seed_60.nii.gz,1,32,Regular,0.25] \
        --convergence 1000x500x250x100 \
        --shrink-factors 8x4x2x1 \
        --smoothing-sigmas 3x2x1x0

The resulting matrix Rigid_antsAI.mat seems to be wrong, while antsReg0GenericAffine.mat looks fine, and the two are quite different according to antsTransformInfo. When I apply the transforms, only the object registered with antsReg0GenericAffine.mat rotates back correctly:

antsApplyTransforms -d 3 -r seed_zero.nii.gz -t antsReg0GenericAffine.mat -i seed_60.nii.gz -o seed_60_to_zero.nii.gz
antsApplyTransforms -d 3 -r seed_zero.nii.gz -t Rigid_antsAI.mat -i seed_60.nii.gz -o seed_60_to_zero_antsai.nii.gz

System information (please complete the following information)

  • OS: CentOS Linux 7 (Core)
  • Type of system: HPC cluster

Most upvoted comments

I’m glad it worked.

You can open new issues if there are problems with the tools specifically, or start discussions for more general topics.

But is it possible to decompose the Affine transform to extract only the rigid part (translation + rotation)?

Yes, see my implementation (with references) used for another application:

https://github.com/CoBrALab/optimized_antsMultivariateTemplateConstruction/blob/master/average_transform.py

My understanding is that extracting the rotation from a general affine matrix is complicated. If there’s no shear component it gets easier, because the SVD can then separate the rotation and scaling into its component matrices.
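For illustration, here is a minimal numpy sketch of that SVD (polar decomposition) idea. The linked average_transform.py is the real reference; the rigid_part helper and the toy matrices below are made up for this example, and loading A and t from an ANTs .mat file (e.g. with ANTsPy’s ants.read_transform) is left out:

import numpy as np

def rigid_part(A, t):
    # Polar decomposition via SVD: A = U @ diag(s) @ Vt, and R = U @ Vt is
    # the rotation closest to A in the Frobenius-norm sense.
    U, _, Vt = np.linalg.svd(A)
    R = U @ Vt
    if np.linalg.det(R) < 0:   # guard against returning a reflection
        U[:, -1] *= -1
        R = U @ Vt
    return R, t                # rigid part: rotation R plus translation t

# Toy check: a 60-degree rotation about z composed with anisotropic scaling.
theta = np.deg2rad(60)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
A = Rz @ np.diag([1.2, 0.9, 1.0])   # rotation * scale, no shear
R, t = rigid_part(A, np.zeros(3))
print(np.allclose(R, Rz))           # True: the rotation is recovered

With shear present, U @ Vt is still a well-defined orthogonal polar factor, but as noted above, interpreting it as “the” rotation is less clear-cut.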

The ANTs template scripts are not designed to handle large rotations between the input images. Such variations, if they exist, would need to be dealt with in preprocessing.

Even if the template could be constructed from the space of randomly oriented images, knowing the average rotation of an image with respect to some coordinate frame doesn’t seem that useful. You’d still need to test a wide range of initializations to do the pairwise registrations.

I don’t think this format lends itself to answering this type of question with sufficient clarity, especially since it involves ongoing research. I would recommend looking at the various review articles that have been written on the topic. If I were forced to answer, I would respond briefly with:

Why are they developed?

Because of the significant potential for speed-up and/or accuracy.

What specific tasks do they solve that classic algorithms like ANTs can’t?

Off the top of my head? Not many. However, this case where there are significant angular differences is a definite possibility.

antsAI is the newer version. I don’t remember all the details, but it has features that the older code does not, like searching translations as well as rotations.

Exactly. It was originally an attempt to make the interface a bit nicer, in part, by using the ants command line options.

With downsampled images, how does it work?

The transforms are defined in physical space. The voxel-to-physical-space transform of the input image is stored in the image header. So downsampling changes how a voxel index is mapped to physical space, but the same point in physical space will be transformed the same way. Example:

antsApplyTransforms -d 3 -i image.nii.gz -t transform.mat -r reference.nii.gz -o deformed.nii.gz
ResampleImageBySpacing 3 image.nii.gz downsample.nii.gz 2 2 2 0
antsApplyTransforms -d 3 -i downsample.nii.gz -t transform.mat -r reference.nii.gz -o deformedDownsample.nii.gz

downsample.nii.gz should overlap in physical space with image.nii.gz, and so should the two deformed images.

So you can use your downsampled images in antsAI for speed. But if it’s running acceptably fast, you can skip this.

You can also smooth the images to improve robustness, either before downsampling or as a standalone step.
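For example, a minimal sketch of that preprocessing (the sigma and the 2 mm output spacing here are illustrative guesses, not recommendations, so adjust to your data):

SmoothImage 3 seed_zero.nii.gz 1 seed_zero_smooth.nii.gz
SmoothImage 3 seed_60.nii.gz 1 seed_60_smooth.nii.gz
ResampleImageBySpacing 3 seed_zero_smooth.nii.gz seed_zero_small.nii.gz 2 2 2 0
ResampleImageBySpacing 3 seed_60_smooth.nii.gz seed_60_small.nii.gz 2 2 2 0

You would then pass the *_small images to antsAI and keep using the originals for antsRegistration and antsApplyTransforms.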

I also wonder what is the difference between antsAI and antsAffineInitializer?

antsAI is the newer version. I don’t remember all the details, but it has features that the older code does not, like searching translations as well as rotations.

Agreed, -s 90 will search rotations of -180, -90, 0, and +90 only. Reduce this to -s 20.

I usually use very few iterations and a lot of starting points; try -c 10. You can then refine the result with antsRegistration.

We normally downsample brain images before doing this, but given that this data is not very large and less detailed than a brain, I think it’s fine to run at full resolution. You could maybe downsample both by a factor of 2 with ResampleImageBySpacing to speed things up if needed. You would still do the actual registration with the original images.

You can also try -t Affine[0.1] in antsAI, unless you only want a rigid solution. This sometimes goes wrong but often finds a better solution, because it can account for scale. Similarity is another option (it allows global scaling only); I’ve not found this helps for brains, but it might work for you.
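Putting those suggestions together, the original antsAI call might become something like this (an untested sketch; the values come straight from the advice above, and you could swap Rigid for Affine or Similarity to experiment):

antsAI --dimensionality 3 \
        --output Rigid_antsAI.mat \
        --transform Rigid[0.1] \
        --metric MI[seed_zero.nii.gz,seed_60.nii.gz,32,Regular,0.25] \
        --convergence 10 \
        --search-factor 20 \
        --verbose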

In your antsRegistration command, you probably don’t want to go down to a shrink factor of 8, or use so much smoothing. Internally, it won’t downsample by that much anyway, because there would be hardly any voxels left.

A few comments as to what might be causing issues:

  1. You’re using the full-resolution images for antsAI. In the only place I’ve seen it used, the images are downsampled before use: https://github.com/ANTsX/ANTs/blob/dabf36fcd6094792042165f8038d9463cda5b47c/Scripts/antsBrainExtraction.sh#L486-L496

  2. Your search-factor is way too large (90)

From the docs:

     -s, --search-factor searchFactor
                         [searchFactor=20,<arcFraction=1.0>]
          Incremental search factor (in degrees) which will sample the arc fraction around 
          the principal axis or default axis. 

You’re only sampling every 90 degrees of rotation around the three axes; this is way too coarse.

Your convergence -c 1000 is probably way too large as well; spend more time on more starting points with a smaller search factor, and then feed the best one into a proper antsRegistration run as the initialization.
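Concretely, the refinement step might look like this (an untested sketch based on your own command: the key change is passing the antsAI output to --initial-moving-transform instead of the [fixed,moving,1] initializer, and the gentler 4x2x1 schedule follows the shrink-factor advice earlier in the thread):

antsRegistration --dimensionality 3 \
        --output [antsReg,antsRegWarped.nii.gz] \
        --interpolation Linear \
        --winsorize-image-intensities [0.005,0.995] \
        --initial-moving-transform Rigid_antsAI.mat \
        --transform Rigid[0.1] \
        --metric MI[seed_zero.nii.gz,seed_60.nii.gz,1,32,Regular,0.25] \
        --convergence 500x250x100 \
        --shrink-factors 4x2x1 \
        --smoothing-sigmas 2x1x0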