mne-python: SNIRF loader not working without the optional sourcePos3D
Describe the bug
The function read_raw_snirf fails if the optional field sourcePos3D is missing from the SNIRF file.
Steps to reproduce
Given a nback.snirf SNIRF file with only sourcePos2D and no sourcePos3D:
from mne.io import read_raw_snirf
snirf_intensity = read_raw_snirf('nback.snirf')
Expected results
Loading nback.snirf
Actual results
Loading nback.snirf
Traceback (most recent call last):
File "...\snirf.py", line 3, in <module>
snirf_intensity = read_raw_snirf('nback.snirf')
File "...\mne-python\mne\io\snirf\_snirf.py", line 43, in read_raw_snirf
return RawSNIRF(fname, preload, verbose)
File "<decorator-gen-248>", line 24, in __init__
File "...\mne-python\mne\io\snirf\_snirf.py", line 119, in __init__
assert len(sources) == srcPos3D.shape[0]
IndexError: tuple index out of range
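The IndexError suggests that when sourcePos3D is absent, the loader ends up holding a 0-dimensional array, so `shape[0]` has no axis to index. A minimal sketch of that failure mode (the exact internals of _snirf.py are an assumption here; only the NumPy behaviour is shown):

```python
import numpy as np

# When an optional HDF5 dataset is missing, code that wraps the result
# of a lookup in np.array() can end up wrapping None, which produces a
# 0-dimensional object array whose shape is the empty tuple ().
srcPos3D = np.array(None, dtype=object)
print(srcPos3D.shape)  # () -- no axes at all

try:
    srcPos3D.shape[0]  # same failure mode as the traceback above
except IndexError as err:
    print("IndexError:", err)  # tuple index out of range
```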
Additional information
This may be complex to solve as the beer_lambert_law function requires the 3D positions of optodes.
About this issue
- Original URL
- State: closed
- Created 3 years ago
- Comments: 26 (21 by maintainers)
@Horschig @HanBnrd @larsoner
To ensure we support Artinis-created SNIRF files, we would need some test data in the testing repository. We would like a short (10-15 s) recording exactly as created by your software, not edited afterwards (e.g. cropped, as that might not be faithful to how your software creates the file).
We also try to keep a record of the settings used to take the measurement so that we can include additional tests in our code; it also helps us understand what is going on when someone else brings a file that doesn’t behave well.
If the data has been converted after collection, as I think happened here, please include a copy of the script used to convert the data, so that we can reproduce the steps if required.
Some examples of PRs for adding files…
https://github.com/mne-tools/mne-testing-data/pull/51
https://github.com/mne-tools/mne-testing-data/pull/70
https://github.com/mne-tools/mne-testing-data/pull/57
It would be great to get an Artinis file in the test battery so we could claim compatibility. I look forward to a PR.
In MNE in general we assume that 3D locations are known, so I expect only having 2D positions to create all sorts of issues down the line. For example, when people follow our fNIRS tutorials, they’ll do a plot_alignment with their own data to sanity-check things in 3D, and they’ll see a flat disc of electrodes over the head (or worse, and more likely, through it) and be rightly confused. The 2D plotting functions expect 3D locations when they do their own flattening. Any future physical modeling will also be wrong. There are probably other cases, too.

All of these problems can be solved if we can transform the electrode locations from 2D to 3D. This is what we do with template EEG positions that only give azimuth and elevation. I’d prefer to figure out a suitable way to do this in the least painful way for fNIRS electrodes if possible, rather than change our code to accommodate 2D positions.
@kdarti what is the transformation that takes the 3D locations and transforms them to 2D? Ideally I think we would just invert this to get 3D locations back.
If this transformation is unknown and all that matters is the distance, there is probably some 2D-plane-to-3D-sphere (or sphere-ish) transform that preserves distances as much as possible, and we should just use that. If we’re willing to accept that the 2D approximation of the 3D positions is acceptably “close enough” for the MBLL, then presumably we can find a 2D-to-3D transformation that is similarly “close enough”.
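One candidate for such a transform (purely illustrative, not anything MNE currently ships) is the inverse azimuthal equidistant projection, which maps a flat 2D layout onto a sphere while exactly preserving distances measured from the centre point. A sketch, where the head radius is a made-up parameter:

```python
import numpy as np

def plane_to_sphere(xy, radius=0.095):
    """Inverse azimuthal equidistant projection: 2D layout -> sphere.

    Planar distance from the origin becomes arc length on a sphere of
    the given radius (0.095 m is a rough adult head radius, assumed).
    """
    xy = np.atleast_2d(xy).astype(float)
    r = np.linalg.norm(xy, axis=1)        # planar distance from centre
    theta = r / radius                    # arc length -> polar angle
    with np.errstate(invalid="ignore", divide="ignore"):
        unit = np.where(r[:, None] > 0, xy / r[:, None], 0.0)
    x = radius * np.sin(theta)[:, None] * unit   # (n, 2) horizontal part
    z = radius * np.cos(theta)                   # (n,) vertical part
    return np.column_stack([x, z])

pts3d = plane_to_sphere([[0.0, 0.0], [0.03, 0.0], [0.0, 0.05]])
# Every mapped point lies on the sphere of radius 0.095.
print(np.linalg.norm(pts3d, axis=1))
```

Distances between arbitrary point pairs are only approximately preserved, but for compact optode layouts the error stays small, which is the “close enough” trade-off described above.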
Hi, I’m Kristoffer, I work for Artinis, and I felt that I should comment a little on this issue.
I agree that it would be nice to always have 3D positions, but I don’t see the issue with using 2D positions. MNE just takes the Euclidean distance between optodes:
https://github.com/mne-tools/mne-python/blob/df6115ead7474c0374eda5958266fd02c9b5b1e0/mne/preprocessing/nirs/nirs.py#L30
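For reference, the distance computation is essentially the norm of the difference between source and detector positions, which works the same way whether or not a third coordinate is present. A minimal sketch (a simplification, not MNE’s actual code):

```python
import numpy as np

def source_detector_distance(src_pos, det_pos):
    """Euclidean distance between a source and a detector optode."""
    return np.linalg.norm(np.asarray(src_pos) - np.asarray(det_pos))

# Distances are identical for 2D positions and for 3D positions padded
# with a zero z coordinate (toy positions in metres).
d2 = source_detector_distance([0.00, 0.00], [0.03, 0.00])
d3 = source_detector_distance([0.00, 0.00, 0.0], [0.03, 0.00, 0.0])
print(d2, d3)  # both 0.03
```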
We have a MATLAB toolbox for converting our proprietary format to .nirs, .snirf, etc. The SNIRF implementation was written by Jeonghoon Choi & Jay Dubb from David Boas’s lab. They wrote it to output only 2D coordinates, and we assumed that was fine since the SNIRF specification has it as an optional field (as already mentioned by Johann).
We now have some customers trying to use MNE by converting their data to .snirf, so I had to help them work around this issue. What I did was simply insert “fake” 3D coordinates by setting the Z coordinate to 0 for all optodes. MNE can then compute the distances and run the MBLL conversion without issues. I verified that I get exactly the same conc values in our software and in MNE.
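The workaround described above can be sketched as padding the 2D position array with a zero z column before handing it to anything that expects 3D positions (the toy coordinates here are assumptions; only NumPy is shown, not the SNIRF file editing itself):

```python
import numpy as np

# sourcePos2D as stored in the SNIRF file: one (x, y) row per optode.
src_pos_2d = np.array([[0.00, 0.00],
                       [0.03, 0.00],
                       [0.00, 0.04]])

# "Fake" 3D positions: same x/y, z fixed to 0 for every optode.
src_pos_3d = np.hstack([src_pos_2d, np.zeros((src_pos_2d.shape[0], 1))])

# Source-detector separations are unchanged by the padding, so the
# modified Beer-Lambert law sees exactly the same distances.
d2 = np.linalg.norm(src_pos_2d[1] - src_pos_2d[0])
d3 = np.linalg.norm(src_pos_3d[1] - src_pos_3d[0])
print(d2, d3)
```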
I don’t see why 2D positions can’t be used by MNE for determining the distances between optodes. It would make most sense to check for sourcePos3D first: if 3D coordinates are available, use those; if not, look for 2D positions.
For the case of 2D positions, you could either do the fake Z coordinate thing, or you’d have to rewrite https://github.com/mne-tools/mne-python/blob/df6115ead7474c0374eda5958266fd02c9b5b1e0/mne/preprocessing/nirs/nirs.py#L15
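The fallback proposed above could look roughly like this. A plain dict stands in for the HDF5 probe group (in the real loader this would be an h5py group), and the key names follow the SNIRF probe spec:

```python
import numpy as np

def read_optode_positions(probe):
    """Prefer sourcePos3D; fall back to sourcePos2D padded with z = 0.

    `probe` is any mapping with SNIRF-style keys; a dict is used here
    for illustration in place of an h5py group.
    """
    if "sourcePos3D" in probe:
        return np.asarray(probe["sourcePos3D"], dtype=float)
    if "sourcePos2D" in probe:
        pos2d = np.asarray(probe["sourcePos2D"], dtype=float)
        zeros = np.zeros((pos2d.shape[0], 1))  # fake z coordinate
        return np.hstack([pos2d, zeros])
    raise ValueError("SNIRF probe has neither sourcePos3D nor sourcePos2D")

probe_2d_only = {"sourcePos2D": [[0.0, 0.0], [0.03, 0.0]]}
positions = read_optode_positions(probe_2d_only)
print(positions.shape)  # (2, 3)
```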
I know nothing about MNE though, so I can’t contribute unfortunately.
As a side note, I just checked how the Homer3 .nirs-to-SNIRF converter handles .nirs files with only 2D coordinates. It seems to do the same thing as I did, placing all the optodes on the same Z plane, so that might be the way to go in MNE in cases where a SNIRF file doesn’t contain 3D coordinates:
https://github.com/BUNPC/Homer3/blob/b0eeb2274494c15d46fe6b95d1ea597c5e0177c0/DataTree/AcquiredData/Snirf/ProbeClass.m#L170
The only problem I see with using “fake” 3D coordinates is that the user will have to know whether the 3D positions are real or not, but once you visually inspect the positions it will be obvious whether the 3D coordinates come from a real measurement on a head.
In that case I will move to M/EEG research 😆