neuroglancer: Automatic mesh generator: can't figure out how to run neuroglancer
I have tried a few examples so far and couldn't make them work in my case:
- Grayscale raw image:
  `raw = np.fromfile('raw.bin', dtype=np.uint16).reshape([1200, 880, 930])`
- Segmented companion:
  `segs = np.fromfile('segs.bin', dtype=np.uint32).reshape(raw.shape)`
But I don't know where to start in order to make neuroglancer work in my case.
Could anyone please help me?
Thanks in Advance, Anar.
About this issue
- Original URL
- State: closed
- Created 6 years ago
- Comments: 32 (1 by maintainers)
Regarding nyroglancer, I think it unfortunately does not support automatic meshing.
The error you list at the top (`__init__ takes 3 positional arguments`) is a recent breakage in sockjs-tornado caused by the release of Tornado 5.0 just a few days ago: https://github.com/mrjoes/sockjs-tornado/issues/113
To fix that, you could either downgrade Tornado to 4.5.3 via `pip install tornado==4.5.3`,
or install this fixed version of sockjs-tornado from GitHub:
https://github.com/mathben/sockjs-tornado/tree/fix_tornado_5.0_%23113
You can do that with this command: `pip install 'git+git://github.com/mathben/sockjs-tornado@212ba27' --upgrade`
Use the Slices checkbox or press s.
On Thu, May 17, 2018, 08:34 Anar Z. Yusifov notifications@github.com wrote:
The format is identical to the precomputed mesh format, documented here:
https://github.com/google/neuroglancer/tree/master/src/neuroglancer/datasource/precomputed#mesh-representation-of-segmented-object-surfaces
If you write the output to a file and then create the appropriate manifest JSON file for each object, you can view it as a precomputed mesh source.
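As a concrete illustration, here is a minimal sketch of writing such a manifest. It assumes the legacy precomputed mesh layout, where the manifest for segment N is a file named `N:0` listing the fragment files for that segment; the directory name `mesh/` and the fragment filename are hypothetical choices for this example:

```python
import json
import os

def write_mesh_manifest(mesh_dir, segment_id, fragment_names):
    """Write the manifest JSON listing the mesh fragment files for one
    segment, following the legacy precomputed mesh format."""
    os.makedirs(mesh_dir, exist_ok=True)
    # The manifest for segment N is named "N:0"; the fragment files it
    # references live in the same directory.
    manifest_path = os.path.join(mesh_dir, '%d:0' % segment_id)
    with open(manifest_path, 'w') as f:
        json.dump({'fragments': fragment_names}, f)
    return manifest_path

# Hypothetical example: segment 42 whose mesh is stored in one fragment
# file named "42" alongside the manifest.
write_mesh_manifest('mesh', 42, ['42'])
```

The mesh fragment data itself (vertex and triangle buffers) would be written separately in the binary format described at the link above.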
Regarding the mesh generation, the fact that you were displaying both the raw data and your segmentation as segmentations may have affected things. In general, though, the mesh generation is unfortunately slow. There are two steps: an initial marching-cubes step that runs over the full volume using multiple threads, triggered the first time you request any mesh, and then, for each individual segment, a simplification step that runs on a single thread the first time you request that segment.
The python integration doesn’t support a way to precompute the meshes, and is only practical for small volumes. For larger volumes you can convert the data to the precomputed format.
https://github.com/google/neuroglancer/blob/master/src/neuroglancer/datasource/precomputed
There are some third-party scripts to help you generate that format; see e.g. https://github.com/FZJ-INM1-BDA/neuroglancer-scripts
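For orientation, a sketch of the kind of top-level `info` file such a conversion produces for the segmentation volume in the question. The resolution, chunk size, and scale key below are assumptions made up for this example, and note that the precomputed `size` field is in `[x, y, z]` order while the numpy array was reshaped as `[z, y, x]`:

```python
import json

# Hypothetical metadata for the 1200 x 880 x 930 uint32 segmentation above.
# Resolution (nm per voxel) and chunk size are assumptions for illustration.
info = {
    "type": "segmentation",
    "data_type": "uint32",
    "num_channels": 1,
    "scales": [{
        "key": "8_8_8",
        "size": [930, 880, 1200],      # [x, y, z]
        "resolution": [8, 8, 8],
        "voxel_offset": [0, 0, 0],
        "chunk_sizes": [[64, 64, 64]],
        "encoding": "raw",
    }],
    # Relative subdirectory holding the per-segment mesh files.
    "mesh": "mesh",
}

with open("info", "w") as f:
    json.dump(info, f)
```

The chunked voxel data would then be written under the scale key directory (`8_8_8/` here), one file per chunk.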
On Sat, Mar 17, 2018, 13:41 Jeremy Maitin-Shepard jbms@google.com wrote: