neuroglancer: Automatic mesh generator: can't figure out how to run neuroglancer

I have tried a few examples so far and couldn’t make them work in my case:

  1. Grayscale raw image:
raw = np.fromfile('raw.bin', dtype=np.uint16).reshape([1200, 880, 930])
  2. Segmented companion:
segs = np.fromfile('segs.bin', dtype=np.uint32).reshape(raw.shape)
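For experimenting before loading the full 1200×880×930 volumes, a small synthetic raw/segmentation pair with the same dtypes can be sketched like this (the sizes, seed, and filenames here are arbitrary stand-ins, not from the original data):

```python
import numpy as np

# Small synthetic stand-ins with the same dtypes as the real data.
shape = (64, 64, 64)
raw = (np.random.default_rng(0)
       .integers(0, 2**16, size=shape)
       .astype(np.uint16))

# A toy segmentation: label a solid cube in the middle as segment 1.
segs = np.zeros(shape, dtype=np.uint32)
segs[16:48, 16:48, 16:48] = 1

# For the real files, np.memmap avoids reading ~2 GB into RAM at once:
# raw = np.memmap('raw.bin', dtype=np.uint16, mode='r',
#                 shape=(1200, 880, 930))

print(raw.dtype, segs.dtype, np.unique(segs))
```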

But I don’t know where to start in order to make neuroglancer work in my case.

Could anyone please help me?

Thanks in Advance, Anar.

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Comments: 32 (1 by maintainers)

Most upvoted comments

Regarding nyroglancer, I think it unfortunately does not support automatic meshing.

The error you list at the top (`__init__() takes 3 positional arguments`) is a recent breakage in sockjs-tornado caused by the release of tornado 5.0 just a few days ago: https://github.com/mrjoes/sockjs-tornado/issues/113

To fix that, you can either downgrade tornado to 4.5.3 via pip install tornado==4.5.3

or install this fixed version of sockjs-tornado from GitHub:

https://github.com/mathben/sockjs-tornado/tree/fix_tornado_5.0_%23113

You can do that with this command: pip install 'git+git://github.com/mathben/sockjs-tornado@212ba27' --upgrade

Use the Slices checkbox or press s.

On Thu, May 17, 2018, 08:34 Anar Z. Yusifov notifications@github.com wrote:

Thank you, @jbms https://github.com/jbms for your help! How do I remove the visualization of the orthogonal 3D planes? I can browse through my data in the three other windows, but I would like to see the generated 3D object clearly in my 3D view without the panels getting in my sight. Maybe I can set transparency for the panels, or somehow disable them in the 3D view?

— You are receiving this because you were mentioned. Reply to this email directly, view it on GitHub https://github.com/google/neuroglancer/issues/79#issuecomment-389909590, or mute the thread https://github.com/notifications/unsubscribe-auth/AEBE6isOUB1xLepL4EK4LuhPz_Okhzebks5tzZh5gaJpZM4SiAuV .

The format is identical to the precomputed mesh format, documented here:

https://github.com/google/neuroglancer/tree/master/src/neuroglancer/datasource/precomputed#mesh-representation-of-segmented-object-surfaces

If you write the output to a file, then create the appropriate manifest JSON file for each object, you can view it as a precomputed mesh source.
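The manifest step can be sketched as follows. In the legacy single-resolution mesh layout of the precomputed format, each object gets a JSON file named `<segment_id>:0` whose "fragments" list names the mesh fragment files for that object. The directory and fragment names below are illustrative:

```python
import json
from pathlib import Path

def write_mesh_manifest(mesh_dir, segment_id, fragment_names):
    """Write the per-object manifest expected by a precomputed mesh
    source: a file named '<segment_id>:0' listing fragment files
    relative to the mesh directory."""
    mesh_dir = Path(mesh_dir)
    mesh_dir.mkdir(parents=True, exist_ok=True)
    manifest = {"fragments": list(fragment_names)}
    (mesh_dir / f"{segment_id}:0").write_text(json.dumps(manifest))

# Example: one fragment file per object, named after the segment id.
write_mesh_manifest("mesh", 7, ["7.frag"])
print((Path("mesh") / "7:0").read_text())
```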

Regarding the mesh generation, the fact that you were displaying both the raw data and your segmentation as segmentations may have affected things. In general, though, mesh generation is unfortunately slow. There are two steps: an initial marching cubes pass over the full volume, using multiple threads, which runs the first time you request any mesh; and then, for each individual segment, a simplification step that runs on a single thread the first time you request that segment.

The Python integration doesn’t provide a way to precompute the meshes and is only practical for small volumes. For larger volumes you can convert the data to the precomputed format.

https://github.com/google/neuroglancer/blob/master/src/neuroglancer/datasource/precomputed

There are some third party scripts to help you generate that format — see e.g. https://github.com/FZJ-INM1-BDA/neuroglancer-scripts
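The conversion starts with a top-level `info` JSON file describing the volume. A minimal sketch for the segmentation from the question is below; the resolution value is an assumption, and note that "size" is in x, y, z order, which may be the reverse of the numpy shape depending on how the .bin files were written:

```python
import json
from pathlib import Path

# Top-level metadata for a precomputed segmentation volume.
info = {
    "type": "segmentation",
    "data_type": "uint32",
    "num_channels": 1,
    "mesh": "mesh",  # subdirectory that will hold the meshes
    "scales": [{
        "key": "full",
        "size": [930, 880, 1200],       # x, y, z voxel counts
        "resolution": [1, 1, 1],        # nm per voxel: an assumption
        "chunk_sizes": [[64, 64, 64]],
        "encoding": "raw",
    }],
}

Path("segs_precomputed").mkdir(exist_ok=True)
Path("segs_precomputed/info").write_text(json.dumps(info))
```

The chunk data itself would then be written alongside this file, which is what the third-party scripts linked above automate.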

On Sat, Mar 17, 2018, 13:41 Jeremy Maitin-Shepard jbms@google.com wrote:

It looks like you are displaying the raw data as a SegmentationLayer as well, rather than an ImageLayer.

On Sat, Mar 17, 2018, 13:33 Anar Z. Yusifov notifications@github.com wrote:

Thank you, @jbms https://github.com/jbms At first it did not look quite right - I expected grayscale, though.

[image: example_slice] https://user-images.githubusercontent.com/2971670/37559703-791f6546-29f8-11e8-894d-35868b3c21ee.gif

Once I asked for the mesh I got some strange behavior. It looks like there is a lot of noise in my data - or am I interpreting it the wrong way? I just don’t understand where it comes from: my raw data or my segs data? Another thing I noticed is low CPU utilization during mesh generation (top shows only 200% CPU out of the 7200% possible - it’s a Skylake node). There was also quite a lot of memory utilization, but I assume that is due to the noise…

[image: example_crop] https://user-images.githubusercontent.com/2971670/37559547-1c85d1f0-29f6-11e8-9fbb-43639a5647af.gif

Questions:

  • Why don’t I see a nice grayscale background behind the segments, as in the demos on the page?
    • I assume it’s because my format is uint16
    • If so, should I normalize it to uint8 first?
  • Am I generating the mesh correctly?
    • I double-click on the highlighted segment
    • Do I need to do any pre-processing to speed it up?
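If normalization turns out to be the issue, a uint16 volume can be linearly rescaled to uint8 along these lines (a sketch only; whether this is actually needed for display is exactly the open question above):

```python
import numpy as np

def to_uint8(volume):
    """Linearly rescale an integer volume to the full uint8 range."""
    v = volume.astype(np.float32)
    lo, hi = v.min(), v.max()
    if hi == lo:  # constant volume: avoid division by zero
        return np.zeros(volume.shape, dtype=np.uint8)
    return ((v - lo) / (hi - lo) * 255).astype(np.uint8)

# Tiny demonstration array covering the uint16 range.
raw16 = np.array([[0, 1000], [32768, 65535]], dtype=np.uint16)
raw8 = to_uint8(raw16)
print(raw8)
```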

Thank you very much for going through this with me!
