orca: Orca hanging when large JSONs are piped in

I have a dataframe with 20 years of daily data. iplot renders all of the plots without any problem.

However, orca only works when I slice the dataframe down to less than 4 years of data. It fails both in the notebook and on the command line (using a dumped JSON file), with the following error:

A JavaScript error occurred in the main process
Uncaught Exception:
TypeError: path must be a string or Buffer
    at Object.fs.mkdirSync (fs.js:891:18)
    at main (/usr/local/lib/node_modules/orca/bin/graph.js:105:8)
    at Object.<anonymous> (/usr/local/lib/node_modules/orca/bin/orca_electron.js:73:25)
    at Object.<anonymous> (/usr/local/lib/node_modules/orca/bin/orca_electron.js:99:3)
    at Module._compile (module.js:569:30)
    at Object.Module._extensions..js (module.js:580:10)
    at Module.load (module.js:503:32)
    at tryModuleLoad (module.js:466:12)
    at Function.Module._load (module.js:458:3)
    at loadApplicationPackage (/usr/local/lib/node_modules/electron/dist/resources/default_app.asar/main.js:287:12)

The JSON files are about 250 KB for the 20 years of data (attached: data.zip).

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Comments: 22 (18 by maintainers)

Most upvoted comments

OK, I figured out a solution based on this article: http://veithen.github.io/2014/11/16/sigterm-propagation.html

In our wrapper bash script we basically just need to prefix the call to orca with exec. Then the bash process becomes the orca process and the signals sent from Python make it to orca.
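A minimal sketch of that idea, using a placeholder ORCA_EXECUTABLE path rather than the real entry point installed by the conda package:

    #!/usr/bin/env bash
    # Without `exec`, bash sits between Python and orca, so the SIGTERM/SIGINT that
    # Python sends stops at the shell and orca keeps running (and hangs).
    # With `exec`, the shell replaces itself with the orca process, so the signal
    # reaches orca directly.
    ORCA_EXECUTABLE="/path/to/orca"   # placeholder, not the real install location
    exec "$ORCA_EXECUTABLE" "$@"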

Since we haven’t merged it yet, I’ll update this in my conda build PR.

Oh yes, that's great! I was able to get it working the way you did. I will use this solution with temporary files, since it doesn't work from the notebook. It's just a matter of writing a small script to handle all the temp files (a sketch follows).
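For instance, a small cleanup script along these lines could handle the temp files. This is only a sketch: the /tmp/plotly_fig_*.json pattern is hypothetical and would match whatever the notebook writes out, and it assumes orca's default PNG output.

    #!/usr/bin/env bash
    # Render each dumped figure JSON by handing orca the file path instead of
    # piping the JSON through stdin, then delete the temp file.
    set -euo pipefail
    for f in /tmp/plotly_fig_*.json; do        # hypothetical temp-file pattern
        orca graph "$f" -o "${f%.json}.png"    # writes fig.png next to fig.json
        rm -f "$f"
    done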

Many thanks! I had been waiting a long time for this export solution and it is really nice 😃.