tensorflow-wavenet: Can't generate samples from checkpoint file
When I try to run `generate.py` per the README, I get this:
```
I tensorflow/core/common_runtime/gpu/gpu_device.cc:838] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX TITAN X, pci bus id: 0000:03:00.0)
Restoring model from model.ckpt-250
Traceback (most recent call last):
  File "generate.py", line 86, in <module>
    main()
  File "generate.py", line 66, in main
    feed_dict={samples: window})
  File "/home/ubuntu/jupyter_base/venv/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 710, in run
    run_metadata_ptr)
  File "/home/ubuntu/jupyter_base/venv/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 908, in _run
    feed_dict_string, options, run_metadata)
  File "/home/ubuntu/jupyter_base/venv/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 958, in _do_run
    target_list, options, run_metadata)
  File "/home/ubuntu/jupyter_base/venv/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 978, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors.InvalidArgumentError: Output dimensions must be positive
     [[Node: wavenet/dilated_stack/layer1/conv_filter/BatchToSpace = BatchToSpace[T=DT_FLOAT, block_size=2, _device="/job:localhost/replica:0/task:0/gpu:0"](wavenet/dilated_stack/layer1/conv_filter, wavenet/dilated_stack/layer1/conv_filter/BatchToSpace/crops)]]
Caused by op u'wavenet/dilated_stack/layer1/conv_filter/BatchToSpace', defined at:
  File "generate.py", line 86, in <module>
    main()
  File "generate.py", line 51, in main
    next_sample = net.predict_proba(samples)
  File "/home/ubuntu/jupyter_base/project/tensorflow-wavenet/wavenet.py", line 154, in predict_proba
    raw_output = self._create_network(encoded)
  File "/home/ubuntu/jupyter_base/project/tensorflow-wavenet/wavenet.py", line 112, in _create_network
    self.dilation_channels)
  File "/home/ubuntu/jupyter_base/project/tensorflow-wavenet/wavenet.py", line 51, in _create_dilation_layer
    name="conv_filter")
  File "/home/ubuntu/jupyter_base/venv/lib/python2.7/site-packages/tensorflow/python/ops/nn_ops.py", line 228, in atrous_conv2d
    block_size=rate)
  File "/home/ubuntu/jupyter_base/venv/lib/python2.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 308, in batch_to_space
    block_size=block_size, name=name)
  File "/home/ubuntu/jupyter_base/venv/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 703, in apply_op
    op_def=op_def)
  File "/home/ubuntu/jupyter_base/venv/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2317, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/home/ubuntu/jupyter_base/venv/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1239, in __init__
    self._traceback = _extract_stack()
```
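For context, a hedged sketch (not code from the repository): `tf.nn.atrous_conv2d` zero-pads the input, runs a regular convolution via `SpaceToBatch`/`BatchToSpace`, and then crops, so if the seed window fed to the generator is shorter than the dilated filter's effective span, the cropped output width comes out non-positive, which would match the "Output dimensions must be positive" message above. The helper below is a hypothetical illustration of that arithmetic only:

```python
def atrous_valid_output_length(input_length, filter_width, rate):
    """Output length of a 1-D atrous (dilated) convolution with VALID
    padding: the dilated filter spans (filter_width - 1) * rate + 1
    input samples, so anything shorter yields a non-positive length."""
    effective_span = (filter_width - 1) * rate + 1
    return input_length - effective_span + 1

# A 2-sample window cannot feed a width-2 filter at dilation rate 2:
print(atrous_valid_output_length(2, 2, 2))   # 0 -> "must be positive" error
print(atrous_valid_output_length(3, 2, 2))   # 1 -> smallest viable window
```

Under this assumption, padding the initial sample window out to the network's receptive field would avoid the error.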
About this issue
- State: closed
- Created 8 years ago
- Reactions: 6
- Comments: 22 (16 by maintainers)
Commits related to this issue
- Cast output for workaround of CUDA_ERROR_ILLEGAL_ADDRESS in generation (#13) — committed to mecab/tensorflow-wavenet by mecab 8 years ago
- Cast output for workaround of CUDA_ERROR_ILLEGAL_ADDRESS Fixes #13 — committed to mecab/tensorflow-wavenet by mecab 8 years ago
- Add casting to supress CUDA_ERROR_ILLEGAL_ADDRESS in generating results. It is tentative workaround. Calculating softmax of float32 255x255 matrix causes CUDA_ERROR_ILLEGAL_ADDRESS somehow. We can ke... — committed to mecab/tensorflow-wavenet by mecab 8 years ago
- Add casting to supress CUDA_ERROR_ILLEGAL_ADDRESS in generating results. (#20) It is tentative workaround. Calculating softmax of float32 255x255 matrix causes CUDA_ERROR_ILLEGAL_ADDRESS somehow. We... — committed to ibab/tensorflow-wavenet by mecab 8 years ago
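As a rough illustration of the workaround those commits describe (a sketch with hypothetical names; the exact cast placement in the repo is an assumption here): computing the softmax over the 256 quantization channels in double precision rather than float32 sidesteps the GPU kernel that triggered `CUDA_ERROR_ILLEGAL_ADDRESS`.

```python
import math

def softmax_float64(logits):
    """Softmax computed in 64-bit floats (Python floats are doubles),
    mirroring the 'add casting' workaround referenced above. Not the
    repository's actual code; an illustrative sketch."""
    xs = [float(x) for x in logits]           # cast up from float32 values
    m = max(xs)                               # shift for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Uniform logits over the 256 mu-law channels give a uniform distribution:
probs = softmax_float64([0.1] * 256)
```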
Thanks for taking a look at this! I no longer get the above error, but now get:
Even after pulling these patches and retraining (though not for long), I still get this (OS X, no GPU):
This bug seems to be fixed now, so I’ll close the issue. If there are new problems with the generation script we should open a new issue, as this one is starting to get long.