deep-voice-conversion: eval2.py failing with "You must feed a value for placeholder tensor 'y_mel'"

Hello,

I have made a lot of progress in getting this project to run. Net1 trains to 70% accuracy, and net2 reaches a loss of about 0.007.

When I run eval2.py, I get the error described in the title, and the following traceback:

  File "eval2.py", line 68, in <module>
    eval(logdir1=logdir_train1, logdir2=logdir_train2)
  File "eval2.py", line 47, in eval
    summ_loss, = predictor(x_mfccs, y_spec)
  File "/home/mark/.local/lib/python2.7/site-packages/tensorpack/predict/base.py", line 39, in __call__
    output = self._do_call(dp)
  File "/home/mark/.local/lib/python2.7/site-packages/tensorpack/predict/base.py", line 131, in _do_call
    return self._callable(*dp)
  File "/home/mark/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1152, in _generic_run
    return self.run(fetches, feed_dict=feed_dict, **kwargs)
  File "/home/mark/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 877, in run
    run_metadata_ptr)
  File "/home/mark/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1100, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/mark/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1272, in _do_run
    run_metadata)
  File "/home/mark/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1291, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'y_mel' with dtype float and shape [?,401,80]
	 [[Node: y_mel = Placeholder[dtype=DT_FLOAT, shape=[?,401,80], _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
	 [[Node: net2/cbhg_linear/highwaynet_7/dense1/Tensordot/Shape/_1681 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_2452_net2/cbhg_linear/highwaynet_7/dense1/Tensordot/Shape", tensor_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

Caused by op u'y_mel', defined at:
  File "eval2.py", line 68, in <module>
    eval(logdir1=logdir_train1, logdir2=logdir_train2)
  File "eval2.py", line 44, in eval
    predictor = OfflinePredictor(pred_conf)
  File "/home/mark/.local/lib/python2.7/site-packages/tensorpack/predict/base.py", line 146, in __init__
    input.setup(config.inputs_desc)
  File "/home/mark/.local/lib/python2.7/site-packages/tensorpack/utils/argtools.py", line 181, in wrapper
    return func(*args, **kwargs)
  File "/home/mark/.local/lib/python2.7/site-packages/tensorpack/input_source/input_source_base.py", line 97, in setup
    self._setup(inputs_desc)
  File "/home/mark/.local/lib/python2.7/site-packages/tensorpack/input_source/input_source.py", line 44, in _setup
    self._all_placehdrs = [v.build_placeholder_reuse() for v in inputs]
  File "/home/mark/.local/lib/python2.7/site-packages/tensorpack/graph_builder/model_desc.py", line 67, in build_placeholder_reuse
    return self.build_placeholder()
  File "/home/mark/.local/lib/python2.7/site-packages/tensorpack/graph_builder/model_desc.py", line 51, in build_placeholder
    self.type, shape=self.shape, name=self.name)
  File "/home/mark/.local/lib/python2.7/site-packages/tensorflow/python/ops/array_ops.py", line 1735, in placeholder
    return gen_array_ops.placeholder(dtype=dtype, shape=shape, name=name)
  File "/home/mark/.local/lib/python2.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 4925, in placeholder
    "Placeholder", dtype=dtype, shape=shape, name=name)
  File "/home/mark/.local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/home/mark/.local/lib/python2.7/site-packages/tensorflow/python/util/deprecation.py", line 454, in new_func
    return func(*args, **kwargs)
  File "/home/mark/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 3155, in create_op
    op_def=op_def)
  File "/home/mark/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1717, in __init__
    self._traceback = tf_stack.extract_stack()

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'y_mel' with dtype float and shape [?,401,80]
	 [[Node: y_mel = Placeholder[dtype=DT_FLOAT, shape=[?,401,80], _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
	 [[Node: net2/cbhg_linear/highwaynet_7/dense1/Tensordot/Shape/_1681 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_2452_net2/cbhg_linear/highwaynet_7/dense1/Tensordot/Shape", tensor_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

When I run convert.py, I get an assertion error:

  File "convert.py", line 150, in <module>
    do_convert(args, logdir1=logdir_train1, logdir2=logdir_train2)
  File "convert.py", line 105, in do_convert
    audio, y_audio, ppgs = convert(predictor, df)
  File "convert.py", line 45, in convert
    pred_spec, y_spec, ppgs = predictor(next(df().get_data()))
  File "/home/mark/.local/lib/python2.7/site-packages/tensorpack/predict/base.py", line 39, in __call__
    output = self._do_call(dp)
  File "/home/mark/.local/lib/python2.7/site-packages/tensorpack/predict/base.py", line 119, in _do_call
    "{} != {}".format(len(dp), len(self.input_tensors))
AssertionError: 1 != 3

I am also getting a number of warnings when I run train2.py, eval2.py, and convert.py, which I'm not sure are related:

[0831 18:26:38 @sessinit.py:90] WRN The following variables are in the graph, but not found in the checkpoint: net2/prenet/dense1/kernel, net2/prenet/dense1/bias, net2/prenet/dense2/kernel, net2/prenet/dense2/bias, net2/cbhg_mel/conv1d_banks/num_1/conv1d/conv1d/kernel, net2/cbhg_mel/conv1d_banks/num_1/normalize/beta, net2/cbhg_mel/conv1d_banks/num_1/normalize/gamma, net2/cbhg_mel/conv1d_banks/num_2/conv1d/conv1d/kernel, net2/cbhg_mel/conv1d_banks/num_2/normalize/beta, net2/cbhg_mel/conv1d_banks/num_2/normalize/gamma, net2/cbhg_mel/conv1d_banks/num_3/conv1d/conv1d/kernel, net2/cbhg_mel/conv1d_banks/num_3/normalize/beta, net2/cbhg_mel/conv1d_banks/num_3/normalize/gamma, net2/cbhg_mel/conv1d_banks/num_4/conv1d/conv1d/kernel, net2/cbhg_mel/conv1d_banks/num_4/normalize/beta, net2/cbhg_mel/conv1d_banks/num_4/normalize/gamma, net2/cbhg_mel/conv1d_banks/num_5/conv1d/conv1d/kernel, net2/cbhg_mel/conv1d_banks/num_5/normalize/beta, net2/cbhg_mel/conv1d_banks/num_5/normalize/gamma, net2/cbhg_mel/conv1d_banks/num_6/conv1d/conv1d/kernel, net2/cbhg_mel/conv1d_banks/num_6/normalize/beta, net2/cbhg_mel/conv1d_banks/num_6/normalize/gamma, net2/cbhg_mel/conv1d_banks/num_7/conv1d/conv1d/kernel, net2/cbhg_mel/conv1d_banks/num_7/normalize/beta, net2/cbhg_mel/conv1d_banks/num_7/normalize/gamma, net2/cbhg_mel/conv1d_banks/num_8/conv1d/conv1d/kernel, net2/cbhg_mel/conv1d_banks/num_8/normalize/beta, net2/cbhg_mel/conv1d_banks/num_8/normalize/gamma, net2/cbhg_mel/conv1d_1/conv1d/kernel, net2/cbhg_mel/normalize/beta, net2/cbhg_mel/normalize/gamma, net2/cbhg_mel/conv1d_2/conv1d/kernel, net2/cbhg_mel/highwaynet_0/dense1/kernel, net2/cbhg_mel/highwaynet_0/dense1/bias, net2/cbhg_mel/highwaynet_0/dense2/kernel, net2/cbhg_mel/highwaynet_0/dense2/bias, net2/cbhg_mel/highwaynet_1/dense1/kernel, net2/cbhg_mel/highwaynet_1/dense1/bias, net2/cbhg_mel/highwaynet_1/dense2/kernel, net2/cbhg_mel/highwaynet_1/dense2/bias, net2/cbhg_mel/highwaynet_2/dense1/kernel, net2/cbhg_mel/highwaynet_2/dense1/bias, net2/cbhg_mel/highwaynet_2/dense2/kernel, net2/cbhg_mel/highwaynet_2/dense2/bias, net2/cbhg_mel/highwaynet_3/dense1/kernel, net2/cbhg_mel/highwaynet_3/dense1/bias, net2/cbhg_mel/highwaynet_3/dense2/kernel, net2/cbhg_mel/highwaynet_3/dense2/bias, net2/cbhg_mel/highwaynet_4/dense1/kernel, net2/cbhg_mel/highwaynet_4/dense1/bias, net2/cbhg_mel/highwaynet_4/dense2/kernel, net2/cbhg_mel/highwaynet_4/dense2/bias, net2/cbhg_mel/highwaynet_5/dense1/kernel, net2/cbhg_mel/highwaynet_5/dense1/bias, net2/cbhg_mel/highwaynet_5/dense2/kernel, net2/cbhg_mel/highwaynet_5/dense2/bias, net2/cbhg_mel/highwaynet_6/dense1/kernel, net2/cbhg_mel/highwaynet_6/dense1/bias, net2/cbhg_mel/highwaynet_6/dense2/kernel, net2/cbhg_mel/highwaynet_6/dense2/bias, net2/cbhg_mel/highwaynet_7/dense1/kernel, net2/cbhg_mel/highwaynet_7/dense1/bias, net2/cbhg_mel/highwaynet_7/dense2/kernel, net2/cbhg_mel/highwaynet_7/dense2/bias, net2/cbhg_mel/gru/bidirectional_rnn/fw/gru_cell/gates/kernel, net2/cbhg_mel/gru/bidirectional_rnn/fw/gru_cell/gates/bias, net2/cbhg_mel/gru/bidirectional_rnn/fw/gru_cell/candidate/kernel, net2/cbhg_mel/gru/bidirectional_rnn/fw/gru_cell/candidate/bias, net2/cbhg_mel/gru/bidirectional_rnn/bw/gru_cell/gates/kernel, net2/cbhg_mel/gru/bidirectional_rnn/bw/gru_cell/gates/bias, net2/cbhg_mel/gru/bidirectional_rnn/bw/gru_cell/candidate/kernel, net2/cbhg_mel/gru/bidirectional_rnn/bw/gru_cell/candidate/bias, net2/pred_mel/kernel, net2/pred_mel/bias, net2/dense/kernel, net2/dense/bias, net2/cbhg_linear/conv1d_banks/num_1/conv1d/conv1d/kernel, 
net2/cbhg_linear/conv1d_banks/num_1/normalize/beta, net2/cbhg_linear/conv1d_banks/num_1/normalize/gamma, net2/cbhg_linear/conv1d_banks/num_2/conv1d/conv1d/kernel, net2/cbhg_linear/conv1d_banks/num_2/normalize/beta, net2/cbhg_linear/conv1d_banks/num_2/normalize/gamma, net2/cbhg_linear/conv1d_banks/num_3/conv1d/conv1d/kernel, net2/cbhg_linear/conv1d_banks/num_3/normalize/beta, net2/cbhg_linear/conv1d_banks/num_3/normalize/gamma, net2/cbhg_linear/conv1d_banks/num_4/conv1d/conv1d/kernel, net2/cbhg_linear/conv1d_banks/num_4/normalize/beta, net2/cbhg_linear/conv1d_banks/num_4/normalize/gamma, net2/cbhg_linear/conv1d_banks/num_5/conv1d/conv1d/kernel, net2/cbhg_linear/conv1d_banks/num_5/normalize/beta, net2/cbhg_linear/conv1d_banks/num_5/normalize/gamma, net2/cbhg_linear/conv1d_banks/num_6/conv1d/conv1d/kernel, net2/cbhg_linear/conv1d_banks/num_6/normalize/beta, net2/cbhg_linear/conv1d_banks/num_6/normalize/gamma, net2/cbhg_linear/conv1d_banks/num_7/conv1d/conv1d/kernel, net2/cbhg_linear/conv1d_banks/num_7/normalize/beta, net2/cbhg_linear/conv1d_banks/num_7/normalize/gamma, net2/cbhg_linear/conv1d_banks/num_8/conv1d/conv1d/kernel, net2/cbhg_linear/conv1d_banks/num_8/normalize/beta, net2/cbhg_linear/conv1d_banks/num_8/normalize/gamma, net2/cbhg_linear/conv1d_1/conv1d/kernel, net2/cbhg_linear/normalize/beta, net2/cbhg_linear/normalize/gamma, net2/cbhg_linear/conv1d_2/conv1d/kernel, net2/cbhg_linear/highwaynet_0/dense1/kernel, net2/cbhg_linear/highwaynet_0/dense1/bias, net2/cbhg_linear/highwaynet_0/dense2/kernel, net2/cbhg_linear/highwaynet_0/dense2/bias, net2/cbhg_linear/highwaynet_1/dense1/kernel, net2/cbhg_linear/highwaynet_1/dense1/bias, net2/cbhg_linear/highwaynet_1/dense2/kernel, net2/cbhg_linear/highwaynet_1/dense2/bias, net2/cbhg_linear/highwaynet_2/dense1/kernel, net2/cbhg_linear/highwaynet_2/dense1/bias, net2/cbhg_linear/highwaynet_2/dense2/kernel, net2/cbhg_linear/highwaynet_2/dense2/bias, net2/cbhg_linear/highwaynet_3/dense1/kernel, net2/cbhg_linear/highwaynet_3/dense1/bias, net2/cbhg_linear/highwaynet_3/dense2/kernel, net2/cbhg_linear/highwaynet_3/dense2/bias, net2/cbhg_linear/highwaynet_4/dense1/kernel, net2/cbhg_linear/highwaynet_4/dense1/bias, net2/cbhg_linear/highwaynet_4/dense2/kernel, net2/cbhg_linear/highwaynet_4/dense2/bias, net2/cbhg_linear/highwaynet_5/dense1/kernel, net2/cbhg_linear/highwaynet_5/dense1/bias, net2/cbhg_linear/highwaynet_5/dense2/kernel, net2/cbhg_linear/highwaynet_5/dense2/bias, net2/cbhg_linear/highwaynet_6/dense1/kernel, net2/cbhg_linear/highwaynet_6/dense1/bias, net2/cbhg_linear/highwaynet_6/dense2/kernel, net2/cbhg_linear/highwaynet_6/dense2/bias, net2/cbhg_linear/highwaynet_7/dense1/kernel, net2/cbhg_linear/highwaynet_7/dense1/bias, net2/cbhg_linear/highwaynet_7/dense2/kernel, net2/cbhg_linear/highwaynet_7/dense2/bias, net2/cbhg_linear/gru/bidirectional_rnn/fw/gru_cell/gates/kernel, net2/cbhg_linear/gru/bidirectional_rnn/fw/gru_cell/gates/bias, net2/cbhg_linear/gru/bidirectional_rnn/fw/gru_cell/candidate/kernel, net2/cbhg_linear/gru/bidirectional_rnn/fw/gru_cell/candidate/bias, net2/cbhg_linear/gru/bidirectional_rnn/bw/gru_cell/gates/kernel, net2/cbhg_linear/gru/bidirectional_rnn/bw/gru_cell/gates/bias, net2/cbhg_linear/gru/bidirectional_rnn/bw/gru_cell/candidate/kernel, net2/cbhg_linear/gru/bidirectional_rnn/bw/gru_cell/candidate/bias, net2/pred_spec/kernel, net2/pred_spec/bias
[0831 18:26:38 @sessinit.py:90] WRN The following variables are in the checkpoint, but not found in the graph: global_step:0, learning_rate:0

I have seen some similar issues, like #23 and #42, which hit the same error but while running train2.py, and they say the problem was solved by setting queue=True. I'm not sure where to set that. I added it to the signature on line 25 of eval2.py (def eval(logdir1, logdir2, queue=True):), but I still get the same error. I can't shake the feeling that it's something obvious (like a version mismatch) and I just don't know enough about TensorFlow/tensorpack to fix it.

Any help would be appreciated, as I've spent a few days trying to get to the bottom of this. If it helps, I'm running on a single GPU with Ubuntu 18.04 and Python 2.7.

Most upvoted comments

Solved this. In get_eval_input_names(), change the output to return ['x_mfccs', 'y_spec', 'y_mel'] and change the input to the predictor accordingly.
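
Concretely, that change looks roughly like this (just a sketch; the file that defines get_eval_input_names() and how the dataflow is unpacked may differ in your checkout, and the variable names below mirror the traceback above):

    def get_eval_input_names():
        # include 'y_mel' so the placeholder from the error actually gets fed at eval time
        return ['x_mfccs', 'y_spec', 'y_mel']

    # ...and in eval() in eval2.py (line 47 in the traceback), pass the matching
    # third tensor to the predictor as well:
    summ_loss, = predictor(x_mfccs, y_spec, y_mel)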

Fantastic, that worked! Thanks!

I was also able to fix the assertion error for convert.py by moving the predictor call out from convert() and passing everything to convert() manually:

    def convert(predictor, df, pred_spec, y_spec, ppgs):
        ...

    # in do_convert(): run the predictor first, then hand its outputs to convert()
    pr, y_s, pp = next(df().get_data())
    pred_spec, y_spec, ppgs = predictor(pr, y_s, pp)
    audio, y_audio, ppgs = convert(predictor, df, pred_spec, y_spec, ppgs)
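
(The key point is that the predictor is now called with the same number of arrays as it has input tensors, which is exactly what the "1 != 3" assertion in tensorpack's base.py is checking.)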

I'm still a little worried about those warnings, but I'm getting output now. It's still really garbled, though. I'm using a custom dataset, so I'll re-check it for noise. (Maybe I just need to train longer.)

@carlfm01 I used Python 2.7 directly and only hit a few bugs. Maybe you could build a fresh conda env and try that; setting one up is quick.

@carlfm01 I believe that 0.8 means 80% accuracy, yes. I'm not sure why it would get stuck at epoch 104 if it got that far. (When mine gets stuck it's usually around epoch 14 or so.)

@ashavish I've trained my net2 for over 2000 epochs to around 0.005 loss and I have the same problem (too quiet and robotic, with lots of noise). Are you using the arctic/TIMIT datasets or custom ones? (I'm trying custom data for net2.)

@carlfm01 I ran mine for 260 epochs and it seemed to level off at 70% after 200. I might run it for longer, but the README says the accuracy of net1 is less important.

There’s also a pre-trained model in #12 that I couldn’t get to work, but you might have more luck.