tensorflow: InvalidArgumentError: No OpKernel was registered to support Op 'CudnnRNN' used by {{node cu_dnnlstm/CudnnRNN}}with these attrs: [seed=0, dropout=0, T=DT_FLOAT, input_mode="linear_input", direction="unidirectional", rnn_mode="lstm", is_training=true, seed2=0]

I am a macOS (Macintosh) user.

Code:

`import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, LSTM #, CuDNNLSTM


mnist = tf.keras.datasets.mnist  # mnist is a dataset of 28x28 images of handwritten digits and their labels
(x_train, y_train),(x_test, y_test) = mnist.load_data()  # unpacks images to x_train/x_test and labels to y_train/y_test

x_train = x_train/255.0
x_test = x_test/255.0

print(x_train.shape)
print(x_train[0].shape)

model = Sequential()


model.add(LSTM(128, input_shape=(x_train.shape[1:]), activation='relu', return_sequences=True))
model.add(Dropout(0.2))

model.add(LSTM(128, activation='relu'))
model.add(Dropout(0.1))

model.add(Dense(32, activation='relu'))
model.add(Dropout(0.2))

model.add(Dense(10, activation='softmax'))

opt = tf.keras.optimizers.Adam(lr=0.001, decay=1e-6)

# Compile model
model.compile(
    loss='sparse_categorical_crossentropy',
    optimizer=opt,
    metrics=['accuracy'],
)

model.fit(x_train,
          y_train,
          epochs=3,
          validation_data=(x_test, y_test))`

Output:

`(60000, 28, 28)
(28, 28)
Train on 60000 samples, validate on 10000 samples
---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
/anaconda3/lib/python3.7/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1333     try:
-> 1334       return fn(*args)
   1335     except errors.OpError as e:

/anaconda3/lib/python3.7/site-packages/tensorflow/python/client/session.py in _run_fn(feed_dict, fetch_list, target_list, options, run_metadata)
   1316       # Ensure any changes to the graph are reflected in the runtime.
-> 1317       self._extend_graph()
   1318       return self._call_tf_sessionrun(

/anaconda3/lib/python3.7/site-packages/tensorflow/python/client/session.py in _extend_graph(self)
   1351     with self._graph._session_run_lock():  # pylint: disable=protected-access
-> 1352       tf_session.ExtendSession(self._session)
   1353 

InvalidArgumentError: No OpKernel was registered to support Op 'CudnnRNN' used by {{node cu_dnnlstm/CudnnRNN}}with these attrs: [seed=0, dropout=0, T=DT_FLOAT, input_mode="linear_input", direction="unidirectional", rnn_mode="lstm", is_training=true, seed2=0]
Registered devices: [CPU]
Registered kernels:
  <no registered kernels>

	 [[{{node cu_dnnlstm/CudnnRNN}}]]

During handling of the above exception, another exception occurred:

InvalidArgumentError                      Traceback (most recent call last)
<ipython-input-7-d1c9ab123f67> in <module>
     41           y_train,
     42           epochs=3,
---> 43           validation_data=(x_test, y_test))

/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, max_queue_size, workers, use_multiprocessing, **kwargs)
    878           initial_epoch=initial_epoch,
    879           steps_per_epoch=steps_per_epoch,
--> 880           validation_steps=validation_steps)
    881 
    882   def evaluate(self,

/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/training_arrays.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, mode, validation_in_fit, **kwargs)
    249     # Setup work for each epoch
    250     epoch_logs = {}
--> 251     model.reset_metrics()
    252     callbacks.on_epoch_begin(epoch, epoch_logs, mode=mode)
    253     progbar.on_epoch_begin(epoch, epoch_logs)

/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in reset_metrics(self)
   1117     if hasattr(self, 'metrics'):
   1118       for m in self.metrics:
-> 1119         m.reset_states()
   1120       if self._distribution_strategy:
   1121         training_distributed._reset_metrics(self)  # pylint: disable=protected-access

/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/metrics.py in reset_states(self)
    458     """
    459     for v in self.variables:
--> 460       K.set_value(v, 0)
    461 
    462   @abc.abstractmethod

/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/backend.py in set_value(x, value)
   2845         x._assign_placeholder = assign_placeholder
   2846         x._assign_op = assign_op
-> 2847       get_session().run(assign_op, feed_dict={assign_placeholder: value})
   2848 
   2849 

/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/backend.py in get_session()
    480   if not _MANUAL_VAR_INIT:
    481     with session.graph.as_default():
--> 482       _initialize_variables(session)
    483   return session
    484 

/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/backend.py in _initialize_variables(session)
    756     # marked as initialized.
    757     is_initialized = session.run(
--> 758         [variables_module.is_variable_initialized(v) for v in candidate_vars])
    759     uninitialized_vars = []
    760     for flag, v in zip(is_initialized, candidate_vars):

/anaconda3/lib/python3.7/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
    927     try:
    928       result = self._run(None, fetches, feed_dict, options_ptr,
--> 929                          run_metadata_ptr)
    930       if run_metadata:
    931         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/anaconda3/lib/python3.7/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
   1150     if final_fetches or final_targets or (handle and feed_dict_tensor):
   1151       results = self._do_run(handle, final_targets, final_fetches,
-> 1152                              feed_dict_tensor, options, run_metadata)
   1153     else:
   1154       results = []

/anaconda3/lib/python3.7/site-packages/tensorflow/python/client/session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
   1326     if handle is None:
   1327       return self._do_call(_run_fn, feeds, fetches, targets, options,
-> 1328                            run_metadata)
   1329     else:
   1330       return self._do_call(_prun_fn, handle, feeds, fetches)

/anaconda3/lib/python3.7/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1346           pass
   1347       message = error_interpolation.interpolate(message, self._graph)
-> 1348       raise type(e)(node_def, op, message)
   1349 
   1350   def _extend_graph(self):

InvalidArgumentError: No OpKernel was registered to support Op 'CudnnRNN' used by node cu_dnnlstm/CudnnRNN (defined at <ipython-input-6-3c5692df850b>:19) with these attrs: [seed=0, dropout=0, T=DT_FLOAT, input_mode="linear_input", direction="unidirectional", rnn_mode="lstm", is_training=true, seed2=0]
Registered devices: [CPU]
Registered kernels:
  <no registered kernels>

	 [[node cu_dnnlstm/CudnnRNN (defined at <ipython-input-6-3c5692df850b>:19) ]]

Caused by op 'cu_dnnlstm/CudnnRNN', defined at:
  File "/anaconda3/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/anaconda3/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py", line 16, in <module>
    app.launch_new_instance()
  File "/anaconda3/lib/python3.7/site-packages/traitlets/config/application.py", line 658, in launch_instance
    app.start()
  File "/anaconda3/lib/python3.7/site-packages/ipykernel/kernelapp.py", line 505, in start
    self.io_loop.start()
  File "/anaconda3/lib/python3.7/site-packages/tornado/platform/asyncio.py", line 132, in start
    self.asyncio_loop.run_forever()
  File "/anaconda3/lib/python3.7/asyncio/base_events.py", line 528, in run_forever
    self._run_once()
  File "/anaconda3/lib/python3.7/asyncio/base_events.py", line 1764, in _run_once
    handle._run()
  File "/anaconda3/lib/python3.7/asyncio/events.py", line 88, in _run
    self._context.run(self._callback, *self._args)
  File "/anaconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 758, in _run_callback
    ret = callback()
  File "/anaconda3/lib/python3.7/site-packages/tornado/stack_context.py", line 300, in null_wrapper
    return fn(*args, **kwargs)
  File "/anaconda3/lib/python3.7/site-packages/tornado/gen.py", line 1233, in inner
    self.run()
  File "/anaconda3/lib/python3.7/site-packages/tornado/gen.py", line 1147, in run
    yielded = self.gen.send(value)
  File "/anaconda3/lib/python3.7/site-packages/ipykernel/kernelbase.py", line 357, in process_one
    yield gen.maybe_future(dispatch(*args))
  File "/anaconda3/lib/python3.7/site-packages/tornado/gen.py", line 326, in wrapper
    yielded = next(result)
  File "/anaconda3/lib/python3.7/site-packages/ipykernel/kernelbase.py", line 267, in dispatch_shell
    yield gen.maybe_future(handler(stream, idents, msg))
  File "/anaconda3/lib/python3.7/site-packages/tornado/gen.py", line 326, in wrapper
    yielded = next(result)
  File "/anaconda3/lib/python3.7/site-packages/ipykernel/kernelbase.py", line 534, in execute_request
    user_expressions, allow_stdin,
  File "/anaconda3/lib/python3.7/site-packages/tornado/gen.py", line 326, in wrapper
    yielded = next(result)
  File "/anaconda3/lib/python3.7/site-packages/ipykernel/ipkernel.py", line 294, in do_execute
    res = shell.run_cell(code, store_history=store_history, silent=silent)
  File "/anaconda3/lib/python3.7/site-packages/ipykernel/zmqshell.py", line 536, in run_cell
    return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
  File "/anaconda3/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 2819, in run_cell
    raw_cell, store_history, silent, shell_futures)
  File "/anaconda3/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 2845, in _run_cell
    return runner(coro)
  File "/anaconda3/lib/python3.7/site-packages/IPython/core/async_helpers.py", line 67, in _pseudo_sync_runner
    coro.send(None)
  File "/anaconda3/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3020, in run_cell_async
    interactivity=interactivity, compiler=compiler, result=result)
  File "/anaconda3/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3185, in run_ast_nodes
    if (yield from self.run_code(code, result)):
  File "/anaconda3/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3267, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-6-3c5692df850b>", line 19, in <module>
    model.add(CuDNNLSTM(128, input_shape=(x_train.shape[1:]), return_sequences=True))
  File "/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/checkpointable/base.py", line 442, in _method_wrapper
    method(self, *args, **kwargs)
  File "/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/sequential.py", line 164, in add
    layer(x)
  File "/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/layers/recurrent.py", line 701, in __call__
    return super(RNN, self).__call__(inputs, **kwargs)
  File "/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 554, in __call__
    outputs = self.call(inputs, *args, **kwargs)
  File "/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/layers/cudnn_recurrent.py", line 111, in call
    output, states = self._process_batch(inputs, initial_state)
  File "/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/layers/cudnn_recurrent.py", line 501, in _process_batch
    is_training=True)
  File "/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/gen_cudnn_rnn_ops.py", line 142, in cudnn_rnn
    seed2=seed2, is_training=is_training, name=name)
  File "/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
    op_def=op_def)
  File "/anaconda3/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 3300, in create_op
    op_def=op_def)
  File "/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1801, in __init__
    self._traceback = tf_stack.extract_stack()

InvalidArgumentError (see above for traceback): No OpKernel was registered to support Op 'CudnnRNN' used by node cu_dnnlstm/CudnnRNN (defined at <ipython-input-6-3c5692df850b>:19) with these attrs: [seed=0, dropout=0, T=DT_FLOAT, input_mode="linear_input", direction="unidirectional", rnn_mode="lstm", is_training=true, seed2=0]
Registered devices: [CPU]
Registered kernels:
  <no registered kernels>

	 [[node cu_dnnlstm/CudnnRNN (defined at <ipython-input-6-3c5692df850b>:19) ]]`

About this issue

  • State: closed
  • Created 5 years ago
  • Comments: 26 (6 by maintainers)

Most upvoted comments

I had the same issue. After switching to the “GPU” option the error disappeared and the code started working. In the “Runtime” menu, choose “Change runtime type” and set the hardware accelerator to “GPU”.
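A minimal way to confirm the new runtime actually exposes a GPU (a sketch, not part of the original comment; tf.test.gpu_device_name() returns an empty string when no GPU is visible):

`import tensorflow as tf

# Prints something like '/device:GPU:0' when a GPU runtime is active,
# and an empty string when only a CPU is available.
print(tf.test.gpu_device_name())`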

How do I change to GPU? Where do I find the Runtime menu?

Hi, I’m also facing the same issue. I’m using Ubuntu 16.04, tensorflow==1.12.0, CUDA 9.0, cuDNN 7.0.5, and a Tesla C2075 GPU.

InvalidArgumentError (see above for traceback): No OpKernel was registered to support Op 'CudnnRNN' with these attrs.  Registered devices: [CPU,XLA_CPU,XLA_GPU], Registered kernels:
  device='GPU'; T in [DT_DOUBLE]
  device='GPU'; T in [DT_FLOAT]
  device='GPU'; T in [DT_HALF]
	
 [[node bidirectional_1/CudnnRNN (defined at /usr/local/lib/python3.5/dist-packages/tensorflow/contrib/cudnn_rnn/python/ops/cudnn_rnn_ops.py:922)  = CudnnRNN[T=DT_FLOAT, direction="unidirectional", dropout=0, input_mode="linear_input", is_training=true, rnn_mode="lstm", seed=87654321, seed2=0](bidirectional_1/transpose, bidirectional_1/ExpandDims_1, bidirectional_1/ExpandDims_2, bidirectional_1/concat)]]

I had the same problem. Installing a tensorflow-gpu build compatible with my CUDA version solved it… Try searching for “cuda tensorflow gpu compatibility”.
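One way to check whether the installed TensorFlow binary was built with CUDA support at all (a sketch; in the 1.x line the CPU-only tensorflow package reports False here, so CuDNN ops can never find a kernel):

`import tensorflow as tf

# False means this build has no CUDA support; installing a matching
# tensorflow-gpu package (or a CUDA-enabled build) is the first step.
print(tf.test.is_built_with_cuda())`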

I am getting a very similar error, but it is in a very specific set of circumstances.

I have 2 models, let’s call them model_A and model_B. model_A was trained on a CPU machine and has no GPU-specific layers, only LSTM/Dense layers. model_B was trained on a GPU machine and has CuDNNGRU layers. I can almost always load model_A on a machine with no GPU, and I cannot load model_B on a machine with no GPU (as expected). The weird behavior is this:

  1. I spin up a new terminal on a machine with no GPUs.
  2. I load model_A, with no issues (as expected).
  3. I try to load model_B and get this error (as expected).
  4. I try to re-load model_A and get this error (not expected, since model_A has no GPU-dependent layers).

I have to restart my terminal window to be able to load model_A again. I can reproduce this behavior when substituting any non-GPU model for model_A and any GPU model for model_B.

The error is slightly different; it reports CudnnRNNV2 instead of CudnnRNN: InvalidArgumentError: No OpKernel was registered to support Op 'CudnnRNNV2' used by node cu_dnngru_2/CudnnRNNV2 (defined at /my/conda/env/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:1748) with these attrs: [input_mode="linear_input", T=DT_FLOAT, direction="unidirectional", rnn_mode="gru", is_training=true, seed2=0, seed=0, dropout=0].
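A sketch of that scenario (the model file names are hypothetical; tf.keras.backend.clear_session() is a standard API, but treating it as a workaround for this re-load behavior is an assumption, not something confirmed in the thread):

`import tensorflow as tf
from tensorflow.keras.models import load_model

model_a = load_model('model_A.h5')      # CPU-only layers, loads fine

try:
    model_b = load_model('model_B.h5')  # contains CuDNNGRU layers
except tf.errors.InvalidArgumentError as err:
    print('Expected on a CPU-only machine:', err)

# The failed load can leave CudnnRNN ops in the default graph; clearing
# the Keras session may allow model_A to load again without restarting
# the interpreter (assumption, not verified in this thread).
tf.keras.backend.clear_session()
model_a = load_model('model_A.h5')`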

@Nick-Lucas, does tf.test.is_gpu_available() return False? If so, it definitely means the GPU is not configured correctly.

Also, the error message itself indicates that no GPU device is available:

Registered devices: [CPU,XLA_CPU,XLA_GPU]

XLA_GPU is not the same as GPU.
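For reference, a minimal sketch of how to list the device types the runtime has registered (device_lib is a commonly used internal module in the 1.x line); a working setup should show a plain 'GPU' entry, not only 'XLA_GPU':

`import tensorflow as tf
from tensorflow.python.client import device_lib

# A plain 'GPU' entry must appear here for the CuDNN kernels to be found;
# 'XLA_GPU' alone is not enough.
print([d.device_type for d in device_lib.list_local_devices()])
print(tf.test.is_gpu_available())  # True on a correctly configured GPU setup`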

I had the same problem even with a TPU runtime on Colab. It was only solved by switching to a GPU runtime.