tfjs: TypeError: Cannot read property 'backend' of undefined

I’m using the latest version of tfjs-node on npm:

{
  "peerDependencies": {
    "@tensorflow/tfjs-core": "^2.4.0"
  },
  "dependencies": {
    "@tensorflow/tfjs-converter": "^2.4.0",
    "@tensorflow/tfjs-node": "^2.4.0"
  }
}

I get this error when loading a saved model with loadSavedModel:

TypeError: Cannot read property 'backend' of undefined
    at Engine.moveData (/node_modules/@tensorflow/tfjs-core/dist/tf-core.node.js:3280:31)
    at DataStorage.get (/node_modules/@tensorflow/tfjs-core/dist/tf-core.node.js:115:28)
    at NodeJSKernelBackend.getInputTensorIds (/node_modules/@tensorflow/tfjs-node/dist/nodejs_kernel_backend.js:153:43)
    at NodeJSKernelBackend.getMappedInputTensorIds (/node_modules/@tensorflow/tfjs-node/dist/nodejs_kernel_backend.js:1487:30)
    at NodeJSKernelBackend.runSavedModel (/node_modules/@tensorflow/tfjs-node/dist/nodejs_kernel_backend.js:1506:66)
    at TFSavedModel.predict (/node_modules/@tensorflow/tfjs-node/dist/saved_model.js:362:52)
    at /lib/tests/models/audio.js:44:22
const tf = require('@tensorflow/tfjs-node');

(function () {

    const modelPath = '/root/saved_model/';

    // load model and run prediction
    tf.node.loadSavedModel(modelPath)
        .then(model => {
            // the JSON file holds the waveform of an audio file
            const data = require('fs').readFileSync('/root/test.json');
            const waveform = JSON.parse(data).data;
            const inputTensor = tf.tensor2d(waveform, [waveform.length, 1], 'float32');
            const inputs = {
                audio_id: '',
                mix_spectrogram: null,
                mix_stft: null,
                waveform: inputTensor
            };
            return model.predict(inputs);
        })
        .then(output => {
            console.dir(output, { depth: null, maxArrayLength: null });
        })
        .catch(error => {
            console.error(error);
        });

    // load model metadata
    tf.node.getMetaGraphsFromSavedModel(modelPath)
        .then(modelInfo => {
            console.dir(modelInfo[0].signatureDefs.serving_default.outputs, { depth: null, maxArrayLength: null });
            console.dir(modelInfo[0].signatureDefs.serving_default.inputs, { depth: null, maxArrayLength: null });
        })
        .catch(error => {
            console.error(error);
        });

}).call(this);

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 21 (6 by maintainers)

Most upvoted comments

So the cause of the error you are seeing is that you are passing null values as inputs to the model. Your code has

const inputs = {
	audio_id: '',
	mix_spectrogram: null,
	mix_stft: null,
	waveform: inputTensor
};

All of those inputs need to be tensors. Here is the code I used to get past that point (just creating some random data):

tf.node.loadSavedModel(modelPath, ['serve'], 'serving_default')
  .then(model => {
      const inputs = {
          audio_id: tf.tensor(['id']),
          mix_spectrogram: tf.randomNormal([2, 512, 1024, 2]),
          mix_stft: tf.randomNormal([2, 2049, 2]),
          waveform: tf.randomNormal([2, 2])
      };
      return model.predict(inputs);
  })
  .then(output => {
      console.dir(output, { depth: null, maxArrayLength: null });
  })
  .catch(error => {
      console.error(error);
  })
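
Since a null entry in the inputs map only surfaces deep inside the backend as the opaque "Cannot read property 'backend' of undefined", a small defensive check can make the failure obvious up front. This is a plain Node.js sketch (no tfjs required; the helper names are mine, not part of the tfjs API):

```javascript
// Hypothetical helper: validate a named-input map before calling model.predict.
// A null/undefined entry is what triggers the opaque
// "Cannot read property 'backend' of undefined" error inside the backend.
function findMissingInputs(inputs) {
    return Object.keys(inputs).filter(name => inputs[name] == null);
}

function assertAllInputsPresent(inputs) {
    const missing = findMissingInputs(inputs);
    if (missing.length > 0) {
        throw new TypeError(
            `SavedModel inputs must all be tensors; got null/undefined for: ${missing.join(', ')}`);
    }
}

// The inputs object from the original report fails the check:
const badInputs = { audio_id: '', mix_spectrogram: null, mix_stft: null, waveform: {} };
console.log(findMissingInputs(badInputs)); // [ 'mix_spectrogram', 'mix_stft' ]
```

Calling assertAllInputsPresent(inputs) right before model.predict(inputs) would have reported the two null entries directly instead of crashing in Engine.moveData.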

Beyond that I still ran into an issue Error: Session fail to run with error: Placeholder_1:0 is both fed and fetched., and looking at some of the signature def info (see below), I noticed that audio_id is both an input and an output and refers to the same placeholder. From what I can tell this isn’t allowed; it appears one should at least call tf.identity on the input placeholder if you are trying to pass it through. Have you been able to execute this model successfully in Python (or otherwise know that it works in Python)?

inputs {
  audio_id: {
    dtype: 'string',
    tfDtype: 'DT_STRING',
    name: 'Placeholder_1:0',
    shape: []
  },
  mix_spectrogram: {
    dtype: 'float32',
    tfDtype: 'DT_FLOAT',
    name: 'strided_slice_3:0',
    shape: [ [Object], [Object], [Object], [Object] ]
  },
  mix_stft: {
    dtype: 'complex64',
    tfDtype: 'DT_COMPLEX64',
    name: 'transpose_1:0',
    shape: [ [Object], [Object], [Object] ]
  },
  waveform: {
    dtype: 'float32',
    tfDtype: 'DT_FLOAT',
    name: 'Placeholder:0',
    shape: [ [Object], [Object] ]
  }
}
outputs {
  accompaniment: {
    dtype: 'float32',
    tfDtype: 'DT_FLOAT',
    name: 'strided_slice_23:0',
    shape: [ [Object], [Object] ]
  },
  audio_id: {
    dtype: 'string',
    tfDtype: 'DT_STRING',
    name: 'Placeholder_1:0',
    shape: []
  },
  vocals: {
    dtype: 'float32',
    tfDtype: 'DT_FLOAT',
    name: 'strided_slice_13:0',
    shape: [ [Object], [Object] ]
  }
}
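
The fed-and-fetched conflict can be spotted mechanically from metadata like the dump above: a tensor name that appears under both inputs and outputs of the same signature will trigger the session error. A plain-JS sketch (helper name is mine; the objects are abbreviated copies of the signature dumped above):

```javascript
// Flag tensors that are both fed (an input) and fetched (an output) in a
// signature def; TensorFlow refuses to run a session where the same tensor
// is fed and fetched, e.g. "Placeholder_1:0 is both fed and fetched."
function findFedAndFetched(signatureDef) {
    const inputNames = new Set(
        Object.values(signatureDef.inputs).map(info => info.name));
    return Object.values(signatureDef.outputs)
        .map(info => info.name)
        .filter(name => inputNames.has(name));
}

// Abbreviated copy of the serving_default signature shown above.
const servingDefault = {
    inputs: {
        audio_id: { name: 'Placeholder_1:0' },
        waveform: { name: 'Placeholder:0' }
    },
    outputs: {
        accompaniment: { name: 'strided_slice_23:0' },
        audio_id: { name: 'Placeholder_1:0' },
        vocals: { name: 'strided_slice_13:0' }
    }
};

console.log(findFedAndFetched(servingDefault)); // [ 'Placeholder_1:0' ]
```

Running this over the result of tf.node.getMetaGraphsFromSavedModel would confirm that audio_id's Placeholder_1:0 is the offending tensor before ever calling predict.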


@tafsiri thank you, I appreciate it! So we made some progress by using the checkpoint directly! @shoegazerstella is trying it this way:

import os
import tensorflow as tf
trained_checkpoint_prefix = 'pretrained_models/2stems/model'
export_dir = os.path.join('export_dir', '0')
graph = tf.Graph()
with tf.compat.v1.Session(graph=graph) as sess:
    loader = tf.compat.v1.train.import_meta_graph(trained_checkpoint_prefix + '.meta')
    loader.restore(sess, trained_checkpoint_prefix)
    builder = tf.compat.v1.saved_model.builder.SavedModelBuilder(export_dir)
    builder.add_meta_graph_and_variables(sess,
                                         [tf.saved_model.TRAINING, tf.saved_model.SERVING],
                                         strip_default_attrs=True)
    builder.save()  

At this point it should be done, but we now get an Error: The SavedModel does not have signature: serving_default

2020-12-04 08:08:46.918697: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
Error: The SavedModel does not have signature: serving_default
    at getSignatureDefEntryFromMetaGraphInfo (/node_modules/@tensorflow/tfjs-node/dist/saved_model.js:210:23)

Thank you!

It is possible that tf.Estimator adds some preprocessing that lives outside of the saved model (and which, for example, allows you to pass just one of the three audio-related inputs). We don’t directly support estimator models in tfjs-node; some parts of that API are not part of the C++ API used under the hood (they exist only in the Python layer). You might be able to modify tf.estimator.export.ServingInputReceiver(features, features) to tweak the SavedModel to have just the features you plan to use, but I don’t really know if that will work or is possible.

If you are able to execute the saved model in python without instantiating an estimator instance, that may suggest a path to using this model with tfjs-node.

Also going to cc @pyu10055 who may know about estimator compatibility.

Not yet; I just got back from a long holiday in the US, so I will be able to take a closer look this week.