tfjs: Error in concat1D: rank of tensors[2] must be the same as the rank of the rest (1)

TensorFlow.js version

tensorflow/tfjs-node-gpu: 0.1.19

tensorflowjs 0.6.4 Dependency versions: keras 2.2.2 tensorflow 1.11.0

Describe the problem or feature request

I want to retrain faster_rcnn_inception_v2_coco with my own data, and run the prediction with tfjs-node-gpu. If I download the model directly and use tfjs-converter it works fine.

However, if I train it myself with the attached pipeline.config.txt, using:

Miniconda3\python.exe models\research\object_detection\model_main.py --pipeline_config_path="pipeline.config" --model_dir="training_output" --num_train_steps=200000 --sample_1_of_n_eval_examples=1 --alsologtostderr

then freeze the model with:

Miniconda3\python.exe models\research\object_detection\export_inference_graph.py --input_type image_tensor --pipeline_config_path "training_output/pipeline.config" --trained_checkpoint_prefix "training_output/model.ckpt-4621" --output_directory "exported_model"

and then convert it with tensorflowjs_converter:

Miniconda3\Scripts\tensorflowjs_converter.exe --input_format=tf_saved_model --output_node_names="detection_boxes,detection_scores,num_detections,detection_classes" --saved_model_tags=serve "exported_model/saved_model" "exported_model/web_model"

then I get the following error in NodeJS when running the code below: Error in concat1D: rank of tensors[2] must be the same as the rank of the rest (1)

// Load the converted model (modelPath / weightsPath point at the converter output).
const model = await tf.loadFrozenModel(modelPath, weightsPath);

// Dummy all-zero input with the same NHWC shape as my images.
const shape = [1, 2560, 1920, 3];
const tensor = tf.fill(shape, 0, 'int32');
await model.executeAsync(
  { image_tensor: tensor },
  ['detection_boxes', 'detection_scores', 'detection_classes', 'num_detections'],
);
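Since executeAsync rejects its promise when this happens, the failure shows up as an UnhandledPromiseRejectionWarning unless it is caught. A minimal sketch of handling the rejection, using a stub object in place of the real loaded model (the stub and its hard-coded error are illustrative only):

```javascript
// Stub standing in for the real frozen model; its executeAsync rejects
// with the same message the real graph produces in this report.
const model = {
  async executeAsync(inputs, outputs) {
    throw new Error(
      'Error in concat1D: rank of tensors[2] must be the same as the rank of the rest (1)');
  },
};

// Wrapping the await in try/catch turns the unhandled rejection
// into an ordinary, handled error.
async function runDetection(tensor) {
  try {
    return await model.executeAsync(
      { image_tensor: tensor },
      ['detection_boxes', 'detection_scores', 'detection_classes', 'num_detections'],
    );
  } catch (err) {
    console.log('prediction failed:', err.message);
    return null;
  }
}

runDetection(null);
```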

As a side note: if I take the same trained, frozen model and run the prediction in Python, it works.

(node:13376) UnhandledPromiseRejectionWarning: Error: Error in concat1D: rank of tensors[2] must be the same as the rank of the rest (1)
    at Object.assert (D:\Projects\jsblur\node_modules\@tensorflow\tfjs-core\dist\util.js:40:15)
    at D:\Projects\jsblur\node_modules\@tensorflow\tfjs-core\dist\ops\concat_util.js:7:14
    at Array.forEach (<anonymous>)
    at Object.assertParamsConsistent (D:\Projects\jsblur\node_modules\@tensorflow\tfjs-core\dist\ops\concat_util.js:6:12)
    at concat_ (D:\Projects\jsblur\node_modules\@tensorflow\tfjs-core\dist\ops\concat_split.js:36:19)
    at Object.concat (D:\Projects\jsblur\node_modules\@tensorflow\tfjs-core\dist\ops\operation.js:23:29)
    at Object.exports.executeOp (D:\Projects\jsblur\node_modules\@tensorflow\tfjs-converter\dist\src\operations\executors\slice_join_executor.js:10:25)
    at Object.executeOp (D:\Projects\jsblur\node_modules\@tensorflow\tfjs-converter\dist\src\operations\operation_executor.js:47:30)
    at _loop_1 (D:\Projects\jsblur\node_modules\@tensorflow\tfjs-converter\dist\src\executor\graph_executor.js:258:52)
    at GraphExecutor.processStack (D:\Projects\jsblur\node_modules\@tensorflow\tfjs-converter\dist\src\executor\graph_executor.js:282:13)
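The trace points at the rank-consistency assertion in tfjs-core's concat_util. A simplified, self-contained sketch of that check (helper name is hypothetical; each tensor is represented only by its shape array) shows how a single tensor of a different rank produces exactly this message:

```javascript
// Simplified version of the consistency check that tf.concat runs over its
// inputs: every tensor must have the same rank as the first one.
function assertRanksConsistent(shapes) {
  const rank = shapes[0].length;
  shapes.forEach((shape, i) => {
    if (shape.length !== rank) {
      throw new Error(
        `Error in concat1D: rank of tensors[${i}] must be the same ` +
        `as the rank of the rest (${rank})`);
    }
  });
}

// Two rank-1 shapes followed by a rank-2 shape reproduce the report's message:
try {
  assertRanksConsistent([[3], [4], [2, 2]]);
} catch (e) {
  console.log(e.message);
  // prints: Error in concat1D: rank of tensors[2] must be the same as the rank of the rest (1)
}
```

In other words, somewhere in the converted graph a concat node is being fed one input whose rank disagrees with its siblings, even though the same graph evaluates fine in Python.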

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Comments: 26 (2 by maintainers)

Most upvoted comments

@hsparrow and @xusongpei: The issue you are facing should be fixed in the next release of tfjs-converter. You don't need to convert the model again; the fixes are on the JavaScript side.