Vitis-AI: Compilation error: Check failed: size % matmul_row_parallel == 0 size: 32, matmul_row_parallel: 64

Hello! I launched the Vitis-AI-RNN v2.0 docker image by running “docker_run_rnn.sh” to quantize and compile the following LSTM autoencoder:

class Encoder(nn.Module):
    def __init__(self, seq_len, n_features, embedding_dim, device):
        super(Encoder, self).__init__()
        self.batch_size = 1
        self.device = device
        self.seq_len, self.n_features = seq_len, n_features
        self.embedding_dim, self.hidden_dim = embedding_dim, 2 * embedding_dim
        self.rnn1 = nn.LSTM(
            input_size=self.n_features,
            hidden_size=self.embedding_dim,
            num_layers=1,
            batch_first=True
        )
    def forward(self, x):
        x = x.reshape((self.batch_size, self.seq_len, self.n_features))
        x, (hn, _) = self.rnn1(x)
        return hn.reshape((self.n_features, self.embedding_dim))
     
class Decoder(nn.Module):
    def __init__(self, seq_len, input_dim, n_features):
        super(Decoder, self).__init__()
        self.batch_size = 1
        self.seq_len, self.input_dim = seq_len, input_dim
        self.hidden_dim, self.n_features = 2 * input_dim, n_features
        self.rnn1 = nn.LSTM(
            input_size=input_dim,
            hidden_size=self.hidden_dim,
            num_layers=1,
            batch_first=True
        )
        self.fc = nn.Linear(self.hidden_dim, n_features)
    def forward(self, x):
        x = x.repeat(self.seq_len, self.n_features)
        x = x.reshape((self.batch_size, self.seq_len, self.input_dim))
        x, _ = self.rnn1(x)
        x = x.reshape((self.seq_len, self.hidden_dim))
        out = self.fc(x)
        return out

class LSTMAutoencoder(nn.Module):
    def __init__(self, seq_len, n_features, device, embedding_dim=16):
        super(LSTMAutoencoder, self).__init__()
        self.device = device
        self.encoder = Encoder(seq_len, n_features, embedding_dim, device).to(self.device)
        self.decoder = Decoder(seq_len, embedding_dim, n_features).to(self.device)
    def forward(self, x):
        x = self.encoder(x)
        x = self.decoder(x)
        return x
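For context on the numbers in the error below: the model's LSTM hidden sizes are small (with the default embedding_dim=16, the decoder's hidden size works out to 2 × 16 = 32, matching the "size: 32" in the log). A minimal sketch of the divisibility check that appears to fail, assuming the constraint is simply `size % matmul_row_parallel == 0` as the error message suggests (the exact rule inside vai_c_rnn is not documented here):

```python
# Sketch: reproduce the size check reported by the compiler.
# matmul_row_parallel = 64 is taken from the DPURAHR16L error log;
# embedding_dim = 16 is the model's default (hypothetical otherwise).
embedding_dim = 16
hidden_sizes = {
    "encoder rnn1 hidden": embedding_dim,      # 16
    "decoder rnn1 hidden": 2 * embedding_dim,  # 32 -> the "size: 32" in the log
}
matmul_row_parallel = 64  # per the DPURAHR16L error message

for name, size in hidden_sizes.items():
    divisible = size % matmul_row_parallel == 0
    print(f"{name}: {size}, divisible by {matmul_row_parallel}: {divisible}")
```

Both sizes fail the check, which is consistent with the compiler aborting for DPURAHR16L but not necessarily for a target with a smaller row-parallel width.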

I managed to quantize the model and got two xmodels, but when I run the following command to compile them I get an error in the matmul operation.

(vitis-ai-rnn) Vitis-AI /workspace/> vai_c_rnn -x quantize_result/xmodel -t DPURAHR16L -b 3 -o output

**F0621 08:23:40.218751   167 u50_inst_gen.cc:731] Check failed: size % matmul_row_parallel == 0 size: 32, matmul_row_parallel: 64**
*** Check failure stack trace: ***
    @     0x7f09e03cb4dd  google::LogMessage::Fail()
    @     0x7f09e03d3071  google::LogMessage::SendToLog()
    @     0x7f09e03caecd  google::LogMessage::Flush()
    @     0x7f09e03cc76a  google::LogMessageFatal::~LogMessageFatal()
    @     0x56143b742324  dctc::compiler::backend::u50::U50InstrGen::emitHWInstForAddWithMatMulAsInput()
    @     0x56143b742d21  dctc::compiler::backend::u50::U50InstrGen::emitHWInstForAddOrSub()
    @     0x56143b74417f  dctc::compiler::backend::u50::U50InstrGen::emitHWInstrForAddInst()
    @     0x56143b76c55e  dctc::compiler::backend::InstrGenBase::visitAddInst()
    @     0x56143b76d4af  dctc::compiler::backend::InstrGenBase::emitInstr()
    @     0x56143b70b96d  dctc::compiler::backend::u50::U50Backend::doInstrGen()
    @     0x56143b71dd91  dctc::compiler::backend::u50::U50Backend::compile()
    @     0x56143b6f272a  dctc::compiler::compile::CompilerImpl::compile()
    @     0x56143b6f3033  dctc::compiler::compile::Compiler::compile()
    @     0x56143b6c12f1  main
    @     0x7f09df366c87  __libc_start_main
    @     0x56143b6ca3f9  (unknown)
Aborted (core dumped)
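If the constraint really is that every LSTM hidden size must be a multiple of matmul_row_parallel (an assumption based on the error message, not a confirmed fix), one possible workaround is to choose embedding_dim so that all derived sizes round up to a multiple of 64:

```python
# Sketch of a possible workaround (assumption, not a confirmed fix):
# pick embedding_dim so every LSTM hidden size is a multiple of 64,
# the matmul_row_parallel value reported for DPURAHR16L.
def round_up(n: int, multiple: int) -> int:
    """Round n up to the nearest multiple of `multiple`."""
    return ((n + multiple - 1) // multiple) * multiple

matmul_row_parallel = 64
embedding_dim = round_up(16, matmul_row_parallel)  # 16 -> 64
hidden_dim = 2 * embedding_dim                     # 128, also a multiple of 64

assert embedding_dim % matmul_row_parallel == 0
assert hidden_dim % matmul_row_parallel == 0
print(embedding_dim, hidden_dim)  # 64 128
```

A larger embedding changes the model's capacity, so it would need retraining and re-quantization; the accepted answer below (an updated vai_c_rnn package) may make this unnecessary.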

When I compile the same model with DPURADR16L, the process finishes successfully:

(vitis-ai-rnn) Vitis-AI /workspace > vai_c_rnn -x quantize_result/xmodel -t DPURADR16L -b 1 -o output

* VITIS_AI RNN Compilation - Xilinx Inc.
INFO: Running frontend: parsing xmodel(s) into JSON representation...
INFO: JSON files will be written to output/tmp_json
[DCTC Frontend][INFO] Begin parsing Layer DecoderRnn1_StandardLstmCell_layer_0_forward.
[DCTC Frontend][INFO] Begin parsing Layer EncoderRnn1_StandardLstmCell_layer_0_forward.
[DCTC Frontend][INFO] Parsing finished, JSON files are generated in output/tmp_json
INFO: Running dctc_driver: compiling the JSON model to HW IP instructions...
INFO: dctc_driver command: 'dctc_driver -i output/tmp_json -m json -t U25 -b 1 -o output'
I0621 08:24:03.832082   177 compiler.cc:82] [INFO] Loading Json model files from path: output/tmp_json
I0621 08:24:03.833421   177 compiler.cc:84] [INFO] Parsing IR module from Json model files...
W0621 08:24:03.834378   177 json_irgen.cc:637] [WARNING] [IRGen] node will be ignored because it has not parents or children, node: actv_sgmd.
W0621 08:24:03.834466   177 json_irgen.cc:637] [WARNING] [IRGen] node will be ignored because it has not parents or children, node: actv_tanh.
W0621 08:24:03.835237   177 json_irgen.cc:637] [WARNING] [IRGen] node will be ignored because it has not parents or children, node: actv_sgmd.
W0621 08:24:03.835258   177 json_irgen.cc:637] [WARNING] [IRGen] node will be ignored because it has not parents or children, node: actv_tanh.
I0621 08:24:03.835626   177 compiler.cc:65] [INFO] Compiling model for target: U25, with batch size: 1
I0621 08:24:03.835637   177 compiler.cc:67] [INFO] Preprocessing IR module...
I0621 08:24:03.836982   177 compiler.cc:69] [INFO] Optimizing IR module...
I0621 08:24:03.839864   177 compiler.cc:71] [INFO] Compiling IR module...
I0621 08:24:03.863512   177 compiler.cc:73] [INFO] **Compilation successful!** The output xmodel file is located in path: output.

Could you please help me figure out why I get this error with the matmul operation? Thanks in advance.

About this issue

  • State: closed
  • Created 2 years ago
  • Comments: 15

Most upvoted comments

Hi @AdrFebles, we have new vai_c_rnn packages for this issue. Could you please try building a new RNN docker? Or, if you already have the RNN docker, you can try the following steps inside the docker:

    sudo conda uninstall --force vai_c_rnn -n vitis-ai-rnn -y
    cd /tmp
    wget -O vai_c_rnn-1.4.1-py36h32e1ea0_5.tar.bz2 https://www.xilinx.com/bin/public/openDownload?filename=vai_c_rnn-1.4.1-py36h32e1ea0_5.tar.bz2
    sudo conda install -n vitis-ai-rnn vai_c_rnn-1.4.1-py36h32e1ea0_5.tar.bz2