BotFramework-WebChat: Error: The first argument must be a Cognitive Services audio stream

Hello,

I am facing issues while trying to use Cognitive Services with a QnA Azure chatbot. I can get answers if I type a question, but when I try to get an answer using the microphone, the bot crashes immediately after returning the text spoken into the microphone. The behavior is the same in Chrome and Firefox on desktop, as well as Safari on iOS.

These are the two errors I see in the Chrome browser console.

webchat.js:2 Error: The first argument must be a Cognitive Services audio stream.
    at new t (webchat.js:2)
    at t.default (webchat.js:2)
    at webchat.js:2
    at Object.useMemo (webchat.js:2)
    at useMemo (webchat.js:2)
    at e (webchat.js:2)
    at Ji (webchat.js:2)
    at webchat.js:2
    at Ha (webchat.js:2)
    at Wa (webchat.js:2)
la @ webchat.js:2
webchat.js:2 WebSocket is already in CLOSING or CLOSED state.

I am using very simple code for this:

(async function() {
  const adapters = await window.WebChat.createDirectLineSpeechAdapters({
    fetchCredentials: {
      region: 'eastus',
      subscriptionKey: 'MY_SPEECH_SUBSCRIPTION_KEY'
    }
  });

  // Pass the set of adapters to Web Chat.
  window.WebChat.renderWebChat(
    {
      ...adapters
    },
    document.getElementById('webchat')
  );

  document.querySelector('#webchat > *').focus();
})().catch(err => console.error(err));

Any help on this is much appreciated.

Just to add to this: if I check the activity logs for my Speech service, I see that my requests are successfully processed, so I believe speech-to-text is working fine. It seems to fail somewhere while sending this text to my QnA chatbot to get the answer, although I might be completely wrong about that.
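
One way to narrow down where it fails is to log every activity Web Chat receives and check whether the bot's replies carry a speak field. A minimal sketch, assuming the bundled Web Chat API (window.WebChat.createStore and the 'DIRECT_LINE/INCOMING_ACTIVITY' action type are part of Web Chat's Redux store contract):

// Sketch: a store middleware that logs each incoming activity's speak field.
const store = window.WebChat.createStore({}, () => next => action => {
  if (action.type === 'DIRECT_LINE/INCOMING_ACTIVITY') {
    const { activity } = action.payload;
    console.log('Incoming activity:', activity.type, 'speak:', activity.speak);
  }
  return next(action);
});

// Pass the store alongside the adapters:
window.WebChat.renderWebChat({ ...adapters, store }, document.getElementById('webchat'));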

Most upvoted comments

@shakil-san, the code snippet I provided belongs in the bot’s source files (either added there or used to update the existing code). The code you referenced above is for the Web Chat client, which is just the interface between your bot and the user.

If you don’t already know, you can access your bot’s files on Azure. Simply log in, locate the resource group for the QnA bot you created, navigate to the bot registration (i.e. the Web App Bot), click the “Build” blade, and from there you can download the bot’s source code.
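
If you prefer the command line, the Azure CLI can download the same source code; a sketch, assuming the az bot command group is available and using placeholder names:

# Download the source code of the web app associated with the bot.
az bot download --name MyQnABot --resource-group MyResourceGroup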

With the bot’s source code, you can run it locally and test before redeploying. If you are confident in your changes, there is a redeploy script that will push them to Azure.
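
For a Node.js bot generated from the Azure template, running locally usually looks something like the following (port 3978 is the default in the botbuilder samples; adjust if your bot differs):

npm install
npm start
# Then connect the Bot Framework Emulator to http://localhost:3978/api/messages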

The second attached image is from the web app online editor, but you should see a file structure similar to this. Notice the sendActivity() calls; these are where you would apply the speak property (when the first argument to sendActivity() is an activity object, the speak value is read from the activity itself). Use the code snippet I provided as a reference for how to structure the activity. For example, starting at line 116, the code might be updated to the following. Keep in mind, you may need to make adjustments to meet your needs.

116    var message = QnACardBuilder.GetSuggestionCard(suggestedQuestions, qnaDialogResponseOptions.activeLearningCardTitle, qnaDialogResponseOptions.cardNoMatchText);
117    message.speak = "Here are some suggested questions for you to consider.";
118    await stepContext.context.sendActivity(message);

[Screenshot]

[Screenshot: the bot’s file structure and sendActivity() calls in the web app online editor]

This should be enough to get you set up. I’m going to close this issue as resolved. If, however, you continue to experience issues or errors around this, please feel free to reopen it. Any “how to” questions, though, should be posted on Stack Overflow.

@shakil-san, when your bot sends an activity back to Direct Line Speech (DLS), is it setting the speak tag on the activity?

In sample 11, the bot sends an activity without first setting the speak tag:

const qnaResults = await this.qnaMaker.getAnswers(context);

// If an answer was received from QnA Maker, send the answer back to the user.
if (qnaResults[0]) {
    // Passing in a string does not result in an activity with a speak tag.
    await context.sendActivity(qnaResults[0].answer);
}
(link)

When DLS receives an activity without a speak tag, it will not generate an audio stream to send to the client, which is the cause of the error.

Changing the code to the following should address the issue:

const qnaResults = await this.qnaMaker.getAnswers(context);

// If an answer was received from QnA Maker, send the answer back to the user,
// setting both text and speak so DLS can synthesize audio for the reply.
if (qnaResults[0]) {
    await context.sendActivity({ text: qnaResults[0].answer, speak: qnaResults[0].answer });
}

Please try this and let us know if it works.
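
If you would rather not touch every sendActivity() call, another option is bot middleware that copies text into speak on every outgoing message. A minimal sketch, assuming a botbuilder (JavaScript) adapter; the middleware object and where you register it are illustrative:

// Sketch: ensure every outgoing message activity carries a speak field,
// so Direct Line Speech always has something to synthesize.
adapter.use({
    async onTurn(context, next) {
        context.onSendActivities(async (ctx, activities, nextSend) => {
            for (const activity of activities) {
                if (activity.type === 'message' && activity.text && !activity.speak) {
                    activity.speak = activity.text;
                }
            }
            return nextSend();
        });
        await next();
    }
});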

Will investigate this.

I am able to reproduce this issue with a QnA bot that has DLS set up. We will need to investigate the cause.

@compulim can we get your eyes on this?

@corinagum Thanks for your help

Here is the output from the browser console. It seems to be version 4.8.0.

<meta name="botframework-directlinespeech:version" content="4.8.0">
<meta name="botframework-webchat:bundle:variant" content="full">
<meta name="botframework-webchat:bundle:version" content="4.8.0">

To determine what version of Web Chat you are running, open your browser’s development tools, and paste the following line of code into the console.

[].map.call(document.head.querySelectorAll('meta[name^="botframework-"]'), function (meta) { return meta.outerHTML; }).join('\n')

If you are using Web Chat outside of a browser, please specify your hosting environment. For example, React Native on iOS, Cordova on Android, SharePoint, PowerApps, etc.

On your running Web Chat app, you can also just open developer tools and check the meta tags inside the <head> of the HTML.

I will spin up a QnA bot with Cognitive Services and see if I can reproduce your problem. I will report back here.