webpack-dev-server: Hard to catch error on proxying in Node.js 10

  • Operating System: macOS 10.14.1
  • Node Version: 10.15.0
  • NPM Version: yarn 1.13.0
  • webpack Version: 4.23.1
  • webpack-dev-server Version: 3.1.14
  • This is a bug
  • This is a modification request

Code

The bug I’m reporting here showed up after switching to Node.js v10.x. I believe there are differences in how the net core module raises exceptions, or in its policy toward unhandled 'error' events.

To repro, you need to proxy a connection that uses WebSockets, allow it to connect using Chrome, then restart the target server of the proxy, for example with nodemon. I’d like to avoid creating a minimal repro of this if possible, since the setup is a bit complex and there is already a nice trail of tickets/changelogs.
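
A minimal sketch of a target server for this repro, assuming the ws package (the log below shows a TLS target at https://localhost:3333; plain ws is used here for brevity):

// server.js: restart this under nodemon while a browser client stays
// connected through the WDS proxy (port taken from the log below):
const { WebSocketServer } = require('ws'); // ws v8+

const wss = new WebSocketServer({ port: 3333 });
wss.on('connection', (socket) => socket.send('hello'));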

This will cause the WDS process to die due to an unhandled 'error' event from ECONNRESET, with the following stack trace:

[HPM] Error occurred while trying to proxy request /foo from localhost:9090 to https://localhost:3333 (ECONNREFUSED) (https://nodejs.org/api/errors.html#errors_common_system_errors)
events.js:167
      throw er; // Unhandled 'error' event
      ^

Error: read ECONNRESET
    at TLSWrap.onStreamRead (internal/stream_base_commons.js:111:27)
Emitted 'error' event at:
    at emitErrorNT (internal/streams/destroy.js:82:8)
    at emitErrorAndCloseNT (internal/streams/destroy.js:50:3)
    at Object.apply (/Users/kieran/work/Atlas/ui/node_modules/harmony-reflect/reflect.js:2064:37)
    at process._tickCallback (internal/process/next_tick.js:63:19)

Before upgrading to Node.js 10, I believe something similar was happening, but the exception wasn’t fatal.

After some debugging and ticket searching, it seems that what changed is that it is now fatal for the https module to emit an 'error' event without a registered handler, which impacts the behavior of your dependency node-http-proxy, as this ticket describes.
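
For context, a minimal sketch (not WDS code) of the EventEmitter semantics involved; an 'error' event with no registered listener is thrown and kills the process:

const { EventEmitter } = require('events');

const socketLike = new EventEmitter();

// Remove this handler and the emit below throws, crashing the process
// exactly like the stack trace above:
socketLike.on('error', (err) => console.error('handled:', err.message));

socketLike.emit('error', new Error('read ECONNRESET'));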

Debugging the error, I discovered that the WDS proxy config is really http-proxy-middleware config, so it is possible to suppress the error and keep the server up by adding an onError handler to the proxy config (noted here for future readers). However, this was fairly difficult to debug, and ideally this package would register some sort of handler for this exception so that it isn’t fatal.

I’m willing to submit a PR; the change is very straightforward. It involves modifying this code:

  // Proxy websockets without the initial http request
  // https://github.com/chimurai/http-proxy-middleware#external-websocket-upgrade
  websocketProxies.forEach(function (wsProxy) {
    this.listeningApp.on('upgrade', wsProxy.upgrade);
  }, this);

To be like:

  // Proxy websockets without the initial http request
  // https://github.com/chimurai/http-proxy-middleware#external-websocket-upgrade
  websocketProxies.forEach(function (wsProxy) {
    this.listeningApp.on('upgrade', (req, socket, ...args) => {
      // Handle socket-level errors so an ECONNRESET during the upgrade
      // doesn't become an unhandled 'error' event:
      socket.on('error', (err) => console.error(err));
      return wsProxy.upgrade(req, socket, ...args);
    });
  }, this);

Expected Behavior

The server should stay alive when a client (on either side of the proxy) disconnects.

Actual Behavior

The server dies because there is no handler for ECONNRESET.

For Bugs: How can we reproduce the behavior?

Use Node 10.

About this issue

  • Original URL
  • State: closed
  • Created 5 years ago
  • Reactions: 36
  • Comments: 97 (53 by maintainers)

Most upvoted comments

I got the same error. What actually helped me out was:

process.on('uncaughtException', function (err) {
  console.error(err.stack);
  console.log("Node NOT Exiting...");
});

Found the solution on StackOverflow. Thanks to Ross the Boss.

IIRC, this also may have depended on a specific Chrome version. Perhaps Chrome has changed its behavior again.

To clarify my original ticket, this is non-blocking because WDS allows users to pass config through to the proxy package. So if anyone reading this is blocked, all you need to do is add an error handler to the proxy entry in the devServer section of your webpack config:

{
  // ... other config
  devServer: {
    proxy: {
      '/your_path': {
        target: 'https://localhost:XXX',
        ws: true,

        // Add this:
        onError(err) {
          console.log('Suppressing WDS proxy upgrade error:', err);
        },
      },
    },
  },
}

This error can be caught using the onProxyReqWs event in the config:

devServer: {
  proxy: {
    '/my-websocket-path': {
      target: 'wss://my.ip',
      xfwd: true,
      ws: true,
      secure: false,
      onProxyReqWs: (proxyReq, req, socket, options, head) => {
        socket.on('error', function (err) {
          console.warn('Socket error using onProxyReqWs event', err);
        });
      }
    }
  }
}

@mecampbellsoup Yep, that would be great.

@alexander-akait @rafaelsales, as with so many things, this turned out to be a user error 😢

In short, my WDS proxy config was invalid. Compare the following two:

Bad

proxy: {
  '/api': process.env.PROXY_URI,
  ws: true,
  onError: err => {
    console.log('Suppressing WDS proxy upgrade error:', err);
  },
},

Good

proxy: {
  '/api': {
    target: process.env.PROXY_URI,
    ws: true,
    onError: err => {
      console.log('Suppressing WDS proxy upgrade error:', err);
    },
  },
},

I blame JS/WDS for letting me pass an invalid but correct-looking proxy configuration object!

This has just started happening to me but I’m unsure of the change that caused it.

  • Operating System: macOS 10.14.5
  • Node Version: 10.15.3
  • NPM Version: 6.9.0
  • webpack Version: 4.31.0
  • webpack-dev-server Version: 3.3.1

The onError addition above doesn’t make a difference; I still get the crash with:

events.js:174                                                                       
       throw er; // Unhandled 'error' event
       ^
Error: read ECONNRESET
     at TCP.onStreamRead (internal/stream_base_commons.js:111:27)
Emitted 'error' event at:
     at emitErrorNT (internal/streams/destroy.js:82:8)
     at emitErrorAndCloseNT (internal/streams/destroy.js:50:3)
     at process._tickCallback (internal/process/next_tick.js:63:19)

It happens at the point I try to open a websocket through the proxy (it doesn’t always happen):

proxy: {
  "/api": {
    target: "http://localhost:8181",
  },
  "/api/realtime": {
    target: "ws://localhost:8181",
    ws: true,
  },
  "/auth": {
    target: "http://localhost:8181",
  },
}

I tried reproducing this error in a minimal repo, but I can’t. I tried with Node.js 10, 11, and 12. Node.js 12 doesn’t work with WDS, though (I get a weird error message, different from the one posted above, maybe because I’m on Windows).

Maybe you guys can use the repo I created. I have definitely seen this behavior in our project using NodeJS 11.x as well:

https://github.com/dhardtke/wds-ws-proxy-crash

Yes, it’s still an issue.

No, I can’t produce a minimal repro; concurrency is hard.

Please just fix it ❤️

Appreciate your feedback, @maknapp!

Published http-proxy-middleware@2.0.6 with your suggestion.

Let me know if this fix helps.

Bit challenging without a reproduction of the bug 😬

Can you imagine going a year and a half commenting over and over expecting people to be able to reproduce some obscure race condition instead of just taking the first comment which had code completely laid out to fix this? A year and a half ago.

There’s no way you can convince me that all of these comments over the last year and a half constituted less effort than just rolling in the fix.

I worked around this long ago by simply avoiding this broken functionality. I feel for anyone still dealing with this after a year and a half.

@chimurai I can confirm that version 2.0.6 solves the issue. It used to crash for me every time the app pool recycled (IIS), but now it just logs this and keeps running:

[webpack-dev-server] [HPM] WebSocket error: Error: read ECONNRESET
    at TCP.onStreamRead (node:internal/stream_base_commons:217:20) {
  errno: -4077,
  code: 'ECONNRESET',
  syscall: 'read'
}

So now I can remove the workaround, great work! 🙂

Hi @alexander-akait.

Might have found the issue while working on the next version of http-proxy-middleware.

I noticed http-proxy has an undocumented econnreset event.

Added a default handler to econnreset; hopefully this will prevent the server from crashing. 🤞 I haven’t been able to replicate this bug either.
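
A rough sketch of that idea against node-http-proxy directly; the econnreset event is undocumented, so the argument list here mirrors http-proxy's error event and is an assumption:

const httpProxy = require('http-proxy');

const proxy = httpProxy.createProxyServer({
  target: 'http://localhost:8181', // assumed target
  ws: true,
});

// Default handler so a connection reset no longer crashes the process:
proxy.on('econnreset', (err, req, socket) => {
  console.warn('[HPM] WebSocket error:', err);
});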

Just published http-proxy-middleware@2.0.5

Curious to hear if v2.0.5 fixes this gnarly bug

Here is a reproducible test case, using Node v16.3.0 and Chrome 91.0.4472.114 on macOS 11.4.

I’m using Vue, so I just followed the webpack documentation to create a project. You don’t even need an actual server listening: the process never gets that far for me.

@chimurai Yep, it is an interesting problem. I reproduced it using IE11/IE10 with proxied websockets and sockjs. I don’t know why it happens; anyway, I’ll try to reproduce it again and post feedback on the next version.

After spending 5 hours, I found out how to reproduce the problem.

@cha0s Because any line of code should be tested; adding lines without tests can have side effects like regressions, new problems, etc., so I ask developers to provide more information for tests.

I apologize for my tone. I will just say that it is always possible for perfect to be the enemy of good enough. It’s better (for your loving users) to fix and then keep the issue open to perfect the fix if you believe the very first comment’s fix is wrong. I haven’t seen a good argument for why it is the wrong fix, and I suspect it will be the ultimate resolution.

@evilebottnawi ok, here it is (based on CRA):

https://github.com/alexkuz/wds-ws-proxy-example

To reproduce:

  • install deps with yarn install
  • run server with node src/server.js
  • open dev app with yarn start

The server returns a 502 error instead of a proper WS response (it turns out that’s what Nginx does on my server when, for example, I’m restarting the container with my actual WS server), which leads to the ECONNRESET error.
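
A sketch that simulates that Nginx behavior for anyone trying to reproduce locally; the port is an assumption, not taken from the linked repo:

const http = require('http');

const server = http.createServer((req, res) => {
  res.writeHead(502);
  res.end('Bad Gateway');
});

// Answer the WebSocket handshake with a plain 502 and drop the socket;
// the abrupt close is what the proxy surfaces as ECONNRESET:
server.on('upgrade', (req, socket) => {
  socket.end('HTTP/1.1 502 Bad Gateway\r\n\r\n');
});

server.listen(8181);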

You can set up logging for http-proxy-middleware; there are options for it.
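
For example, a sketch of those options in a WDS proxy entry (http-proxy-middleware v2.x; the path and target are placeholders):

proxy: {
  '/api': {
    target: 'http://localhost:8181',
    ws: true,
    logLevel: 'debug', // 'debug' | 'info' | 'warn' | 'error' | 'silent'
    logProvider: () => console, // any logger with the console API works
  },
},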

v2.0.6 is working for me as well. Thanks @chimurai!

Glad to hear the fix is working.

Thanks @alexander-akait for helping out in this thread. 💪

@chimurai Thanks for fast fix. It works now.

I can confirm this fixes the issue with webpack-dev-server@4.7.4 and http-proxy-middleware@2.0.5

@chimurai Big thanks

@Delagen Can you test it without the hack and with the new version?

The issue is about 3 years old, but the patch still lives in my sources as a postinstall action:

diff --git a/node_modules/webpack-dev-server/lib/Server.js b/node_modules/webpack-dev-server/lib/Server.js
--- a/node_modules/webpack-dev-server/lib/Server.js
+++ b/node_modules/webpack-dev-server/lib/Server.js
@@ -1840,7 +1840,10 @@ class Server {
       (this.server).on(
         "upgrade",
         /** @type {RequestHandler & { upgrade: NonNullable<RequestHandler["upgrade"]> }} */
-        (webSocketProxy).upgrade
+        (req, socket, ...args) => {
+            socket.on('error', (err) => console.error(err));
+            return webSocketProxy.upgrade(req, socket, ...args);
+        }
       );
     }, this);
   }

@alexander-akait Why do we still need this ugly hack?

Dev-Server: 4.7.3 NodeJS: 17.9.0

@alexander-akait This is the error message printed to the console when I recycle the IIS app pool causing WDS to crash:

events.js:352
      throw er; // Unhandled 'error' event
      ^

Error: read ECONNRESET
    at TCP.onStreamRead (internal/stream_base_commons.js:209:20)
Emitted 'error' event on Socket instance at:
    at emitErrorNT (internal/streams/destroy.js:106:8)
    at emitErrorCloseNT (internal/streams/destroy.js:74:3)
    at processTicksAndRejections (internal/process/task_queues.js:82:21) {
  errno: -4077,
  code: 'ECONNRESET',
  syscall: 'read'
}

In our case this issue is very easy to reproduce. We use webpack-dev-server to proxy websocket requests to an application running in IIS. To reproduce this issue we simply recycle the app pool of the site.

Have used patch-package to apply @kmannislands’ fix, but now I see the problem can be resolved with @maknapp’s config suggestion, which feels like a cleaner workaround. Would like to see it fixed properly though 🙂

@mlilback Sorry for the delay; thanks, I will look at this in the near future.

@mecampbellsoup Maybe you can create a simple reproducible test repo? I will look; I think we fixed it in master, just want to check.

When I run this app outside of minikube, I don’t seem to have any WS connection issues or disconnects, so creating a reproducible test repo is going to require a Docker Compose or minikube deploy script… if you need this, I’ll need some time.

The case I’m facing seems pretty typical. I have an Angular client app served by webpack-dev-server (with the standard Angular proxy config). I use the proxy to expose the backend to the client app. Recently I added a web socket connection (in particular, to handle Apollo GraphQL subscriptions over websockets), which is proxied. During development, on each backend restart the proxy dies due to this issue. After manually applying the proposed change from the issue description to Server.js, the proxy stays alive and the client is able to reconnect on subsequent backend restarts.
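
For illustration, a proxy.conf.js sketch for this Angular setup; the GraphQL path and backend port are assumptions:

module.exports = {
  '/graphql': {
    target: 'http://localhost:4000',
    ws: true, // required so the GraphQL subscription websocket is proxied
  },
};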

@evilebottnawi

"webpack": "^4.43.0",
"webpack-dev-server": "^3.11.0",

proxy config in webpack options:

  proxy: {
    '/v3/websocket/socketServer.do': {
      target: 'http://localhost:8080/',
      changeOrigin: true,
      ws: true,
    },
    '/v3': {
      target: 'http://localhost:8080/',
      changeOrigin: true,
    },
  },

Steps:

  1. Start java or any other server (websocket server)
  2. Start dev server (websocket client)
  3. Open the browser using the dev URL, and the websocket code in it will connect to the server
  4. Shut down the server forcefully; the client attempts to reconnect at 3-second intervals (see the sketch after these steps)
  5. Don’t close the browser or tabs and wait a few minutes or more time
  6. The dev server crashes
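
A sketch of the browser-side reconnect loop from these steps; the path comes from the proxy config above, the 3-second interval from step 4:

function connect() {
  const ws = new WebSocket(`ws://${location.host}/v3/websocket/socketServer.do`);
  ws.onclose = () => setTimeout(connect, 3000); // retry every 3 seconds
}
connect();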


Something with CORS, maybe?

I think not; it’s a problem in the ws implementation. I think you can look here: https://github.com/websockets/ws/issues/1692

@alexkuz Yes. Anyway, feel free to send a PR; I think it should be easy to fix 👍

Update: I think an env variable is better to use in your case; it is more flexible and faster.

Could you submit a reproducible repo?

Yes, I can try to muster up a repro repo when I have a spare second sometime this week!