socket.io: AWS EC2 Behind ELB Always Prints Error Unexpected response code: 400
Hey guys -
Can’t seem to find out a solution for this, but when I run my app behind a load balancer, I get this error on the client:
WebSocket connection to 'wss://fakedomain.com/socket.io/?EIO=3&transport=websocket&sid=QH8VmXbiEcp3ZyiLAAAD' failed: Error during WebSocket handshake: Unexpected response code: 400
I understand the error, since it’s trying to talk to the load balancer and not the EC2 instance (I’m not great with AWS, so feel free to offer help on this one!), but what I don’t understand is how to make the error not show up!
I’d love to fix the root cause, but I’m guessing that involves a separate dedicated socket.io server to handle all the real-time stuff, which I don’t have time for at the moment. Could someone please run me through suppressing this error?
I’m assuming it’s falling back to polling, which seems to work just fine (I have the socket connection connected and it fires) but I don’t want to launch with a red error in my console.
Thanks in advance for any advice you might have!
About this issue
- State: closed
- Created 10 years ago
- Comments: 67
I assume you’re not using Elastic Beanstalk (the instructions there would be much easier).
Go to EC2->Network & Security->Load Balancers
Select your load balancer and go to Listeners. Ensure that both the Load Balancer protocol and the Instance Protocol are set to TCP for port 80 and SSL for port 443 rather than HTTP and HTTPS.
I found that on Elastic Beanstalk, even though I had set up the load balancer to use TCP and SSL through the web interface, when I checked the load balancer config directly it was still using HTTPS for the secure listener. So you should probably check in the EC2 > Load Balancers section that the ports are set up as they should be:
Once that is sorted, add the following to .ebextensions and all works well for me 😃
This will only work 100% when you have a single instance, however, because sticky-session support doesn’t work over TCP.
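For reference, a minimal sketch of what such an `.ebextensions` config might look like, assuming the classic-ELB `aws:elb:listener` option namespace; the ports and certificate ARN below are placeholders, not the original poster’s actual settings:

```yaml
# .ebextensions/01-listeners.config -- hypothetical sketch, adjust to your setup.
# Switches the listeners to TCP/SSL so the ELB passes websocket traffic through
# at layer 4 instead of terminating it as HTTP/HTTPS.
option_settings:
  aws:elb:listener:80:
    ListenerProtocol: TCP
    InstancePort: 80
    InstanceProtocol: TCP
  aws:elb:listener:443:
    ListenerProtocol: SSL
    InstancePort: 80
    InstanceProtocol: TCP
    SSLCertificateId: arn:aws:iam::123456789012:server-certificate/your-cert
```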
Thanks @niczak !!
I finally got mine working as well. However, in addition to configuring the listeners as described above,
I also needed to set the Proxy server to “none” in the Software Configuration section.
As of Aug 2016 you can use Amazon’s Application Load Balancer (ALB) instead of the “classic” ELB. Worked “out of the box” for me with listeners on HTTP 80 and HTTPS 443.
I have set up Route 53 to manage my domain. I have configured a Hosted Zone with a record that points my domain directly to the load balancer.
The load balancer is configured to forward requests to my Tomcat instances, as shown here:
The security group for the ELB is pretty standard:
@niczak - I also switched to ALB for socket traffic stickiness. However, I first struggled with the ALB sticky-sessions configuration (there wasn’t much documentation on this subject, at least a few months ago), but finally figured it out. The key thing is that “stickiness is defined at a target group level”, so make sure you create a target group to serve your traffic and then add stickiness to it.
We saw this basic issue, too. Because SocketIO sends two requests to establish the websocket connection, if those calls get federated across instances, the session ID is not recognized and the handshake fails, leading to SocketIO falling back to long polling. This is true whether you’re using ws or wss; it assumes you’ve set up load balancing at layer 4 (TCP), as I could not get layer-7 (HTTP) load balancing to work with websockets.
The fix would be to enable sticky sessions, so that once a call goes across to an instance, the load balancer continues to send those requests to the same instance; however, if you’re using AWS’ ELBs, they do not allow you to use sticky sessions for layer-4 load balancing.
What we ended up doing was using raw websockets, rather than SocketIO, as then there is no session data that needed to be persisted across calls (just one call, the handshake, takes place, and that works).
A fix on AWS’ part would be to allow sticky sessions to be used in some manner for layer-4 load balancing.
A fix on SocketIO’s part would be to make it so the websocket handshake does not require the context of the prior session ID. I don’t know the purpose or lifecycle around this; I assume it exists both to help long polling and to reconnect more quickly after a connection is terminated. If the client determined a connection had died, it could generate a new session and reconnect successfully, rather than have the backend refuse the handshake because it believed a connection already existed.
A potential fix would be to use Redis to persist session state across all instances. I have not tested this.
A workaround, if your load allows it, is to also ensure only a single node instance per availability zone, and disable cross-zone load balancing. That way, all calls will default to a single node, avoiding this issue.
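Following on from the analysis above, a client-side workaround sometimes used for this (a hedged sketch, not something this commenter confirms; assumes socket.io-client is installed and the URL is a placeholder) is to force the websocket transport from the start, so there is no polling handshake whose session ID needs to stick to a single instance:

```javascript
// Hypothetical sketch: skip the HTTP long-polling handshake entirely, so no
// session ID has to be pinned to one instance behind the load balancer.
function websocketOnlyOptions() {
  return { transports: ['websocket'] }; // no polling, no upgrade handshake
}

// Usage (assumes socket.io-client; fakedomain.com is a placeholder):
// const io = require('socket.io-client');
// const socket = io('https://fakedomain.com', websocketOnlyOptions());
```

The trade-off is losing the polling fallback for clients on networks that block websockets.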
I seem to have solved my problem by adding Route 53 in front of the load balancer. I now have a working websocket with secure connections in the AWS environment.
Try adding Route 53; if this doesn’t solve your problem, I’ll be happy to share my configuration.
With this line, it helps to sit the Jupyter server behind the ELB.
In case somebody comes to this ticket with Kubernetes on AWS.
It seems like this thread became a place for figuring out issues with AWS/socket.io, so:
My setup is that the client makes an HTTPS request with data for a socket.io connection. The HTTPS request should respond with the cookies needed for sticky sessions. The issue is that I’m unable to pass a `cookie` header in the socket.io connection.
I’m using an ALB with these listeners:
Which points to a target group with stickiness configured like this:
I hit the HTTPS endpoint first with:
then initialize the socket with
The instance the HTTPS request hits must be the instance the socket connection hits.
Once a socket is established (if it hits the right instance by dumb luck) it stays there and things work well, the issue is getting the websocket to hit the same endpoint as the HTTPS request.
My first instinct is to use
This works in node, but in the browser `cookie` is a forbidden header on XMLHttpRequest, so I’m unable to send it along.
Has anyone done something similar?
The rule for mapping the hash to an instance is the same across all HAProxy instances. That is, A.B.C.D will result in the same hash no matter the HAProxy instance it hits, and that hash will map to the same server, no matter the HAProxy instance in use, provided that every HAProxy knows about the same instances (and possibly in the same order/the same names). The actual IPs do not have to be known up front, because they’re immaterial. Any IP will hash to the same server regardless of the HAProxy it passes through.
For a completely faked example (HAProxy uses something less predictable, I’m sure), imagine we had three servers, configured in order as server0, server1, and server2 (the actual details don’t matter, just that there is a clear ordering; implicit is fine). Every HAProxy instance has the same servers, in the same order. They also have the same rule for how to deal with an incoming IP address. For this trite example, it’s: add up all four parts of the address as integers (easily extensible to support IPv6 by adding all parts as hex into base 10), divide by 3, and take the remainder. So IP 123.12.61.24 = (123+12+61+24) % 3, or 1. So it gets routed to server1. No matter which HAProxy instance it comes into, that connection will be sent to server1.
Now, if server1 goes offline, the algorithm changes across all HAProxy instances, to being add all the parts, modulus 2.
And this generally works -unless- you get a netsplit (or configure the HAProxy instances to have different server lists), where one HAProxy sees 3 instances and another sees 2. If that happens, you can’t be guaranteed to reach the same backend, and indeed that can prove to be a problem. But that’s the solution being described in the link.
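The faked example above can be written out in a few lines. This is only the toy scheme from this comment (sum the octets, take the remainder by the server count), not HAProxy’s real `source` balancing hash:

```javascript
// Toy consistent-routing sketch from the example above: any proxy that shares
// the same ordered server list maps a given IP to the same backend index.
function pickServer(ip, serverCount) {
  const sum = ip
    .split('.') // '123.12.61.24' -> ['123', '12', '61', '24']
    .reduce((acc, part) => acc + parseInt(part, 10), 0);
  return sum % serverCount; // index into the ordered server list
}

// 123 + 12 + 61 + 24 = 220; 220 % 3 = 1 -> server1
// If one of the three servers drops out, the modulus becomes 2: 220 % 2 = 0
```

This also makes the netsplit problem concrete: a proxy that still believes there are 3 servers routes the same IP differently from one that sees 2.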
On Wed, Jan 13, 2016 at 3:51 PM, Felix Schlitter notifications@github.com wrote:
An update from my original post:
I changed my HTTPS listener to SSL (443 -> my custom port) and left HTTP (80 -> my custom port).
I have code in Express that checks `!req.secure && X-Forwarded-Proto !== 'https'` and redirects you to HTTPS if you come in on port 80…
HOWEVER
Changing to SSL makes X-Forwarded-Proto come back undefined, breaking this check. (Also, req.secure seems like it may be deprecated.)
So now:
seems to work great, and I don’t get the 400 error in the console any more.
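The behaviour described above can be sketched as a plain function (a hedged illustration, not this poster’s actual middleware; `shouldRedirectToHttps` and the `=== 'http'` comparison are my assumptions about one way to tolerate the missing header):

```javascript
// Sketch of the HTTPS-redirect check discussed above. With an HTTPS listener,
// the ELB sets x-forwarded-proto; with an SSL (TCP) listener it does not, so a
// `!== 'https'` check would wrongly redirect when the header is undefined.
function shouldRedirectToHttps(req) {
  const proto = req.headers['x-forwarded-proto'];
  // Only redirect when the proxy explicitly reports plain http; an undefined
  // header (SSL/TCP listener) is left alone.
  return !req.secure && proto === 'http';
}
```

In Express this would run as middleware, calling `res.redirect` when it returns true.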
@yoav200 think you could share your whole configuration for what made things work for you? I.e., ports + protocols (HTTP/TCP) in the EBS security settings, any .ebextensions you needed, etc. I tried a ton of different things to hack my nginx configs to keep the upgrade headers and send them to node, but nothing worked for me. FWIW, a wiki page would be really, really helpful for this. The only thing that did work was going directly to my EC2 instance.
Oh man. This is solid advice I haven’t seen elsewhere. I’ll try in the AM and report back. Thank you!
On Wed, Oct 29, 2014, 7:48 PM Vadim Kazakov notifications@github.com wrote: