payload: Cannot edit uploaded image when hosting Payload CMS behind a reverse proxy

Link to reproduction

No response

Describe the Bug

The issue occurs because we’re running the application behind a reverse proxy (nginx), which causes the getExternalFile fetch request to stall and time out. This happens because of how headers are forwarded here:

const res = await fetch(fileURL, {
  credentials: 'include',
  headers: {
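    // spreads every header from the incoming (POST) request, including Content-Length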
    ...req.headers,
  },
  method: 'GET',
})

When we dug into the issue, we realized that this forwards all headers from the incoming request to edit the image. That request is a POST request with a body, so it also carries a “Content-Length” header. But the GET request above doesn’t send a request body, so nginx never served it: it kept waiting for a body that never came.

I think you might want to be more explicit about which headers to forward (seemingly only cookies are relevant here?), to avoid causing issues for applications behind load balancers, reverse proxies, and the like.
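For illustration only, here’s a minimal sketch (not Payload’s actual fix) of what forwarding just the cookie header could look like, reusing the variables from the snippet above:

const res = await fetch(fileURL, {
  credentials: 'include',
  // forward only the cookie header; whether anything else (e.g. an
  // authorization header) is needed depends on Payload's auth strategy
  headers: req.headers.cookie ? { cookie: req.headers.cookie } : {},
  method: 'GET',
})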

To Reproduce

  1. Host Payload CMS behind an nginx reverse proxy
  2. Upload an image
  3. Try to edit the image

Payload Version

2.11.1

Adapters and Plugins

@payloadcms/plugin-cloud-storage/s3

About this issue

  • Original URL
  • State: open
  • Created 4 months ago
  • Reactions: 1
  • Comments: 15 (4 by maintainers)

Most upvoted comments

@castlemilk Some fixes were pushed. Please upgrade to 2.14.2 and let me know if you still see this issue.

@denolfe, I’ve tried your latest merged PR and now get this error:

TypeError: headers.forEach is not a function
    at uploadConfig (/Users/benebsworth/projects/shorted/cms/node_modules/payload/src/uploads/getExternalFile.ts:59:5)
    at getExternalFile
...

Is the error in getExternalFile.ts, where it should instead say something like:

function headersToObject(headers) {
  const headersObj = {}
  Object.entries(headers).forEach(([key, value]) => {
    // If the header value is an array, join its elements into a single string
    if (Array.isArray(value)) {
      headersObj[key] = value.join(', ')
    } else {
      headersObj[key] = value
    }
  })
  return headersObj
}
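For context, a hypothetical sketch of how that helper would be used at the fetch call site (the actual call site in getExternalFile.ts may differ):

const res = await fetch(fileURL, {
  credentials: 'include',
  // req.headers is a plain Node object whose values may be string arrays
  // for repeated headers; headersToObject flattens them into plain strings
  headers: headersToObject(req.headers),
  method: 'GET',
})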

With this patch locally applied I now have GCS uploads working with edits 🥳 🙏

@denolfe Sure, it looks something like this:

server {
	server_name _;
	listen 80;

	client_max_body_size 10M;

	gzip		on;
	gzip_static	on;
	gzip_vary	on;
	gzip_proxied	any;
	gzip_types	text/plain text/css application/json application/javascript application/x-javascript text/javascript text/xml application/xml application/rss+xml application/atom+xml application/rdf+xml;

	location / {
		proxy_pass		http://127.0.0.1:8080; # or whatever port payload is listening for internally
		proxy_set_header	Upgrade $http_upgrade;
		proxy_set_header	Host $http_host;
		proxy_http_version	1.1;
		proxy_set_header	X-Forwarded-For $proxy_add_x_forwarded_for;
	}
}

Note, however, that we’re also using certbot for SSL termination on the proxy; I’ve not included that here 😃

I know it’s none of my business, but personally I would start by looking into more selective forwarding of headers. Forwarding all headers “as is” seems to carry a lot of potential for issues. I’m assuming it’s done to preserve authorization, but why not pick the actually relevant headers and forward only those? Potentially wrapped in an “authorizedFetch” helper, if this is something that’s done in many places throughout the implementation.
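To make that concrete, here’s a hypothetical sketch of such a wrapper; the name, signature, and list of forwarded headers are assumptions, not existing Payload code:

// Forward only the headers that carry authentication, flattening any
// array values the Node request object may contain.
const FORWARDED_HEADERS = ['cookie', 'authorization']

async function authorizedFetch(url, incomingHeaders, init = {}) {
  const headers = {}
  for (const name of FORWARDED_HEADERS) {
    const value = incomingHeaders[name]
    if (typeof value === 'string') headers[name] = value
    else if (Array.isArray(value)) headers[name] = value.join(', ')
  }
  return fetch(url, { credentials: 'include', ...init, headers })
}

getExternalFile (and any other place that fetches on behalf of the incoming request) could then call authorizedFetch(fileURL, req.headers) instead of building the headers inline.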

Can confirm that this also affects uploads to S3 without the reverse proxy. It pops up as a different error:

FetchError: request to https://prod-public.s3.ap-southeast-2.amazonaws.com/media/image.jpg failed, reason: Hostname/IP does not match certificate's altnames: Host: localhost. is not in the cert's altnames: DNS:s3-ap-southeast-2.amazonaws.com, ...snip

Removing the headers block entirely from the fetch request lets us crop again and fixes the issue.

This might need to be configurable for cases where you’re relying on Payload to provide access control, but if you have disablePayloadAccessControl: true, requests go directly to the public S3 bucket from both Payload and your frontend.

It might be enough to rely on the disablePayloadAccessControl config setting in this case?
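For reference, a sketch of roughly where that setting lives in the cloud storage plugin config (the bucket name and region are placeholders, and the rest of the Payload config is omitted):

import { cloudStorage } from '@payloadcms/plugin-cloud-storage'
import { s3Adapter } from '@payloadcms/plugin-cloud-storage/s3'

// Plugin portion of payload.config.ts
export const plugins = [
  cloudStorage({
    collections: {
      media: {
        adapter: s3Adapter({
          config: { region: 'ap-southeast-2' },
          bucket: 'prod-public',
        }),
        // files are served directly from the bucket; Payload's access
        // control is not involved in reads
        disablePayloadAccessControl: true,
      },
    },
  }),
]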