cheroot: Unhandled OpenSSL.SSL.WantReadError and WantWriteError raised by PyOpenSSL

I’m submitting a …

  • 🐞 bug report
  • 🐣 feature request
  • ❓ question about the decisions made in the repository

🐞 Describe the bug. What is the current behavior? This issue is reproducible with huge POST requests, e.g. a form textarea containing many lines of text. We are upgrading from Python 2.7 to 3.7 and as part of that also upgraded to Cheroot 8.2.0. CherryPy: 18+

📋 Details

Traceback (most recent call last):
  File "/usr/local/pyenv/versions/3.7.0/lib/python3.7/site-packages/cherrypy/_cprequest.py", line 628, in respond
    self._do_respond(path_info)
  File "/usr/local/pyenv/versions/3.7.0/lib/python3.7/site-packages/cherrypy/_cprequest.py", line 680, in _do_respond
    self.body.process()
  File "/usr/local/pyenv/versions/3.7.0/lib/python3.7/site-packages/cherrypy/_cpreqbody.py", line 982, in process
    super(RequestBody, self).process()
  File "/usr/local/pyenv/versions/3.7.0/lib/python3.7/site-packages/cherrypy/_cpreqbody.py", line 559, in process
    proc(self)
  File "/usr/local/pyenv/versions/3.7.0/lib/python3.7/site-packages/cherrypy/_cpreqbody.py", line 145, in process_urlencoded
    qs = entity.fp.read()
  File "/usr/local/pyenv/versions/3.7.0/lib/python3.7/site-packages/cherrypy/_cpreqbody.py", line 817, in read
    data = self.fp.read(chunksize)
  File "/usr/local/pyenv/versions/3.7.0/lib/python3.7/site-packages/cheroot/server.py", line 383, in read
    data = self.rfile.read(size)
  File "/usr/local/pyenv/versions/3.7.0/lib/python3.7/site-packages/cheroot/makefile.py", line 420, in read
    val = super().read(*args, **kwargs)
  File "/usr/local/pyenv/versions/3.7.0/lib/python3.7/_pyio.py", line 1014, in read
    return self._read_unlocked(size)
  File "/usr/local/pyenv/versions/3.7.0/lib/python3.7/_pyio.py", line 1054, in _read_unlocked
    chunk = self.raw.read(wanted)
  File "/usr/local/pyenv/versions/3.7.0/lib/python3.7/socket.py", line 589, in readinto
    return self._sock.recv_into(b)
  File "/usr/local/pyenv/versions/3.7.0/lib/python3.7/site-packages/OpenSSL/SSL.py", line 1335, in recv_into
    self._raise_ssl_error(self._ssl, result)
  File "/usr/local/pyenv/versions/3.7.0/lib/python3.7/site-packages/OpenSSL/SSL.py", line 1149, in _raise_ssl_error
    raise WantReadError()
OpenSSL.SSL.WantReadError
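For context on the traceback above: `WantReadError` is how OpenSSL signals, on a non-blocking socket, that it cannot make progress until more ciphertext arrives, and the standard handling is to retry the same call once the socket is ready. Below is a minimal, illustrative sketch of that retry loop. It uses the stdlib `ssl` module's equivalent exceptions (`ssl.SSLWantReadError` / `ssl.SSLWantWriteError`) in place of pyOpenSSL's, and the function name is invented for illustration:

```python
import select
import ssl


def recv_retrying(sock, nbytes):
    """Retry recv() until the TLS layer can make progress.

    ``sock`` is a non-blocking SSL socket object. The stdlib raises
    ssl.SSLWantReadError / ssl.SSLWantWriteError in exactly the
    situations where pyOpenSSL raises WantReadError / WantWriteError.
    """
    while True:
        try:
            return sock.recv(nbytes)
        except ssl.SSLWantReadError:
            # The TLS layer needs more ciphertext before it can hand
            # back plaintext: wait until the socket is readable.
            select.select([sock], [], [])
        except ssl.SSLWantWriteError:
            # E.g. a renegotiation needs to write first.
            select.select([], [sock], [])
```

This is the behavior the comments below argue cheroot should implement around its reads and writes; it is not code from cheroot itself.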

📋 Environment

  • Cheroot version: 8.2.0
  • CherryPy version: 18.1.1
  • Python version: 3.6.7
  • OS: linux

📋 Additional context I have gone through https://github.com/cherrypy/cheroot/issues/113 but was still unable to figure out a solution

About this issue

  • State: open
  • Created 5 years ago
  • Reactions: 1
  • Comments: 35 (9 by maintainers)

Most upvoted comments

@jpeckham Thanks a lot for this!

@webknjaz and for any one else:

I was able to fix this by replacing the read call at

File "/cheroot/makefile.py", line 420, in read
    val = super().read(*args, **kwargs)

with

val = self._safe_call(True, super().read, *args, **kwargs)

_safe_call is defined in pyopenssl.py for exactly this reason.

@webknjaz This fix looks good?
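To show the shape of that fix, here is a hedged, self-contained sketch. The real `_safe_call` lives on cheroot's pyOpenSSL file-object wrapper and also handles timeouts and `ZeroReturnError`; the class and helper below are illustrative only, and again substitute the stdlib `ssl` exceptions for pyOpenSSL's:

```python
import io
import ssl  # stdlib analogue; pyOpenSSL raises OpenSSL.SSL.WantReadError etc.


def _safe_call_sketch(call, *args, **kwargs):
    """Minimal sketch of a retry helper: re-invoke the wrapped call
    whenever OpenSSL asks for a retry (want-read / want-write)."""
    while True:
        try:
            return call(*args, **kwargs)
        except (ssl.SSLWantReadError, ssl.SSLWantWriteError):
            # OpenSSL could not make progress yet; repeat the call.
            continue


class RetryingReader(io.BufferedReader):
    """Illustrative subclass: route read() through the retry helper
    instead of calling the buffered read directly."""

    def read(self, *args, **kwargs):
        return _safe_call_sketch(super().read, *args, **kwargs)
```

The point is only that the retry wrapping happens at the buffered-read layer, which is where the traceback in this issue blows up.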

I was getting WantWriteError on very large JS files, so we applied this monkey patch:


import io

import OpenSSL.SSL

# BufferedWriter here is cheroot's writer class (cheroot.makefile).
from cheroot.makefile import BufferedWriter


def _flush_unlocked(self):
    self._checkClosed('flush of closed file')
    while self._write_buf:
        try:
            # SSL sockets only accept 'bytes', not bytearrays,
            # so perhaps we should conditionally wrap this for perf?
            n = self.raw.write(self._write_buf)
        except io.BlockingIOError as e:
            n = e.characters_written
        except (OpenSSL.SSL.WantReadError,
                OpenSSL.SSL.WantWriteError,
                OpenSSL.SSL.WantX509LookupError):
            # Nothing was written; keep the buffer intact and retry.
            n = 0
        del self._write_buf[:n]


BufferedWriter._flush_unlocked = _flush_unlocked

That file is part of pyOpenSSL, so you should probably open a pull request there

Yes, that is a PyOpenSSL file, but the suggestion that this problem lies with PyOpenSSL and not cheroot is absolutely, 100% misguided. It is not PyOpenSSL’s job to expose a retry option; it is cheroot’s.

PyOpenSSL just implements OpenSSL, which in turn exposes this error as described at https://www.openssl.org/docs/man1.1.1/man3/SSL_get_error.html, which in turn states: “There is no fixed upper limit for the number of iterations that may be necessary until progress becomes visible at application protocol level”

The correct solution is for cheroot to support a configuration option for retry (possibly similar to how curl provides retry options) that, as appropriate, retries the self.raw.write() call in _flush_unlocked() of makefile.py.

Don’t think we are talking about exposing a new retry option in PyOpenSSL. The retry logic remains in Cheroot. The only change I proposed was to allow PyOpenSSL to call OpenSSL with a buffer that might have moved to a new location. Currently OpenSSL doesn’t allow this although it used to. Getting Cheroot to retry sending the buffer is pointless once it’s moved to a new location given the way PyOpenSSL currently talks to OpenSSL. Hence the need for small fixes in both PyOpenSSL and OpenSSL and not in Cheroot.

My understanding is that when SSL_ERROR_WANT_READ or SSL_ERROR_WANT_WRITE is encountered, the previous call MUST be repeated with the exact same arguments, hence the need to set n back to 0 to preserve the buffer for the write. OpenSSL further enforces, by default, that the address of the buffer passed to write is exactly the same. Buffers in Python can change location in some circumstances, so the built-in ssl library disables this check by setting SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER on connections.

pyOpenSSL does not appear to do this, nor does it expose a constant for this value; however, it should be possible to enable it via Context.set_mode(2). This may help get rid of the remaining bad write retry errors.
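To make the magic number in `Context.set_mode(2)` less opaque: the value comes from the OpenSSL headers, where `SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER` is `0x2`. A small sketch, with a hypothetical helper name (pyOpenSSL itself does not define this constant):

```python
# SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER is not exposed as a named
# constant by pyOpenSSL; its value in the OpenSSL headers is 0x2.
SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER = 0x00000002


def allow_moving_write_buffer(context):
    """Hypothetical helper: OR the flag into a pyOpenSSL Context's
    mode bits via Context.set_mode(), which wraps SSL_CTX_set_mode()."""
    return context.set_mode(SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER)
```

With a real `OpenSSL.SSL.Context` this would be called once, right after constructing the context and before creating connections from it.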

for more details see:

I’m wondering about that, too. Perhaps a Tidelift subscription would help. (I actually checked that out but there’s no pricing on the website.)

Yeah, I checked after you mentioned it, but Tidelift said their minimum subscription was for a team of 25 developers, and even then they wouldn’t tell me how much that costs, though I am guessing $$$$. So it's not really an option as far as I am concerned.

Tidelift’s customers are corporate, not individuals. Their model spins around redistributing some funds acquired from the subscribers across the participating projects’ maintainers. AFAIK, they ask the clients to run some software that collects what they use. And the distribution of funds to the maintainers highly depends on the use of said OSS project. My impression is that most projects with low-to-middle popularity get under $50 or, more often, zero (there’s some threshold for starting getting some cash for a project). Those funds are per-project and if a project has more maintainers, they are split (usually equally). Say, a project has 2 maintainers, and just crossed the threshold for being lifted, that means they’d be getting like $25 a month each. Cheroot is in this range, being niche in a way. More popular projects get more, but not huge amounts.

This is to say that Tidelift is a way for businesses to give back to OSS, but (1) it doesn’t replace people’s salaries, nor (2) is it a good fit for individuals. If you want to support maintainers as a non-business, direct donations may be more impactful. Though it’s typical for people to treat them as little $1 tokens of appreciation rather than full-fledged support.

Another thing that might be unobvious with Tidelift — they let the maintainers lead their projects without influencing them too much. For example, they don’t file feature requests or stuff like that. Though, they ask us to verify some packaging metadata, licenses, the security of accesses to package indexes etc., as well as communicate some vulnerability details (we can optionally delegate accepting security reports to them).

Note that both maintainers of the CherryPy projects have many other projects going on. Some get more attention than others, naturally.

I also burn out occasionally, up to various degrees. When this happens, I have much less (or zero) energy for FOSS. Last year, it was rather intense (still is, really) — you can see the activity drops in the contribution graph in my profile and how they correspond with pings that accumulate in the old PRs. I’m trying to get to every PR, but sometimes they get lost or stuck for different reasons.

And even then, what is the probability they have fixed this particular issue?

Now, back to this particular issue. I don’t remember exactly what blocked @vashek’s #332, but I think that originally, it was introducing unrelated changes. They were reverted half a year later, but the notification got lost (maybe, because the PR was marked as a draft). I’ll take another look and maybe merge as is — I’ve just clicked “rebase” there to see if the CI explodes. Looks like there’s a linting violation to fix at least.

I recall that unrelated change sending me off trying to make some fixes upstream, but I didn’t succeed in implementing tests for them: https://github.com/pyca/pyopenssl/pull/954 / https://github.com/pyca/pyopenssl/pull/955. It’d be nice if somebody could help out with those.

I’m also aware of a few seemingly similar issues that would be great to fix. But testing TLS stuff automatically doesn’t always work or behaves weird in CI. I’ve been mostly working on ways of making the CI and local tests as stable as possible.

I think that we need more contributions improving testing of TLS. It’s rather hard to make a full matrix since some components are coming from the GHA VMs and not PyPI. With that in mind, it’s hard to judge if PRs like #332 would contribute to the flakiness or stability. Hence, this causes indecisiveness on my part, sometimes.