undici: Custom timeout per request

At the moment we can configure a global timeout, but not a custom timeout per request, which would be quite handy. Is there a reason for this?

 const { Client } = require('undici')
 const client = new Client(`http://localhost:3000`)

 client.request({
   path: '/',
   method: 'GET',
+  timeout: 10000
 }, function (err, data) {
   // handle response
 })
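Until something like this lands in undici itself, a per-request timeout can be approximated in userland. The sketch below (a hypothetical `withTimeout` helper, not part of undici's API) races a callback-style request function against a timer and fails the request if no response arrives in time. Note it only abandons the callback; it does not tear down the underlying socket the way a built-in option could.

```javascript
// Hypothetical userland workaround (assumption, not undici's API):
// wrap any callback-style request function so it errors out if the
// callback has not fired within `ms` milliseconds.
function withTimeout (requestFn, ms) {
  return function (opts, callback) {
    let settled = false
    const timer = setTimeout(() => {
      if (settled) return
      settled = true
      callback(new Error(`request timed out after ${ms}ms`))
    }, ms)
    requestFn(opts, (err, data) => {
      if (settled) return // the timer already fired; drop the late response
      settled = true
      clearTimeout(timer)
      callback(err, data)
    })
  }
}
```

Usage would look like `const get = withTimeout(client.request.bind(client), 10000)` followed by `get({ path: '/', method: 'GET' }, cb)`.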

Related: #160

About this issue

  • State: closed
  • Created 4 years ago
  • Reactions: 1
  • Comments: 18 (18 by maintainers)

Most upvoted comments

I would simply do the following: before we have received the response headers, the timeout is relative to when the request was enqueued.

That way we can offer a strong guarantee to our users.

socket.setTimeout just uses setTimeout internally, so there is no point in optimizing it.

That’s not enough to determine how we implement this. Notice that undici is fundamentally different due to pipelining, so that example cannot be directly applied. How would you expect that timeout to work?

E.g. you might have requests that are being served ahead in the pipeline. Should the waiting time for responses ahead in the pipeline count towards the timeout or not?

client.request({ ...opts }, (err) => {
  // This is a large request downloading lots of data.
})

client.request({ ...opts, timeout: 100 }, (err) => {
  // Should this time out if there is no activity on this specific request,
  // since it's waiting for the previous large request?
  // Or if there is no activity on the socket?
  // Or do we start the timeout from the moment this request is at the head of
  // the pipeline and then measure socket activity?
})

I’m not quite sure how that would work. The current timeout is on the socket.

So you could never have a per-request timeout that is larger than the socket timeout. Allowing the socket timeout to be changed per request makes things complicated, especially with pipelining.

I guess we could add it per request as long as it’s less than the socket timeout. But it would incur a performance penalty when used.
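The "only if it's less than the socket timeout" constraint could be enforced with a simple validation step when the request is created. The `resolveTimeout` helper below is a hypothetical sketch of that rule, not code from undici:

```javascript
// Hypothetical validation (an assumption, not undici code): a per-request
// timeout falls back to the socket timeout when unset, and is rejected
// when it exceeds the socket-level timeout, since the socket would close
// the connection first anyway.
function resolveTimeout (requestTimeout, socketTimeout) {
  if (requestTimeout === undefined) {
    return socketTimeout
  }
  if (requestTimeout > socketTimeout) {
    throw new RangeError('per-request timeout must not exceed socket timeout')
  }
  return requestTimeout
}
```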

Also, does it apply to the request body (i.e. read) or just socket activity (i.e. write/read)? What about when the request is delayed by other requests ahead of it in the pipeline?

Is there a reason for this?

Because it’s complicated with pipelining 😄