planck: sh hangs with long output

I noticed that, at the current HEAD (a133e08), executing a command with sh that produces long enough output (more than 1k) hangs.

E.g.: (sh "for i in 0 1 2 3 4 5 6 7 8 9; do for j in 0 1 2 3 4 5 6 7 8 9; do for k in 0 1 2 3 4 5 6 7 8 9 0 1 2 3; do echo -n 1; done; done; done")

This is due to read_child_pipe() in planck-c/shell.c: in this case the first read attempts to get 1023 chars, and if it succeeds, the following ones request 0 bytes.

Changing the function to the following seems to work better:

#include <stdlib.h>
#include <unistd.h>

static char* read_child_pipe(int pipe) {
  const int BLOCK_SIZE = 1024;
  int block_count = 1;
  char* res = malloc(BLOCK_SIZE * block_count);
  ssize_t count;
  /* Reserve one byte of the first block for the NUL terminator. */
  size_t total = 0, num_to_read = BLOCK_SIZE - 1;
  while ((count = read(pipe, res + total, num_to_read)) > 0) {
    total += (size_t)count;
    if ((size_t)count < num_to_read) {
      /* Partial read: keep filling the current block. */
      num_to_read -= (size_t)count;
    } else {
      /* Block full: grow by one block; the byte reserved up front
         keeps room for the terminator. */
      block_count += 1;
      res = realloc(res, BLOCK_SIZE * block_count);
      num_to_read = BLOCK_SIZE;
    }
  }
  res[total] = 0;
  return res;
}

Cheers, Lorenzo

About this issue

  • State: closed
  • Created 8 years ago
  • Comments: 18 (17 by maintainers)

Most upvoted comments

Yes, I was planning to do the select fix in a subsequent PR.

Yea, I don’t think we need streaming here. Both streams are completely consumed before the callback is fired with the results. The only risk here is OOM and that’s gonna happen whether we do them in sequence or in parallel.

Ahh yea I figured there were bugs lurking in this function. I intend to write some property tests to shore it up. Thx for the find!

Yes, sure, I only tried to fix this case to be able to get some larger output, without even looking at the big picture. In general you should really consume both stdout and stderr in parallel, to prevent the sh command from filling either pipe's buffer and stalling.

By the way, thanks for planck and especially for the linux port, I bet I’m going to have fun with it!