duckdb: ORDER BY errors with memory problems -- process killed

What happens?

The table is fairly large: 330M rows, 38 columns, about 20 GB. I just cannot seem to sort it; see below.

This is on a partitioned virtual machine with 64 cores and almost 1 TB of RAM.

To Reproduce

I have moved the lengthy error to a gist per request: https://gist.github.com/tbeason/ad9be50426c81dba82a216c4d6dc4d1f

Additional failing attempt:

[beasont@tc-hm003 algos]$ duckdb testing.db
v0.8.0 e8e4cea
Enter ".help" for usage hints.
D .timer on
D .maxrows 100
D SET threads TO 8;
Run Time (s): real 0.002 user 0.001986 sys 0.004562
D SET max_memory TO '860GB';
Run Time (s): real 0.000 user 0.000001 sys 0.000059
D from merged order by date, parent_id;
 50% ▕██████████████████████████████                              ▏ Segmentation fault
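
As a possible mitigation (not part of the original report), DuckDB can spill sort runs to disk when a temporary directory is configured, which may avoid the out-of-memory kill. A sketch of session settings one might try before re-running the query, using DuckDB's documented `max_memory` and `temp_directory` settings; the scratch path is hypothetical:

```sql
-- Sketch only: spill-to-disk configuration for a large external sort.
SET threads TO 8;
SET max_memory TO '500GB';                -- leave headroom below physical RAM
SET temp_directory = '/scratch/duckdb_tmp'; -- hypothetical path; lets the sort spill to disk

-- The failing query from the report:
FROM merged ORDER BY date, parent_id;
```

Lowering `max_memory` well below physical RAM plus enabling `temp_directory` trades speed for stability, since sort runs that exceed the limit are written to disk rather than held in memory.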

OS:

CentOS 7

DuckDB Version:

0.8.0

DuckDB Client:

CLI

Full Name:

Tyler Beason

Affiliation:

Virginia Tech

Have you tried this on the latest master branch?

  • I agree

Have you tried the steps to reproduce? Do they include all relevant data and configuration? Does the issue you report still appear there?

  • I agree

About this issue

  • Original URL
  • State: closed
  • Created a year ago
  • Comments: 23 (8 by maintainers)

Most upvoted comments

Would you mind editing your issue to move the trace to an attachment? It is hard to read now.