b2_fuse: Segmentation Fault - Out of memory
I just tried this for the first time and ran an ls on the mount point. I had python running in one terminal and the ls in the other, and I could see the program retrieving the file listing from B2 in the python terminal. After a few minutes I got a segmentation fault.
Here are the bucket details:
Current files: 435,185
Current size: 635.111 GB
This is the only thing I see in the syslog that might be helpful (let me know if you want something else):
Mar 2 20:42:16 files kernel: [20445.742796] show_signal_msg: 30 callbacks suppressed
Mar 2 20:42:16 files kernel: [20445.742801] python[13715]: segfault at 24 ip 0000000000558077 sp 00007fffd3521310 error 6 in python2.7[400000+2bc000]
Mar 2 20:42:16 files kernel: [20445.763319] Core dump to |/usr/share/apport/apport 13715 11 0 13715 pipe failed
About this issue
- State: closed
- Created 8 years ago
- Comments: 22 (12 by maintainers)
Funny, I just got this email from Backblaze; it looks like some new features arrived just in time to fix these problems. https://www.backblaze.com/blog/b2-big-data-big-files/
Yes, technically storing the file to be uploaded on disk would be possible.
No, FTP does not load the whole file into memory; FTP is usually disk backed, which means the file can be read from disk as it is being uploaded. That was not an option for B2 previously, because the file had to be provided in a single request (typically from memory). The new large file API supports multi-part uploads, which could be used to alleviate this issue.
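As a rough illustration of the idea (not how b2_fuse itself does it), here is a minimal sketch of a disk-backed upload against the B2 large file REST endpoints (b2_start_large_file, b2_get_upload_part_url, b2_upload_part, b2_finish_large_file). It assumes api_url and auth_token were already obtained from b2_authorize_account, and the /b2api/v2/ paths follow the current B2 documentation; only one part is ever held in memory at a time.

```python
# Sketch: stream a large local file to B2 in parts so the whole file never
# has to sit in memory. api_url/auth_token come from b2_authorize_account.
import hashlib

import requests

PART_SIZE = 100 * 1024 * 1024  # parts must be at least 5 MB, except the last


def upload_large_file(api_url, auth_token, bucket_id, local_path, b2_name):
    headers = {"Authorization": auth_token}

    # 1. Start the large file and get a fileId to attach parts to.
    start = requests.post(
        api_url + "/b2api/v2/b2_start_large_file",
        headers=headers,
        json={"bucketId": bucket_id, "fileName": b2_name,
              "contentType": "b2/x-auto"},
    ).json()
    file_id = start["fileId"]

    # 2. Get an upload URL and token for the parts.
    part_url = requests.post(
        api_url + "/b2api/v2/b2_get_upload_part_url",
        headers=headers,
        json={"fileId": file_id},
    ).json()

    sha1s = []
    part_number = 1
    with open(local_path, "rb") as f:
        while True:
            chunk = f.read(PART_SIZE)  # only this chunk is in memory
            if not chunk:
                break
            sha1 = hashlib.sha1(chunk).hexdigest()
            requests.post(
                part_url["uploadUrl"],
                headers={
                    "Authorization": part_url["authorizationToken"],
                    "X-Bz-Part-Number": str(part_number),
                    "Content-Length": str(len(chunk)),
                    "X-Bz-Content-Sha1": sha1,
                },
                data=chunk,
            ).raise_for_status()
            sha1s.append(sha1)
            part_number += 1

    # 3. Stitch the uploaded parts together on the B2 side.
    requests.post(
        api_url + "/b2api/v2/b2_finish_large_file",
        headers=headers,
        json={"fileId": file_id, "partSha1Array": sha1s},
    ).json()
```

Because each part is read from disk right before it is sent, peak memory stays near PART_SIZE regardless of how large the file is.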
So I ran a test: I expanded the droplet to 2 GB RAM x 2 CPUs and tried a 1 GB file, and it still failed. This technology looks promising once big-file support is working properly.
I have a small number of files, but they're all about 1 GB each, roughly 20 per batch.
I wish we could use a temporary directory on disk as an alternative to RAM. Is this possible? I'd sacrifice a bit of speed by buffering on an SSD instead of in RAM.
Otherwise, if RAM is the only way to go, it would help if it were smart enough to measure free RAM and only use 50% of it, flushing and refilling once that buffer is full. FTP doesn't have to load a file into memory, does it?
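A hedged sketch of how both suggestions could be combined: Python's tempfile.SpooledTemporaryFile keeps data in memory up to a threshold and then transparently spills to a file on disk (which could live on an SSD-backed directory), and psutil is used here only to size that threshold at roughly half of the currently free RAM. The function name and spool_dir parameter are hypothetical, not part of b2_fuse.

```python
# Sketch: buffer writes in RAM up to ~50% of currently free memory, then
# spill to an on-disk temporary file (e.g. on an SSD mount) instead of
# letting the Python process grow until it is killed.
import tempfile

import psutil  # third-party: pip install psutil


def make_write_buffer(spool_dir=None):
    # spool_dir can point at an SSD-backed directory; None uses the
    # system default temp location.
    max_in_memory = psutil.virtual_memory().available // 2
    return tempfile.SpooledTemporaryFile(max_size=max_in_memory, dir=spool_dir)


# Usage: write the file contents into the buffer, rewind, then hand it to
# whatever performs the upload; data is read back from disk if it spilled.
buf = make_write_buffer()
buf.write(b"...file data...")
buf.seek(0)
```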