seaweedfs: Can't upload to S3 bucket.

Hello!

Describe the bug
I run SeaweedFS with S3 (3 masters, 3 volume servers, and 3 filers on the same machines). It worked fine at first, but some time later a problem appeared. I created an S3 bucket with s3cmd:

s3cmd mb s3://test

It was created successfully. Then I tried to upload a file:

s3cmd put file s3://test

and got this error:

WARNING: Upload failed: /file (500 (InternalError): We encountered an internal error, please try again.)

On an older bucket, however, I can still upload files.

System Setup

  • List the command line to start “weed master”, “weed volume”, “weed filer”, “weed s3”, “weed mount”:
    /opt/seaweedfs/weed master -mdir=/data/seaweedfs/master -peers=10.214.3.19:9333,10.214.3.16:9333,10.214.3.17:9333 -volumeSizeLimitMB 1024
    /opt/seaweedfs/weed volume -mserver=10.214.3.19:9333,10.214.3.16:9333,10.214.3.17:9333 -dir=/data/seaweedfs/volume -dataCenter dc1 -rack rack1 -ip=10.214.3.16 -max=0
    /opt/seaweedfs/weed filer -master 10.214.3.19:9333,10.214.3.16:9333,10.214.3.17:9333 -s3 -s3.config /etc/seaweedfs/s3.config.json -s3.domainName example.com -s3.port 80
  • OS version CentOS Linux release 7.8.2003 (Core)
  • output of weed version: version 30GB 2.11 98827d6 linux amd64
  • if using filer, show the content of filer.toml. Here is the relevant part of filer.toml:
[postgres] # or cockroachdb
# CREATE TABLE IF NOT EXISTS filemeta (
#   dirhash     BIGINT,
#   name        VARCHAR(65535),
#   directory   VARCHAR(65535),
#   meta        bytea,
#   PRIMARY KEY (dirhash, name)
# );
enabled = "True"
hostname = "10.214.3.19"
port = 5432
username = "seaweedfs_user"
password = "SECRET_PASSWORD"
database = "seaweedfs_db"              # create or use an existing database
sslmode = "disable"
connection_max_idle = 100
connection_max_open = 100

Expected behavior
I can upload files to S3.

Additional context
I have the same problem with other S3 tools (aws cli, s3 sync).

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 16 (12 by maintainers)

Most upvoted comments

Each volume is configured to be 1GB (-volumeSizeLimitMB 1024), and each bucket creates 7 volumes by default.

The folder /data/seaweedfs/volume seems to be near the disk limit: the number of volumes multiplied by 1GB is close to the available disk space.

You can reduce the limit, e.g. to -volumeSizeLimitMB 512.
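The arithmetic above can be checked directly on a volume server. A minimal sketch, using the numbers quoted in this issue (-volumeSizeLimitMB 1024, 7 volumes per new bucket); the directory path is the one from this setup and should be adjusted to yours:

```shell
# Estimate how much space a single new bucket may pre-allocate.
VOLUME_SIZE_MB=1024        # matches -volumeSizeLimitMB 1024
VOLUMES_PER_BUCKET=7       # default volumes created per bucket
PER_BUCKET_MB=$((VOLUME_SIZE_MB * VOLUMES_PER_BUCKET))
echo "a new bucket may claim up to ${PER_BUCKET_MB} MB"   # → 7168 MB

# Compare against free space where the volume server keeps its data
# (on the servers from this issue that is /data/seaweedfs/volume):
df -m .
```

If the free space reported by df is below this estimate, creating a bucket succeeds (it is only a metadata operation) while the first upload fails, which matches the symptom reported above.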

If you are using the git master branch (to be released as 2.12), there is a more flexible configuration:

If you have many buckets to add, you can configure the per-bucket storage this way in weed shell:

> fs.configure -locationPrefix=/buckets/ -volumeGrowthCount=1 -apply

This will add 1 physical volume when the existing volumes are full. If you use replication, you will need to add more volumes per growth step.
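For example, assuming for illustration replication 001 (one extra copy on another server in the same rack) and that this build's fs.configure accepts a -replication flag, the session above might become:

```
> fs.configure -locationPrefix=/buckets/ -replication=001 -volumeGrowthCount=2 -apply
```

Here -volumeGrowthCount=2 grows two physical volumes at a time so each logical volume still gets its replica; check the exact flags against the wiki page for your version.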

See https://github.com/chrislusf/seaweedfs/wiki/Path-Specific-Configuration