cli: 'supabase start' frequently fails with 'service not healthy'

Bug report

Describe the bug

Running supabase start on GitHub-hosted runners frequently fails with ‘service not healthy’.

To Reproduce

Steps to reproduce the behavior, please provide code snippets or a repository:

  1. Create a GitHub Actions workflow that uses supabase/setup-cli@v1 with version: latest.
  2. In the workflow, start the local Supabase stack with supabase start (a minimal workflow sketch follows this list).
  3. Optionally, for good measure, do this with two other Supabase configurations (using different ports).
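
For reference, a minimal workflow along these lines looks roughly as follows. This is only a sketch: the workflow name, trigger, and job name are illustrative and not taken from the failing project.

name: supabase-ci
on: push
jobs:
  start:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: supabase/setup-cli@v1
        with:
          version: latest
      # supabase start brings up the local stack; supabase status prints the resolved ports and keys
      - run: |
          supabase start
          supabase status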

Expected behavior

supabase start completes cleanly on the runner, with all services reported healthy.

GitHub Actions log

Run supabase start
  supabase start
  supabase status
  shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
Pulling images... (1/13)
Pulling images... (1/13)
Pulling images... (2/13)
Pulling images... (3/13)
Pulling images... (4/13)
Pulling images... (5/13)
Pulling images... (6/13)
Pulling images... (7/13)
Pulling images... (8/13)
Pulling images... (9/13)
Pulling images... (10/13)
Pulling images... (11/13)
Pulling images... (12/13)
Starting database...
Restoring branches...
Setting up initial schema...
Applying migration 20230105212858_initial.sql...
Seeding data supabase/seed.sql...
Starting containers...
Error: service not healthy: [supabase_storage_supabase_test supabase_pg_meta_supabase_test supabase_studio_supabase_test]
Try rerunning the command with --debug to troubleshoot the error.
Error: Process completed with exit code 1.

Unfortunately, I wasn’t able to get a better log with --debug. With the debug flag, the action didn’t exhibit the problem.

System information

GitHub-hosted runner, ubuntu-latest (22.04). See https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners#supported-runners-and-hardware-resources

Additional context

Older CLI versions don’t show the error.

This was most likely introduced by the “fix” for https://github.com/supabase/cli/issues/146 in https://github.com/supabase/cli/pull/770.

About this issue

  • Original URL
  • State: closed
  • Created a year ago
  • Comments: 33 (9 by maintainers)

Most upvoted comments

I’m experiencing this issue with the supabase_studio_ container. I’ve tried all the solutions other people have found across the related issues (destroying and re-downloading images/containers, increasing memory resources, and ignoring health checks), but none of it is working: the container is stuck in a restart loop.

There isn’t anything in the container logs to help:

2023-07-21 11:50:42 info  - Loaded env from /app/studio/.env
2023-07-21 11:50:42 Listening on port 3000
2023-07-21 11:51:43 info  - Loaded env from /app/studio/.env
2023-07-21 11:51:43 Listening on port 3000
2023-07-21 11:52:44 info  - Loaded env from /app/studio/.env
2023-07-21 11:52:44 Listening on port 3000
2023-07-21 12:02:41 info  - Loaded env from /app/studio/.env
2023-07-21 12:02:41 Listening on port 3000
2023-07-21 12:03:42 info  - Loaded env from /app/studio/.env
2023-07-21 12:03:42 Listening on port 3000
2023-07-21 12:04:43 info  - Loaded env from /app/studio/.env
2023-07-21 12:04:43 Listening on port 3000
2023-07-21 12:04:44 No storage option exists to persist the session, which may result in unexpected behavior when using auth.
2023-07-21 12:04:44         If you want to set persistSession to true, please provide a storage option or you may set persistSession to false to disable this warning.

(The warning is there on my other setup which is working)

This is what I get when I run supabase start (the container prints the same log as above beforehand as well):

service not healthy: [supabase_studio_supabase-test]
Try rerunning the command with --debug to troubleshoot the error.

Docker Engine: v24.0.2
Supabase CLI: v1.77.9
OS: macOS 13.4
Node: v18.16.1

I’ve tried on a brand new supabase project and an existing project with the same results.

The standard GitHub-hosted Linux runner that we tested on has 7 GB of RAM, which is more than your local machine or Linode. Are you able to run on another instance with more RAM?

I suspect Docker will start paging containers to disk if it runs out of physical memory, which slows down the start process significantly.
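
One way to sanity-check the memory theory on a runner is to dump usage around the start step, for example with a purely diagnostic step added to the workflow sketched earlier (not part of the original setup):

      # Print host memory and per-container usage once, without streaming
      - run: free -h && docker stats --no-stream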

Just ran it on an 8 GB RAM Linode and it started fine with no errors, so I guess it was my machine’s limited resources. Running the start command with --ignore-health-check fixes it for me; I will keep using that. Thanks.
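
In a CI workflow like the one sketched near the top of this issue, the same workaround would just change the start step (a sketch, assuming the flag is available in the CLI version installed on the runner):

      - run: supabase start --ignore-health-check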

@xHergz, same issue as well. Various ‘fixes’ tried, and no luck with any of them.

@xHergz @Nnanyielugo Did you ever figure this out? Having the same issue.

All containers are healthy now, but here are their logs:

docker logs supabase_studio_*

> studio@0.0.9 start
> next start

ready - started server on 0.0.0.0:3000, url: http://localhost:3000
info  - Loaded env from /app/studio/.env

docker logs supabase_pg_meta_*

> @supabase/postgres-meta@0.0.0-automated start
> node dist/server/app.js

(node:244) ExperimentalWarning: Importing JSON modules is an experimental feature. This feature could change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
{"level":"info","time":"2023-01-20T08:16:06.585Z","pid":244,"hostname":"cb4d234efa4f","msg":"Server listening at http://0.0.0.0:8080"}
{"level":"info","time":"2023-01-20T08:16:06.585Z","pid":244,"hostname":"cb4d234efa4f","msg":"App started on port 8080"}
{"level":"info","time":"2023-01-20T08:16:06.899Z","pid":244,"hostname":"cb4d234efa4f","msg":"Server listening at http://0.0.0.0:8081"}
{"level":"info","time":"2023-01-20T08:16:06.900Z","pid":244,"hostname":"cb4d234efa4f","msg":"Admin App started on port 8081"}

docker logs storage_imgproxy_*

WARNING [2023-01-20T08:13:18Z] No keys defined, so signature checking is disabled 
WARNING [2023-01-20T08:13:18Z] No salts defined, so signature checking is disabled 
WARNING [2023-01-20T08:13:18Z] Exposing root via IMGPROXY_LOCAL_FILESYSTEM_ROOT is unsafe 
INFO    [2023-01-20T08:13:18Z] Starting server at :5001 
INFO    [2023-01-20T08:13:20Z] Started /health  request_id=2Kt7lVxv6PBlOYpj1y9Wt method=GET client_ip=127.0.0.1
INFO    [2023-01-20T08:13:20Z] Completed in 57.425µs /health  request_id=2Kt7lVxv6PBlOYpj1y9Wt method=GET status=200 client_ip=127.0.0.1
INFO    [2023-01-20T08:13:23Z] Started /health  request_id=hQD_zluzqcByMEC9u0jCw method=GET client_ip=127.0.0.1
INFO    [2023-01-20T08:13:23Z] Completed in 41.783113ms /health  request_id=hQD_zluzqcByMEC9u0jCw method=GET status=200 client_ip=127.0.0.1

...

docker logs supabase_storage_*

2023-01-20T08:13:21: PM2 log: Launching in no daemon mode
2023-01-20T08:13:25: PM2 log: App [server:0] starting in -fork mode-
2023-01-20T08:13:25: PM2 log: App [server:0] online
running migrations
finished migrations
{"level":"info","time":"2023-01-20T08:16:10.784Z","pid":60,"hostname":"dbf349e0ad6c","msg":"Server listening at http://0.0.0.0:5000"}
Server listening at http://0.0.0.0:5000
{"level":"info","time":"2023-01-20T08:17:29.153Z","pid":60,"hostname":"dbf349e0ad6c","reqId":"req-t","tenantId":"stub","project":"stub","results":[],"msg":"results"}
{"level":"info","time":"2023-01-20T08:17:29.160Z","pid":60,"hostname":"dbf349e0ad6c","reqId":"req-t","tenantId":"stub","project":"stub","req":{"method":"GET","url":"/bucket","headers":{"host":"supabase_storage_wavedj-ug:5000","x_forwarded_proto":"http","x_real_ip":"172.19.0.1","x_client_info":"supabase-js/2.1.1","user_agent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36","accept":"*/*","referer":"http://localhost:54323/"},"hostname":"supabase_storage_wavedj-ug:5000","remoteAddress":"172.19.0.3","remotePort":54976},"res":{"statusCode":200},"responseTime":640.5188610004261,"msg":"GET | 200 | 172.19.0.3 | req-t | /bucket | Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36"}
{"level":"info","time":"2023-01-20T08:27:25.980Z","pid":60,"hostname":"dbf349e0ad6c","reqId":"req-77","tenantId":"stub","project":"stub","results":[],"msg":"results"}

...

docker logs supabase_rest_*

20/Jan/2023:08:12:55 +0000: Attempting to connect to the database...
20/Jan/2023:08:12:57 +0000: Connection successful
20/Jan/2023:08:12:57 +0000: Listening on port 3000
20/Jan/2023:08:12:57 +0000: Listening for notifications on the pgrst channel
20/Jan/2023:08:12:57 +0000: Config reloaded
20/Jan/2023:08:12:58 +0000: Schema cache loaded
20/Jan/2023:08:13:03 +0000: Schema cache loaded

docker logs realtime-dev.supabase_realtime_*

08:13:02.075 [info] == Running 20230110180046 Realtime.Repo.Migrations.AddLimitsFieldsToTenants.change/0 forward
08:13:02.329 [info] alter table tenants
08:13:02.446 [info] == Migrated 20230110180046 in 0.0s
08:13:23.473 [debug] QUERY OK db=167.4ms queue=3148.3ms idle=0.0ms
begin []
08:13:24.470 [debug] QUERY OK source="tenants" db=108.4ms
SELECT t0."id", t0."name", t0."external_id", t0."jwt_secret", t0."postgres_cdc_default", t0."max_concurrent_users", t0."max_events_per_second", t0."max_bytes_per_second", t0."max_channels_per_client", t0."max_joins_per_second", t0."inserted_at", t0."updated_at" FROM "tenants" AS t0 WHERE (t0."external_id" = $1) ["realtime-dev"]
08:13:25.902 [debug] QUERY OK source="extensions" db=0.8ms
DELETE FROM "extensions" AS e0 WHERE (e0."tenant_external_id" = $1) ["realtime-dev"]
08:13:26.158 [debug] QUERY OK db=0.6ms
DELETE FROM "tenants" WHERE "id" = $1 [<<27, 74, 46, 155, 230, 156, 64, 220, 186, 91, 51, 124, 14, 237, 44, 136>>]
08:13:27.439 [debug] QUERY OK db=0.4ms
INSERT INTO "tenants" ("external_id","jwt_secret","max_bytes_per_second","max_channels_per_client","max_concurrent_users","max_events_per_second","max_joins_per_second","name","inserted_at","updated_at","id") VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11) ["realtime-dev", "iNjicxc4+llvc9wovDvqymwfnj9teWMlyOIbJ8Fh6j2WNU8CIJ2ZgjR6MUIKqSmeDmvpsKLsZ9jgXJmQPpwL8w==", 100000, 100, 200, 100, 500, "realtime-dev", ~N[2023-01-20 08:13:27], ~N[2023-01-20 08:13:27], <<172, 173, 127, 88, 125, 24, 64, 180, 151, 105, 18, 255, 75, 12, 16, 115>>]
08:13:27.892 [debug] QUERY OK db=419.1ms
INSERT INTO "extensions" ("settings","tenant_external_id","type","inserted_at","updated_at","id") VALUES ($1,$2,$3,$4,$5,$6) [%{"db_host" => "CGSMAJs4R39ttxGDbuRZQ1Lyh29dY6HKGWWO8Qn/mJg=", "db_name" => "sWBpZNdjggEPTQVlI52Zfw==", "db_password" => "sWBpZNdjggEPTQVlI52Zfw==", "db_port" => "+enMDFi1J/3IrrquHHwUmA==", "db_user" => "sWBpZNdjggEPTQVlI52Zfw==", "ip_version" => 4, "poll_interval_ms" => 100, "poll_max_changes" => 100, "poll_max_record_bytes" => 1048576, "publication" => "supabase_realtime", "region" => "us-east-1", "slot_name" => "supabase_realtime_replication_slot"}, "realtime-dev", "postgres_cdc_rls", ~N[2023-01-20 08:13:27], ~N[2023-01-20 08:13:27], <<47, 170, 70, 2, 21, 22, 64, 93, 178, 28, 221, 203, 241, 17, 177, 48>>]
08:13:27.990 [debug] QUERY OK db=98.2ms
commit []
08:13:56.852 [notice]     :alarm_handler: {:set, {:system_memory_high_watermark, []}}
08:13:57.599 [info] Elixir.Realtime.SignalHandler is being initialized...
08:13:57.600 [notice] SYN[realtime@127.0.0.1] Adding node to scope <users>
08:13:57.600 [notice] SYN[realtime@127.0.0.1] Creating tables for scope <users>
08:13:57.600 [notice] SYN[realtime@127.0.0.1|registry<users>] Discovering the cluster
08:13:57.601 [notice] SYN[realtime@127.0.0.1|pg<users>] Discovering the cluster
08:13:57.601 [notice] SYN[realtime@127.0.0.1] Adding node to scope <Elixir.RegionNodes>
08:13:57.601 [notice] SYN[realtime@127.0.0.1] Creating tables for scope <Elixir.RegionNodes>
08:13:57.601 [notice] SYN[realtime@127.0.0.1|registry<Elixir.RegionNodes>] Discovering the cluster
08:13:57.601 [notice] SYN[realtime@127.0.0.1|pg<Elixir.RegionNodes>] Discovering the cluster
08:13:57.621 [info] Running RealtimeWeb.Endpoint with cowboy 2.9.0 at :::4000 (http)
08:13:57.621 [info] Access RealtimeWeb.Endpoint at http://realtime.fly.dev
08:13:57.622 [notice] SYN[realtime@127.0.0.1] Adding node to scope <Elixir.PostgresCdcStream>
08:13:57.622 [notice] SYN[realtime@127.0.0.1] Creating tables for scope <Elixir.PostgresCdcStream>
08:13:57.623 [notice] SYN[realtime@127.0.0.1|registry<Elixir.PostgresCdcStream>] Discovering the cluster
08:13:57.623 [notice] SYN[realtime@127.0.0.1|pg<Elixir.PostgresCdcStream>] Discovering the cluster
08:13:57.625 [notice] SYN[realtime@127.0.0.1] Adding node to scope <Elixir.Extensions.PostgresCdcRls>
08:13:57.625 [notice] SYN[realtime@127.0.0.1] Creating tables for scope <Elixir.Extensions.PostgresCdcRls>
08:13:57.625 [notice] SYN[realtime@127.0.0.1|registry<Elixir.Extensions.PostgresCdcRls>] Discovering the cluster
08:13:57.625 [notice] SYN[realtime@127.0.0.1|pg<Elixir.Extensions.PostgresCdcRls>] Discovering the cluster
08:14:00.725 [debug] Tzdata polling for update.
08:14:03.054 [info] tzdata release in place is from a file last modified Fri, 22 Oct 2021 02:20:47 GMT. Release file on server was last modified Tue, 29 Nov 2022 17:25:53 GMT.
08:14:03.055 [debug] Tzdata downloading new data from https://data.iana.org/time-zones/tzdata-latest.tar.gz
08:14:04.541 [debug] Tzdata data downloaded. Release version 2022g.
08:14:05.548 [info] Tzdata has updated the release from 2021e to 2022g
08:14:05.548 [debug] Tzdata deleting ETS table for version 2021e
08:14:05.553 [debug] Tzdata deleting ETS table file for version 2021e

docker logs supabase_inbucket_*

Installing default greeting.html to /config
{"level":"info","phase":"startup","version":"v3.0.3","buildDate":"2022-08-08T02:52:31+00:00","time":"2023-01-20T08:12:45Z","message":"Inbucket starting"}
{"level":"info","phase":"startup","module":"storage","time":"2023-01-20T08:12:45Z","message":"Retention configured for 72h0m0s"}
{"level":"info","module":"web","phase":"startup","path":"ui","time":"2023-01-20T08:12:45Z","message":"Web UI content mapped"}
{"level":"info","module":"smtp","phase":"startup","addr":"0.0.0.0:2500","time":"2023-01-20T08:12:45Z","message":"SMTP listening on tcp4"}
{"level":"info","module":"web","phase":"startup","addr":"0.0.0.0:9000","time":"2023-01-20T08:12:45Z","message":"HTTP listening on tcp4"}
{"level":"info","module":"pop3","phase":"startup","addr":"0.0.0.0:1100","time":"2023-01-20T08:12:45Z","message":"POP3 listening on tcp4"}

docker logs supabase_auth_*

{"level":"info","msg":"Go runtime metrics collection started","time":"2023-01-20T08:12:44Z"}
{"component":"pop","level":"info","msg":"Migrations already up to date, nothing to apply","time":"2023-01-20T08:12:44Z"}
{"args":[0.028376076],"component":"pop","level":"info","msg":"%.4f seconds","time":"2023-01-20T08:12:44Z"}
{"level":"info","msg":"GoTrue migrations applied successfully","time":"2023-01-20T08:12:44Z"}
{"component":"api","level":"warning","msg":"DEPRECATION NOTICE: GOTRUE_JWT_ADMIN_GROUP_NAME not supported by Supabase's GoTrue, will be removed soon","time":"2023-01-20T08:12:44Z"}
{"component":"api","level":"warning","msg":"DEPRECATION NOTICE: GOTRUE_JWT_DEFAULT_GROUP_NAME not supported by Supabase's GoTrue, will be removed soon","time":"2023-01-20T08:12:44Z"}
{"level":"info","msg":"GoTrue API started on: 0.0.0.0:9999","time":"2023-01-20T08:12:44Z"}

docker logs supabase_kong_*

2023/01/20 08:12:35 [warn] 8#0: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /usr/local/kong/nginx.conf:6
nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /usr/local/kong/nginx.conf:6
2023/01/20 08:12:44 [notice] 8#0: using the "epoll" event method
2023/01/20 08:12:44 [notice] 8#0: openresty/1.19.9.1
2023/01/20 08:12:44 [notice] 8#0: built by gcc 6.4.0 (Alpine 6.4.0) 
2023/01/20 08:12:44 [notice] 8#0: OS: Linux 6.1.1-1-MANJARO
2023/01/20 08:12:44 [notice] 8#0: getrlimit(RLIMIT_NOFILE): 1073741816:1073741816
2023/01/20 08:12:44 [notice] 8#0: start worker processes
2023/01/20 08:12:44 [notice] 8#0: start worker process 1123
2023/01/20 08:12:44 [notice] 8#0: start worker process 1124
2023/01/20 08:12:44 [notice] 8#0: start worker process 1125
2023/01/20 08:12:44 [notice] 8#0: start worker process 1126
2023/01/20 08:12:44 [notice] 1124#0: *2 [lua] init.lua:260: purge(): [DB cache] purging (local) cache, context: init_worker_by_lua*
2023/01/20 08:12:44 [notice] 1124#0: *2 [lua] init.lua:260: purge(): [DB cache] purging (local) cache, context: init_worker_by_lua*
2023/01/20 08:12:44 [notice] 1124#0: *2 [kong] init.lua:426 declarative config loaded from /home/kong/kong.yml, context: init_worker_by_lua*
2023/01/20 08:12:44 [notice] 1124#0: *2 [kong] init.lua:312 only worker #0 can manage, context: init_worker_by_lua*
2023/01/20 08:12:44 [notice] 1125#0: *3 [kong] init.lua:312 only worker #0 can manage, context: init_worker_by_lua*
2023/01/20 08:12:44 [notice] 1126#0: *4 [kong] init.lua:312 only worker #0 can manage, context: init_worker_by_lua*
172.19.0.1 - - [20/Jan/2023:08:14:04 +0000] "HEAD /rest/v1/ HTTP/1.1" 200 0 "-" "Go-http-client/1.1"
172.19.0.1 - - [20/Jan/2023:08:17:08 +0000] "OPTIONS /rest/v1/ HTTP/1.1" 200 0 "http://localhost:54323/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36"
172.19.0.1 - - [20/Jan/2023:08:17:09 +0000] "HEAD /rest/v1/ HTTP/1.1" 200 0 "http://localhost:54323/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36"

...

docker logs supabase_db_*

The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.UTF-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... Etc/UTC
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok


Success. You can now start the database server using:

initdb: warning: enabling "trust" authentication for local connections
initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb.
    pg_ctl -D /var/lib/postgresql/data -l logfile start

waiting for server to start.... 2023-01-20 08:12:16.255 UTC [54] LOG:  pgaudit extension initialized
 2023-01-20 08:12:16.368 UTC [54] LOG:  pgsodium primary server secret key loaded
 2023-01-20 08:12:16.520 UTC [54] LOG:  redirecting log output to logging collector process
 2023-01-20 08:12:16.520 UTC [54] HINT:  Future log output will appear in directory "/var/log/postgresql".
. done
server started

/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/00-schema.sql
CREATE ROLE
REVOKE
CREATE SCHEMA
CREATE FUNCTION
REVOKE
GRANT


/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/01-extension.sql
CREATE SCHEMA
CREATE EXTENSION


/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/init-scripts

/usr/local/bin/docker-entrypoint.sh: sourcing /docker-entrypoint-initdb.d/migrate.sh

/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/migrations

waiting for server to shut down..... done
server stopped

PostgreSQL init process complete; ready for start up.

 2023-01-20 08:12:19.122 UTC [1] LOG:  pgaudit extension initialized
 2023-01-20 08:12:19.134 UTC [1] LOG:  pgsodium primary server secret key loaded
 2023-01-20 08:12:19.377 UTC [1] LOG:  redirecting log output to logging collector process
 2023-01-20 08:12:19.377 UTC [1] HINT:  Future log output will appear in directory "/var/log/postgresql".

when I run “supabase start --help” I get this:

Start containers for Supabase local development

Usage:
  supabase start [flags]

Flags:
  -x, --exclude strings   Names of containers to not start. [gotrue, realtime, storage-api, imgproxy, kong, inbucket, postgrest, pgadmin-schema-diff, migra, postgres-meta, studio, deno-relay]
  -h, --help              help for start

Global Flags:
      --debug             output debug logs to stderr
      --experimental      enable experimental features
      --workdir string    path to a Supabase project directory

This is with supabase CLI version 1.33.0

You have to update your CLI version; the version I am using is 1.34.5.
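
In the GitHub Actions setup from the original report, that would mean pinning the CLI in setup-cli instead of using latest. A sketch, reusing the version number mentioned above purely as an example:

      - uses: supabase/setup-cli@v1
        with:
          version: 1.34.5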

I can increase the wait time from 10s to 20s and see how it goes. You can also exclude services from starting if they are not needed, e.g.:

supabase start -x storage-api,postgres-meta,studio

If you are testing migrations only, just the database needs to be started: supabase db start
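
Applied to a CI workflow, the same idea might look like the following steps (a sketch; the service names come from the --exclude list in the help output above):

      # Start only the services the tests actually need
      - run: supabase start -x storage-api,postgres-meta,studio
      # Or, when only migrations are under test, start just the database
      - run: supabase db start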