scrutiny: Testing InfluxDB version and getting "password and username not found" error

This is run in a fresh Docker container, so no settings from the old version were left over.

```
Docker:~/docker-compose$ docker logs scrutiny
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 01-timezone: executing...
[cont-init.d] 01-timezone: exited 0.
[cont-init.d] 50-config: executing...
[cont-init.d] 50-config: exited 0.
[cont-init.d] done.
[services.d] starting services
waiting for influxdb
waiting for scrutiny service to start
starting cron
[services.d] done.
starting influxdb
influxdb not ready
scrutiny api not ready
ts=2022-05-08T15:12:02.531330Z lvl=info msg="Welcome to InfluxDB" log_id=0aL3UZtl000 version=v2.2.0 commit=a2f8538837 build_date=2022-04-06T17:36:40Z
ts=2022-05-08T15:12:02.535848Z lvl=info msg="Resources opened" log_id=0aL3UZtl000 service=bolt path=/scrutiny/influxdb/influxd.bolt
ts=2022-05-08T15:12:02.535900Z lvl=info msg="Resources opened" log_id=0aL3UZtl000 service=sqlite path=/scrutiny/influxdb/influxd.sqlite
ts=2022-05-08T15:12:02.536757Z lvl=info msg="Bringing up metadata migrations" log_id=0aL3UZtl000 service="KV migrations" migration_count=19
ts=2022-05-08T15:12:02.615243Z lvl=info msg="Bringing up metadata migrations" log_id=0aL3UZtl000 service="SQL migrations" migration_count=5
ts=2022-05-08T15:12:02.629517Z lvl=info msg="Using data dir" log_id=0aL3UZtl000 service=storage-engine service=store path=/scrutiny/influxdb/engine/data
ts=2022-05-08T15:12:02.629583Z lvl=info msg="Compaction settings" log_id=0aL3UZtl000 service=storage-engine service=store max_concurrent_compactions=8 throughput_bytes_per_second=50331648 throughput_bytes_per_second_burst=50331648
ts=2022-05-08T15:12:02.629595Z lvl=info msg="Open store (start)" log_id=0aL3UZtl000 service=storage-engine service=store op_name=tsdb_open op_event=start
ts=2022-05-08T15:12:02.629632Z lvl=info msg="Open store (end)" log_id=0aL3UZtl000 service=storage-engine service=store op_name=tsdb_open op_event=end op_elapsed=0.039ms
ts=2022-05-08T15:12:02.629654Z lvl=info msg="Starting retention policy enforcement service" log_id=0aL3UZtl000 service=retention check_interval=30m
ts=2022-05-08T15:12:02.629659Z lvl=info msg="Starting precreation service" log_id=0aL3UZtl000 service=shard-precreation check_interval=10m advance_period=30m
ts=2022-05-08T15:12:02.630081Z lvl=info msg="Starting query controller" log_id=0aL3UZtl000 service=storage-reads concurrency_quota=1024 initial_memory_bytes_quota_per_query=9223372036854775807 memory_bytes_quota_per_query=9223372036854775807 max_memory_bytes=0 queue_size=1024
ts=2022-05-08T15:12:02.631198Z lvl=info msg="Configuring InfluxQL statement executor (zeros indicate unlimited)." log_id=0aL3UZtl000 max_select_point=0 max_select_series=0 max_select_buckets=0
ts=2022-05-08T15:12:02.636081Z lvl=info msg=Listening log_id=0aL3UZtl000 service=tcp-listener transport=http addr=:8086 port=8086
scrutiny api not ready
starting scrutiny
2022/05/08 09:12:07 No configuration file found at /scrutiny/config/scrutiny.yaml. Using Defaults.
time="2022-05-08T09:12:07-06:00" level=info msg="Trying to connect to scrutiny sqlite db: \n"

[scrutiny ASCII-art banner]
github.com/AnalogJ/scrutiny                             dev-0.3.12

Start the scrutiny server
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)

time="2022-05-08T09:12:07-06:00" level=info msg="Successfully connected to scrutiny sqlite db: \n"
panic: a username and password is required for a setup

goroutine 1 [running]:
github.com/analogj/scrutiny/webapp/backend/pkg/web/middleware.RepositoryMiddleware(0x129e540, 0xc000114078, 0x12a3720, 0xc000482230, 0x129e5c0)
        /go/src/github.com/analogj/scrutiny/webapp/backend/pkg/web/middleware/repository.go:14 +0xe6
github.com/analogj/scrutiny/webapp/backend/pkg/web.(*AppEngine).Setup(0xc000113290, 0x12a3720, 0xc000482230, 0x1)
        /go/src/github.com/analogj/scrutiny/webapp/backend/pkg/web/server.go:26 +0xcf
github.com/analogj/scrutiny/webapp/backend/pkg/web.(*AppEngine).Start(0xc000113290, 0x0, 0x0)
        /go/src/github.com/analogj/scrutiny/webapp/backend/pkg/web/server.go:91 +0x234
main.main.func2(0xc00011b380, 0x4, 0x6)
        /go/src/github.com/analogj/scrutiny/webapp/backend/cmd/scrutiny/scrutiny.go:112 +0x198
github.com/urfave/cli/v2.(*Command).Run(0xc000484480, 0xc00011b200, 0x0, 0x0)
        /go/pkg/mod/github.com/urfave/cli/v2@v2.2.0/command.go:164 +0x4e0
github.com/urfave/cli/v2.(*App).RunContext(0xc000102600, 0x128d440, 0xc0001a8010, 0xc0001a0020, 0x2, 0x2, 0x0, 0x0)
        /go/pkg/mod/github.com/urfave/cli/v2@v2.2.0/app.go:306 +0x814
github.com/urfave/cli/v2.(*App).Run(...)
        /go/pkg/mod/github.com/urfave/cli/v2@v2.2.0/app.go:215
main.main()
        /go/src/github.com/analogj/scrutiny/webapp/backend/cmd/scrutiny/scrutiny.go:137 +0x65a
waiting for influxdb
starting scrutiny
2022/05/08 09:12:07 No configuration file found at /scrutiny/config/scrutiny.yaml. Using Defaults.
```
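For context: the panic is raised while scrutiny initializes its InfluxDB connection, and the log shows that no /scrutiny/config/scrutiny.yaml was found, so the web service started with defaults and had no InfluxDB credentials to work with. As a rough sketch of where those credentials normally live, the snippet below shows an influxdb section for scrutiny.yaml; the key names follow the project's example config as best I can tell and the values are placeholders, so treat the whole block as an assumption and check example.scrutiny.yaml in the repo before relying on it.

```yaml
# Sketch of /scrutiny/config/scrutiny.yaml (path taken from the log above).
# Key names assumed from the project's example.scrutiny.yaml; values are placeholders.
web:
  database:
    location: /scrutiny/config/scrutiny.db        # sqlite db for device metadata
  influxdb:
    host: localhost
    port: 8086
    token: 'REPLACE_WITH_INFLUXDB_ADMIN_TOKEN'    # created during first-time InfluxDB setup
    org: 'scrutiny'
    bucket: 'metrics'
```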

About this issue

  • Original URL
  • State: closed
  • Created 2 years ago
  • Comments: 18 (6 by maintainers)

Most upvoted comments

I’m experiencing the same error. If I remove the SCRUTINY_WEB env var, the error disappears, but then the app has no web UI. This is not a proper fix, IMHO.

I had the same issue.

Fixed it by switching from the linuxserver image to the official one and removing the following from the docker-compose file:

      - SCRUTINY_API_ENDPOINT=http://localhost:8080
      - SCRUTINY_WEB=true
      - SCRUTINY_COLLECTOR=true

@derekcentrico - '/var/run/docker.sock:/tmp/docker.sock:ro' is unnecessary. Personally, I’d be really careful about volume-mounting the Docker socket into random containers; it’s a huge security hole.
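As a rough illustration of the alternative, the collector can be given disk access directly through device entries and capabilities rather than through the Docker socket; the image tag, environment variable, and device paths below are assumptions for illustration, not something confirmed in this thread:

```yaml
# Hypothetical collector-only sketch: raw disk access without mounting the Docker socket.
# Image tag, env var name, and device list are assumptions.
version: '3.5'
services:
  collector:
    image: ghcr.io/analogj/scrutiny:master-collector      # assumed collector image tag
    cap_add:
      - SYS_RAWIO                    # lets smartctl issue raw commands to the disks
    volumes:
      - /run/udev:/run/udev:ro       # device metadata needed by the collector
    environment:
      - COLLECTOR_API_ENDPOINT=http://scrutiny-web:8080   # assumed hostname of the web service
    devices:
      - /dev/sda
      - /dev/sdb
```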

I’m still looking into your sde error message.


For everyone else following this thread, I’ve created an InfluxDB troubleshooting guide:

https://github.com/AnalogJ/scrutiny/blob/master/docs/TROUBLESHOOTING_INFLUXDB.md

Here are a couple of confirmed working docker-compose files that you may want to look at:

Some notes:

  • SCRUTINY_WEB=true is unnecessary with the official image, as the web/api service is always running.
  • You must always persist the influxdb folder (omnibus image: /opt/scrutiny/influxdb, vanilla InfluxDB image: /var/lib/influxdb2); otherwise the InfluxDB data will not survive restarts, the credentials stored in the config file will no longer match, and your database will be empty between restarts. See the compose sketch after these notes.
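To make those notes concrete, here is a rough omnibus compose sketch with both the config and influxdb directories persisted. It is not one of the linked files, and the image tag, host paths, and device list are illustrative assumptions:

```yaml
# Hypothetical omnibus sketch: config and influxdb directories are persisted so the
# auto-generated InfluxDB credentials keep matching across restarts.
version: '3.5'
services:
  scrutiny:
    image: ghcr.io/analogj/scrutiny:master-omnibus   # assumed omnibus image tag
    ports:
      - '8080:8080'   # web UI / API
      - '8086:8086'   # bundled InfluxDB
    cap_add:
      - SYS_RAWIO
    volumes:
      - /run/udev:/run/udev:ro
      - ./scrutiny/config:/opt/scrutiny/config        # scrutiny.yaml + sqlite db
      - ./scrutiny/influxdb:/opt/scrutiny/influxdb    # InfluxDB data (must persist)
    devices:
      - /dev/sda
      - /dev/sdb
```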

@derekcentrico I had some UI issues which mixed up warn vs failed disks, which could be why your disks are all classified as failed now. Can you send me over the output of the following command for your devices?

```
smartctl -x -j /dev/[DEVICE_NAME_HERE]
```

Thanks!

Well, this is fascinating. I have it operational now, and all my ATA drives are “failed”, whereas on linuxserver’s variant none were. Head scratcher.