quivr: [Bug]: I have deployed it on the cloud server, but the interface keeps throwing errors.
What happened?
A bug happened! The error message is: Error occurred while fetching your chats
Relevant log output
root@iZ6we7nwen3zaknisrym6xZ:~/quivr# docker-compose -f docker-compose.local.yml up
redis is up-to-date
traefik is up-to-date
beat is up-to-date
worker is up-to-date
web is up-to-date
backend-core is up-to-date
flower is up-to-date
Attaching to redis, traefik, beat, worker, web, backend-core, flower
backend-core | INFO: Will watch for changes in these directories: ['/code']
backend-core | WARNING: "workers" flag is ignored when reloading is enabled.
backend-core | INFO: Uvicorn running on http://0.0.0.0:5050 (Press CTRL+C to quit)
backend-core | INFO: Started reloader process [1] using StatReload
backend-core | INFO: Started server process [7]
backend-core | INFO: Waiting for application startup.
backend-core | [INFO] Downloading pandoc from https://github.com/jgm/pandoc/releases/download/3.1.9/pandoc-3.1.9-1-amd64.deb ...
backend-core | 2023-11-16 19:01:03,993:INFO - Downloading pandoc from https://github.com/jgm/pandoc/releases/download/3.1.9/pandoc-3.1.9-1-amd64.deb ...
backend-core | [INFO] Unpacking pandoc-3.1.9-1-amd64.deb to tempfolder...
backend-core | 2023-11-16 19:01:05,825:INFO - Unpacking pandoc-3.1.9-1-amd64.deb to tempfolder...
backend-core | [INFO] Copying pandoc to /root/bin ...
backend-core | 2023-11-16 19:01:06,892:INFO - Copying pandoc to /root/bin ...
backend-core | [INFO] Making /root/bin/pandoc executeable...
backend-core | 2023-11-16 19:01:07,004:INFO - Making /root/bin/pandoc executeable...
backend-core | [INFO] Copying copyright to /root/bin ...
backend-core | 2023-11-16 19:01:07,005:INFO - Copying copyright to /root/bin ...
backend-core | INFO: Application startup complete.
backend-core | INFO: 172.18.0.2:56616 - "GET /.git/config HTTP/1.1" 404 Not Found
backend-core | INFO: 172.18.0.2:56618 - "HEAD / HTTP/1.1" 405 Method Not Allowed
backend-core | INFO: 172.18.0.2:45804 - "GET / HTTP/1.1" 200 OK
backend-core | INFO: 172.18.0.2:34654 - "GET / HTTP/1.1" 200 OK
backend-core | INFO: 172.18.0.2:49182 - "HEAD / HTTP/1.1" 405 Method Not Allowed
backend-core | INFO: 172.18.0.2:46670 - "GET / HTTP/1.1" 200 OK
backend-core | INFO: 172.18.0.2:46670 - "GET / HTTP/1.1" 200 OK
backend-core | INFO: 172.18.0.2:54740 - "GET / HTTP/1.1" 200 OK
backend-core | INFO: 172.18.0.2:54740 - "HEAD /favicon.ico HTTP/1.1" 404 Not Found
backend-core | INFO: 172.18.0.2:45068 - "GET / HTTP/1.1" 200 OK
backend-core | INFO: 172.18.0.2:45068 - "GET /favicon.ico HTTP/1.1" 404 Not Found
backend-core | INFO: 172.18.0.2:55704 - "GET /robots.txt HTTP/1.1" 404 Not Found
backend-core | INFO: 172.18.0.2:55704 - "GET / HTTP/1.1" 200 OK
backend-core | INFO: 172.18.0.2:43258 - "GET / HTTP/1.1" 200 OK
backend-core | INFO: 172.18.0.2:49698 - "GET / HTTP/1.1" 200 OK
backend-core | INFO: 172.18.0.2:49698 - "GET /favicon.ico HTTP/1.1" 404 Not Found
backend-core | INFO: 172.18.0.2:40356 - "GET / HTTP/1.1" 200 OK
backend-core | INFO: 172.18.0.2:47010 - "GET / HTTP/1.1" 200 OK
backend-core | INFO: 172.18.0.2:47010 - "GET / HTTP/1.1" 200 OK
backend-core | INFO: 172.18.0.2:47010 - "GET / HTTP/1.1" 200 OK
backend-core | INFO: 172.18.0.2:47010 - "GET / HTTP/1.1" 200 OK
backend-core | INFO: 172.18.0.2:47010 - "GET / HTTP/1.1" 200 OK
backend-core | INFO: 172.18.0.2:47018 - "GET / HTTP/1.1" 200 OK
backend-core | INFO: 172.18.0.2:50948 - "GET / HTTP/1.1" 200 OK
backend-core | INFO: 172.18.0.2:50948 - "GET /Public/home/js/check.js HTTP/1.1" 404 Not Found
backend-core | INFO: 172.18.0.2:50948 - "GET /static/admin/javascript/hetong.js HTTP/1.1" 404 Not Found
backend-core | INFO: 172.18.0.2:34062 - "HEAD / HTTP/1.1" 405 Method Not Allowed
backend-core | INFO: 172.18.0.2:50430 - "GET / HTTP/1.1" 200 OK
backend-core | INFO: 172.18.0.2:50430 - "GET /Public/home/js/check.js HTTP/1.1" 404 Not Found
backend-core | INFO: 172.18.0.2:50430 - "GET /static/admin/javascript/hetong.js HTTP/1.1" 404 Not Found
backend-core | INFO: 172.18.0.2:55734 - "GET / HTTP/1.1" 200 OK
backend-core | INFO: 172.18.0.2:55734 - "GET /favicon.ico HTTP/1.1" 404 Not Found
backend-core | INFO: 172.18.0.2:57138 - "GET / HTTP/1.1" 200 OK
backend-core | INFO: 172.18.0.2:57138 - "GET /favicon.ico HTTP/1.1" 404 Not Found
backend-core | INFO: 172.18.0.2:50866 - "GET / HTTP/1.1" 200 OK
backend-core | INFO: 172.18.0.2:50866 - "GET /favicon.ico HTTP/1.1" 404 Not Found
backend-core | INFO: 172.18.0.2:36076 - "HEAD / HTTP/1.1" 405 Method Not Allowed
backend-core | INFO: 172.18.0.2:52514 - "GET / HTTP/1.1" 200 OK
backend-core | INFO: 172.18.0.2:38860 - "GET / HTTP/1.1" 200 OK
backend-core | INFO: 172.18.0.2:54758 - "GET / HTTP/1.1" 200 OK
backend-core | INFO: 172.18.0.2:54758 - "GET /favicon.ico HTTP/1.1" 404 Not Found
backend-core | INFO: 172.18.0.2:55762 - "GET / HTTP/1.1" 200 OK
flower | 2023-11-16 19:00:59,260:INFO - Visit me at http://0.0.0.0:5555
flower | 2023-11-16 19:00:59,265:INFO - Broker: redis://redis:6379/0
flower | 2023-11-16 19:00:59,268:INFO - Registered tasks:
flower | ['celery.accumulate',
flower | 'celery.backend_cleanup',
flower | 'celery.chain',
flower | 'celery.chord',
flower | 'celery.chord_unlock',
flower | 'celery.chunks',
flower | 'celery.group',
flower | 'celery.map',
flower | 'celery.starmap',
flower | 'celery_task.create_embedding_for_document',
flower | 'celery_worker.remove_onboarding_more_than_x_days_task',
flower | 'process_crawl_and_notify',
flower | 'process_file_and_notify']
flower | 2023-11-16 19:00:59,272:INFO - Connected to redis://redis:6379/0
flower | 2023-11-16 19:01:00,298:WARNING - Inspect method registered failed
flower | 2023-11-16 19:01:00,301:WARNING - Inspect method active_queues failed
flower | 2023-11-16 19:01:00,307:WARNING - Inspect method scheduled failed
flower | 2023-11-16 19:01:00,308:WARNING - Inspect method stats failed
flower | 2023-11-16 19:01:00,312:WARNING - Inspect method active failed
flower | 2023-11-16 19:01:00,313:WARNING - Inspect method reserved failed
beat | [2023-11-16 19:00:58,577: INFO/MainProcess] beat: Starting...
beat | [2023-11-17 00:00:00,002: INFO/MainProcess] Scheduler: Sending due task remove_onboarding_more_than_x_days_task (celery_worker.remove_onboarding_more_than_x_days_task)
beat | [2023-11-18 00:00:00,000: INFO/MainProcess] Scheduler: Sending due task remove_onboarding_more_than_x_days_task (celery_worker.remove_onboarding_more_than_x_days_task)
redis | 1:C 16 Nov 2023 19:00:53.811 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis | 1:C 16 Nov 2023 19:00:53.811 * oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis | 1:C 16 Nov 2023 19:00:53.811 * Redis version=7.2.3, bits=64, commit=00000000, modified=0, pid=1, just started
redis | 1:C 16 Nov 2023 19:00:53.811 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis | 1:M 16 Nov 2023 19:00:53.811 * monotonic clock: POSIX clock_gettime
redis | 1:M 16 Nov 2023 19:00:53.812 * Running mode=standalone, port=6379.
redis | 1:M 16 Nov 2023 19:00:53.812 * Server initialized
redis | 1:M 16 Nov 2023 19:00:53.812 * Ready to accept connections tcp
redis | 1:M 16 Nov 2023 19:55:49.455 * 100 changes in 300 seconds. Saving...
redis | 1:M 16 Nov 2023 19:55:49.455 * Background saving started by pid 19
redis | 19:C 16 Nov 2023 19:55:49.459 * DB saved on disk
redis | 19:C 16 Nov 2023 19:55:49.459 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 16 Nov 2023 19:55:49.556 * Background saving terminated with success
redis | 1:M 16 Nov 2023 20:55:50.073 * 1 changes in 3600 seconds. Saving...
redis | 1:M 16 Nov 2023 20:55:50.074 * Background saving started by pid 20
redis | 20:C 16 Nov 2023 20:55:50.077 * DB saved on disk
redis | 20:C 16 Nov 2023 20:55:50.077 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 16 Nov 2023 20:55:50.174 * Background saving terminated with success
redis | 1:M 16 Nov 2023 21:55:51.093 * 1 changes in 3600 seconds. Saving...
redis | 1:M 16 Nov 2023 21:55:51.094 * Background saving started by pid 21
redis | 21:C 16 Nov 2023 21:55:51.097 * DB saved on disk
redis | 21:C 16 Nov 2023 21:55:51.098 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 16 Nov 2023 21:55:51.194 * Background saving terminated with success
redis | 1:M 16 Nov 2023 22:55:52.015 * 1 changes in 3600 seconds. Saving...
redis | 1:M 16 Nov 2023 22:55:52.016 * Background saving started by pid 22
redis | 22:C 16 Nov 2023 22:55:52.019 * DB saved on disk
redis | 22:C 16 Nov 2023 22:55:52.019 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 16 Nov 2023 22:55:52.116 * Background saving terminated with success
redis | 1:M 16 Nov 2023 23:55:53.090 * 1 changes in 3600 seconds. Saving...
redis | 1:M 16 Nov 2023 23:55:53.090 * Background saving started by pid 23
redis | 23:C 16 Nov 2023 23:55:53.093 * DB saved on disk
redis | 23:C 16 Nov 2023 23:55:53.094 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 16 Nov 2023 23:55:53.190 * Background saving terminated with success
redis | 1:M 17 Nov 2023 00:55:54.058 * 1 changes in 3600 seconds. Saving...
redis | 1:M 17 Nov 2023 00:55:54.058 * Background saving started by pid 24
redis | 24:C 17 Nov 2023 00:55:54.062 * DB saved on disk
redis | 24:C 17 Nov 2023 00:55:54.062 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 17 Nov 2023 00:55:54.159 * Background saving terminated with success
redis | 1:M 17 Nov 2023 01:55:55.035 * 1 changes in 3600 seconds. Saving...
redis | 1:M 17 Nov 2023 01:55:55.035 * Background saving started by pid 25
redis | 25:C 17 Nov 2023 01:55:55.039 * DB saved on disk
redis | 25:C 17 Nov 2023 01:55:55.039 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 17 Nov 2023 01:55:55.136 * Background saving terminated with success
redis | 1:M 17 Nov 2023 02:55:56.000 * 1 changes in 3600 seconds. Saving...
redis | 1:M 17 Nov 2023 02:55:56.001 * Background saving started by pid 26
redis | 26:C 17 Nov 2023 02:55:56.004 * DB saved on disk
redis | 26:C 17 Nov 2023 02:55:56.005 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 17 Nov 2023 02:55:56.101 * Background saving terminated with success
redis | 1:M 17 Nov 2023 03:55:57.094 * 1 changes in 3600 seconds. Saving...
redis | 1:M 17 Nov 2023 03:55:57.095 * Background saving started by pid 27
redis | 27:C 17 Nov 2023 03:55:57.098 * DB saved on disk
redis | 27:C 17 Nov 2023 03:55:57.098 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 17 Nov 2023 03:55:57.195 * Background saving terminated with success
redis | 1:M 17 Nov 2023 04:55:58.014 * 1 changes in 3600 seconds. Saving...
redis | 1:M 17 Nov 2023 04:55:58.014 * Background saving started by pid 28
redis | 28:C 17 Nov 2023 04:55:58.018 * DB saved on disk
redis | 28:C 17 Nov 2023 04:55:58.018 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 17 Nov 2023 04:55:58.115 * Background saving terminated with success
redis | 1:M 17 Nov 2023 05:55:59.064 * 1 changes in 3600 seconds. Saving...
redis | 1:M 17 Nov 2023 05:55:59.064 * Background saving started by pid 29
redis | 29:C 17 Nov 2023 05:55:59.067 * DB saved on disk
redis | 29:C 17 Nov 2023 05:55:59.067 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 17 Nov 2023 05:55:59.164 * Background saving terminated with success
redis | 1:M 17 Nov 2023 06:56:00.036 * 1 changes in 3600 seconds. Saving...
redis | 1:M 17 Nov 2023 06:56:00.036 * Background saving started by pid 30
redis | 30:C 17 Nov 2023 06:56:00.040 * DB saved on disk
redis | 30:C 17 Nov 2023 06:56:00.040 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 17 Nov 2023 06:56:00.136 * Background saving terminated with success
redis | 1:M 17 Nov 2023 07:56:01.054 * 1 changes in 3600 seconds. Saving...
redis | 1:M 17 Nov 2023 07:56:01.054 * Background saving started by pid 31
redis | 31:C 17 Nov 2023 07:56:01.058 * DB saved on disk
redis | 31:C 17 Nov 2023 07:56:01.058 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 17 Nov 2023 07:56:01.154 * Background saving terminated with success
redis | 1:M 17 Nov 2023 08:56:02.024 * 1 changes in 3600 seconds. Saving...
redis | 1:M 17 Nov 2023 08:56:02.025 * Background saving started by pid 32
redis | 32:C 17 Nov 2023 08:56:02.028 * DB saved on disk
redis | 32:C 17 Nov 2023 08:56:02.029 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 17 Nov 2023 08:56:02.125 * Background saving terminated with success
redis | 1:M 17 Nov 2023 09:56:03.064 * 1 changes in 3600 seconds. Saving...
redis | 1:M 17 Nov 2023 09:56:03.064 * Background saving started by pid 33
redis | 33:C 17 Nov 2023 09:56:03.067 * DB saved on disk
redis | 33:C 17 Nov 2023 09:56:03.068 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 17 Nov 2023 09:56:03.164 * Background saving terminated with success
redis | 1:M 17 Nov 2023 10:56:04.062 * 1 changes in 3600 seconds. Saving...
redis | 1:M 17 Nov 2023 10:56:04.062 * Background saving started by pid 34
redis | 34:C 17 Nov 2023 10:56:04.066 * DB saved on disk
redis | 34:C 17 Nov 2023 10:56:04.066 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 17 Nov 2023 10:56:04.163 * Background saving terminated with success
redis | 1:M 17 Nov 2023 11:56:05.034 * 1 changes in 3600 seconds. Saving...
redis | 1:M 17 Nov 2023 11:56:05.034 * Background saving started by pid 35
worker | /usr/local/lib/python3.11/site-packages/celery/platforms.py:840: SecurityWarning: You're running the worker with superuser privileges: this is
worker | absolutely not recommended!
worker |
worker | Please specify a different user using the --uid option.
worker |
worker | User information: uid=0 euid=0 gid=0 egid=0
worker |
worker | warnings.warn(SecurityWarning(ROOT_DISCOURAGED.format(
worker |
worker | -------------- celery@c365f830436b v5.2.7 (dawn-chorus)
worker | --- ***** -----
worker | -- ******* ---- Linux-5.15.0-86-generic-x86_64-with-glibc2.31 2023-11-16 19:00:58
worker | - *** --- * ---
worker | - ** ---------- [config]
worker | - ** ---------- .> app: celery_worker:0x7ff576fa7dd0
worker | - ** ---------- .> transport: redis://redis:6379/0
worker | - ** ---------- .> results: redis://redis:6379/0
worker | - *** --- * --- .> concurrency: 2 (prefork)
worker | -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
worker | --- ***** -----
worker | -------------- [queues]
worker | .> celery exchange=celery(direct) key=celery
worker |
worker |
worker | [tasks]
worker | . celery_task.create_embedding_for_document
worker | . celery_worker.remove_onboarding_more_than_x_days_task
worker | . process_crawl_and_notify
worker | . process_file_and_notify
worker |
worker | [2023-11-16 19:00:59,130: INFO/MainProcess] Connected to redis://redis:6379/0
worker | [2023-11-16 19:00:59,135: INFO/MainProcess] mingle: searching for neighbors
worker | [2023-11-16 19:01:00,158: INFO/MainProcess] mingle: all alone
worker | [2023-11-16 19:01:00,168: INFO/MainProcess] celery@c365f830436b ready.
worker | [2023-11-16 19:01:04,271: INFO/MainProcess] Events of group {task} enabled by remote.
worker | [2023-11-17 00:00:00,008: INFO/MainProcess] Task celery_worker.remove_onboarding_more_than_x_days_task[46261aef-9722-48cf-9bfe-7869445ce991] received
worker | [2023-11-17 00:00:00,477: INFO/ForkPoolWorker-2] HTTP Request: DELETE https://kmcofyegyrjeparmwfzg.supabase.co/rest/v1/onboardings?creation_time=lt.2023-11-10%2000%3A00%3A00.034599 "HTTP/1.1 200 OK"
worker | [2023-11-17 00:00:00,482: INFO/ForkPoolWorker-2] Task celery_worker.remove_onboarding_more_than_x_days_task[46261aef-9722-48cf-9bfe-7869445ce991] succeeded in 0.47273045600013575s: None
worker | [2023-11-18 00:00:00,002: INFO/MainProcess] Task celery_worker.remove_onboarding_more_than_x_days_task[a8b739ee-7ff7-45b9-b3c1-9ceff81974e4] received
worker | [2023-11-18 00:00:00,323: INFO/ForkPoolWorker-2] HTTP Request: DELETE https://kmcofyegyrjeparmwfzg.supabase.co/rest/v1/onboardings?creation_time=lt.2023-11-11%2000%3A00%3A00.016979 "HTTP/1.1 200 OK"
worker | [2023-11-18 00:00:00,324: INFO/ForkPoolWorker-2] Task celery_worker.remove_onboarding_more_than_x_days_task[a8b739ee-7ff7-45b9-b3c1-9ceff81974e4] succeeded in 0.3214102079946315s: None
redis | 35:C 17 Nov 2023 11:56:05.038 * DB saved on disk
redis | 35:C 17 Nov 2023 11:56:05.038 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 17 Nov 2023 11:56:05.135 * Background saving terminated with success
redis | 1:M 17 Nov 2023 12:56:06.006 * 1 changes in 3600 seconds. Saving...
redis | 1:M 17 Nov 2023 12:56:06.006 * Background saving started by pid 36
redis | 36:C 17 Nov 2023 12:56:06.010 * DB saved on disk
redis | 36:C 17 Nov 2023 12:56:06.010 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 17 Nov 2023 12:56:06.106 * Background saving terminated with success
redis | 1:M 17 Nov 2023 13:56:07.054 * 1 changes in 3600 seconds. Saving...
redis | 1:M 17 Nov 2023 13:56:07.054 * Background saving started by pid 37
redis | 37:C 17 Nov 2023 13:56:07.058 * DB saved on disk
redis | 37:C 17 Nov 2023 13:56:07.058 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 17 Nov 2023 13:56:07.154 * Background saving terminated with success
redis | 1:M 17 Nov 2023 14:56:08.099 * 1 changes in 3600 seconds. Saving...
redis | 1:M 17 Nov 2023 14:56:08.099 * Background saving started by pid 38
redis | 38:C 17 Nov 2023 14:56:08.103 * DB saved on disk
redis | 38:C 17 Nov 2023 14:56:08.103 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 17 Nov 2023 14:56:08.200 * Background saving terminated with success
redis | 1:M 17 Nov 2023 15:56:09.014 * 1 changes in 3600 seconds. Saving...
redis | 1:M 17 Nov 2023 15:56:09.015 * Background saving started by pid 39
redis | 39:C 17 Nov 2023 15:56:09.018 * DB saved on disk
redis | 39:C 17 Nov 2023 15:56:09.018 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 17 Nov 2023 15:56:09.115 * Background saving terminated with success
redis | 1:M 17 Nov 2023 16:56:10.014 * 1 changes in 3600 seconds. Saving...
redis | 1:M 17 Nov 2023 16:56:10.014 * Background saving started by pid 40
redis | 40:C 17 Nov 2023 16:56:10.017 * DB saved on disk
redis | 40:C 17 Nov 2023 16:56:10.018 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 17 Nov 2023 16:56:10.114 * Background saving terminated with success
redis | 1:M 17 Nov 2023 17:56:11.031 * 1 changes in 3600 seconds. Saving...
redis | 1:M 17 Nov 2023 17:56:11.032 * Background saving started by pid 41
redis | 41:C 17 Nov 2023 17:56:11.035 * DB saved on disk
redis | 41:C 17 Nov 2023 17:56:11.035 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 17 Nov 2023 17:56:11.132 * Background saving terminated with success
redis | 1:M 17 Nov 2023 18:56:12.001 * 1 changes in 3600 seconds. Saving...
redis | 1:M 17 Nov 2023 18:56:12.001 * Background saving started by pid 42
redis | 42:C 17 Nov 2023 18:56:12.005 * DB saved on disk
redis | 42:C 17 Nov 2023 18:56:12.006 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 17 Nov 2023 18:56:12.102 * Background saving terminated with success
redis | 1:M 17 Nov 2023 19:56:13.046 * 1 changes in 3600 seconds. Saving...
redis | 1:M 17 Nov 2023 19:56:13.046 * Background saving started by pid 43
redis | 43:C 17 Nov 2023 19:56:13.049 * DB saved on disk
redis | 43:C 17 Nov 2023 19:56:13.050 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 17 Nov 2023 19:56:13.146 * Background saving terminated with success
redis | 1:M 17 Nov 2023 20:56:14.047 * 1 changes in 3600 seconds. Saving...
redis | 1:M 17 Nov 2023 20:56:14.047 * Background saving started by pid 44
redis | 44:C 17 Nov 2023 20:56:14.051 * DB saved on disk
redis | 44:C 17 Nov 2023 20:56:14.051 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 17 Nov 2023 20:56:14.148 * Background saving terminated with success
redis | 1:M 17 Nov 2023 21:56:15.091 * 1 changes in 3600 seconds. Saving...
redis | 1:M 17 Nov 2023 21:56:15.092 * Background saving started by pid 45
redis | 45:C 17 Nov 2023 21:56:15.095 * DB saved on disk
redis | 45:C 17 Nov 2023 21:56:15.095 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 17 Nov 2023 21:56:15.192 * Background saving terminated with success
redis | 1:M 17 Nov 2023 22:56:16.012 * 1 changes in 3600 seconds. Saving...
redis | 1:M 17 Nov 2023 22:56:16.012 * Background saving started by pid 46
redis | 46:C 17 Nov 2023 22:56:16.015 * DB saved on disk
redis | 46:C 17 Nov 2023 22:56:16.016 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 17 Nov 2023 22:56:16.112 * Background saving terminated with success
redis | 1:M 17 Nov 2023 23:56:17.081 * 1 changes in 3600 seconds. Saving...
redis | 1:M 17 Nov 2023 23:56:17.081 * Background saving started by pid 47
redis | 47:C 17 Nov 2023 23:56:17.084 * DB saved on disk
redis | 47:C 17 Nov 2023 23:56:17.085 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 17 Nov 2023 23:56:17.181 * Background saving terminated with success
redis | 1:M 18 Nov 2023 00:56:18.052 * 1 changes in 3600 seconds. Saving...
redis | 1:M 18 Nov 2023 00:56:18.053 * Background saving started by pid 48
redis | 48:C 18 Nov 2023 00:56:18.056 * DB saved on disk
redis | 48:C 18 Nov 2023 00:56:18.057 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 18 Nov 2023 00:56:18.153 * Background saving terminated with success
redis | 1:M 18 Nov 2023 01:56:19.049 * 1 changes in 3600 seconds. Saving...
redis | 1:M 18 Nov 2023 01:56:19.049 * Background saving started by pid 49
redis | 49:C 18 Nov 2023 01:56:19.053 * DB saved on disk
redis | 49:C 18 Nov 2023 01:56:19.054 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 18 Nov 2023 01:56:19.150 * Background saving terminated with success
redis | 1:M 18 Nov 2023 02:56:20.023 * 1 changes in 3600 seconds. Saving...
redis | 1:M 18 Nov 2023 02:56:20.024 * Background saving started by pid 50
redis | 50:C 18 Nov 2023 02:56:20.027 * DB saved on disk
redis | 50:C 18 Nov 2023 02:56:20.028 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 18 Nov 2023 02:56:20.124 * Background saving terminated with success
redis | 1:M 18 Nov 2023 03:56:21.089 * 1 changes in 3600 seconds. Saving...
redis | 1:M 18 Nov 2023 03:56:21.089 * Background saving started by pid 51
redis | 51:C 18 Nov 2023 03:56:21.092 * DB saved on disk
redis | 51:C 18 Nov 2023 03:56:21.093 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 18 Nov 2023 03:56:21.189 * Background saving terminated with success
redis | 1:M 18 Nov 2023 04:56:22.038 * 1 changes in 3600 seconds. Saving...
redis | 1:M 18 Nov 2023 04:56:22.038 * Background saving started by pid 52
redis | 52:C 18 Nov 2023 04:56:22.041 * DB saved on disk
redis | 52:C 18 Nov 2023 04:56:22.042 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 18 Nov 2023 04:56:22.138 * Background saving terminated with success
redis | 1:M 18 Nov 2023 05:56:23.005 * 1 changes in 3600 seconds. Saving...
redis | 1:M 18 Nov 2023 05:56:23.005 * Background saving started by pid 53
redis | 53:C 18 Nov 2023 05:56:23.009 * DB saved on disk
redis | 53:C 18 Nov 2023 05:56:23.010 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 18 Nov 2023 05:56:23.106 * Background saving terminated with success
redis | 1:M 18 Nov 2023 06:56:24.094 * 1 changes in 3600 seconds. Saving...
redis | 1:M 18 Nov 2023 06:56:24.095 * Background saving started by pid 54
redis | 54:C 18 Nov 2023 06:56:24.098 * DB saved on disk
redis | 54:C 18 Nov 2023 06:56:24.099 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 18 Nov 2023 06:56:24.195 * Background saving terminated with success
redis | 1:M 18 Nov 2023 07:56:25.022 * 1 changes in 3600 seconds. Saving...
redis | 1:M 18 Nov 2023 07:56:25.022 * Background saving started by pid 55
redis | 55:C 18 Nov 2023 07:56:25.026 * DB saved on disk
redis | 55:C 18 Nov 2023 07:56:25.026 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 18 Nov 2023 07:56:25.123 * Background saving terminated with success
redis | 1:M 18 Nov 2023 08:56:26.009 * 1 changes in 3600 seconds. Saving...
redis | 1:M 18 Nov 2023 08:56:26.009 * Background saving started by pid 56
redis | 56:C 18 Nov 2023 08:56:26.013 * DB saved on disk
redis | 56:C 18 Nov 2023 08:56:26.013 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 18 Nov 2023 08:56:26.110 * Background saving terminated with success
redis | 1:M 18 Nov 2023 09:56:27.041 * 1 changes in 3600 seconds. Saving...
redis | 1:M 18 Nov 2023 09:56:27.041 * Background saving started by pid 57
redis | 57:C 18 Nov 2023 09:56:27.045 * DB saved on disk
redis | 57:C 18 Nov 2023 09:56:27.045 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 18 Nov 2023 09:56:27.142 * Background saving terminated with success
redis | 1:M 18 Nov 2023 10:56:28.039 * 1 changes in 3600 seconds. Saving...
redis | 1:M 18 Nov 2023 10:56:28.039 * Background saving started by pid 58
redis | 58:C 18 Nov 2023 10:56:28.043 * DB saved on disk
redis | 58:C 18 Nov 2023 10:56:28.043 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 18 Nov 2023 10:56:28.140 * Background saving terminated with success
redis | 1:M 18 Nov 2023 11:56:29.004 * 1 changes in 3600 seconds. Saving...
redis | 1:M 18 Nov 2023 11:56:29.004 * Background saving started by pid 59
redis | 59:C 18 Nov 2023 11:56:29.007 * DB saved on disk
redis | 59:C 18 Nov 2023 11:56:29.008 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 18 Nov 2023 11:56:29.104 * Background saving terminated with success
redis | 1:M 18 Nov 2023 12:56:30.052 * 1 changes in 3600 seconds. Saving...
redis | 1:M 18 Nov 2023 12:56:30.052 * Background saving started by pid 60
redis | 60:C 18 Nov 2023 12:56:30.055 * DB saved on disk
redis | 60:C 18 Nov 2023 12:56:30.056 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 18 Nov 2023 12:56:30.152 * Background saving terminated with success
redis | 1:M 18 Nov 2023 13:56:31.074 * 1 changes in 3600 seconds. Saving...
redis | 1:M 18 Nov 2023 13:56:31.074 * Background saving started by pid 61
redis | 61:C 18 Nov 2023 13:56:31.078 * DB saved on disk
redis | 61:C 18 Nov 2023 13:56:31.078 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 18 Nov 2023 13:56:31.175 * Background saving terminated with success
redis | 1:M 18 Nov 2023 14:56:32.002 * 1 changes in 3600 seconds. Saving...
redis | 1:M 18 Nov 2023 14:56:32.002 * Background saving started by pid 62
redis | 62:C 18 Nov 2023 14:56:32.006 * DB saved on disk
redis | 62:C 18 Nov 2023 14:56:32.006 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 18 Nov 2023 14:56:32.103 * Background saving terminated with success
traefik | time="2023-11-16T19:00:54Z" level=info msg="Configuration loaded from flags."
traefik | time="2023-11-17T15:27:01Z" level=error msg="Error while Peeking first byte: read tcp 172.18.0.2:443->118.123.105.93:47976: read: connection timed out"
traefik | time="2023-11-17T16:44:01Z" level=error msg="Error while Peeking first byte: read tcp 172.18.0.2:443->121.62.19.203:57936: read: connection timed out"
web | yarn run v1.22.19
web | $ next start
web | - ready started server on [::]:3000, url: http://localhost:3000
web | react-i18next:: You will need to pass in an i18next instance by using initReactI18next
web | i18next: languageChanged en
web | i18next: initialized {
web | debug: true,
web | initImmediate: true,
web | ns: [ 'translation' ],
web | defaultNS: 'translation',
web | fallbackLng: [ 'en' ],
web | fallbackNS: false,
web | supportedLngs: false,
web | nonExplicitSupportedLngs: false,
web | load: 'all',
web | preload: false,
web | simplifyPluralSuffix: true,
web | keySeparator: '.',
web | nsSeparator: ':',
web | pluralSeparator: '_',
web | contextSeparator: '_',
web | partialBundledLanguages: false,
web | saveMissing: false,
web | updateMissing: false,
web | saveMissingTo: 'fallback',
web | saveMissingPlurals: true,
web | missingKeyHandler: false,
web | missingInterpolationHandler: false,
web | postProcess: false,
web | postProcessPassResolved: false,
web | returnNull: false,
web | returnEmptyString: true,
web | returnObjects: false,
web | joinArrays: false,
web | returnedObjectHandler: false,
web | parseMissingKeyHandler: false,
web | appendNamespaceToMissingKey: false,
web | appendNamespaceToCIMode: false,
web | overloadTranslationOptionHandler: [Function: handle],
web | interpolation: {
web | escapeValue: false,
web | format: [Function: bound format],
web | prefix: '{{',
web | suffix: '}}',
web | formatSeparator: ',',
web | unescapePrefix: '-',
web | nestingPrefix: '$t(',
web | nestingSuffix: ')',
web | nestingOptionsSeparator: ',',
web | maxReplaces: 1000,
web | skipOnVariables: true
web | },
web | lng: 'en',
web | resources: {
web | en: {
web | brain: [Object],
web | chat: [Object],
web | config: [Object],
web | contact: [Object],
web | explore: [Object],
web | home: [Object],
web | invitation: [Object],
web | login: [Object],
web | logout: [Object],
web | monetization: [Object],
web | translation: [Object],
web | upload: [Object],
web | user: [Object],
web | delete_or_unsubscribe_from_brain: [Object],
web | knowledge: {},
web | external_api_definition: [Object]
web | },
web | es: {
web | brain: [Object],
web | chat: [Object],
web | config: [Object],
web | contact: [Object],
web | explore: [Object],
web | home: [Object],
web | invitation: [Object],
web | login: [Object],
web | logout: [Object],
web | monetization: [Object],
web | translation: [Object],
web | upload: [Object],
web | user: [Object],
web | delete_or_unsubscribe_from_brain: [Object],
web | knowledge: {},
web | external_api_definition: [Object]
web | },
web | fr: {
web | brain: [Object],
web | chat: [Object],
web | config: [Object],
web | contact: [Object],
web | explore: [Object],
web | home: [Object],
web | invitation: [Object],
web | login: [Object],
web | logout: [Object],
web | monetization: [Object],
web | translation: [Object],
web | upload: [Object],
web | user: [Object],
web | delete_or_unsubscribe_from_brain: [Object],
web | knowledge: {},
web | external_api_definition: [Object]
web | },
web | ptbr: {
web | brain: [Object],
web | chat: [Object],
web | config: [Object],
web | contact: [Object],
web | explore: [Object],
web | home: [Object],
web | invitation: [Object],
web | login: [Object],
web | logout: [Object],
web | monetization: [Object],
web | translation: [Object],
web | upload: [Object],
web | user: [Object],
web | delete_or_unsubscribe_from_brain: [Object],
web | knowledge: {},
web | external_api_definition: [Object]
web | },
web | ru: {
web | brain: [Object],
web | chat: [Object],
web | config: [Object],
web | contact: [Object],
web | explore: [Object],
web | home: [Object],
web | invitation: [Object],
web | login: [Object],
web | logout: [Object],
web | monetization: [Object],
web | translation: [Object],
web | upload: [Object],
web | user: [Object],
web | delete_or_unsubscribe_from_brain: [Object],
web | knowledge: {},
web | external_api_definition: [Object]
web | },
web | zh_cn: {
web | brain: [Object],
web | chat: [Object],
web | config: [Object],
web | contact: [Object],
web | explore: [Object],
web | home: [Object],
web | invitation: [Object],
web | login: [Object],
web | logout: [Object],
web | monetization: [Object],
web | translation: [Object],
web | upload: [Object],
web | user: [Object],
web | delete_or_unsubscribe_from_brain: [Object],
web | knowledge: {},
web | external_api_definition: [Object]
web | }
web | },
web | ignoreJSONStructure: true
web | }
redis | 1:M 18 Nov 2023 15:56:33.089 * 1 changes in 3600 seconds. Saving...
redis | 1:M 18 Nov 2023 15:56:33.090 * Background saving started by pid 63
redis | 63:C 18 Nov 2023 15:56:33.093 * DB saved on disk
redis | 63:C 18 Nov 2023 15:56:33.094 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 18 Nov 2023 15:56:33.190 * Background saving terminated with success
Twitter / LinkedIn details
No response
About this issue
- Original URL
- State: closed
- Created 7 months ago
- Comments: 17 (1 by maintainers)
🤖
Yes, the environment variables GOOGLE_APPLICATION_CREDENTIALS, GOOGLE_CLOUD_PROJECT, RESEND_API_KEY, and RESEND_CONTACT_SALES_TO are indeed mandatory for the operation of Quivr. They are used to authenticate your application with Google Cloud services and the Resend service, as well as to identify your project and specify the recipient's email address for sales inquiries. Here's how you can set them:
- GOOGLE_APPLICATION_CREDENTIALS: the path of the JSON file that contains your service account key. This is used to authenticate your application with Google Cloud services.
- GOOGLE_CLOUD_PROJECT: the ID of your Google Cloud project. This is used to identify your project when interacting with Google Cloud services.
- RESEND_API_KEY: your Resend API key. This is used to authenticate your application with the Resend service.
- RESEND_CONTACT_SALES_TO: the recipient's email address for sales inquiries sent through the Resend service.
Here's an example of how these variables could be set:
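A sketch of what those four lines in backend/.env might look like (every value below is a placeholder, not a real path, project ID, key, or address):

```
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
GOOGLE_CLOUD_PROJECT=my-gcp-project-id
RESEND_API_KEY=re_xxxxxxxxxxxx
RESEND_CONTACT_SALES_TO=sales@example.com
```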
Please replace the values after the equals sign (=) with your actual values.
In addition to these, there are other environment variables in the backend/.env file that are crucial for Quivr's operation, such as SUPABASE_URL, SUPABASE_SERVICE_KEY, OPENAI_API_KEY, JWT_SECRET_KEY, CELERY_BROKER_URL, RESEND_EMAIL_ADDRESS, RESEND_CONTACT_SALES_FROM, PREMIUM_MAX_BRAIN_NUMBER, PREMIUM_MAX_BRAIN_SIZE, and PREMIUM_DAILY_CHAT_CREDIT. These should also be correctly set according to their respective purposes. For more information, you can refer to the .backend_env.example and env.md files in the Quivr repository.
Sources
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
🤖
Based on the information you've provided and the context from the Quivr repository, there are a few things that could be causing the issue with the core chat service.
Firstly, the error message "No chat_id provided" suggests that the chat_id is not being passed correctly to the update_chat function. This could be due to a problem in the code where this function is being called. You might want to check the part of your code where you're calling this function and ensure that the chat_id is being correctly passed.
Secondly, the issue could be related to missing or incorrect environment variables. The environment variables in the .env files are crucial for the correct operation of the Quivr application. For example, SUPABASE_URL and SUPABASE_SERVICE_KEY configure the connection to the Supabase backend service, and OPENAI_API_KEY configures the OpenAI service. If any of these variables are missing or incorrect, it could cause issues with the core chat service. In a similar issue found in the Quivr repository, the problem was related to missing API keys for the analytics tools. You might want to check your .env files and ensure that all the necessary API keys are present and correct. You can get these keys from the respective service providers.
Lastly, it's worth noting that the PG_DATABASE_URL in your backend/.env file is set to notimplementedyet. If you're using a PostgreSQL database, you'll need to replace this with the correct database URL.
I hope this helps! If you're still having trouble, could you provide more information about the exact error message you're seeing and the steps you're taking when the error occurs? That will help in diagnosing the problem.
🤖
From the logs you've provided, it seems like there are a few files that the server is trying to access but can't find, hence the 404 errors. These files include "/favicon.ico", "/Public/home/js/check.js", and "/static/admin/javascript/hetong.js".
The "/favicon.ico" is a standard file that browsers look for when loading a website. If your application doesn't need a favicon, this error can be safely ignored. However, if you want to get rid of this error, you can add a favicon.ico file to your server's root directory.
The "/Public/home/js/check.js" and "/static/admin/javascript/hetong.js" files seem to be JavaScript files that your application is trying to load. If these files are necessary for your application, you should ensure that they exist in the specified paths. If they are not necessary, you might want to check your application's code to see why it's trying to load these files and possibly remove those references if they are not needed.
As for the error message "Error occurred while fetching your chats", it's hard to say exactly what the issue is without more information. It could be a problem with the server's connection to the database, an issue with the chat data itself, or a bug in the code that fetches the chat data. You might want to check the part of your application that handles chat data to see if you can find any issues.
Regarding the 405 Method Not Allowed errors, these are occurring because your server is receiving HEAD requests but doesn't have a handler set up for them. HEAD is a type of HTTP request method that asks for a response identical to that of a GET request, but without the response body. If your application doesn't need to handle HEAD requests, these errors can be safely ignored. If you do want to handle HEAD requests, you'll need to set up a handler for them in your server's code.
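For illustration, here is a minimal Python stdlib sketch of a server that answers both GET and HEAD on the same route, which is what "setting up a HEAD handler" amounts to. This is a generic example, not Quivr's actual backend code (Quivr serves its API through uvicorn, as the logs show):

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def _respond(self, send_body: bool) -> None:
        body = b'{"status": "ok"}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        if send_body:
            self.wfile.write(body)

    def do_GET(self):
        self._respond(send_body=True)

    def do_HEAD(self):
        # Same status and headers as GET, but no response body.
        # Without this method, BaseHTTPRequestHandler would answer
        # HEAD requests with 501; frameworks without a HEAD route
        # typically answer 405, as seen in the traefik/backend logs.
        self._respond(send_body=False)

    def log_message(self, *args):
        pass  # silence per-request logging for the demo

def serve_once() -> HTTPServer:
    """Start the demo server on a random free port in a background thread."""
    server = HTTPServer(("127.0.0.1", 0), Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

The same idea applies to any framework: register the HEAD method on the route (or let the framework derive it from GET) so the health-check probes in the logs stop producing 405s.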
I hope this helps! If you have any more information or if there's anything else you're unsure about, feel free to ask.
🤖
Hello @dosubot[bot],
Nice to meet you! I'm Dosu, a friendly bot here to assist you with your Quivr-related issues and queries. While you're waiting for a human maintainer, I'm here to help you with bug troubleshooting, answering questions, and even guiding you on how to contribute to the project. Let's get started!
To check the memory usage of the Redis service in Quivr, you can use the docker stats command followed by the ID of your Redis container. This command provides real-time statistics about the Docker container, including memory usage. Here's how you can do it:
Replace <container_id> with the ID of your Redis container. If you find that the Redis service is using more memory than expected, you can limit its memory usage by adding a mem_limit directive under the redis service in your docker-compose file. Here's an example of how to do it:
In this example, the memory usage of the Redis service is limited to 500 megabytes. You can adjust this value according to your needs.
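Both pieces could look like this (a sketch; the 500 MB limit is just the value used as an example above):

```shell
# List containers to find the Redis container ID, then watch its live memory usage
docker ps
docker stats <container_id>
```

```yaml
# docker-compose fragment
redis:
  mem_limit: 500m
```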
As for the default memory allocation for the Redis service in the Quivr codebase, I wasn't able to find this information in the repository. It's possible that this information is available elsewhere or that I missed it. If you have any more questions or need further clarification, feel free to ask.
🤖
Based on the information provided in the Quivr repository, there are no specific configuration settings for the Redis service, including memory requirements. The Redis service is defined in the docker-compose.dev.yml file with a basic configuration that doesn't include any specific memory settings.
However, the warning about memory overcommit you're seeing in your logs could potentially cause issues with the backend-core service if Redis runs out of memory. Redis, like many data stores, requires enough memory to store all the data you're using it for. If it runs out of memory, it could start evicting data or even fail to write new data, which could cause issues with any service that relies on it.
To check the memory requirements of the Redis service, you can use the INFO memory command in Redis, which provides information about memory usage. This can give you an idea of how much memory Redis is currently using and whether you need to allocate more. If you find that Redis is using more memory than you have available, you may need to adjust your Redis configuration to better suit your memory constraints. This could involve setting a maxmemory limit, enabling data eviction policies, or optimizing your data structures for memory efficiency.
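A sketch of both steps, assuming the container is named redis as in the docker-compose output above (the 256 MB limit is an arbitrary example value, not a recommendation):

```shell
# Inspect current memory usage from inside the running container
docker exec -it redis redis-cli INFO memory
```

```
# redis.conf (or redis-server flags): cap memory and evict least-recently-used keys
maxmemory 256mb
maxmemory-policy allkeys-lru
```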
Please note that these are general Redis performance tuning strategies and may not be applicable to your specific use case. You may need to adjust these strategies based on your specific application requirements and infrastructure constraints.
I hope this helps! If you have any more questions or need further clarification, feel free to ask.
🤖
Based on the information you've provided and the context from the Quivr repository, the "Connection refused" error could be due to several reasons:
1. The backend service might not be running or might not have started successfully. You can check the logs of the backend-core service to see if it started successfully and is running on port 5050. Use the command docker logs backend-core to check the logs.
2. There might be a network issue: the backend-core service might not be reachable from the frontend service. You can check the network configuration in your Docker Compose file. Use the command docker network inspect <network-name> to inspect the network configuration.
3. The backend-core service might not be listening on port 5050. You can check the application code or the Dockerfile of the backend-core service to see if it's configured to listen on port 5050.
In the Quivr codebase, there is a specific configuration that needs to be set for the backend service to listen on port 5050. In the main.py file, the uvicorn.run() function should be set to listen on port 5050. Also, in the docker-compose.yml file, the backend-core service should be configured to expose and map port 5050. And in the Dockerfile, the application should be set to run on port 5050.
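Those three settings could look roughly like this (a sketch built from the file names mentioned above, not verbatim Quivr code; the "main:app" target is an assumption):

```python
# backend/main.py (fragment)
import uvicorn

if __name__ == "__main__":
    # Bind to all interfaces on port 5050, matching the Uvicorn log line
    uvicorn.run("main:app", host="0.0.0.0", port=5050)
```

```yaml
# docker-compose.yml (fragment)
backend-core:
  ports:
    - "5050:5050"
```

```
# Dockerfile (fragment)
EXPOSE 5050
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "5050"]
```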
Please check these configurations and let me know if you still encounter the issue.
🤖
To confirm that NEXT_PUBLIC_BACKEND_URL is correctly pointing to your backend service, you should check the .frontend_env.example file in the root of the project. This file contains the NEXT_PUBLIC_BACKEND_URL variable, which should be set to the URL where your backend service is running. In your case, it's currently set to https://bot.house360.co.jp:5050/.
To verify that the backend service is running and accessible, you can use a tool like curl or wget in your terminal, or a web-based tool like Postman. Send a request to the URL specified in NEXT_PUBLIC_BACKEND_URL and check the response. If the service is running and accessible, you should receive a successful HTTP response. Here's an example using curl:
If the service is running, you should see an HTTP/1.1 200 OK (or similar) response. If not, you might see a different status code, or no response at all, indicating that the service is not accessible.
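One possible check, using the URL reported above:

```shell
# -I sends a HEAD request and prints only the response headers.
# Note the backend may answer HEAD with 405 (as seen in the logs),
# so also try a plain GET and print just the status code:
curl -I https://bot.house360.co.jp:5050/
curl -s -o /dev/null -w "%{http_code}\n" https://bot.house360.co.jp:5050/
```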
For more information, you can refer to the env.md and .frontend_env.example files in the Quivr repository.
Regarding the Dockerfile for the backend-core service, it's structured as follows:
- It accepts a DEV_MODE build argument and sets it as an environment variable.
- It copies the application code into /code.
- It installs the dependencies; if DEV_MODE is set to true, it also installs debugpy.
- It starts the application with uvicorn with the specified options.
You can find the Dockerfile for the backend-core service in the backend/Dockerfile file in the Quivr repository.
🤖
From the information you've provided, there are several potential issues that could be causing the errors you're experiencing. Let's go through them one by one:
404 Not Found and 405 Method Not Allowed errors: These errors typically indicate that the server is unable to find the requested resource or the HTTP method used is not allowed. This could be due to incorrect routing or misconfiguration in your backend service. Please ensure that the routes and methods in your backend service match those being requested by the frontend.
Memory overcommit warnings from the Redis service: This warning is usually thrown when the system does not have enough memory to fulfill a request by Redis. You might want to consider increasing the memory allocated to your Docker containers or optimizing your application to use less memory.
Running the worker with superuser privileges: It's generally not recommended to run services with superuser privileges due to potential security risks. You might want to consider running your services with a non-root user.
Regarding your frontend .env file configuration, it seems to be correctly set up. However, please ensure that all the URLs and keys are correct and accessible. Also, make sure that NEXT_PUBLIC_BACKEND_URL is correctly pointing to your backend service and that the service is running and accessible.
Please note that without access to your backend service configuration and the specific error messages, it's difficult to provide a more precise diagnosis. If you could provide more details about your backend service configuration and the specific error messages, it would help identify the root cause of the issues.
🤖
Hello @xiaoqib,
Nice to meet you! I'm Dosu, an AI assistant here to help you with your issues, answer your questions, and guide you to become a contributor to the Quivr repository. While we wait for a human maintainer to assist you further, feel free to ask me anything. Looking forward to our interaction!
From your description, it seems like there are a few issues that need to be addressed.
Firstly, the "Error occurred while fetching your chats" message suggests that there might be an issue with the chat_router in the backend-core service. This router is responsible for handling chat-related requests. The 404 Not Found and 405 Method Not Allowed errors indicate that the server is unable to find the requested resource or that the HTTP method used is not supported for the requested resource. This could be due to incorrect endpoints being hit or the wrong HTTP methods being used. You might want to check your application's API calls to ensure they are correct.
Secondly, the warnings about memory overcommit in the Redis service suggest that your server might be running out of memory. As your logs show, Redis takes a background snapshot once at least one change has accumulated over 3600 seconds, and if the dataset grows too large for the fork-and-save to fit in memory, saves could start to fail. You might want to consider adjusting the save thresholds or increasing the server's memory if possible.
Lastly, running the worker with superuser privileges is generally not recommended due to potential security risks. It's advisable to run services with the least privileges necessary. You might want to consider changing this in your configuration.
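One way to do that, sketched as a docker-compose fragment (the service name matches the compose output above; the 1000:1000 UID/GID is an assumption for a typical first non-root user, and the user must exist in the image or have write access to the paths the worker uses):

```yaml
# docker-compose fragment: run the Celery worker as a non-root user
worker:
  user: "1000:1000"
```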
Regarding the interaction between the backend-core service and the Redis service, as per the provided context, the backend-core service depends on the Redis service as indicated in the docker-compose.dev.yml file. This means that the backend-core service will not start until the Redis service is up and running. However, the exact nature of the interaction (e.g., data exchange, function calls, etc.) is not specified in the provided context. For more detailed information about how backend-core interacts with Redis, you would need to look at the application code within the backend service.
I hope this helps! If you have any more questions or need further clarification, feel free to ask.