superset: Cache warm-ups never succeed

In short: the cache warm-up tasks launched by the Celery workers all silently fail. They perform GETs against the main server's URLs without providing the required authentication, yet dashboards cannot be loaded without being logged in.
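
For the record, here is a minimal standalone sketch of the failure mode (the URL mirrors the ones in the logs below; the fetch pattern is the same kind of anonymous urllib GET that superset/tasks/cache.py performs):

# Minimal sketch of the failure mode. urllib transparently follows the
# 302 redirect to /login/ and comes back with a 200 on the login page,
# so no URLError is raised and the warm-up task files the URL under
# 'success' even though nothing was cached.
from urllib import request
from urllib.error import URLError

url = "http://superset:8088/superset/explore/?form_data=%7B%22slice_id%22%3A%2031%7D"

try:
    response = request.urlopen(url)
    # geturl() now points at /login/?next=..., not at the chart URL
    print("success:", response.getcode(), response.geturl())
except URLError:
    print("error:", url)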

Related bugs:

  • the unit tests for this feature miss the error
  • the documentation should mention that the Celery worker needs the --beat flag to pick up Celery beat schedules (cf. the docker-compose.yml configuration; see the config sketch after this list)
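
For reference, the relevant superset_config.py section looks roughly like this (values are illustrative, adapted from the Superset docs for the cache-warmup task rather than from the exact patch below; the schedule is only honored by a worker started with --beat, or by a separate beat process):

# Illustrative superset_config.py excerpt: schedule the built-in
# "cache-warmup" task. Broker/backend URLs, schedule, and strategy
# values are examples and must match your deployment.
from celery.schedules import crontab

class CeleryConfig:
    BROKER_URL = "redis://redis:6379/0"
    CELERY_IMPORTS = ("superset.sql_lab", "superset.tasks")
    CELERY_RESULT_BACKEND = "redis://redis:6379/0"
    CELERYBEAT_SCHEDULE = {
        "cache-warmup-every-minute": {
            "task": "cache-warmup",
            "schedule": crontab(minute="*"),  # every minute
            "kwargs": {
                "strategy_name": "top_n_dashboards",
                "top_n": 5,
                "since": "7 days ago",
            },
        },
    }

CELERY_CONFIG = CeleryConfig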

At stake: long dashboard load times for our users, or outdated dashboards.

Main files to be fixed:

  • superset/tasks/cache.py

Expected results

When the Celery worker logs this (notice 'errors': []):

superset-worker_1  | [2020-04-20 13:05:00,299: INFO/ForkPoolWorker-3] Task cache-warmup[73c09754-4dcb-4674-9ac2-087b04b6e209] 
                     succeeded in 0.1351924880000297s: 
                     {'success': [
                         'http://superset:8088/superset/explore/?form_data=%7B%22slice_id%22%3A%2031%7D', 
                         'http://superset:8088/superset/explore/?form_data=%7B%22slice_id%22%3A%2032%7D', 
                         'http://superset:8088/superset/explore/?form_data=%7B%22slice_id%22%3A%2033%7D'], 
                     'errors': []}

… we would expect to have something (more or less) like this in the Superset server logs:

superset_1         | 172.20.0.6 - - [2020-04-20 13:05:00,049] "POST /superset/explore_json/?form_data=%7B%22slice_id%22%3A HTTP/1.1" 
                     200 738 "http://superset:8088/superset/dashboard/1/" "python-urllib2"

Of course, we would also expect a bunch of cache keys to appear in Redis, and dashboard loading to be lightning-quick.

Actual results

But we get the following logs instead, which show a 302 redirect to the login page, followed by a 200 on the login page itself. This redirect chain is interpreted as a success, both by the warm-up task and by the tests.

superset_1         | 172.20.0.6 - - [20/Apr/2020 08:12:00] "GET /superset/explore/?form_data=%7B%22slice_id%22%3A%2030%7D HTTP/1.1" 
                     302 -
superset_1         | INFO:werkzeug:172.20.0.6 - - [20/Apr/2020 08:12:00] "GET /superset/explore/?form_data=%7B%22slice_id%22%3A%2030%7D HTTP/1.1" 
                     302 -
superset_1         | 172.20.0.6 - - [20/Apr/2020 08:12:00] "GET /login/?next=http%3A%2F%2Fsuperset%3A8088%2Fsuperset%2Fexplore%2F%3Fform_data%3D%257B%2522slice_id%2522%253A%252030%257D HTTP/1.1" 
                     200 -

(I added a few line breaks for readability.)

In Redis, here is the only stored key:

$ docker-compose exec redis redis-cli
127.0.0.1:6379> KEYS *
1) "_kombu.binding.celery"

Lastly, the dashboards take a long time to load their data on first view.

Screenshots

None

How to reproduce the bug

I had to patch the master branch to reproduce this. In particular, I admit it was not clear to me whether the config was read from docker/pythonpath_dev/superset_config.py or from superset/config.py. So I adapted superset/config.py and copied it over to the pythonpath one (which appears to be read by the Celery worker, but not by the server).

Anyway, this reproduces the bug:

  1. $ docker system prune --all --volumes to remove all unused images, stopped containers, and volumes.
  2. $ git checkout master && git pull origin master
  3. $ wget -O configs.patch https://gist.githubusercontent.com/Pinimo/c339ea828974d2141423b6ae64192aa4/raw/e449c97c11f81f7270d6e0b2369d55ec41b079a9/0001-bug-Patch-master-to-reproduce-sweetly-the-cache-warm.patch && git apply configs.patch
    This applies a patch to master so the scenario plays out neatly; in particular it adds the --beat flag and schedules a cache warm-up task on all dashboards every minute.
  4. $ docker-compose up -d
  5. Wait for the containers to be built and up.
  6. $ docker-compose logs superset-worker | grep cache-warmup
  7. $ docker-compose logs superset | grep slice
  8. $ docker-compose exec redis redis-cli then type KEYS *

Environment

(please complete the following information):

  • superset version: 0.36.0
  • python version: dockerized
  • node.js version: dockerized
  • npm version: dockerized

Checklist

  • I have checked the superset logs for python stacktraces and included them here as text if there are any.
  • I have reproduced the issue with at least the latest released version of superset.
  • I have checked the issue tracker for the same issue and I haven’t found one similar.


Most upvoted comments

Any news or workarounds for avoiding the 302 to the login endpoint?

I still run into this issue with the latest Docker image (the warm-up succeeds on the worker, the Superset logs show a redirect to login, and no caches are refreshed). Not being able to warm up caches periodically feels like a vital missing feature.

@ajwhite @betodealmeida I sent a PR to address this issue, which is working in my environment.

@mistercrunch From what I can see in previous commits, the route used for cache warm-up in cache.py's get_url used to be /explore_json. Any reason it was changed to /explore?

Mmmh, maybe I’m missing something, but it seems like we shouldn’t have to go through the web server to do this.

Refactoring / mimicking what explore_json does might be an option. https://github.com/apache/incubator-superset/blob/master/superset/views/core.py#L525-L536
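
A rough sketch of that direction could look like the following; compute_slice_payload() is a hypothetical stand-in for whatever explore_json does internally to build (and thereby cache) a chart payload, not an existing Superset function, and module paths are as of the 0.36 layout:

# Hypothetical sketch only: warm the cache in-process, bypassing the
# web server and authentication entirely. compute_slice_payload() does
# NOT exist in Superset; it stands in for the payload-building logic
# that explore_json runs, which populates the cache as a side effect.
import logging

from superset import db
from superset.models.slice import Slice

logger = logging.getLogger(__name__)

def warm_up_caches() -> None:
    for slc in db.session.query(Slice).all():
        try:
            compute_slice_payload(slc)  # hypothetical helper
        except Exception:
            logger.exception("Failed to warm up slice %s", slc.id)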

Yet another draft solution:

  1. Simpler than (2): create a user/password for the Celery worker at db-init time, with a very specific caching role. I find it important that this role should never be able to actually extract any data (so that a stolen password would not matter much), only to ping the server and get it to cache the data. It would even be possible to modify the @login_required decorator to add the constraint (see the sketch below):
    • "if the user belongs only to group __cache_worker, then
      • they should never POST or PUT
      • they should never see the data they requested
      • but the data they request should be written to Redis just like for anybody else"
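
A conceptual sketch of that decorator constraint (plain Flask pseudologic, not Superset's actual @login_required; the __cache_worker role name comes from the proposal above):

# Conceptual sketch, not Superset code: a user whose only role is
# __cache_worker may trigger the computation (and hence the cache
# write), but may only GET and never receives the data itself.
from functools import wraps

from flask import abort, g, request

def cache_worker_aware(view):
    @wraps(view)
    def wrapper(*args, **kwargs):
        # g.user is how Flask-AppBuilder exposes the current user
        roles = {role.name for role in getattr(g.user, "roles", [])}
        is_cache_worker = roles == {"__cache_worker"}
        if is_cache_worker and request.method not in ("GET", "HEAD"):
            abort(405)  # never POST or PUT
        response = view(*args, **kwargs)  # computes and caches as usual
        if is_cache_worker:
            return "", 204  # data went to the cache, none is returned
        return response
    return wrapper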