kuby-core: Error: uninitialized constant Kuby::KubeDB with Sidekiq or Redis gems

Adding the Sidekiq or Redis gems results in the following error:

RAILS_MASTER_KEY=some-key bundle exec kuby -e production deploy
Error: uninitialized constant Kuby::KubeDB

Gemfile

gem "kuby-redis", "~> 0.1.0"
gem "kuby-sidekiq", "~> 0.3.0"

kuby.rb file

kubernetes do
  ***

  add_plugin :redis do
    instance(:my_rails_cache)
  end

  add_plugin :sidekiq
end

About this issue

  • State: closed
  • Created 2 years ago
  • Comments: 16

Most upvoted comments

Impressive work @camertron! Thanks for putting so much effort into this!

Hey everyone, just wanted to jump back in here and let you know that new versions of kuby-redis, kuby-sidekiq, and kuby-core have been published. This is the “next big release” I was talking about, and it brings a whole bunch of features and fixes. Check out the full changelog entry for more information.

Wow, thanks for sharing @scart88!

For what it’s worth, I’ve been working a lot on getting the next big release of Kuby out the door, which includes upgrades to both the kuby-redis and kuby-sidekiq gems. I decided to use the Spotahome Redis operator, which supports failover and some other nice features. My hope is it will be pretty turnkey for those wanting to use Sidekiq or stand up a Rails cache.

Hey @denikus

Yes, I used bitnami/redis https://artifacthub.io/packages/helm/bitnami/redis.

You will need a PersistentVolume; in my case I used storageClass: "do-block-storage" since I'm on DigitalOcean. You will also need to configure your Redis helm values: add your password and storageClass, set your replica count, add your resource limits, etc.

I also used a PersistentVolume size of 1Gi for the master and 1Gi for each replica. If you have 1 master and 3 replicas, helm will generate 4 PersistentVolumeClaims on your storage. The default is 8Gi, so you might want to change that.
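If you want to set those sizes in your values file, something like the following should work. (These key names follow the current bitnami/redis chart layout; older chart versions nested persistence differently, so double-check against the values reference linked below.)

master:
  persistence:
    size: 1Gi      # PVC size for the master
replica:
  replicaCount: 3
  persistence:
    size: 1Gi      # PVC size for each replica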

Here is a tutorial on how to install the bitnami/redis helm chart: https://phoenixnap.com/kb/kubernetes-redis#ftoc-heading-2

I discovered https://k8slens.dev/, which is a great open-source tool for viewing and managing your cluster. You don't even need an account.

Here you can see all available bitnami/redis values you can change: https://artifacthub.io/packages/helm/bitnami/redis?modal=values

global:
  imageRegistry: ""
  ## E.g.
  ## imagePullSecrets:
  ##   - myRegistryKeySecretName
  ##
  imagePullSecrets: []
  storageClass: "do-block-storage"
  redis:
    password: "your-very-strong-password"
  resources:
    limits:
      cpu: 200m
      memory: 256Mi
    requests:
      cpu: 100m
      memory: 128Mi
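With your overrides saved to a values.yaml file, installing the chart then looks something like this (the release name redis is just an example):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install redis bitnami/redis -f values.yaml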

Once you set up Redis, you can access it from other pods inside your cluster. The default username is default, and the password is what you set under global:

redis://default:your-password@redis-master.default.svc.cluster.local:6379/0
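To sanity-check the connection, you can open a Rails console in one of your pods and ping Redis. This assumes the redis gem is in your bundle; the URL is the example from above, so substitute your own password and namespace.

require "redis"

redis = Redis.new(url: "redis://default:your-password@redis-master.default.svc.cluster.local:6379/0")
redis.ping # => "PONG" if the service is reachable and the password is correct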

I added my Redis URL to my Rails credentials (via bin/rails credentials:edit) and created a sidekiq.rb initializer.

config/initializers/sidekiq.rb

sidekiq_url = if Rails.env.production?
  Rails.application.credentials.dig(:production, :REDIS_URL) || "redis://localhost:6379/1"
else
  "redis://localhost:6379/1"
end


Sidekiq.configure_server do |config|
  config.redis = { url: sidekiq_url }
end

Sidekiq.configure_client do |config|
  config.redis = { url: sidekiq_url }
end
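You can confirm that Sidekiq picked up the right URL from a console; Sidekiq.redis yields a connection from its pool (this is the Sidekiq 6 / redis gem API; on Sidekiq 7, which uses redis-client, you'd write conn.call("PING") instead):

Sidekiq.redis { |conn| conn.ping } # => "PONG"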

I also changed my cable.yml file so Action Cable uses the same Redis:

production:
  adapter: redis
  url: <%= Rails.application.credentials.dig(:production, :REDIS_URL) || "redis://localhost:6379/1" %>

If you are using hiredis and a Redis cache_store, you will need to use the same REDIS_URL in your development.rb or production.rb:

config.cache_store = :redis_cache_store, { driver: :hiredis, url: Rails.application.credentials[:REDIS_URL] || "redis://localhost:6379/1" }

config.session_store :redis_session_store, key: "_session_app_production", serializer: :json,
  redis: {
    driver: :hiredis,
    expire_after: 1.year,
    ttl: 1.year,
    url: Rails.application.credentials[:REDIS_URL] || "redis://localhost:6379/6"
  }
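As a quick sanity check that the cache store is actually talking to Redis, you can run this from a production Rails console:

Rails.cache.write("healthcheck", "ok") # writes through the configured :redis_cache_store
Rails.cache.read("healthcheck")        # => "ok" if Redis is reachable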

At this point, you should have Redis configured and ready to use in your cluster. Next, you need to deploy the Sidekiq worker. Unfortunately I wasn't able to deploy it with Kuby, so I used a custom YAML file.

sidekiq.yml

You would install it with kubectl apply -f sidekiq.yml; however, you will need to adapt it to your app, cluster, namespace, etc. This is just an example.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
  namespace: your-app-namespace
  labels:
    role: worker
spec:
  revisionHistoryLimit: 0
  replicas: 1
  selector:
    matchLabels:  # must match the pod template labels below
      app: app-worker
      role: worker
  template:
    metadata:
      labels:
        app: app-worker
        role: worker
    spec:
      containers:
      - name: app-worker
        image: your-registry-url
        imagePullPolicy: Always
        command: ["launcher"]
        args: ["bundle", "exec", "sidekiq"]
        envFrom:
        - configMapRef:
            name: env
        - secretRef:
            name: your-app-secrets
        resources:
          requests:
            cpu: "500m"
            memory: "500Mi"
          limits:
            cpu: "1000m"
            memory: "1000Mi" 
      initContainers:
      - name: migration-check
        image: your-registry-url
        imagePullPolicy: Always
        command: ["launcher"]
        args: ["rake", "db:abort_if_pending_migrations"]
        envFrom:
        - configMapRef:
            name: env # for example app-example-config
        - secretRef:
            name: your-app-secrets # for example app-example-secrets
      imagePullSecrets:
      - name: your-reg-secrets

You can run bundle exec kuby -e production resources, look for the Deployment with the role web, duplicate it, and adapt it to look like my example above.

Maybe @camertron can help a little bit more with the sidekiq.yml file and with how it can be configured directly inside the kuby.rb file.
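In the meantime, once the updated kuby-sidekiq gem works for you, the plugin is meant to stand up the worker Deployment itself, so the kuby.rb config should reduce to something like this sketch (the replicas option is my assumption; verify the exact DSL against the kuby-sidekiq docs for your version):

kubernetes do
  # ...

  add_plugin :sidekiq do
    replicas 2 # assumed option; check the plugin docs
  end
end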

I hope it makes sense.