charts: Artifactory helm chart not working

This is a request for help! I’ve been struggling with this for a couple of days and nothing I try seems to work. The nginx pods report a minimum availability error, and I get a 404 when trying to access Artifactory.

Helm version: 2.13.1

Kubernetes version:

  • client: v1.13.4
  • server: 1.11.7-gke.12

Chart version: 7.13.0

The chart was installed by running:

  helm install --name artifactory --version 7.13.0 -f values.yml jfrog/artifactory
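For reference, the symptoms above can be narrowed down with standard kubectl commands. This is only a sketch: the pod names are illustrative, assuming the release name artifactory in the default namespace (the exact names are generated by the chart's templates; the StatefulSet pod name is inferred from the PVC name mentioned later in the thread):

  # List the pods created by the release and their current status
  kubectl get pods

  # Describe the nginx pod to see why the deployment reports minimum availability
  kubectl describe pod <nginx-pod-name>

  # Tail the Artifactory StatefulSet pod logs for startup or database errors
  kubectl logs -f artifactory-artifactory-0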

This is the values file:

# This is a YAML-formatted file.

# Beware when changing values here. You should know what you are doing!
# Access the values with {{ .Values.key.subkey }}

# Common
initContainerImage: "alpine:3.8"

# For supporting pulling from private registries
imagePullSecrets:

## Role Based Access Control
## Ref: https://kubernetes.io/docs/admin/authorization/rbac/
rbac:
  create: true
  role:
    ## Rules to create. It follows the role specification
    rules:
    - apiGroups:
      - ''
      resources:
      - services
      - endpoints
      - pods
      verbs:
      - get
      - watch
      - list

## Service Account
## Ref: https://kubernetes.io/docs/admin/service-accounts-admin/
##
serviceAccount:
  create: true
  ## The name of the ServiceAccount to use.
  ## If not set and create is true, a name is generated using the fullname template
  name:

ingress:
  enabled: true
  defaultBackend:
    enabled: true
  # Used to create an Ingress record.
  hosts:
    - artifactory.kube.jlr-ddc.com
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: "letsencrypt-prod"
  labels:
    traffic-type: external
  # traffic-type: internal
  # tls:
  # # Secrets must be manually created in the namespace.
  # - secretName: '{{ deployment_name }}-secret'
  #   hosts:
  #     - artifactory.kube.jlr-ddc.com

logger:
  image:
    repository: busybox
    tag: 1.30

# Artifactory
artifactory:
  name: artifactory
  image:
    repository: "docker.bintray.io/jfrog/artifactory-oss"
    # repository: "docker.bintray.io/jfrog/artifactory-pro"
    # Note that by default we use appVersion to get image tag
    # version:
    pullPolicy: Always

  # Sidecar containers for tailing Artifactory logs
  loggers: []
  # - request.log
  # - event.log
  # - binarystore.log
  # - request_trace.log
  # - access.log
  # - artifactory.log
  # - build_info_migration.log

  # Sidecar containers for tailing Tomcat (catalina) logs
  catalinaLoggers: []
  # - catalina.log
  # - host-manager.log
  # - localhost.log
  # - manager.log

  ## Add custom init containers
  customInitContainers: |
  #  - name: "custom-setup"
  #    image: "{{ .Values.initContainerImage }}"
  #    imagePullPolicy: "{{ .Values.artifactory.image.pullPolicy }}"
  #    command:
  #      - 'sh'
  #      - '-c'
  #      - 'touch {{ .Values.artifactory.persistence.mountPath }}/example-custom-setup'
  #    volumeMounts:
  #      - mountPath: "{{ .Values.artifactory.persistence.mountPath }}"
  #        name: artifactory-volume

  ## Artifactory license secret.
  ## If artifactory.license.secret is passed, it will be mounted as
  ## ARTIFACTORY_HOME/etc/artifactory.lic and loaded at run time.
  ## The dataKey should be the name of the secret data key created.
  license:
    secret:
    dataKey:
  ## Create configMap with artifactory.config.import.xml and security.import.xml and pass name of configMap in following parameter
  configMapName:
  masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
  ## Alternatively, you can use a pre-existing secret with a key called master-key by specifying masterKeySecretName
  # masterKeySecretName:

  ## Extra pre-start command to install JDBC driver for MySql/MariaDb/Oracle
  # preStartCommand: "curl -L -o /opt/jfrog/artifactory/tomcat/lib/mysql-connector-java-5.1.41.jar https://jcenter.bintray.com/mysql/mysql-connector-java/5.1.41/mysql-connector-java-5.1.41.jar"
  ## Extra post-start command to run extra commands after container starts
  # postStartCommand:

  ## Extra environment variables that can be used to tune Artifactory to your needs.
  ## Uncomment and set value as needed
  extraEnvironmentVariables:
  # - name: SERVER_XML_ARTIFACTORY_PORT
  #   value: "8081"
  # - name: SERVER_XML_ARTIFACTORY_MAX_THREADS
  #   value: "200"
  # - name: SERVER_XML_ACCESS_MAX_THREADS
  #   value: "50"
  # - name: SERVER_XML_ARTIFACTORY_EXTRA_CONFIG
  #   value: ""
  # - name: SERVER_XML_ACCESS_EXTRA_CONFIG
  #   value: ""
  # - name: SERVER_XML_EXTRA_CONNECTOR
  #   value: ""
  # - name: DB_POOL_MAX_ACTIVE
  #   value: "100"
  # - name: DB_POOL_MAX_IDLE
  #   value: "10"

  annotations: {}

  service:
    name: artifactory
    type: ClusterIP
    annotations: {}
  externalPort: 8081
  internalPort: 8081
  internalPortReplicator: 6061
  externalPortReplicator: 6061
  uid: 1030
  ## The following settings are to configure the frequency of the liveness and readiness probes
  livenessProbe:
    enabled: false
    initialDelaySeconds: 180
    failureThreshold: 10
    timeoutSeconds: 10
    periodSeconds: 10
    successThreshold: 1

  readinessProbe:
    enabled: false
    initialDelaySeconds: 60
    failureThreshold: 10
    timeoutSeconds: 10
    periodSeconds: 10
    successThreshold: 1
  persistence:
    mountPath: "/var/opt/jfrog/artifactory"
    enabled: true
    ## A manually managed Persistent Volume and Claim
    ## Requires persistence.enabled: true
    ## If defined, PVC must be created manually before volume will be bound with the name e.g `artifactory`
    # existingClaim:

    accessMode: ReadWriteOnce
    size: 20Gi
    maxCacheSize: 50000000000


    ## Set the persistence storage type. This will apply the matching binarystore.xml to Artifactory config
    ## Supported types are:
    ## file-system (default)
    ## nfs
    ## google-storage
    ## aws-s3
    ## azure-blob
    type: file-system

    ## For artifactory.persistence.type nfs
    ## If using NFS as the shared storage, you must have a running NFS server that is accessible by your Kubernetes
    ## cluster nodes.
    ## Need to have the following set
    nfs:
      # Must pass actual IP of NFS server with '--set artifactory.persistence.nfs.ip=${NFS_IP}'
      ip:
      haDataMount: "/data"
      haBackupMount: "/backup"
      dataDir: "/var/opt/jfrog/artifactory"
      backupDir: "/var/opt/jfrog/artifactory-backup"
      capacity: 200Gi
    ## For artifactory.persistence.type google-storage
    googleStorage:
      # Set a unique bucket name
      bucketName: "artifactory-gcp"
      identity:
      credential:
      path: "artifactory/filestore"
    ## For artifactory.persistence.type aws-s3
    ## IMPORTANT: Make sure S3 `endpoint` and `region` match! See https://docs.aws.amazon.com/general/latest/gr/rande.html
    awsS3:
      # Set a unique bucket name
      bucketName: "artifactory-aws"
      endpoint:
      region:
      roleName:
      identity:
      credential:
      path: "artifactory/filestore"
      refreshCredentials: true
      testConnection: false
      s3AwsVersion: AWS4-HMAC-SHA256
      ## Additional properties to set on the s3 provider
      properties: {}
      #  httpclient.max-connections: 100
    ## For artifactory.persistence.type azure-blob
    azureBlob:
      accountName:
      accountKey:
      endpoint:
      containerName:
      testConnection: false
    ## artifactory data Persistent Volume Storage Class
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is
    ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
    ##   GKE, AWS & OpenStack)
    ##
    # storageClass: "-"
    ## Annotations for the Persistent Volume Claim
    annotations: {}
  ## Uncomment the following resources definitions or pass them from command line
  ## to control the cpu and memory resources allocated by the Kubernetes cluster
  resources:
    requests:
      memory: "4Gi"
      cpu: "2"
    limits:
      memory: "8Gi"
      cpu: "6"
  ## The following Java options are passed to the java process running Artifactory.
  ## You should set them according to the resources set above
  javaOpts:
    xms: "2g"
    xmx: "4g"
  #  other: ""
  nodeSelector: {}

  tolerations: []

  affinity: {}

  ## Artifactory Replicator is available only for Enterprise Plus
  replicator:
    enabled: false
    publicUrl:

# Nginx
nginx:
  enabled: true
  name: nginx
  replicaCount: 1
  uid: 104
  gid: 107
  image:
    repository: "docker.bintray.io/jfrog/nginx-artifactory-pro"
    # Note that by default we use appVersion to get image tag
    # version:
    pullPolicy: IfNotPresent

  # Sidecar containers for tailing Nginx logs
  loggers: []
  # - access.log
  # - error.log

  service:
    ## For minikube, set this to NodePort, elsewhere use LoadBalancer
    type: LoadBalancer
    ## For supporting whitelist on the Nginx LoadBalancer service
    ## Set this to a list of IP CIDR ranges
    ## Example: loadBalancerSourceRanges: ['10.10.10.5/32', '10.11.10.5/32']
    ## or pass from helm command line
    ## Example: helm install ... --set nginx.service.loadBalancerSourceRanges='{10.10.10.5/32,10.11.10.5/32}'
    loadBalancerSourceRanges: []
    annotations: {}
    ## Provide static ip address
    loadBalancerIP:
    ## There are two available options: “Cluster” (default) and “Local”.
    externalTrafficPolicy: Cluster
  externalPortHttp: 80
  internalPortHttp: 80
  externalPortHttps: 443
  internalPortHttps: 443
  internalPortReplicator: 6061
  externalPortReplicator: 6061
  ## The following settings are to configure the frequency of the liveness and readiness probes
  livenessProbe:
    enabled: true
    initialDelaySeconds: 60
    failureThreshold: 10
    timeoutSeconds: 10
    periodSeconds: 10
    successThreshold: 1

  readinessProbe:
    enabled: true
    initialDelaySeconds: 60
    failureThreshold: 10
    timeoutSeconds: 10
    periodSeconds: 10
    successThreshold: 1

  ## The SSL secret that will be used by the Nginx pod
  # tlsSecretName: chart-example-tls
  env:
    # artUrl: "http://artifactory:8081/artifactory"
    ssl: true
    skipAutoConfigUpdate: false
  ## Custom ConfigMap for nginx.conf
  customConfigMap:
  ## Custom ConfigMap for artifactory-ha.conf
  customArtifactoryConfigMap:
  persistence:
    mountPath: "/var/opt/jfrog/nginx"
    enabled: false
    ## A manually managed Persistent Volume and Claim
    ## Requires persistence.enabled: true
    ## If defined, PVC must be created manually before volume will be bound
    # existingClaim:

    accessMode: ReadWriteOnce
    size: 5Gi
    ## nginx data Persistent Volume Storage Class
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is
    ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
    ##   GKE, AWS & OpenStack)
    ##
    # storageClass: "-"
  resources: {}
  #  requests:
  #    memory: "250Mi"
  #    cpu: "100m"
  #  limits:
  #    memory: "250Mi"
  #    cpu: "500m"
  nodeSelector: {}

  tolerations: []

  affinity: {}

## Configuration values for the postgresql dependency
## ref: https://github.com/kubernetes/charts/blob/master/stable/postgresql/README.md
##
postgresql:
  enabled: true
  imageTag: "9.6.11"
  postgresUser: "artifactory"
  postgresPassword:
  postgresDatabase: "artifactory"
  postgresConfig:
    maxConnections: "1500"
  persistence:
    enabled: true
    size: 50Gi
  service:
    port: 5432
  resources: {}
  #  requests:
  #    memory: "512Mi"
  #    cpu: "100m"
  #  limits:
  #    memory: "1Gi"
  #    cpu: "500m"

## If NOT using the PostgreSQL in this chart (postgresql.enabled=false),
## specify custom database details here or leave empty and Artifactory will use embedded derby
database:
  type:
  host:
  port:
  ## If you set the url, leave host and port empty
  url:
  ## If you would like this chart to create the secret containing the db
  ## password, use these values
  user:
  password:
  ## If you have existing Kubernetes secrets containing db credentials, use
  ## these values
  secrets: {}
  #  user:
  #    name: "rds-artifactory"
  #    key: "db-user"
  #  password:
  #    name: "rds-artifactory"
  #    key: "db-password"
  #  url:
  #    name: "rds-artifactory"
  #    key: "db-url"

Any help would be greatly appreciated!

About this issue

  • State: closed
  • Created 5 years ago
  • Comments: 20 (10 by maintainers)

Most upvoted comments

@jrushto1 From the log it’s clear that your Artifactory is not running properly. It says Artifactory was unable to talk to the database with the password provided. It looks like you have reinstalled Artifactory without removing the PVCs. If this is a fresh instance, I recommend reinstalling after clearing the PVCs. If not, get the correct DB password and perform a helm upgrade with --set postgresql.postgresPassword=$PASSWORD
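A minimal sketch of that second option, assuming the release name artifactory from the install command above and that $PASSWORD holds the password the existing PostgreSQL volume was initialized with:

  # Re-apply the release with the password matching the existing database PVC
  helm upgrade artifactory jfrog/artifactory \
    --version 7.13.0 \
    -f values.yml \
    --set postgresql.postgresPassword=$PASSWORD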

@jainishshah17 Thanks! I found the issue in this thread: https://github.com/jfrog/charts/issues/63#issuecomment-441125905. The artifactory-volume-artifactory-artifactory-0 PVC does stick around after running helm del --purge artifactory. I can now successfully launch Artifactory! Thanks for your help! I was beginning to pull my hair out.
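For completeness, a sketch of the clean-reinstall path under the same assumptions (the PVC name is the one mentioned above; the PostgreSQL subchart creates its own data PVC whose exact name may differ):

  # Remove the release and its history (Helm 2)
  helm del --purge artifactory

  # PVCs are not removed by the delete, so clear them explicitly
  kubectl get pvc
  kubectl delete pvc artifactory-volume-artifactory-artifactory-0
  # ...delete the PostgreSQL data PVC as well if a completely fresh database is wanted

  # Reinstall
  helm install --name artifactory --version 7.13.0 -f values.yml jfrog/artifactory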