kubernetes: Unable to create a persistent volume with a default storage class
Kubernetes version (use kubectl version):
```
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"b0b7a323cc5a4a2019b2e9520c21c7830b7f708e", GitTreeState:"clean", BuildDate:"2017-04-03T20:44:38Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-03-28T16:24:30Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
```
Environment:
- Cloud provider or hardware configuration: Intel(R) Xeon(R) CPU X5650 @ 2.67GHz; Mem: 144886 2581 137083 67 5222 141591
- OS (e.g. from /etc/os-release): NAME="Virtuozzo Storage" VERSION="2.0.0" ID="vstorage" ID_LIKE="rhel fedora" VERSION_ID="7" PRETTY_NAME="Virtuozzo Storage release 2.0.0 (6)" ANSI_COLOR="0;31" CPE_NAME="cpe:/o:virtuozzoproject:vz:7" HOME_URL="http://www.virtuozzo.com" BUG_REPORT_URL="http://www.virtuozzo.com/support/"
- Kernel (e.g. `uname -a`): `Linux s21.int 3.10.0-327.36.1.vz7.20.18.banner #1 SMP Fri Mar 10 16:12:31 MSK 2017 x86_64 x86_64 x86_64 GNU/Linux`
- Install tools:
- Others:
What happened:
I have a default storage class and a claim where a storage class isn’t specified.
```
[root@s21 vzstorage-pd]# kubectl describe storageclass default
Name:            default
IsDefaultClass:  Yes
Annotations:     storageclass.beta.kubernetes.io/is-default-class=true
Provisioner:     kubernetes.io/virtuozzo-storage
Parameters:      volumePath=/mnt/vstorage/kube/
Events:          <none>
```
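For reference, a StorageClass producing the `describe` output above could be defined like this (a sketch; the actual manifest used on this cluster is not shown in the issue):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: default
  annotations:
    # Marks this class as the cluster default (beta annotation in 1.6).
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/virtuozzo-storage
parameters:
  volumePath: /mnt/vstorage/kube/
```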
```yaml
[root@s21 vzstorage-pd]# cat claim-default.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: vz-test-claim
spec:
  accessModes:
    - ReadWriteOnce
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Gi
```
When this claim is added to the cluster, the external-storage provisioner returns an error:

```
E0411 16:44:58.442149  245309 controller.go:414] Claim "default/vz-test-claim": StorageClass "" not found
```
What you expected to happen:
I expect the provisioner to receive the claim with the default storage class filled in.
How to reproduce it (as minimally and precisely as possible):
Install one of the external provisioners from https://github.com/kubernetes-incubator/external-storage and try to use a default storage class.
Anything else we need to know: The DefaultStorageClass admission plugin is enabled:

```
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds
```
I found that other people have the same problem and have discussed it on the sig-storage channel:

> **mawong** [3:39 PM]
> hmm might be worth creating an issue since there are lots of folks with this issue
About this issue
- Original URL
- State: closed
- Created 7 years ago
- Reactions: 16
- Comments: 36 (17 by maintainers)
I am having this problem too with nfs-provisioner. What I’ve discovered is that the problem only happens if you first create a PVC and then run the provisioner. All PVCs that are created after the provisioner is deployed will succeed.
Here’s the failing PVC log:
When I deleted the PVC and recreated it using the same template:
In the provisioner logs, it was first failing then it succeeded:
By the way, I created a Helm chart to deploy nfs-provisioner, might be useful for testing: https://github.com/IlyaSemenov/nfs-provisioner-chart
@MagicJohnJang
I wholly disagree. Kubernetes is a state resolution engine: define your desired state and eventually you have that state. I should not be required to create a StorageClass primitive with a specific annotation before any other primitive in order to achieve consistent behavior. I should be able to spin up a vanilla cluster, run `kubectl apply -f .`, and have a working application. This behavior is inconsistent with the rest of Kubernetes and its philosophies and should be changed.

I confirm that it works if the storage class exists prior to creating a PVC. After all, I mentioned the same in my first comment: "What I've discovered is that the problem only happens if you first create a PVC and then run the provisioner."
This is, however, I believe a typical scenario for newcomers on newly created clusters. You deploy an app (with a Helm chart or manually) which creates a PVC for itself, but it fails to come up. You read the logs and discover that you're missing a provisioner, so you create a default provisioner, but that still does not help and you keep seeing cryptic messages 🤷. It's far from obvious that the default provisioner will not "pick up" previously created PVCs which were set to use the default provisioner, and that you need to recreate all such PVCs.
Issues go stale after 90d of inactivity. Mark the issue as fresh with `/remove-lifecycle stale`. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with `/close`.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale
Unbound PVCs should be safe to backfill
Unfortunately this dependency between storage classes and PVCs breaks the resource model of Kubernetes, where everything converges after a while. I don't know whether there are any backwards-compatibility concerns, but a controller that sets the default storage class on PVCs that do not have one sounds good to me.
/remove-lifecycle stale
A PVC with storageclass == nil is used to indicate the default, whereas storageclass == "" means no storageclass. But the filling in of the default storageclass is done by the admission controller on PVC create, so if you create/set your default storageclass after creating the PVC, the storageclass field won't be updated afterwards. I think we could possibly have a long-running controller do the same thing as the admission controller, filling in the storageclass name with the default storageclass; however, we need to consider backwards-compatibility scenarios.
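The nil vs. empty-string distinction can be illustrated with two claim fragments (a sketch; the claim names and sizes are illustrative):

```yaml
# storageClassName omitted (nil): "give me the default class".
# The admission controller resolves this at create time only --
# a default class created later is never back-filled.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim-wants-default
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
---
# storageClassName: "" explicitly opts out of dynamic provisioning;
# this claim only binds to pre-provisioned volumes that have no class.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim-no-class
spec:
  storageClassName: ""
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
```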
Maybe it's ok. Who's going to have an existing pending PVC with a nil storageclass from before the release (1.6?) in which default storage classes were introduced?
Same problem with nfs-provisioner! What is the new solution? Thanks 💐
UPDATE: Successfully created the PVC using the extended NFS mode. Solved. @avagin @IlyaSemenov

File 1: defines the nfs-client deployment
File 2: defines the storage class
File 3: example StatefulSet using volumeClaimTemplates

Check.

The reason for the failure was that I had not set `storageClassName` in `volumeClaimTemplates`. Thanks @verult. Hope this helps everyone!
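For example, a `volumeClaimTemplates` entry that names the class explicitly looks like this (a sketch; `nfs-client` is a hypothetical class name standing in for the one defined in File 2):

```yaml
# Excerpt of a StatefulSet spec: naming the class explicitly avoids
# any dependence on the default-class admission behavior.
volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ReadWriteOnce]
      storageClassName: nfs-client   # hypothetical class name
      resources:
        requests:
          storage: 1Gi
```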