```yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  # Can be anything, but has to match "volumeName" in the PVC below
  # Also should avoid conflicts with existing PV names in the cluster
  name: preprov-pv-cephfs-01
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 5Gi
  csi:
    driver: rook-ceph.cephfs.csi.ceph.com
    nodeStageSecretRef:
      name: rook-ceph-csi
      namespace: rook-ceph
    volumeAttributes:
      clusterID: rook-ceph
      fsName: myfs
      # The key "staticVolume" states this is pre-provisioned
      # NOTE: this was preProvisionedVolume: "true" in Ceph-CSI versions 1.0 and below
      staticVolume: "true"
      # Path of the PV on the CephFS filesystem
      rootPath: /staticpvs/pv-1
    # Can be anything; need not match the PV name or volumeName in the PVC
    # Kept the same here for simplicity and uniqueness
    volumeHandle: preprov-pv-cephfs-01
  # Reclaim policy must be "Retain", as deletion of
  # pre-provisioned volumes is not supported
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
  claimRef:
    # Name and namespace should match the PVC defined below
    name: csi-cephfs-pvc-preprov
    namespace: default
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-cephfs-pvc-preprov
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  volumeName: preprov-pv-cephfs-01
---
apiVersion: v1
kind: Pod
metadata:
  name: csicephfs-preprov-demo-pod
spec:
  containers:
    - image: busybox
      name: busybox
      command:
        - sleep
        - "3600"
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - name: mypvc
          mountPath: /mnt
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: csi-cephfs-pvc-preprov
```
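The PV above references a `nodeStageSecretRef` named `rook-ceph-csi` in the `rook-ceph` namespace, but that Secret is not part of this gist. A minimal sketch of what it might look like, assuming the `userID`/`userKey` key names that recent ceph-csi releases expect for CephFS node staging (key names have varied across ceph-csi versions, and the credential value is only a placeholder taken from `ceph auth ls`):

```yaml
apiVersion: v1
kind: Secret
metadata:
  # Must match nodeStageSecretRef.name / .namespace in the PV above
  name: rook-ceph-csi
  namespace: rook-ceph
stringData:
  # Assumed key names; check the static PVC docs of your ceph-csi version
  userID: admin
  userKey: <client.admin key from `ceph auth ls`>
```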
@ShyamsundarR Thanks for the clarification. Manually creating a secret that matches the form, using the `client.admin` key from `ceph auth ls`, worked for me. Do you still want my logs?
No, thank you.
I am just wondering if there is a way to more declaratively deploy a CephFS shared filesystem and mount a volume to a Pod that gives access to a fixed path within CephFS, with ceph-csi. The extra step to get or create a client key in Ceph and then create the correct secret in the Kubernetes API bugs me a little bit.
The step to create a client key and a secret is a choice for security-conscious setups. You could use a single secret across all static PVs that are created. This secret can be in any namespace, but that namespace needs to be reflected in the PV's `nodeStageSecretRef`.
The above reduces it to a single secret creation step.
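As an illustration of that reuse (this second PV is not part of the original gist), another static PV can point its `nodeStageSecretRef` at the same Secret, changing only the PV name, `rootPath`, and `volumeHandle`:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: preprov-pv-cephfs-02     # hypothetical second static PV
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 5Gi
  csi:
    driver: rook-ceph.cephfs.csi.ceph.com
    nodeStageSecretRef:
      name: rook-ceph-csi        # same Secret as the first PV
      namespace: rook-ceph
    volumeAttributes:
      clusterID: rook-ceph
      fsName: myfs
      staticVolume: "true"
      rootPath: /staticpvs/pv-2  # a different path on the same CephFS filesystem
    volumeHandle: preprov-pv-cephfs-02
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
```

A matching PVC and Pod would follow the same pattern as in the manifest at the top of the gist.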
I believe this was previously possible with flexVolume or with `provisionVolume: false`?
I am not aware of how flexVolume worked, so I will not comment on that.
With `provisionVolume: false`, the method of using static PVs was to request one dynamically, with the provisioner detecting it as a pre-provisioned PV and hence using the same secret as that used by the CSI plugins. The entire provisioning step (and hence de-provisioning) was superfluous, and the intention was to move to Kubernetes-based static PV definitions instead.