Using US3 in UK8S
UK8S supports the use of US3 object storage as persistent storage volumes in clusters.
Supported UK8S versions: later than 1.14.6 (clusters created after September 17, 2019)
Must Read for US3 Usage
US3 object storage is suited to uploading and downloading static data such as video and image files.
If your business requires high read and write performance, such as writing logs in real time, it is recommended to use UDisk or UFS as the persistent storage for the UK8S cluster. US3 does not provide the same functionality as a local filesystem.
⚠️ For CSI UFile versions earlier than 21.09.1, an abnormal restart of the CSI Pod will cause the mount points of Pods using US3/UFile on that node to fail. If your business uses US3/UFile, please check the current version and upgrade according to the CSI Upgrade Document as soon as possible. If you have any questions, please contact our technical support.
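As a quick way to check which CSI version is currently running, you can list the images of the CSI Pods. This is only a sketch and assumes the CSI components are deployed in the kube-system namespace with "csi" in their Pod names; adjust to your cluster if they differ.
kubectl -n kube-system get pods -o custom-columns=NAME:.metadata.name,IMAGE:.spec.containers[*].image | grep -i csi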
Manually Deploy CSI
For clusters that do not have the US3 CSI pre-installed, execute the following commands to deploy it:
Cluster Version 1.14~1.20
kubectl apply -f https://docs.surfercloud.com/uk8s/yaml/volume/us3.21.11.2/csi-controller.yml
kubectl apply -f https://docs.surfercloud.com/uk8s/yaml/volume/us3.21.11.2/csi-node.yml
kubectl apply -f https://docs.surfercloud.com/uk8s/yaml/volume/us3.21.11.2/rbac-controller.yml
kubectl apply -f https://docs.surfercloud.com/uk8s/yaml/volume/us3.21.11.2/rbac-node.yml
Cluster Version 1.22 and Above
kubectl apply -f https://docs.surfercloud.com/uk8s/yaml/volume/us3.23.09.12_v1.22/csi-controller.yml
kubectl apply -f https://docs.surfercloud.com/uk8s/yaml/volume/us3.23.09.12_v1.22/csi-node.yml
kubectl apply -f https://docs.surfercloud.com/uk8s/yaml/volume/us3.23.09.12_v1.22/rbac-controller.yml
kubectl apply -f https://docs.surfercloud.com/uk8s/yaml/volume/us3.23.09.12_v1.22/rbac-node.yml
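After applying the manifests, confirm that the CSI controller and node Pods reach the Running state. A minimal check, assuming the components are deployed into the kube-system namespace with "csi" in their names:
kubectl -n kube-system get pods | grep -i csi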
Supported Regions for UK8S to Mount US3 (Updated Continuously)
UK8S already supports mounting US3. For the specific supported regions, please check US3 Access Domain Names.
I. Create US3 Authorization Secret
Because US3 has regional attributes and operation permission control, the Secret and StorageClass need to be created manually.
First, create an object storage directory (bucket) in the US3 console and generate an authorization token (Token) for this directory, as shown in the figure:
For help with Token creation and management, you may refer to this guide.
Create a Secret in Kubernetes for this token, as shown below:
apiVersion: v1
kind: Secret
metadata:
  name: us3-secret
  namespace: kube-system
stringData:
  accessKeyID: TOKEN_9a6ec9fd-9cb7-4510-8ded-xxxxxxxx # Not the account public key; use the US3 token public key
  secretAccessKey: c429c8e5-e4e6-4366-bf93-xxxxxx # Not the account private key; use the US3 token private key
  endpoint: http://internal.s3-cn-bj.ufileos.com
Field explanation:
accessKeyID: the US3 token public key (not the account public key)
secretAccessKey: the US3 token private key (not the account private key)
endpoint: the US3 access URL for the corresponding region. For details, please refer to US3 Access Domain Names
For the regional service URL, refer to the Supported Regions for UK8S to Mount US3 (Updated Continuously) section. Using the internal network address is recommended.
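With the manifest saved locally, apply it and confirm the Secret exists. The file name us3-secret.yml below is just an example:
kubectl apply -f us3-secret.yml
kubectl -n kube-system get secret us3-secret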
II. Create Storage Class
Next, we create the StorageClass. As shown below, this StorageClass defines the US3 bucket and associates it with the Secret created in the previous step.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-ufile
provisioner: ufile.csi.ucloud.cn
parameters:
  bucket: csis3-bucketname # Pre-created US3 bucket
  path: /csis3-dirname/ # Directory inside the bucket to mount, relative to the bucket root; defaults to / (supported in version 23.09.12 and later)
  csi.storage.k8s.io/node-publish-secret-name: us3-secret # The Secret created in the previous step
  csi.storage.k8s.io/node-publish-secret-namespace: kube-system
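Apply the StorageClass manifest and verify that it has been registered. The file name below is just an example:
kubectl apply -f us3-storageclass.yml
kubectl get storageclass csi-ufile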
III. Create Persistent Storage Volume Claim (PVC)
The storage space of a single US3 bucket theoretically has no upper limit, so the capacity parameters in the PV and PVC have no practical meaning; the requests value below will not actually take effect.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: logs3-claim
spec:
  storageClassName: csi-ufile
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
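After applying the PVC manifest, check its status; it should show Bound once a PV has been provisioned for it:
kubectl get pvc logs3-claim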
IV. Use PVC in Pod
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: uhub.surfercloud.com/ucloud/nginx:latest
      ports:
        - containerPort: 80
      volumeMounts:
        - name: test
          mountPath: /data
  volumes:
    - name: test
      persistentVolumeClaim:
        claimName: logs3-claim
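Once the Pod is Running, you can verify that the US3 volume is mounted by writing a test file under /data and listing it back; the object should also appear under the configured path of the bucket in the US3 console. The file name test.txt is just an example:
kubectl exec nginx -- sh -c 'echo hello > /data/test.txt'
kubectl exec nginx -- ls /data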