CentOS 7 binary deployment of k8s v1.20.2 (IPVS mode): NFS persistent storage

I. Configuring NFS storage for a Kubernetes cluster
1. There are two ways to create a PV:
1) The cluster administrator statically creates the PVs an application needs by hand.
2) The user creates a PVC, and a provisioner component dynamically creates the matching PV.
2. Static provisioning workflow:
1) The cluster administrator creates an NFS PV.
2) The user creates a PVC.
3) The user creates the application, mounting the PVC from step 2.
3. Dynamic provisioning workflow:
1) The cluster administrator only has to ensure an NFS-backed StorageClass exists in the cluster.
2) The user creates a PVC whose storageClassName names that NFS StorageClass.
3) The user creates the application, mounting the PVC from step 2.
4. Comparison
Compared with static provisioning, dynamic provisioning removes the administrator's per-volume involvement.
Dynamic provisioning requires nfs-client-provisioner and a matching StorageClass to be deployed in the cluster; a minimal sketch of the user-visible difference follows below.
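
From the user's side, the whole difference is the PVC itself: no hand-written PV is needed once a provisioner watches the class. A minimal sketch, assuming the StorageClass managed-nfs-storage that section IV creates (the claim name demo-claim is hypothetical, for illustration only):

# With dynamic provisioning, submitting a PVC is all the user does
kubectl apply -f - << 'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim                        # hypothetical name, illustration only
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: managed-nfs-storage   # must name an existing StorageClass
  resources:
    requests:
      storage: 1Gi
EOF
# The provisioner reacts and creates a matching PV automatically
kubectl get pv,pvc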
II. Environment preparation
1. NFS server

# Install NFS on the server node (here the server is 192.168.112.138)
yum install -y nfs-utils
systemctl enable nfs-server rpcbind --now
# Create the NFS shared directory and open its permissions
mkdir -p /data/nfs-volume && chmod -R 777 /data/nfs-volume
# Write the export: rw = read-write, sync = commit writes to disk before replying,
# no_root_squash = do not map remote root to nobody
cat > /etc/exports << EOF
/data/nfs-volume 192.168.112.0/24(rw,sync,no_root_squash)
EOF
# Reload the configuration
systemctl reload nfs-server
# Verify with:
showmount -e 192.168.112.138
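To double-check which exports are actually active on the server, exportfs can list and re-apply them without a service restart:

# List active exports with their effective options
exportfs -v
# Re-export everything in /etc/exports after an edit
exportfs -rav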
2. NFS client
yum install -y nfs-utils
systemctl enable rpcbind --now   # the client needs rpcbind but not the nfs-server service
# Verify with:
showmount -e 192.168.112.138
df -h | tail -1
3. Configure the client to mount automatically at boot
mkdir /opt/nfs-volume
cat >> /etc/fstab << EOF
192.168.112.138:/data/nfs-volume          /opt/nfs-volume         nfs     soft,timeo=1    0 0
EOF
# Note: timeo is in tenths of a second, so soft,timeo=1 retries very aggressively,
# and a soft mount returns I/O errors on timeout; tune these options to your needs.
mount -a   # re-read /etc/fstab and mount everything listed there
# Or mount manually:
mount -t nfs 192.168.112.138:/data/nfs-volume /opt/nfs-volume
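A quick way to confirm the fstab entry took effect (paths as above):

# Confirm the share is mounted where expected
findmnt /opt/nfs-volume
df -hT /opt/nfs-volume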
III. NFS persistent storage in Kubernetes (static provisioning)
1. Create the PV: edit the PV manifest

vim nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv001
spec:
  capacity:
    storage: 5Gi
  # Access modes: ReadWriteOnce, ReadOnlyMany, ReadWriteMany
  accessModes:
    - ReadWriteMany
  # Reclaim policy, i.e. what happens to the volume after the PVC is released.
  # Three options: Retain, Recycle, Delete
  persistentVolumeReclaimPolicy: Retain
  # Class of this PV; a PVC requests a PV by class name
  storageClassName: nfs                  # note: adjust to your environment
  # Point the PV at the directory exported by the NFS server
  nfs:
    path: /data/nfs-volume               # shared directory created on the NFS server
    server: 192.168.112.138              # NFS server IP address

# Create the PV
[root@k8s-master nfs]# kubectl apply -f nfs-pv.yaml
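Before creating the claim, it can be worth confirming the new PV is not yet bound:

# A freshly created, unbound PV should report STATUS=Available
kubectl get pv nfs-pv001
kubectl describe pv nfs-pv001   # shows source, class and binding events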
2. Create the PVC: edit the PVC manifest
vim nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc001
  namespace: nfs-pv-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs   # note: must match the PV's storageClassName
  resources:
    requests:
      storage: 5Gi

# Create the PVC
[root@k8s-master nfs]# kubectl create namespace nfs-pv-pvc   # create the namespace it lives in
[root@k8s-master nfs]# kubectl apply -f nfs-pvc.yaml
3. Inspect the created resources
[root@k8s-master nfs]# kubectl get pv -A
NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS   REASON   AGE
nfs-pv001   5Gi        RWX            Retain           Bound    nfs-pv-pvc/nfs-pvc001   nfs                     10s
[root@k8s-master nfs]# kubectl get pvc -A
NAMESPACE    NAME         STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pv-pvc   nfs-pvc001   Bound    nfs-pv001   5Gi        RWX            nfs            10s
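If a claim stays Pending instead of going Bound, the events at the bottom of kubectl describe usually name the mismatch (class, access mode or requested size):

kubectl describe pvc nfs-pvc001 -n nfs-pv-pvc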
4. Create a test workload
vim nginx-apline.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: nfs-pv-pvc
  labels:
    app: nginx
spec:
  replicas: 2   # note: adjust to your environment
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        # Image pull policy: Always = always pull; IfNotPresent (the default for
        # non-:latest tags) = use a local image if present, otherwise pull;
        # Never = only ever use a local image
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nfs-pvc
          mountPath: "/usr/share/nginx/html"
      restartPolicy: Always   # pod restart policy; defaults to Always if omitted
      volumes:
      - name: nfs-pvc
        persistentVolumeClaim:
          claimName: nfs-pvc001     # must match the PVC name
---
apiVersion: v1
kind: Service
metadata:
   name: my-svc-nginx-alpine
   namespace: nfs-pv-pvc
spec:
   type: ClusterIP
   selector:
     app: nginx
   ports:
   - protocol: TCP
     port: 80
     targetPort: 80

# Create everything
[root@k8s-master nfs]# kubectl apply -f nginx-apline.yaml
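A quick status check after applying (names as in the manifest above):

# Both replicas should reach Running and the Service should have a ClusterIP
kubectl get deploy,pod,svc -n nfs-pv-pvc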
5. Verify
# Create a test file on the NFS server
echo "2021-09-03" > /data/nfs-volume/index.html
# Check from the Kubernetes cluster
[root@k8s-master nfs]# kubectl get pod -n nfs-pv-pvc -o custom-columns=':metadata.name'
nginx-deployment-799b74d8dc-8nrx9
nginx-deployment-799b74d8dc-9mf9c
[root@k8s-master nfs]# kubectl exec -it nginx-deployment-799b74d8dc-8nrx9 -n nfs-pv-pvc -- cat /usr/share/nginx/html/index.html
2021-09-03
# Verify against every pod IP
[root@k8s-master nfs]# kubectl get pod -n nfs-pv-pvc -o custom-columns=':status.podIP' | xargs curl
2021-09-03
2021-09-03
# Verify against a single pod IP
[root@k8s-master nfs]# curl 172.16.169.166
2021-09-03
# Verify against the Service IP
[root@k8s-master nfs]# curl 10.255.73.238
2021-09-03
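When pod or Service IPs are not directly reachable from where you are testing, kubectl port-forward gives an equivalent check through the API server (Service name as in the manifest above):

kubectl port-forward svc/my-svc-nginx-alpine 8080:80 -n nfs-pv-pvc &
curl http://127.0.0.1:8080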
IV. NFS persistent storage in Kubernetes (dynamic provisioning)
1. Fetch the project

git clone https://github.com/kubernetes-retired/external-storage.git
cd ~/external-storage/nfs-client/deploy
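Note that the external-storage repository is archived; the nfs-client provisioner lives on as nfs-subdir-external-provisioner under kubernetes-sigs. The manifests below still work on v1.20, but for new clusters the maintained successor is worth a look:

# Maintained successor project (same idea, updated images and manifests)
git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner.git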
2. Adjust the configuration files
mkdir my-nfs-client-provisioner && cd my-nfs-client-provisioner
[root@k8s-master my-nfs-client-provisioner]# cat rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

[root@k8s-master my-nfs-client-provisioner]# cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"

[root@k8s-master my-nfs-client-provisioner]# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.112.138    # note: change to your NFS server IP
            - name: NFS_PATH
              value: /data/nfs-volume/  # note: change to your NFS shared directory
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.112.138     # note: change to your NFS server IP
            path: /data/nfs-volume      # note: change to your NFS shared directory

[root@k8s-master my-nfs-client-provisioner]# cat test-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1024Mi
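The volume.beta.kubernetes.io/storage-class annotation is the legacy way of selecting a class; on current API versions the same claim can be written with the spec.storageClassName field instead. An equivalent sketch:

# Equivalent claim using the field rather than the legacy annotation
kubectl apply -f - << 'EOF'
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage   # field form of the annotation above
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1024Mi
EOF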

[root@k8s-master my-nfs-client-provisioner]# cat test-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: gcr.io/google_containers/busybox:1.24
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim

Notes on the manifests:
kind: ServiceAccount       the account the provisioner runs as when requesting resources from the cluster
kind: ClusterRole          cluster-wide permissions (PVs, PVCs, StorageClasses, events)
kind: ClusterRoleBinding   binds the ClusterRole to the ServiceAccount
kind: Role                 namespaced permissions (endpoints, used for leader election)
kind: RoleBinding          binds the Role to the ServiceAccount

kubectl apply -f .         # apply every manifest in the directory
3. Verify
(The original post showed a screenshot of the dynamic-provisioning verification here; the commands below perform the same check.)
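A minimal command-line verification, assuming the manifests above were applied unchanged:

# The provisioner pod should be Running
kubectl get pods -l app=nfs-client-provisioner
# The StorageClass should exist
kubectl get storageclass managed-nfs-storage
# test-claim should become Bound with an automatically created PV,
# and a matching subdirectory should appear under /data/nfs-volume on the NFS server
kubectl get pvc test-claim
kubectl get pv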
4. If no PV is created automatically and the nfs-client-provisioner log shows errors like the following
E0304 06:18:05.352939       1 controller.go:1004] provision "default/diss-db-pvc" class "managed-nfs-storage": unexpected error getting claim reference: selfLink was empty, can't make reference
I0304 06:18:06.365388       1 controller.go:987] provision "default/diss-db-pvc" class "managed-nfs-storage": started
E0304 06:18:06.371904       1 controller.go:1004] provision "default/diss-db-pvc" class "managed-nfs-storage": unexpected error getting claim reference: selfLink was empty, can't make reference
I0304 06:23:09.410514       1 controller.go:987] provision "default/diss-db-pvc" class "managed-nfs-storage": started
E0304 06:23:09.416387       1 controller.go:1004] provision "default/diss-db-pvc" class "managed-nfs-storage": unexpected error getting claim reference: selfLink was empty, can't make reference
I0304 06:23:16.933814       1 controller.go:987] provision "default/diss-db-pvc" class "managed-nfs-storage": started
E0304 06:23:16.937994       1 controller.go:1004] provision "default/diss-db-pvc" class "managed-nfs-storage": unexpected error getting claim reference: selfLink was empty, can't make reference
I0304 06:33:06.365740       1 controller.go:987] provision "default/diss-db-pvc" class "managed-nfs-storage": started
E0304 06:33:06.369275       1 controller.go:1004] provision "default/diss-db-pvc" class "managed-nfs-storage": unexpected error getting claim reference: selfLink was empty, can't make reference
I0304 06:48:06.365940       1 controller.go:987] provision "default/diss-db-pvc" class "managed-nfs-storage": started
E0304 06:48:06.369685       1 controller.go:1004] provision "default/diss-db-pvc" class "managed-nfs-storage": unexpected error getting claim reference: selfLink was empty, can't make reference
This happens because Kubernetes 1.20 and later disable selfLink.
Fix:
# Add the following flag to the kube-apiserver configuration
--feature-gates=RemoveSelfLink=false
# then restart the kube-apiserver service
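Since this guide deploys Kubernetes from binaries, the flag belongs in the kube-apiserver systemd unit or the options file it references; a sketch, with hypothetical paths to adjust for your layout. Note that the RemoveSelfLink feature gate was removed entirely in later Kubernetes releases, so the durable fix is to move to a newer provisioner image from nfs-subdir-external-provisioner, which no longer depends on selfLink.

# Hypothetical unit path: adjust to wherever your deployment keeps kube-apiserver flags
vim /usr/lib/systemd/system/kube-apiserver.service
#   append to the ExecStart flags (or the EnvironmentFile it references):
#   --feature-gates=RemoveSelfLink=false
systemctl daemon-reload
systemctl restart kube-apiserver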