Deploying a Ceph 14.2.22 (Nautilus) Cluster on CentOS 7 (Connecting k8s to External Ceph Storage)

I. Connecting k8s to external Ceph storage

1. Six ways to connect k8s to Ceph storage

  1. Use Ceph's file system (CephFS) directly
  2. Use Ceph's block storage (RBD) directly
  3. Use the community-provided CephFS provisioner for persistent data volumes
  4. Use the community-provided RBD provisioner for pod persistent storage
  5. Use the official ceph-csi driver in CephFS mode
  6. Use the official ceph-csi driver in RBD mode

2. These fall into three categories

1) Community-provided

  • CephFS for persistent storage
  • RBD for pod persistent storage

2) Official ceph-csi

  • CephFS mode
  • RBD mode

3) Direct use of Ceph's file system or block storage

3. k8s can use Ceph as a volume in two ways: CephFS and RBD

1. A Ceph cluster supports only one CephFS by default.
2. CephFS supports all three k8s PV access modes: ReadWriteOnce, ReadOnlyMany, and ReadWriteMany.
3. RBD supports only ReadWriteOnce and ReadOnlyMany.
Note: access modes describe a volume's capabilities; they are not enforced. If a PV is used in a way its PVC did not declare, the storage provider is responsible for any resulting runtime errors. For example, a PVC with access mode ReadOnlyMany can still be written to after a pod mounts it; for a truly read-only mount, set readOnly: true when the pod consumes the PVC, as sketched below.
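A minimal pod-spec fragment showing this (only the volumes section; the claim name ceph-test-claim matches the PVC created later in this article):

  volumes:
    - name: pvc
      persistentVolumeClaim:
        claimName: ceph-test-claim
        readOnly: true    # enforces a read-only mount regardless of the PVC's declared access mode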

II. Using Ceph block storage directly (static PV)

1. How k8s consumes volumes

1. Direct: volume -> backend
2. Static provisioning: volume -> PersistentVolumeClaim -> PersistentVolume -> Backend
3. Dynamic provisioning: volume -> PersistentVolumeClaim -> StorageClass -> Backend

2. The static PV (RBD) approach

1. Install the client dependencies on all k8s nodes

cat >  /etc/yum.repos.d/ceph.repo << EOF
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/
gpgcheck=0
priority=1

[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/
gpgcheck=0
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
gpgcheck=0
priority=1
EOF

cat > /etc/yum.repos.d/epel.repo << 'EOF'  # quote EOF so $basearch is written literally, not expanded by the shell
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=http://mirrors.aliyun.com/epel/7/$basearch
failovermethod=priority
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
 
[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
baseurl=http://mirrors.aliyun.com/epel/7/$basearch/debug
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=0
 
[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
baseurl=http://mirrors.aliyun.com/epel/7/SRPMS
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=0
EOF
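With the repositories in place, the Ceph client package still has to be installed on every k8s node so the kubelet can map RBD images. A minimal sketch (ceph-common from the Nautilus repo above is assumed to be sufficient for the rbd client):

#On every k8s node: install the Ceph client tools (rbd, ceph CLI)
yum install -y ceph-common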

2. Set up passwordless SSH (so ceph-deploy can push files to the k8s nodes)

ssh-copy-id 192.168.112.110
ssh-copy-id 192.168.112.111
ssh-copy-id 192.168.112.112

3. Push the cluster configuration and admin keyring to all nodes

ceph-deploy --overwrite-conf admin ceph1 ceph2 ceph3 k8s-master k8s-node1 k8s-node2
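Once the configuration and keyring have been distributed, cluster connectivity can be verified from any k8s node (this assumes ceph-common is already installed there, as in step 1):

#Run on a k8s node: should print the cluster health and mon/osd status
ceph -s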

4. Create a storage pool and enable RBD on it

#Create a pool named kube for k8s (128 placement groups)
ceph osd pool create kube 128 128
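Creating the pool by itself does not mark it for RBD use; to complete the "enable RBD" part of this step, the pool can be tagged with the rbd application and initialized (standard Nautilus commands, not shown in the original write-up):

#Tag the pool so Ceph knows it is used for RBD (avoids the "application not enabled" health warning)
ceph osd pool application enable kube rbd
#Initialize the pool for RBD clients
rbd pool init kube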

5. Create a Ceph user for k8s

#List the authentication users in the Ceph cluster and their keys
ceph auth list

#Delete an auth user from the cluster (for reference only)
ceph auth del osd.0    #do not run this; only execute it if a user really needs to be removed
#Create the cluster user for k8s
ceph auth get-or-create client.kube mon 'allow r' osd 'allow class-read object_prefix rbd_children,allow rwx pool=kube'
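The new user's key and capabilities can be confirmed before handing them to k8s:

#Show client.kube's key and caps as stored by the cluster
ceph auth get client.kube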

6. Create the Secret resources

ceph auth get-key client.admin | base64
ceph auth get-key client.kube | base64

#base64 is only an encoding, not encryption; k8s Secrets simply expect the key base64-encoded instead of in plain text
mkdir jtpv && cd jtpv
cat > ceph-admin-secret.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
  name: ceph-admin-secret
  namespace: default
data:
  key: QVFEOHhqVmhBMXE5THhBQXRFbzZZUWtkbzRjREdkbG9kdGl6NHc9PQ==  # (key of client.admin)
type: kubernetes.io/rbd
EOF

cat > ceph-kube-secret.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
  name: ceph-kube-secret
  namespace: default
data:
  key: QVFEZzZUNWhXS09OQnhBQXdtQmxlWlozWmtJVDRUWm1tUXUrcXc9PQ==  # (key of client.kube)
type: kubernetes.io/rbd
EOF
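Both manifests still need to be applied before the PV and pod can reference them; for example:

kubectl apply -f ceph-admin-secret.yaml -f ceph-kube-secret.yaml
#Confirm both secrets exist in the default namespace
kubectl get secret ceph-admin-secret ceph-kube-secret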

7. Create the PV

cat > pv.yaml << EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv-test
spec:
  capacity:
    storage: 5Gi
  #Specify the access modes
  accessModes: #three modes exist: ReadWriteOnce, ReadOnlyMany, ReadWriteMany
    - ReadWriteOnce
  rbd:
    monitors:
      - 192.168.112.130:6789
      - 192.168.112.131:6789
      - 192.168.112.132:6789
    pool: kube
    image: ceph-image
    user: admin
    secretRef:
      name: ceph-admin-secret
    fsType: ext4
    readOnly: false
  #Reclaim policy for the PV, i.e. what happens after the PVC is released; one of Retain, Recycle, Delete
  persistentVolumeReclaimPolicy: Retain
EOF

8. Apply the PV and check it

[root@k8s-master jtpv]# kubectl apply -f pv.yaml 
persistentvolume/ceph-pv-test created
[root@k8s-master jtpv]# kubectl get  pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                STORAGECLASS          REASON   AGE
ceph-pv-test                               5Gi        RWO            Retain           Available                                                       9s
[root@k8s-master jtpv]# kubectl describe pv ceph-pv-test
Name:            ceph-pv-test
Labels:          <none>
Annotations:     <none>
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    
Status:          Available
Claim:           
Reclaim Policy:  Retain
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        5Gi
Node Affinity:   <none>
Message:         
Source:
    Type:          RBD (a Rados Block Device mount on the host that shares a pod's lifetime)
    CephMonitors:  [192.168.112.130:6789 192.168.112.131:6789 192.168.112.132:6789]
    RBDImage:      ceph-image                 #the image has not been created yet
    FSType:        ext4
    RBDPool:       kube
    RadosUser:     admin
    Keyring:       /etc/ceph/keyring
    SecretRef:     &SecretReference{Name:ceph-admin-secret,Namespace:,}
    ReadOnly:      false
Events:          <none>

9. Create the RBD image (allocate space for the PV)

#Create an image named ceph-image, 5G in size
rbd create -p kube -s 5G ceph-image
#List the images in the pool
rbd ls -p kube
#Show the image details
rbd info ceph-image -p kube
rbd image 'ceph-image':
	size 5 GiB in 1280 objects
	order 22 (4 MiB objects)
	snapshot_count: 0
	id: d9aa5d2335ec
	block_name_prefix: rbd_data.d9aa5d2335ec
	format: 2
	features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
	op_features: 
	flags: 
	create_timestamp: Mon Sep 13 14:31:46 2021
	access_timestamp: Mon Sep 13 14:31:46 2021
	modify_timestamp: Mon Sep 13 14:31:46 2021

10. Create a PVC

cat > pvc.yaml << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-test-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
EOF
kubectl apply -f pvc.yaml
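The claim should bind to the static PV created above; a quick check:

#STATUS should show Bound, with VOLUME = ceph-pv-test
kubectl get pvc ceph-test-claim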

11. Create a test pod

cat > pod.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod
spec:
  containers:
  - name: test-pod
    image: busybox:1.24
    command: ["sleep","60000"]
    volumeMounts:
    - name: pvc
      mountPath: "/usr/share/busybox"
      readOnly: false
  volumes:
    - name: pvc
      persistentVolumeClaim:
        claimName: ceph-test-claim
EOF
kubectl apply -f pod.yaml
#Verify that the RBD volume is mounted inside the pod
kubectl exec -it ceph-pod -- df -h|grep /dev/rbd0
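Beyond checking the mount, a quick write/read round trip confirms the volume is actually usable (the path matches the mountPath above; the file name is just an example):

#Write a file into the RBD-backed mount and read it back
kubectl exec -it ceph-pod -- sh -c 'echo hello-ceph > /usr/share/busybox/test.txt'
kubectl exec -it ceph-pod -- cat /usr/share/busybox/test.txt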

12. Troubleshooting
If the pod's events report the following error when mapping the RBD image:

MountVolume.WaitForAttach failed for volume "ceph-pv-test" : rbd: map failed exit status 6, rbd output: rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable kube/ceph-image object-map fast-diff deep-flatten".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address
Cause: the stock CentOS 7 kernel (3.10) only supports the layering feature; the other image features require a newer kernel.
layering: layering (clone) support
striping: striping v2 support
exclusive-lock: exclusive locking support
object-map: object map support (depends on exclusive-lock)
fast-diff: fast diff calculation (depends on object-map)
deep-flatten: deep flattening of snapshots
journaling: journaling of IO operations (depends on exclusive-lock)

Fix: disable the features the kernel does not support.
rbd feature disable kube/ceph-image object-map fast-diff deep-flatten
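To keep new images from hitting the same mismatch, an image can also be created with only the layering feature from the start (the image name below is just an example), or the client default can be restricted via rbd_default_features = 1 under [global] in /etc/ceph/ceph.conf:

#Example: create a future image with only the layering feature enabled
rbd create -p kube -s 5G another-image --image-feature layering

After disabling the features on ceph-image, delete and recreate the test pod so the kubelet retries the rbd map.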