CentOS 7: installing a k8s cluster from binaries with a script

Posted on 2021-12-30 14:35:20
I found a shell script written by an expert online that installs a k8s cluster from binaries, with OS-level tuning already built in. It can go straight to production; I have tested it myself without problems.
I. Preparation
1. Deployment network diagram

2. Download the shell script
https://github.com/bogeit/LearnK8s/blob/main/k8s_install_new.sh
#!/bin/bash
# author: boge
# description: this shell script uses ansible to deploy a k8s cluster from binaries in a simple way

# argument check
[ $# -ne 6 ] && echo -e "Usage: $0 rootpasswd netnum nethosts cri cni k8s-cluster-name\nExample: bash $0 bogedevops 10.0.1 201\ 202\ 203\ 204 [containerd|docker] [calico|flannel] test\n" && exit 11

# variable definitions
export release=3.0.0
export k8s_ver=v1.19.7  # v1.20.2, v1.19.7, v1.18.15, v1.17.17
rootpasswd=$1
netnum=$2
nethosts=$3
cri=$4
cni=$5
clustername=$6
if ls -1v ./kubeasz*.tar.gz &>/dev/null;then software_packet="$(ls -1v ./kubeasz*.tar.gz )";else software_packet="";fi
pwd="/etc/kubeasz"

# upgrade packages on the deploy machine
if cat /etc/redhat-release &>/dev/null;then
    yum update -y
else
    apt-get update && apt-get upgrade -y && apt-get dist-upgrade -y
    [ $? -ne 0 ] && apt-get -yf install
fi

# check the python environment on the deploy machine
python2 -V &>/dev/null
if [ $? -ne 0 ];then
    if cat /etc/redhat-release &>/dev/null;then
        yum install -y gcc openssl-devel bzip2-devel
        wget https://www.python.org/ftp/python/2.7.16/Python-2.7.16.tgz
        tar xzf Python-2.7.16.tgz
        cd Python-2.7.16
        ./configure --enable-optimizations
        make altinstall
        # make altinstall places the binary under /usr/local/bin
        ln -s /usr/local/bin/python2.7 /usr/bin/python
        cd -
    else
        apt-get install -y python2.7 && ln -s /usr/bin/python2.7 /usr/bin/python
    fi
fi

# configure a pip mirror on the deploy machine
if [[ $clustername != 'aws' ]]; then
mkdir ~/.pip
cat > ~/.pip/pip.conf <<CB
[global]
index-url = https://mirrors.aliyun.com/pypi/simple
[install]
trusted-host=mirrors.aliyun.com
CB
fi

# install git, pip and sshpass on the deploy machine
if cat /etc/redhat-release &>/dev/null;then
    yum install git python-pip sshpass -y
    [ -f ./get-pip.py ] && python ./get-pip.py || {
    wget https://bootstrap.pypa.io/2.7/get-pip.py && python get-pip.py
    }
else
    apt-get install git python-pip sshpass -y
    [ -f ./get-pip.py ] && python ./get-pip.py || {
    wget https://bootstrap.pypa.io/2.7/get-pip.py && python get-pip.py
    }
fi
python -m pip install --upgrade "pip < 21.0"
pip -V
pip install --no-cache-dir ansible netaddr

# set up passwordless ssh from the deploy machine to the other nodes
for host in `echo "${nethosts}"`
do
    echo "============ ${netnum}.${host} ==========="

    if [[ ${USER} == 'root' ]];then
        [ ! -f /${USER}/.ssh/id_rsa ] &&\
        ssh-keygen -t rsa -P '' -f /${USER}/.ssh/id_rsa
    else
        [ ! -f /home/${USER}/.ssh/id_rsa ] &&\
        ssh-keygen -t rsa -P '' -f /home/${USER}/.ssh/id_rsa
    fi
    sshpass -p ${rootpasswd} ssh-copy-id -o StrictHostKeyChecking=no ${USER}@${netnum}.${host}

    if cat /etc/redhat-release &>/dev/null;then
        ssh -o StrictHostKeyChecking=no ${USER}@${netnum}.${host} "yum update -y"
    else
        ssh -o StrictHostKeyChecking=no ${USER}@${netnum}.${host} "apt-get update && apt-get upgrade -y && apt-get dist-upgrade -y"
        [ $? -ne 0 ] && ssh -o StrictHostKeyChecking=no ${USER}@${netnum}.${host} "apt-get -yf install"
    fi
done


# download the k8s binary installer on the deploy machine

if [[ ${software_packet} == '' ]];then
    curl -C- -fLO --retry 3 https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
    sed -ri "s+^(K8S_BIN_VER=).*$+\1${k8s_ver}+g" ezdown
    chmod +x ./ezdown
    # download with the helper tool
    ./ezdown -D && ./ezdown -P
else
    tar xvf ${software_packet} -C /etc/
    chmod +x ${pwd}/{ezctl,ezdown}
fi

# initialize a k8s cluster configuration with the given name

CLUSTER_NAME="$clustername"
${pwd}/ezctl new ${CLUSTER_NAME}
if [[ $? -ne 0 ]];then
    echo "cluster name [${CLUSTER_NAME}] already exists in ${pwd}/clusters/${CLUSTER_NAME}."
    exit 1
fi

if [[ ${software_packet} != '' ]];then
    # enable offline installation
    sed -i 's/^INSTALL_SOURCE.*$/INSTALL_SOURCE: "offline"/g' ${pwd}/clusters/${CLUSTER_NAME}/config.yml
fi


# check the ansible service
ansible all -m ping

#---------------------------------------------------------------------------------------------------




# adjust the installer configuration: config.yml

sed -ri "s+^(CLUSTER_NAME:).*$+\1 "${CLUSTER_NAME}"+g" ${pwd}/clusters/${CLUSTER_NAME}/config.yml

## steps to keep k8s logs and container data on a separate disk (based on Alibaba Cloud's practice)

[ ! -d /var/lib/container ] && mkdir -p /var/lib/container/{kubelet,docker}

## cat /etc/fstab
# UUID=105fa8ff-bacd-491f-a6d0-f99865afc3d6 /                       ext4    defaults        1 1
# /dev/vdb /var/lib/container/ ext4 defaults 0 0
# /var/lib/container/kubelet /var/lib/kubelet none defaults,bind 0 0
# /var/lib/container/docker /var/lib/docker none defaults,bind 0 0

## tree -L 1 /var/lib/container
# /var/lib/container
# ├── docker
# ├── kubelet
# └── lost+found

# docker data dir
DOCKER_STORAGE_DIR="/var/lib/container/docker"
sed -ri "s+^(STORAGE_DIR:).*$+STORAGE_DIR: "${DOCKER_STORAGE_DIR}"+g" ${pwd}/clusters/${CLUSTER_NAME}/config.yml
# containerd data dir
CONTAINERD_STORAGE_DIR="/var/lib/container/containerd"
sed -ri "s+^(STORAGE_DIR:).*$+STORAGE_DIR: "${CONTAINERD_STORAGE_DIR}"+g" ${pwd}/clusters/${CLUSTER_NAME}/config.yml
# kubelet logs dir
KUBELET_ROOT_DIR="/var/lib/container/kubelet"
sed -ri "s+^(KUBELET_ROOT_DIR:).*$+KUBELET_ROOT_DIR: "${KUBELET_ROOT_DIR}"+g" ${pwd}/clusters/${CLUSTER_NAME}/config.yml
if [[ $clustername != 'aws' ]]; then
    # docker aliyun registry mirror
    REG_MIRRORS="https://pqbap4ya.mirror.aliyuncs.com"
    sed -ri "s+^REG_MIRRORS:.*$+REG_MIRRORS: \'["${REG_MIRRORS}"]\'+g" ${pwd}/clusters/${CLUSTER_NAME}/config.yml
fi
# [docker] trusted HTTP registries
sed -ri "s+127.0.0.1/8+${netnum}.0/24+g" ${pwd}/clusters/${CLUSTER_NAME}/config.yml
# disable dashboard auto install
sed -ri "s+^(dashboard_install:).*$+\1 "no"+g" ${pwd}/clusters/${CLUSTER_NAME}/config.yml


# prepare the merged configuration
CLUSTER_WEBSITE="${CLUSTER_NAME}k8s.gtapp.xyz"
lb_num=$(grep -wn '^MASTER_CERT_HOSTS:' ${pwd}/clusters/${CLUSTER_NAME}/config.yml |awk -F: '{print $1}')
lb_num1=$(expr ${lb_num} + 1)
lb_num2=$(expr ${lb_num} + 2)
sed -ri "${lb_num1}s+.*$+  - "${CLUSTER_WEBSITE}"+g" ${pwd}/clusters/${CLUSTER_NAME}/config.yml
sed -ri "${lb_num2}s+(.*)$+#\1+g" ${pwd}/clusters/${CLUSTER_NAME}/config.yml

# maximum pods per node
MAX_PODS="120"
sed -ri "s+^(MAX_PODS:).*$+\1 ${MAX_PODS}+g" ${pwd}/clusters/${CLUSTER_NAME}/config.yml



# adjust the installer configuration: hosts
# clean old ip
sed -ri '/192.168.1.1/d' ${pwd}/clusters/${CLUSTER_NAME}/hosts
sed -ri '/192.168.1.2/d' ${pwd}/clusters/${CLUSTER_NAME}/hosts
sed -ri '/192.168.1.3/d' ${pwd}/clusters/${CLUSTER_NAME}/hosts
sed -ri '/192.168.1.4/d' ${pwd}/clusters/${CLUSTER_NAME}/hosts

# enter the host suffixes for the ETCD cluster
echo "enter etcd hosts here (example: 203 202 201) ↓"
read -p "" ipnums
for ipnum in `echo ${ipnums}`
do
    echo $netnum.$ipnum
    sed -i "/\[etcd/a $netnum.$ipnum"  ${pwd}/clusters/${CLUSTER_NAME}/hosts
done

# enter the host suffixes for the KUBE-MASTER cluster
echo "enter kube-master hosts here (example: 202 201) ↓"
read -p "" ipnums
for ipnum in `echo ${ipnums}`
do
    echo $netnum.$ipnum
    sed -i "/\[kube_master/a $netnum.$ipnum"  ${pwd}/clusters/${CLUSTER_NAME}/hosts
done

# enter the host suffixes for the KUBE-NODE cluster
echo "enter kube-node hosts here (example: 204 203) ↓"
read -p "" ipnums
for ipnum in `echo ${ipnums}`
do
    echo $netnum.$ipnum
    sed -i "/\[kube_node/a $netnum.$ipnum"  ${pwd}/clusters/${CLUSTER_NAME}/hosts
done

# configure the CNI network plugin
case ${cni} in
    flannel)
    sed -ri "s+^CLUSTER_NETWORK=.*$+CLUSTER_NETWORK="${cni}"+g" ${pwd}/clusters/${CLUSTER_NAME}/hosts
    ;;
    calico)
    sed -ri "s+^CLUSTER_NETWORK=.*$+CLUSTER_NETWORK="${cni}"+g" ${pwd}/clusters/${CLUSTER_NAME}/hosts
    ;;
    *)
    echo "cni must be flannel or calico."
    exit 11
esac

# schedule a cron job for k8s ETCD data backups
if cat /etc/redhat-release &>/dev/null;then
    if ! grep -w '94.backup.yml' /var/spool/cron/root &>/dev/null;then echo "00 00 * * * `which ansible-playbook` ${pwd}/playbooks/94.backup.yml &> /dev/null" >> /var/spool/cron/root;else echo exists ;fi
    chown root.crontab /var/spool/cron/root
    chmod 600 /var/spool/cron/root
else
    if ! grep -w '94.backup.yml' /var/spool/cron/crontabs/root &>/dev/null;then echo "00 00 * * * `which ansible-playbook` ${pwd}/playbooks/94.backup.yml &> /dev/null" >> /var/spool/cron/crontabs/root;else echo exists ;fi
    chown root.crontab /var/spool/cron/crontabs/root
    chmod 600 /var/spool/cron/crontabs/root
fi
rm /var/run/cron.reboot
service crond restart




#---------------------------------------------------------------------------------------------------
# ready to start the installation
rm -rf ${pwd}/{dockerfiles,docs,.gitignore,pics} &&\
find ${pwd}/ -name '*.md'|xargs rm -f
read -p "Enter to continue deploy k8s to all nodes >>>" YesNobbb

# now start deploy k8s cluster
cd ${pwd}/

# to prepare CA/certs & kubeconfig & other system settings
${pwd}/ezctl setup ${CLUSTER_NAME} 01
sleep 1
# to setup the etcd cluster
${pwd}/ezctl setup ${CLUSTER_NAME} 02
sleep 1
# to setup the container runtime (docker or containerd)
case ${cri} in
    containerd)
    sed -ri "s+^CONTAINER_RUNTIME=.*$+CONTAINER_RUNTIME="${cri}"+g" ${pwd}/clusters/${CLUSTER_NAME}/hosts
    ${pwd}/ezctl setup ${CLUSTER_NAME} 03
    ;;
    docker)
    sed -ri "s+^CONTAINER_RUNTIME=.*$+CONTAINER_RUNTIME="${cri}"+g" ${pwd}/clusters/${CLUSTER_NAME}/hosts
    ${pwd}/ezctl setup ${CLUSTER_NAME} 03
    ;;
    *)
    echo "cri must be containerd or docker."
    exit 11
esac
sleep 1
# to setup the master nodes
${pwd}/ezctl setup ${CLUSTER_NAME} 04
sleep 1
# to setup the worker nodes
${pwd}/ezctl setup ${CLUSTER_NAME} 05
sleep 1
# to setup the network plugin (flannel, calico...)
${pwd}/ezctl setup ${CLUSTER_NAME} 06
sleep 1
# to setup other useful plugins (metrics-server, coredns...)
${pwd}/ezctl setup ${CLUSTER_NAME} 07
sleep 1
# [optional] OS-level security hardening for all cluster nodes: https://github.com/dev-sec/ansible-os-hardening
#ansible-playbook roles/os-harden/os-harden.yml
#sleep 1
cd `dirname ${software_packet:-/tmp}`


k8s_bin_path='/opt/kube/bin'


echo "-------------------------  k8s version list  ---------------------------"
${k8s_bin_path}/kubectl version
echo
echo "-------------------------  All Healthy status check  -------------------"
${k8s_bin_path}/kubectl get componentstatus
echo
echo "-------------------------  k8s cluster info list  ----------------------"
${k8s_bin_path}/kubectl cluster-info
echo
echo "-------------------------  k8s all nodes list  -------------------------"
${k8s_bin_path}/kubectl get node -o wide
echo
echo "-------------------------  k8s all-namespaces's pods list   ------------"
${k8s_bin_path}/kubectl get pod --all-namespaces
echo
echo "-------------------------  k8s all-namespaces's service network   ------"
${k8s_bin_path}/kubectl get svc --all-namespaces
echo
echo "-------------------------  k8s welcome for you   -----------------------"
echo

# alias k to kubectl for convenience
echo "alias k=kubectl && complete -F __start_kubectl k" >> ~/.bashrc

# get dashboard url
${k8s_bin_path}/kubectl cluster-info|grep dashboard|awk '{print $NF}'|tee -a /root/k8s_results

# get login token
${k8s_bin_path}/kubectl -n kube-system describe secret $(${k8s_bin_path}/kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')|grep 'token:'|awk '{print $NF}'|tee -a /root/k8s_results
echo
echo "you can look again dashboard and token info at  >>> /root/k8s_results <<<"
#echo ">>>>>>>>>>>>>>>>> You can execute command [ source ~/.bashrc ] <<<<<<<<<<<<<<<<<<<<"
echo ">>>>>>>>>>>>>>>>> You need to execute command [ reboot ] to restart all nodes <<<<<<<<<<<<<<<<<<<<"
rm -f $0
[ -f "${software_packet}" ] && rm -f ${software_packet}
#rm -f ${pwd}/roles/deploy/templates/${USER_NAME}-csr.json.j2
#sed -ri "s+${USER_NAME}+admin+g" ${pwd}/roles/prepare/tasks/main.yml
3. Download the installation packages into the same directory as the script
https://github.com/bogeit/LearnK8s/blob/main/download_url
4. Add the Aliyun yum repositories
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
II. Installation
1. Run the script

bash k8s_install_new.sh admin123 10.0.1 201\ 202\ 203\ 204 docker calico k8s-cluster
Parameter descriptions:
admin123: root password for the nodes
10.0.1: network portion of the node IPs
201\..: host portions of the node IPs
docker: use docker as the container runtime
calico: use calico as the CNI
k8s-cluster: name the k8s cluster "k8s-cluster"
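How the six positional arguments map onto the script's variables can be sketched as follows. This is a standalone demo using the example values above, not part of the original script; it only echoes the node IPs the script derives:

```shell
# Standalone sketch of the script's argument handling, using the example values above.
set -- admin123 10.0.1 "201 202 203 204" docker calico k8s-cluster

rootpasswd=$1   # root password for all nodes
netnum=$2       # network portion of the node IPs
nethosts=$3     # host portions, space-separated
cri=$4          # container runtime: containerd or docker
cni=$5          # CNI plugin: calico or flannel
clustername=$6  # cluster name

# the script joins the network portion with each host portion to get full IPs
for host in ${nethosts}; do
    echo "${netnum}.${host}"
done
```

Running it prints 10.0.1.201 through 10.0.1.204, which is exactly the node list the real script loops over for ssh-copy-id.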
2. Input expected while the script runs
# The script is mostly automated; at the prompts below, paste the values as instructed and press Enter
# Host suffixes for the ETCD cluster: paste 203 202 201 and press Enter
echo "enter etcd hosts here (example: 203 202 201) ↓"

# Host suffixes for the KUBE-MASTER cluster: paste 202 201 and press Enter
echo "enter kube-master hosts here (example: 202 201) ↓"

# Host suffixes for the KUBE-NODE cluster: paste 204 203 and press Enter
echo "enter kube-node hosts here (example: 204 203) ↓"

# The script then asks whether to continue; if everything looks good, just press Enter
Enter to continue deploy k8s to all nodes >>>

# After installation, reload the environment to enable kubectl command completion
. ~/.bashrc
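Under the hood, each prompt loop sed-inserts the full IPs directly after the matching section header of the kubeasz ansible hosts file. A throwaway demo of that mechanism on a temporary file (not the real hosts file under /etc/kubeasz):

```shell
# Demo of the host-insertion mechanism on a temporary file.
hostsfile=$(mktemp)
printf '[etcd]\n\n[kube_master]\n' > "$hostsfile"

netnum=10.0.1
for ipnum in 203 202 201; do
    # append each IP directly after the [etcd] header, as the script does
    sed -i "/\[etcd/a $netnum.$ipnum" "$hostsfile"
done

cat "$hostsfile"
rm -f "$hostsfile"
```

Because every insertion lands right under the header, entering the suffixes in reverse (203 202 201, as the prompt suggests) leaves them in ascending order: [etcd] is followed by 10.0.1.201, 10.0.1.202, 10.0.1.203.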
III. Adding k8s plugins
1. Download the kubecolor and kubens plugins
mv kubecolor kubens /usr/local/bin
chmod +x /usr/local/bin/*
2. Add environment variables
source <(kubectl completion bash)
command -v kubecolor >/dev/null 2>&1 && alias kubectl="kubecolor"
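The `command -v` guard above only creates the alias when the binary is actually on PATH, so the line is safe to put in ~/.bashrc even on machines where kubecolor is not installed. The same pattern in isolation:

```shell
# Alias kubectl to kubecolor only when kubecolor exists on PATH;
# otherwise plain kubectl is kept and nothing breaks.
if command -v kubecolor >/dev/null 2>&1; then
    alias kubectl="kubecolor"
    echo "kubecolor found: kubectl output will be colorized"
else
    echo "kubecolor not installed: keeping plain kubectl"
fi
```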
3. Nodes marked as unschedulable
kubectl get nodes
NAME           STATUS                     ROLES    AGE   VERSION
192.168.1.10   Ready,SchedulingDisabled   master   22m   v1.20.2
192.168.1.11   Ready,SchedulingDisabled   master   22m   v1.20.2
192.168.1.12   Ready                      node     18m   v1.20.2
4. Mark nodes as schedulable
[root@k8s-master ~]# kubectl uncordon 192.168.1.10
node/192.168.1.10 uncordoned
[root@k8s-master ~]# kubectl uncordon 192.168.1.11
node/192.168.1.11 uncordoned
[root@k8s-master ~]# kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
192.168.1.10   Ready    master   76m   v1.20.2
192.168.1.11   Ready    master   76m   v1.20.2
192.168.1.12   Ready    node     72m   v1.20.2
5. Install kubectl tab completion
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
