Kubernetes: etcd health check fails when joining a new master node
Background:

Yesterday, after a new cluster was set up, a problem appeared: one of the master nodes was not working properly. The cluster was still usable, but it had become a single point of failure. Today, while repairing it, the etcd health check failed during the pre-flight self-check.
When re-joining the node to the cluster, the following error appeared:
The message says that the etcd health check failed. First, look at the kubeadm configuration stored in the Kubernetes cluster:
```shell
[root@master-01 ~]# kubectl describe configmaps kubeadm-config -n kube-system
```
When the cluster was originally built, etcd was deployed in stacked (mirrored) mode, with a member running on every master. After master02 went wrong and was removed, its member record was still kept in the etcd cluster stored on each master, so the health check fails when the node is added back.
At this point you have to go inside the etcd container and delete the stale member by hand. First list the etcd pods in the cluster, then open an sh shell inside one of them:
```shell
[root@master-01 ~]# kubectl get pods -n kube-system | grep etcd
```
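Opening the shell might look like the sketch below; the pod name `etcd-master-01` is an assumption, substitute whatever name the listing above printed:

```shell
# Open an sh shell inside one surviving etcd pod.
# "etcd-master-01" is a placeholder pod name; use the one from the listing.
kubectl exec -it etcd-master-01 -n kube-system -- sh
```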
After entering the container, do the following:
```shell
## Configure the environment
```
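The environment setup itself is truncated in the original; a minimal sketch, assuming the default kubeadm certificate layout inside the etcd container:

```shell
# Use the v3 API and point etcdctl at the local member with the
# certificates kubeadm places under /etc/kubernetes/pki/etcd.
export ETCDCTL_API=3
alias etcdctl='etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key'
```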
List the members and remove the master that no longer exists.
Then join the node as a master again, and it succeeds.