k8s v1.18.1: handling expired certificates

Posted 2024-09-18 05:11:52
My Kubernetes test environment had not been started in a long time. When I brought it up today and checked the nodes from the master, node2 was in NotReady state.
On node2 I found that kubelet had stopped running.
kubelet error:

part of the existing bootstrap client certificate is expired: 2022-06-04

Checking /etc/kubernetes/kubelet.conf showed the certificate path: /var/lib/kubelet/pki/kubelet-client-current.pem
# cat /etc/kubernetes/kubelet.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1......UtLS0tLQo=
    server: https://192.168.100.201:6443
  name: default-cluster
contexts:
- context:
    cluster: default-cluster
    namespace: default
    user: default-auth
  name: default-context
current-context: default-context
kind: Config
preferences: {}
users:
- name: default-auth
  user:
    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
    client-key: /var/lib/kubelet/pki/kubelet-client-current.pem

Then switch to /var/lib/kubelet/pki/ and check the certificate dates:
# cd /var/lib/kubelet/pki
# ll
total 20
-rw------- 1 root root 1061 Sep 14  2020 kubelet-client-2020-09-14-18-00-01.pem
-rw------- 1 root root 1061 Jun  4  2021 kubelet-client-2021-06-04-19-03-23.pem
-rw------- 1 root root 1066 Jun 10 11:00 kubelet-client-2022-06-10-11-00-15.pem
lrwxrwxrwx 1 root root   59 Jun 10 11:00 kubelet-client-current.pem -> /var/lib/kubelet/pki/kubelet-client-2021-06-04-19-03-23.pem
-rw-r--r-- 1 root root 2144 Sep 14  2020 kubelet.crt
-rw------- 1 root root 1679 Sep 14  2020 kubelet.key

As you can see, kubelet-client-current.pem points to kubelet-client-2021-06-04-19-03-23.pem, and today is 2022-06-10, so the certificate has expired.
Check the certificate's validity period on node2:

# openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -text | grep Not
            Not Before: Jun  4 10:58:23 2021 GMT
            Not After : Jun  4 10:58:23 2022 GMT

Since my master node and node1 are both still healthy, I can regenerate the certificates with the previous kubeadm.yaml configuration file.
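Before renewing anything, it can help to survey the expiry date of every certificate in one pass. A small helper sketch, assuming a default kubeadm file layout and that openssl is installed (`print_cert_expiry` is an illustrative name, not a kubeadm command):

```shell
#!/bin/sh
# Print "file: notAfter=<date>" for each certificate file that exists;
# files or globs that do not match anything are silently skipped.
print_cert_expiry() {
  for cert in "$@"; do
    [ -e "$cert" ] || continue
    printf '%s: %s\n' "$cert" "$(openssl x509 -enddate -noout -in "$cert")"
  done
}

# Usage on a kubeadm node:
print_cert_expiry /etc/kubernetes/pki/*.crt /var/lib/kubelet/pki/kubelet-client-current.pem
```

On v1.18, `kubeadm alpha certs check-expiration` should print a similar summary for the control-plane certificates (the subcommand moved out of `alpha` in later releases).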
# Back up the existing certificates
# cp -rp /etc/kubernetes /etc/kubernetes.bak

# Generate new certificates
# kubeadm alpha certs renew all --config=kubeadm.yaml
W0610 09:24:36.851093   26346 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed

# Back up the old kubeconfig files
# mkdir /root/backconf
# mv /etc/kubernetes/*.conf /root/backconf/
# ll backconf/
total 32
-rw------- 1 root root 5451 Jun 10 09:24 admin.conf
-rw------- 1 root root 5491 Jun 10 09:24 controller-manager.conf
-rw------- 1 root root 5463 Sep  1  2021 kubelet.conf
-rw------- 1 root root 5439 Jun 10 09:24 scheduler.conf

# Regenerate the kubeconfig files
# kubeadm init phase kubeconfig all --config kubeadm.yaml
W0610 09:26:59.426236   27497 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
# ll /etc/kubernetes/
total 52
-rw------- 1 root root 5451 Jun 10 09:27 admin.conf
-rw-r--r-- 1 root root 1025 Mar 23  2021 ca.crt
-rw-r--r-- 1 root root 3117 Mar 23  2021 cert.pfx
-rw-r--r-- 1 root root 1082 Mar 23  2021 client.crt
-rw------- 1 root root 1679 Mar 23  2021 client.key
-rw------- 1 root root 5487 Jun 10 09:27 controller-manager.conf
-rw------- 1 root root 5459 Jun 10 09:27 kubelet.conf
drwxr-xr-x 2 root root  113 Oct  6  2021 manifests
drwxr-xr-x 3 root root 4096 Sep 14  2020 pki
-rw------- 1 root root 5439 Jun 10 09:27 scheduler.conf

# Overwrite .kube/config with the newly generated admin.conf:
mv $HOME/.kube/config $HOME/.kube/config.old
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
chmod 644 $HOME/.kube/config

# Restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd containers
# (be sure to use "ps -a" here, otherwise containers that are currently stopped may be missed):
# docker ps -a | grep -v pause | grep -E "etcd|scheduler|controller|apiserver" | awk '{print $1}' | awk '{print "docker","restart",$1}' | bash

# Restart kubelet (and related components) on each node:
systemctl restart kubelet

The master node is now updated. Next, create a token, which will be needed when updating the worker nodes.
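To double-check that the renewal took effect, the client certificate embedded in a regenerated kubeconfig can be decoded and its validity window inspected. A sketch (admin.conf stores its certificate inline as base64 `client-certificate-data`; `verify_kubeconfig_cert` is just an illustrative helper name):

```shell
#!/bin/sh
# Decode the base64-embedded client certificate of a kubeconfig
# and print its notBefore/notAfter dates.
verify_kubeconfig_cert() {
  grep 'client-certificate-data' "$1" \
    | awk '{print $2}' \
    | base64 -d \
    | openssl x509 -noout -dates
}

# Usage on the master:
# verify_kubeconfig_cert /etc/kubernetes/admin.conf
```

The notAfter date should now be roughly one year out from the day the renewal was run.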

# kubeadm token create --print-join-command
W0610 09:40:30.975578    2435 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join 192.168.100.201:6443 --token 6co5f1.g8wnog41jopfchp8     --discovery-token-ca-cert-hash sha256:8adf630dbe900681db88950f0877faa7be4308f6fd837029ab7e9e41dd0eafd6
# kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION   EXTRA GROUPS
6co5f1.g8wnog41jopfchp8   23h         2022-06-11T09:40:31+08:00   authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token

Now add the node back into the cluster (the node's old kubelet config files must be deleted first, otherwise the join fails).
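If the printed join command is ever lost, the `--discovery-token-ca-cert-hash` value can be recomputed from the cluster CA at any time. A sketch following the standard openssl pipeline, assuming the default RSA CA that kubeadm generates (`ca_cert_hash` is an illustrative helper name):

```shell
#!/bin/sh
# Compute the sha256 hash of the CA's public key, in the form expected by
# "kubeadm join --discovery-token-ca-cert-hash sha256:<hash>".
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}

# Usage on the master:
# ca_cert_hash /etc/kubernetes/pki/ca.crt
```

The output is 64 hex characters; prefix it with `sha256:` when passing it to `kubeadm join`.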

First, back up the directory where the config files are stored:
cp -r /etc/kubernetes /etc/kubernetes.bak
# ll /etc/kubernetes*
/etc/kubernetes:
total 4
-rw------- 1 root root 1856 Sep 14  2020 kubelet.conf
drwxr-xr-x 2 root root    6 Apr  9  2020 manifests
drwxr-xr-x 2 root root   20 Sep 14  2020 pki

/etc/kubernetes.bak:
total 4
-rw------- 1 root root 1856 Jun 10 10:58 kubelet.conf
drwxr-xr-x 2 root root    6 Jun 10 10:58 manifests
drwxr-xr-x 2 root root   20 Jun 10 10:58 pki

Then delete the old kubelet config files:
# rm -rf /etc/kubernetes/kubelet.conf
# rm -rf /etc/kubernetes/pki/ca.crt
# rm -rf /etc/kubernetes/bootstrap-kubelet.conf     # I did not have this file
# systemctl stop kubelet
# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: inactive (dead) since Fri 2022-06-10 09:38:04 CST; 1h 20min ago
     Docs: https://kubernetes.io/docs/
  Process: 31448 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=0/SUCCESS)
 Main PID: 31448 (code=exited, status=0/SUCCESS)

Jun 10 09:37:59 node2 kubelet[31448]: E0610 09:37:59.469934   31448 reflector.go:178] object-"loki"/"loki": Failed to list *v1.Secret: secrets "loki" is forb...this object
Jun 10 09:37:59 node2 kubelet[31448]: W0610 09:37:59.676710   31448 status_manager.go:572] Failed to update status for pod "loki-0_loki(e0ea4379-7e48-4107-83...\"Initializ
Jun 10 09:38:00 node2 kubelet[31448]: W0610 09:38:00.077588   31448 status_manager.go:572] Failed to update status for pod "sentinel-0_default(49b3d865-37ae-...type\":\"In
Jun 10 09:38:00 node2 kubelet[31448]: W0610 09:38:00.476110   31448 status_manager.go:572] Failed to update status for pod "usercenter-deployment-7bf4744f58-...ementOrder/
Jun 10 09:38:00 node2 kubelet[31448]: W0610 09:38:00.877862   31448 status_manager.go:572] Failed to update status for pod "getaway-deployment-6595fb8444-ztf...ntOrder/con
Jun 10 09:38:02 node2 kubelet[31448]: I0610 09:38:02.721843   31448 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
Jun 10 09:38:02 node2 kubelet[31448]: I0610 09:38:02.849726   31448 kubelet_node_status.go:70] Attempting to register node node2
Jun 10 09:38:02 node2 kubelet[31448]: E0610 09:38:02.859581   31448 kubelet_node_status.go:92] Unable to register node "node2" with API server: nodes "node2"...ode "node2"
Jun 10 09:38:04 node2 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
Jun 10 09:38:04 node2 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Hint: Some lines were ellipsized, use -l to show in full.

Now re-join node2 to the cluster:
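Before running the join, the node-side cleanup just performed can be captured as one reusable helper. A sketch under the same assumptions (default kubeadm file layout; `cleanup_node_certs` is an illustrative name, not a kubeadm command):

```shell
#!/bin/sh
# Back up /etc/kubernetes and remove the stale kubelet credentials
# so that "kubeadm join" can bootstrap fresh ones.
cleanup_node_certs() {
  root=${1:-/etc}
  cp -rp "$root/kubernetes" "$root/kubernetes.bak"
  rm -f "$root/kubernetes/kubelet.conf" \
        "$root/kubernetes/pki/ca.crt" \
        "$root/kubernetes/bootstrap-kubelet.conf"   # may not exist
}

# Usage on node2:
# cleanup_node_certs /etc && systemctl stop kubelet
```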

# kubeadm join 192.168.100.201:6443 --token 6co5f1.g8wnog41jopfchp8     --discovery-token-ca-cert-hash sha256:8adf630dbe900681db88950f0877faa7be4308f6fd837029ab7e9e41dd0eafd6
W0610 11:00:11.849573    5754 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Verify the result:

[root@master ~]# kubectl get node
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   633d   v1.18.1
node1    Ready    <none>   633d   v1.18.1
node2    Ready    <none>   633d   v1.18.1
[root@master ~]# kubectl get pods -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-629sv                    1/1     Running   15         633d
coredns-7ff77c879f-hk25m                    1/1     Running   15         633d
default-http-backend-55fb564b-rrddj         1/1     Running   3          146d
etcd-master                                 1/1     Running   15         633d
kube-apiserver-master                       1/1     Running   8          386d
kube-controller-manager-master              1/1     Running   7          281d
kube-flannel-ds-amd64-g885t                 1/1     Running   15         633d
kube-flannel-ds-amd64-nm5xp                 1/1     Running   14         633d
kube-flannel-ds-amd64-zd56s                 1/1     Running   15         633d
kube-proxy-rdf9s                            1/1     Running   16         633d
kube-proxy-rsm5n                            1/1     Running   14         633d
kube-proxy-wc7zr                            1/1     Running   15         633d
kube-scheduler-master                       1/1     Running   17         633d
kube-state-metrics-99d76dd5d-srlvt          1/1     Running   8          300d
metrics-server-7b75fd6bfb-4prml             1/1     Running   9          386d
nginx-ingress-controller-5cf88d6db5-mqp8c   1/1     Running   3          146d
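As a final note, kubeadm-provisioned kubelets normally have client certificate rotation enabled, so the client certificate renews itself while the node is running; this cluster only expired because it sat powered off past the expiry date. A quick sanity-check sketch (assumes the default kubelet config path):

```shell
# Check whether kubelet client certificate rotation is enabled on a node.
# With kubeadm defaults this should print "rotateCertificates: true".
if [ -f /var/lib/kubelet/config.yaml ]; then
  grep -i 'rotateCertificates' /var/lib/kubelet/config.yaml
fi
```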