Upgrade the k8s Cluster to v1.14.1
kubernetes
Lastmod: 2020-09-20

k8s v1.14.1 has already been released, so the k8s cluster (v1.13.3) I built for learning needs to be upgraded to the latest version. There is an official upgrade guide, but my cluster was originally set up by hand on CoreOS (see my earlier article for how it was built), so while the steps are roughly the same, a few details differ.

Backup

cp -r /opt/bin /opt/bin_v1.13.3

Prepare the new command-line binaries

The URL below is not reachable from mainland China; use a proxy.
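
For example, any proxy that curl understands will do (the address below is hypothetical; substitute whatever your local proxy listens on):

# hypothetical local proxy; adjust host/port to your own setup
export https_proxy=http://127.0.0.1:1080
export http_proxy=http://127.0.0.1:1080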

RELEASE=v1.14.1
mkdir -p /opt/bin_v1.14.1
cd /opt/bin_v1.14.1
curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/${RELEASE}/bin/linux/amd64/{kubeadm,kubelet,kubectl}
chmod +x {kubeadm,kubelet,kubectl}

You can check the versions to confirm the downloads are correct:

/opt/bin_v1.14.1/kubeadm version
/opt/bin_v1.14.1/kubectl version
/opt/bin_v1.14.1/kubelet --version

Upgrade the master node

Update kubeadm

cp /opt/bin_v1.14.1/kubeadm /opt/bin/
kubeadm version

Pre-upgrade check

sudo kubeadm upgrade plan

The output is similar to:

I didn't save the output when I ran this, so the text below may differ slightly from what you will actually see; in practice it also prints a few warnings caused by network issues, but they are harmless.

[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.13.3
[upgrade/versions] kubeadm version: v1.14.0

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     2 x v1.13.3   v1.14.0

Upgrade to the latest version in the v1.13 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.13.3   v1.14.0
Controller Manager   v1.13.3   v1.14.0
Scheduler            v1.13.3   v1.14.0
Kube Proxy           v1.13.3   v1.14.0
CoreDNS              1.2.6     1.3.1
Etcd                 3.2.24    3.3.10

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.14.0

_____________________________________________________________________

Start the upgrade

sudo kubeadm upgrade apply v1.14.1
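
Once the apply completes, one way to spot-check that the control-plane static pods are running the new images (assuming the standard tier=control-plane label that kubeadm puts on them):

kubectl -n kube-system get pods -l tier=control-plane \
  -o custom-columns=NAME:.metadata.name,IMAGE:.spec.containers[0].image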

Update kubectl

cp /opt/bin_v1.14.1/kubectl /opt/bin/

Update kubelet

sudo systemctl stop kubelet
cp /opt/bin_v1.14.1/kubelet /opt/bin/
sudo systemctl start kubelet
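
After kubelet restarts, the master should report the new version (k8s-master is my node's name, as shown in the final output at the end of this article):

/opt/bin/kubelet --version
kubectl get node k8s-master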

Upgrade the worker nodes

To keep the services already running on the cluster available during the upgrade, it is best to upgrade one worker node at a time. The official docs point this out, but my cluster is not running any real workloads yet, so I have not verified it; I plan to do so the next time I upgrade to a newer version.

Update kubeadm

cp /opt/bin_v1.14.1/kubeadm /opt/bin/
kubeadm version

Take a worker node offline

kubectl drain $NODE --ignore-daemonsets

$NODE is the node name, matching the NAME column of kubectl get nodes.
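
For example, for the first worker in my cluster (the node names appear in the final output at the end of this article), drain it and then confirm that only DaemonSet pods are still scheduled on it:

NODE=k8s-node-1
kubectl drain $NODE --ignore-daemonsets
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=$NODE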

Upgrade the kubelet configuration

sudo kubeadm upgrade node config --kubelet-version v1.14.1
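
With kubeadm defaults this writes the new kubelet configuration to /var/lib/kubelet/config.yaml (that path is kubeadm's default, not something configured in this article), which you can inspect before restarting kubelet:

cat /var/lib/kubelet/config.yaml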

Update kubectl

cp /opt/bin_v1.14.1/kubectl /opt/bin/

Update kubelet

sudo systemctl stop kubelet
cp /opt/bin_v1.14.1/kubelet /opt/bin/
sudo systemctl start kubelet

Check whether the upgrade succeeded

kubectl get nodes

If the VERSION column now shows v1.14.1, the upgrade succeeded.

Bring the worker node back online

kubectl uncordon $NODE

$NODE is the node name, matching the NAME column of kubectl get nodes.
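
A quick check that the node is schedulable again: the STATUS column should now read Ready rather than Ready,SchedulingDisabled.

kubectl get node $NODE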

Repeat the steps above for each remaining worker node until all of them are upgraded.
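
As a rough sketch only (it assumes the new binaries are already under /opt/bin_v1.14.1 on every worker and that the nodes are reachable over SSH as the CoreOS default user core; neither is covered by this article), the remaining workers could be rolled through one by one like this:

for NODE in k8s-node-2 k8s-node-3; do
  # move workloads off the node
  kubectl drain $NODE --ignore-daemonsets
  # swap in the new binaries and refresh the kubelet config on the node itself
  ssh core@$NODE 'sudo cp /opt/bin_v1.14.1/{kubeadm,kubectl} /opt/bin/ \
    && sudo /opt/bin/kubeadm upgrade node config --kubelet-version v1.14.1 \
    && sudo systemctl stop kubelet \
    && sudo cp /opt/bin_v1.14.1/kubelet /opt/bin/ \
    && sudo systemctl start kubelet'
  # allow scheduling again before moving on to the next node
  kubectl uncordon $NODE
done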

Final verification

kubectl get nodes

The output for my cluster is now:

NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   82d   v1.14.1
k8s-node-1   Ready    <none>   82d   v1.14.1
k8s-node-2   Ready    <none>   82d   v1.14.1
k8s-node-3   Ready    <none>   82d   v1.14.1

DONE!