Can We Upgrade? - Control Plane Upgrade
We run a Kubernetes Cluster in-house.
While my teammates and I were assigned to an external project, the cluster was left unattended.
That project has now wrapped up and we are back at the head office.
While working here, I want to tidy the cluster up so we can make proper use of it again.
As the first task, I upgraded the Control Plane of the Kubernetes Cluster.
Control Plane Upgrade
Our in-house Kubernetes Cluster consists of 3 Control Plane nodes and 3 Worker Nodes.
The versions varied from node to node: some nodes were on 1.27.4, others on 1.28.
With this work I decided to unify every node on version 1.28.9.
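To see where each node stood before starting, a quick check of the per-node versions helps; for example (the actual output for our cluster appears further down in this post):
# check the kubelet version reported by each node
$ kubectl get nodes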
Upgrade Check
The in-house cluster was installed with kubeadm.
Every node runs Ubuntu 20.04 LTS.
To be able to install the kubeadm 1.28.9 package, the package repository has to be set up accordingly.
First I checked the existing package repository configuration.
With the current configuration, kubeadm can only be installed up to version 1.28.2-00.
$ cat /etc/apt/sources.list.d/kubernetes.list
deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main
# check the package versions available from the current repository
$ apt-cache madison kubeadm
kubeadm | 1.28.2-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.28.1-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.28.0-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.27.6-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.27.5-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.27.4-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.27.3-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.27.2-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.27.1-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.27.0-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.26.9-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.26.8-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.26.7-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.26.6-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.26.5-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.26.4-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.26.3-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.26.2-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.26.1-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.26.0-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.25.14-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
To upgrade to the kubeadm 1.28.9 package, the package repository configuration has to be changed.
The procedure follows the official Kubernetes package repository guide.
# add the package repository signing key
$ curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# configure the package repository
$ echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
Kubernetes packages are now released to pkgs.k8s.io.
The legacy apt.kubernetes.io and yum.kubernetes.io repositories no longer receive new releases.
For the up-to-date repository configuration, refer to the official documentation.
With the new repository configured, check the list of installable kubeadm versions.
$ apt update
Hit:1 https://download.docker.com/linux/ubuntu focal InRelease
Hit:2 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.28/deb InRelease
Hit:3 http://kr.archive.ubuntu.com/ubuntu focal InRelease
Get:4 http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/1.18/xUbuntu_20.04 InRelease [1,631 B]
Hit:5 http://kr.archive.ubuntu.com/ubuntu focal-updates InRelease
Hit:6 http://kr.archive.ubuntu.com/ubuntu focal-backports InRelease
Get:7 https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_20.04 InRelease [1,642 B]
Hit:8 http://kr.archive.ubuntu.com/ubuntu focal-security InRelease
Fetched 3,273 B in 2s (1,996 B/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
72 packages can be upgraded. Run 'apt list --upgradable' to see them.
$ apt-cache madison kubeadm
kubeadm | 1.28.9-2.1 | https://pkgs.k8s.io/core:/stable:/v1.28/deb Packages
kubeadm | 1.28.8-1.1 | https://pkgs.k8s.io/core:/stable:/v1.28/deb Packages
kubeadm | 1.28.7-1.1 | https://pkgs.k8s.io/core:/stable:/v1.28/deb Packages
kubeadm | 1.28.6-1.1 | https://pkgs.k8s.io/core:/stable:/v1.28/deb Packages
kubeadm | 1.28.5-1.1 | https://pkgs.k8s.io/core:/stable:/v1.28/deb Packages
kubeadm | 1.28.4-1.1 | https://pkgs.k8s.io/core:/stable:/v1.28/deb Packages
kubeadm | 1.28.3-1.1 | https://pkgs.k8s.io/core:/stable:/v1.28/deb Packages
kubeadm Upgrade
After changing the package repository, kubeadm 1.28.9-2.1 became available for installation.
The package may have been put on hold for stability reasons.
Release the hold and proceed with the kubeadm upgrade.
$ apt-mark unhold kubeadm
kubeadm was already not hold.
# upgrade kubeadm
$ apt-get install -y kubeadm='1.28.9-2.1'
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
cri-tools
The following packages will be upgraded:
cri-tools kubeadm
2 upgraded, 0 newly installed, 0 to remove and 70 not upgraded.
Need to get 29.7 MB of archives.
After this operation, 2,536 kB of additional disk space will be used.
Get:1 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.28/deb cri-tools 1.28.0-1.1 [19.6 MB]
Get:2 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.28/deb kubeadm 1.28.9-2.1 [10.1 MB]
Fetched 29.7 MB in 1s (34.0 MB/s)
(Reading database ... 137229 files and directories currently installed.)
Preparing to unpack .../cri-tools_1.28.0-1.1_amd64.deb ...
Unpacking cri-tools (1.28.0-1.1) over (1.26.0-00) ...
Preparing to unpack .../kubeadm_1.28.9-2.1_amd64.deb ...
Unpacking kubeadm (1.28.9-2.1) over (1.27.4-00) ...
dpkg: warning: unable to delete old directory '/etc/systemd/system/kubelet.service.d': Directory not empty
Setting up cri-tools (1.28.0-1.1) ...
Setting up kubeadm (1.28.9-2.1) ...
# check the kubeadm version
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"28", GitVersion:"v1.28.9", GitCommit:"587f5fe8a69b0d15b578eaf478f009247d1c5d47", GitTreeState:"clean", BuildDate:"2024-04-16T15:04:37Z", GoVersion:"go1.21.9", Compiler:"gc", Platform:"linux/amd64"}
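To keep a routine apt upgrade from moving kubeadm again, it is common practice (and what the kubeadm upgrade guide suggests) to put the package back on hold right after installing the target version:
# pin kubeadm so regular apt upgrades do not change it
$ sudo apt-mark hold kubeadm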
Static Pod Upgrade
The biggest difference between a Control Plane node and a Worker Node is the presence of Static Pods.
A Static Pod is a Pod controlled directly by the kubelet on the node, without going through the API server.
The main Static Pods are the following.
- kube-apiserver
- kube-controller-manager
- kube-scheduler
- etcd
When upgrading the Control Plane, these Static Pods must be upgraded as well.
On top of that, CoreDNS and kube-proxy running on the Control Plane are also upgraded.
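Before planning anything, it can be handy to check which images the control-plane Static Pods are currently running. A minimal check, assuming the default tier=control-plane label that kubeadm puts on these Pods:
# list the control-plane pods and the image each one runs
$ kubectl -n kube-system get pods -l tier=control-plane -o custom-columns=POD:.metadata.name,IMAGE:.spec.containers[0].image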
The Static Pods, CoreDNS, and kube-proxy can all be upgraded through the kubeadm command.
$ kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.27.4
[upgrade/versions] kubeadm version: v1.28.9
I0424 01:31:35.020610 578940 version.go:256] remote version is much newer: v1.30.0; falling back to: stable-1.28
[upgrade/versions] Target version: v1.28.9
[upgrade/versions] Latest version in the v1.27 series: v1.27.13
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT TARGET
kubelet 2 x v1.27.4 v1.27.13
1 x v1.28.9 v1.27.13
3 x v1.30.0 v1.27.13
Upgrade to the latest version in the v1.27 series:
COMPONENT CURRENT TARGET
kube-apiserver v1.27.4 v1.27.13
kube-controller-manager v1.27.4 v1.27.13
kube-scheduler v1.27.4 v1.27.13
kube-proxy v1.27.4 v1.27.13
CoreDNS v1.10.1 v1.10.1
etcd 3.5.7-0 3.5.12-0
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.27.13
_____________________________________________________________________
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT TARGET
kubelet 2 x v1.27.4 v1.28.9
1 x v1.28.9 v1.28.9
3 x v1.30.0 v1.28.9
Upgrade to the latest stable version:
COMPONENT CURRENT TARGET
kube-apiserver v1.27.4 v1.28.9
kube-controller-manager v1.27.4 v1.28.9
kube-scheduler v1.27.4 v1.28.9
kube-proxy v1.27.4 v1.28.9
CoreDNS v1.10.1 v1.10.1
etcd 3.5.7-0 3.5.12-0
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.28.9
_____________________________________________________________________
The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.
API GROUP CURRENT VERSION PREFERRED VERSION MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io v1alpha1 v1alpha1 no
kubelet.config.k8s.io v1beta1 v1beta1 no
_____________________________________________________________________
The plan command shows in advance how the upgrade will proceed.
We want to go from 1.27.4 to 1.28.9.
The plan also offers the alternative of moving from 1.27.4 to 1.27.13, the latest patch release in the 1.27 series.
Note that the Control Plane has to be upgraded one minor version at a time, because the Static Pods only tolerate a limited version skew.
Now that we know what the upgrade will do, run the actual upgrade with kubeadm.
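The apply step pulls the new control-plane images; as its output below also points out, they can be pre-pulled beforehand so the actual switchover goes faster. A small optional step, pinning the target release explicitly:
# pre-pull the v1.28.9 control plane images before running apply
$ sudo kubeadm config images pull --kubernetes-version v1.28.9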
# kubeadm upgrade apply v1.28.9
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.28.9"
[upgrade/versions] Cluster version: v1.27.4
[upgrade/versions] kubeadm version: v1.28.9
[upgrade] Are you sure you want to proceed? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
W0424 01:32:41.975902 579387 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.28.9" (timeout: 5m0s)...
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-04-24-01-32-46/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 3 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests1039879791"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-04-24-01-32-46/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-04-24-01-32-46/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-04-24-01-32-46/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/staticpods] Current and new manifests of kube-scheduler are equal, skipping upgrade
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config1953719542/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[upgrade/addons] skip upgrade addons because control plane instances [k8sm03.coxspace.biz] have not been upgraded
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.28.9". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
$ ls -all /etc/kubernetes/manifests/
total 24
drwx------ 2 root root 4096 Apr 25 09:11 ./
drwxr-xr-x 5 root root 4096 Apr 25 08:33 ../
-rw------- 1 root root 2432 Apr 24 02:46 etcd.yaml
-rw------- 1 root root 4042 Apr 25 09:09 kube-apiserver.yaml
-rw------- 1 root root 3543 Apr 25 09:09 kube-controller-manager.yaml
-rw-r--r-- 1 root root 0 Apr 17 18:31 .kubelet-keep
-rw------- 1 root root 1463 Apr 25 09:09 kube-scheduler.yaml
You can see that the Static Pod manifest files have been refreshed.
The yaml files live under the /etc/kubernetes/manifests directory.
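One simple way to confirm that the manifests now reference the new version is to grep the image fields, for example:
# confirm the upgraded image versions referenced by the static pod manifests
$ grep 'image:' /etc/kubernetes/manifests/*.yaml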
kubelet and kubectl Upgrade
Next, upgrade kubelet and kubectl.
Use the drain command to move the Pods on this Control Plane node to other nodes.
(Normally, workload Pods are not scheduled on the Control Plane anyway.)
$ kubectl drain k8sm02 --ignore-daemonsets
node/k8sm02 cordoned
Warning: ignoring DaemonSet-managed Pods: kube-flannel/kube-flannel-ds-v7xtc, kube-system/kube-proxy-wvpjj, monitor/prometheus-prometheus-node-exporter-q82x9
node/k8sm02 drained
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8sm01 Ready control-plane 41h v1.27.4
k8sm02 Ready,SchedulingDisabled control-plane 257d v1.27.4
k8sm03 Ready control-plane 257d v1.27.4
k8sw01 Ready <none> 257d v1.30.0
k8sw02 Ready <none> 186d v1.30.0
k8sw03 Ready <none> 257d v1.30.0
You can see that the node is now marked as unschedulable.
Now upgrade kubectl and kubelet.
Upgrade them to 1.28.9-2.1, the same version as kubeadm.
$ apt-mark unhold kubelet kubectl
kubelet was already not hold.
kubectl was already not hold.
$ apt-get install -y kubelet='1.28.9-2.1' kubectl='1.28.9-2.1'
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be upgraded:
kubectl kubelet
2 upgraded, 0 newly installed, 0 to remove and 68 not upgraded.
Need to get 29.9 MB of archives.
After this operation, 3,903 kB of additional disk space will be used.
Get:1 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.28/deb kubectl 1.28.9-2.1 [10.3 MB]
Get:2 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.28/deb kubelet 1.28.9-2.1 [19.5 MB]
Fetched 29.9 MB in 1s (33.1 MB/s)
(Reading database ... 137236 files and directories currently installed.)
Preparing to unpack .../kubectl_1.28.9-2.1_amd64.deb ...
Unpacking kubectl (1.28.9-2.1) over (1.27.4-00) ...
Preparing to unpack .../kubelet_1.28.9-2.1_amd64.deb ...
Unpacking kubelet (1.28.9-2.1) over (1.27.4-00) ...
Setting up kubectl (1.28.9-2.1) ...
Setting up kubelet (1.28.9-2.1) ...
Once the packages are upgraded, restart the kubelet service.
$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet
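Before putting the node back into service, it does not hurt to confirm that the kubelet actually came back on the new binary, and to pin the packages again just as with kubeadm above:
# verify the kubelet restarted on the new version
$ systemctl status kubelet --no-pager
$ kubelet --version
# pin the upgraded packages again
$ sudo apt-mark hold kubelet kubectl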
Finally, make the upgraded Control Plane node schedulable again.
$ kubectl uncordon k8sm02
node/k8sm02 uncordoned
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8sm01 Ready control-plane 41h v1.27.4
k8sm02 Ready control-plane 257d v1.28.9
k8sm03 Ready control-plane 257d v1.27.4
k8sw01 Ready <none> 257d v1.30.0
k8sw02 Ready <none> 186d v1.30.0
k8sw03 Ready <none> 257d v1.30.0
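The apply output above also noted that the other control-plane instances have not been upgraded yet. The same per-node steps (upgrade the packages, drain, restart the kubelet, uncordon) are repeated on each of them; the one difference, as far as the kubeadm flow goes, is that the remaining Control Plane nodes run kubeadm upgrade node instead of apply:
# on each remaining control plane node, after upgrading its kubeadm package
$ sudo kubeadm upgrade node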
Wrap-up
That was a walkthrough of upgrading the Control Plane of a Kubernetes Cluster.
Because these nodes run the critical Pods of the cluster, the upgrade takes more steps than a Worker node.
In my case the cluster has multiple (3) Control Plane nodes, which made the work considerably easier.
Upgrading a cluster with a single Control Plane involves far more things to consider.
If I get the chance, I will cover that in a separate post.