Since `kubectl get nodes` kept failing with connection-refused errors, I did a clean uninstall and reinstall.

Honestly, wiping everything and reinstalling turned out to be the fastest way to recover.

 

Use this when you run into a k8s issue and have no data on the cluster you need to keep~

 

kubeadm reset
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /var/lib/etcd
rm -rf /run/flannel
rm -rf /etc/cni
rm -rf /etc/kubernetes
rm -rf ~/.kube
apt purge kubeadm kubectl kubelet kubernetes-cni
apt autoremove

reboot
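After the reboot, a quick sanity check (read-only; assumes a Debian/Ubuntu host like the one above) confirms everything is really gone:

```shell
# verify the binaries were removed by apt purge
for bin in kubeadm kubectl kubelet; do
  if command -v "$bin" >/dev/null 2>&1; then
    echo "$bin is still installed"
  else
    echo "$bin removed"
  fi
done

# verify the state directories deleted above are gone
ls -d /etc/kubernetes /var/lib/kubelet 2>/dev/null || echo "no leftover directories"
```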

 

Other posts in '기술 노트 > kubernetes'

Installing k8s (ubuntu)  (3) 2024.11.13
kubeadm reset error  (0) 2024.11.03
How to redo kube init  (0) 2024.11.03
When a pod is stuck in Terminating  (0) 2024.11.02
Changing node role labels  (0) 2024.10.31

k8s = k(ubernete)s

I had no idea what "k8s" meant at first — it turns out the 8 just stands for the eight letters between the 'k' and the 's'. So simple once you see it...

 

Since more people seem to read this post than I expected, I've revised it so the commands can be copy-pasted directly.

All commands here are run as root, so they are written without sudo.

Pay close attention to which account you are using when you install.

 

The official installation docs are also worth following:

https://kubernetes.io/ko/docs/setup/production-environment/tools/kubeadm/

 


Environment

ㅇ OS: Ubuntu 24.04.1 LTS

ㅇ node

  - k8s-master: 192.168.0.205

  - k8s-node1: 192.168.0.206

  - k8s-node2: 192.168.0.207

 

 

1. Common setup (target nodes: k8s-master, k8s-node1, k8s-node2)

1. Disable swap

# turn swap off now
swapoff -a

# keep swap disabled across reboots (comments out the /swap line in fstab)
sed -i 's/^\/swap/\#\/swap/g' /etc/fstab
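You can confirm it took effect with a read-only check (command names guarded in case a tool is missing; nothing is modified):

```shell
# no output from swapon --show means swap is off
command -v swapon >/dev/null 2>&1 && swapon --show

# the Swap line in free should read all zeros
command -v free >/dev/null 2>&1 && free -h | grep -i swap

# and the fstab entry should now start with '#'
grep swap /etc/fstab 2>/dev/null || echo "no swap entry in /etc/fstab"
```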

 

2. Kernel modules and sysctl settings

# load the required kernel modules
cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

modprobe overlay
modprobe br_netfilter

# set the sysctl parameters
cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# apply the sysctl parameters
sysctl --system
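A quick read-only check that the modules loaded and the forwarding setting took effect (guarded so it degrades gracefully on a minimal host):

```shell
# both modules should appear
{ command -v lsmod >/dev/null 2>&1 && lsmod | grep -E 'overlay|br_netfilter'; } \
  || echo "modules not visible"

# should report net.ipv4.ip_forward = 1
{ command -v sysctl >/dev/null 2>&1 && sysctl net.ipv4.ip_forward 2>/dev/null; } \
  || echo "sysctl key not available"
```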

 

3. Install a container runtime

(I just installed Docker; these days the containerd that ships with Docker is what actually gets used.)

# install Docker (includes containerd)
apt install -y docker.io

# generate the default containerd config file, then edit it
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml

# flip SystemdCgroup from false to true in the containerd config
sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml

# restart the containerd service
systemctl restart containerd.service
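It's worth double-checking that the sed actually flipped the flag before moving on (read-only):

```shell
# should print a line containing "SystemdCgroup = true"
grep -n 'SystemdCgroup' /etc/containerd/config.toml 2>/dev/null \
  || echo "config.toml not found"

# service should report "active"
command -v systemctl >/dev/null 2>&1 && systemctl is-active containerd || true
```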

 

4. Install Kubernetes

# packages needed to use the Kubernetes apt repository
apt install -y apt-transport-https ca-certificates curl

# download the repository's public signing key (I used v1.29; I'd avoid the very latest release)
mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# add the apt repository for the same version
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list

# refresh the package index
apt update

# install the Kubernetes components
apt install -y kubelet kubeadm kubectl

# hold the packages at this version
apt-mark hold kubelet kubeadm kubectl
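Verify the tools installed and are pinned (guarded so the check degrades gracefully on a machine without them):

```shell
command -v kubeadm >/dev/null 2>&1 && kubeadm version -o short || echo "kubeadm not found"
command -v kubectl >/dev/null 2>&1 && kubectl version --client || echo "kubectl not found"

# kubelet, kubeadm, and kubectl should all be listed as held
command -v apt-mark >/dev/null 2>&1 && apt-mark showhold || true
```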

 

 

2. Per-node Kubernetes setup

1. Master node setup (target node: k8s-master)

# initialize Kubernetes and set the pod network CIDR
kubeadm init --apiserver-advertise-address=192.168.0.205 --pod-network-cidr=10.1.0.0/16

# from the output of the command above, configure kubectl on the master node
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
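At this point the node will show as NotReady — that's expected until a CNI (Calico, below) is installed. A quick check, guarded in case kubectl isn't set up yet:

```shell
if command -v kubectl >/dev/null 2>&1; then
  # NotReady here is normal before a CNI is installed
  kubectl get nodes
  # the coredns pods stay Pending until the CNI is up
  kubectl get pods -n kube-system
else
  echo "kubectl not installed"
fi
```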

 

2. Worker node setup (target nodes: k8s-node1, k8s-node2)

# join values printed by `kubeadm init --apiserver-advertise-address=192.168.0.205 --pod-network-cidr=10.1.0.0/16` on the master
kubeadm join 192.168.0.205:6443 --token rv50n5.cj04tpgfr9eqm9x4 \
--discovery-token-ca-cert-hash sha256:cef9cb71a8fa8ac2943f6252b0f7c3777cbbd2a5b4134e96755c9f2cf3c5a250

 


Reference 1: output on the master node

root@k8s-master:~# kubeadm init --apiserver-advertise-address=192.168.0.205 --pod-network-cidr=10.1.0.0/16
I1113 00:17:29.745593    5069 version.go:256] remote version is much newer: v1.31.2; falling back to: stable-1.29
[init] Using Kubernetes version: v1.29.10
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W1113 00:17:46.621371    5069 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.205]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.0.205 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.0.205 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 5.501103 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: rv50n5.cj04tpgfr9eqm9x4
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.205:6443 --token rv50n5.cj04tpgfr9eqm9x4 \
--discovery-token-ca-cert-hash sha256:cef9cb71a8fa8ac2943f6252b0f7c3777cbbd2a5b4134e96755c9f2cf3c5a250 
root@k8s-master:~#

Reference 2: run on node1 and node2

root@k8s-node1:~# kubeadm join 192.168.0.205:6443 --token rv50n5.cj04tpgfr9eqm9x4 \
        --discovery-token-ca-cert-hash sha256:cef9cb71a8fa8ac2943f6252b0f7c3777cbbd2a5b4134e96755c9f2cf3c5a250
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

root@k8s-node1:~#

 

3. Install Calico (target node: k8s-master)

See the official docs below:

https://docs.tigera.io/calico/latest/getting-started/kubernetes/quickstart

 


kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.0/manifests/tigera-operator.yaml
wget https://raw.githubusercontent.com/projectcalico/calico/v3.29.0/manifests/custom-resources.yaml

# change the CIDR below to whatever you passed as --pod-network-cidr
# (here: kubeadm init --apiserver-advertise-address=192.168.0.205 --pod-network-cidr=10.1.0.0/16)
sed -i 's/192.168.0.0\/16/10.1.0.0\/16/g' ./custom-resources.yaml

kubectl create -f ./custom-resources.yaml
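Per the Calico quickstart, the operator creates pods in the calico-system namespace; once they are all Running, the nodes flip to Ready. A guarded check:

```shell
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods -n calico-system
  kubectl get nodes
else
  echo "kubectl not installed"
fi
```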

 


If `kubeadm reset` fails with logs like the ones below:

root@kube-master-node:~# kubeadm reset
W1103 01:58:36.260832    3368 preflight.go:56] [reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1103 01:58:37.907191    3368 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W1103 01:58:49.601400    3368 cleanupnode.go:105] [reset] Failed to remove containers: failed to stop running pod 6b252a18cbc126dfa84ab65398a3fafd8c286fcee46aa4fad81cd1c6b4c2a9f4: rpc error: code = Unknown desc = failed to stop container "93f58c2fc0ce6a5914f62c94a9c2b4ce8d734e265c882fd3694a5b2e25face7d": failed to kill container "93f58c2fc0ce6a5914f62c94a9c2b4ce8d734e265c882fd3694a5b2e25face7d": unknown error after kill: runc did not terminate successfully: exit status 1: unable to signal init: permission denied
: unknown
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
root@kube-master-node:~#

 

 

Run the reset again with the command below, passing the CRI socket explicitly:

root@kube-master-node:~# kubeadm reset --cri-socket unix:///run/containerd/containerd.sock -v 5
I1103 02:01:20.182111    3547 reset.go:126] [reset] Could not obtain a client set from the kubeconfig file: /etc/kubernetes/admin.conf
W1103 02:01:20.182155    3547 preflight.go:56] [reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
I1103 02:01:22.840828    3547 removeetcdmember.go:60] [reset] Checking for etcd config
W1103 02:01:22.840847    3547 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
I1103 02:01:22.840896    3547 cleanupnode.go:65] [reset] Getting init system
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
I1103 02:01:22.844032    3547 cleanupnode.go:103] [reset] Removing Kubernetes-managed containers
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
root@kube-master-node:~#

kubeadm init --apiserver-advertise-address=<master node IP> --pod-network-cidr=<pod network CIDR>
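Filled in with the example values used elsewhere in these notes (master IP 192.168.0.205, pod CIDR 10.1.0.0/16 — adjust for your network), and guarded so it only runs where kubeadm exists:

```shell
if command -v kubeadm >/dev/null 2>&1; then
  # wipe the previous control plane before re-initializing
  kubeadm reset -f
  kubeadm init --apiserver-advertise-address=192.168.0.205 --pod-network-cidr=10.1.0.0/16
else
  echo "kubeadm not installed; run this on the master node"
fi
```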

 

 

# check pod names and status
root@kube-master-node:~/demo# kubectl get all -o wide


# when a pod is stuck in Terminating and never exits
# command: kubectl delete pods <pod name> --grace-period=0 --force
root@kube-master-node:~/demo# kubectl delete pods nginx-deployment-5d56d949b-ghfvd --grace-period=0 --force
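To find every stuck pod first (rather than spotting them one by one), a simple grep over all namespaces works; assumes a reachable cluster:

```shell
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods -A | grep Terminating || echo "no Terminating pods"
else
  echo "kubectl not installed"
fi
```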

 

Before deleting the pod

 

After deleting the pod

root@kube-master-node:/kube# kubectl get no
NAME               STATUS   ROLES           AGE   VERSION
kube-master-node   Ready    control-plane   80m   v1.31.2
kube-worker1       Ready    <none>          54m   v1.31.2
kube-worker2       Ready    <none>          54m   v1.31.2
root@kube-master-node:/kube# kubectl label node kube-worker1 node-role.kubernetes.io/worker=
node/kube-worker1 labeled
root@kube-master-node:/kube# kubectl get no
NAME               STATUS   ROLES           AGE   VERSION
kube-master-node   Ready    control-plane   82m   v1.31.2
kube-worker1       Ready    worker          55m   v1.31.2
kube-worker2       Ready    <none>          55m   v1.31.2
root@kube-master-node:/kube# kubectl label node kube-worker2 node-role.kubernetes.io/worker=
node/kube-worker2 labeled
root@kube-master-node:/kube# kubectl get no
NAME               STATUS   ROLES           AGE   VERSION
kube-master-node   Ready    control-plane   82m   v1.31.2
kube-worker1       Ready    worker          56m   v1.31.2
kube-worker2       Ready    worker          56m   v1.31.2
root@kube-master-node:/kube# kubectl label node kube-master-node node-role.kubernetes.io/master=
node/kube-master-node labeled
root@kube-master-node:/kube# kubectl get no
NAME               STATUS   ROLES                  AGE   VERSION
kube-master-node   Ready    control-plane,master   82m   v1.31.2
kube-worker1       Ready    worker                 56m   v1.31.2
kube-worker2       Ready    worker                 56m   v1.31.2
root@kube-master-node:/kube#
root@kube-master-node:/kube# kubectl label node kube-worker1 node-role.kubernetes.io/worker-
node/kube-worker1 unlabeled
root@kube-master-node:/kube# kubectl get no
NAME               STATUS   ROLES                  AGE   VERSION
kube-master-node   Ready    control-plane,master   88m   v1.31.2
kube-worker1       Ready    <none>                 61m   v1.31.2
kube-worker2       Ready    worker                 61m   v1.31.2
root@kube-master-node:/kube# kubectl label node kube-worker1 node-role.kubernetes.io/worker=
node/kube-worker1 labeled
root@kube-master-node:/kube# kubectl get no
NAME               STATUS   ROLES                  AGE   VERSION
kube-master-node   Ready    control-plane,master   90m   v1.31.2
kube-worker1       Ready    worker                 63m   v1.31.2
kube-worker2       Ready    worker                 63m   v1.31.2
root@kube-master-node:/kube#


minikube is a tool that makes it quick and easy to set up a local Kubernetes cluster on your own machine.

 

1. Install minikube

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64
sudo install minikube-darwin-amd64 /usr/local/bin/minikube

 

2. Install docker (driver alternatives: docker, virtualbox, hyperkit, parallels, qemu2, podman)

Install Docker Desktop:

https://docs.docker.com/desktop/install/mac-install/

 

3. Launch Docker Desktop

 

4. Start minikube

minikube start
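Once it finishes, a quick status check (guarded in case minikube isn't on PATH):

```shell
if command -v minikube >/dev/null 2>&1; then
  minikube status
  kubectl get nodes
else
  echo "minikube not installed"
fi
```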

 

See the site below for details:

https://minikube.sigs.k8s.io/docs/start/

 


https://minikube.sigs.k8s.io/docs/tutorials/docker_desktop_replacement/

 


 


1. Download the binary

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl"

 

2. Make it executable

chmod +x ./kubectl

 

3. Move it onto your PATH

sudo mv ./kubectl /usr/local/bin/kubectl
sudo chown root: /usr/local/bin/kubectl
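Confirm the binary is on PATH and runs (this mirrors the verification step in the official install guide):

```shell
if command -v kubectl >/dev/null 2>&1; then
  kubectl version --client
else
  echo "kubectl not on PATH"
fi
```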

 

4. Clean up the downloaded files

# ./kubectl was already moved in step 3; only the checksum file
# (if you downloaded one for validation) is left to remove
rm -f kubectl.sha256

 

 

You can also follow the official guide instead:

https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/
