k8s == k8(ubernete)s

At first I had no idea what "k8s" stood for. It turns out it is just "kubernetes" with the eight letters between the k and the s abbreviated... so obvious in hindsight....

 

Since more people seem to be reading this than I expected, I have reworked the commands so they can be copied and pasted as-is.

All commands are run as root, so sudo is not used.

Be sure to check which account you are using before installing.

 

See also the official installation guide:

https://kubernetes.io/ko/docs/setup/production-environment/tools/kubeadm/

 


Environment

- OS: Ubuntu 24.04.1 LTS

- Nodes

  - k8s-master: 192.168.0.205

  - k8s-node1: 192.168.0.206

  - k8s-node2: 192.168.0.207

 

 

1. Common setup (target nodes: k8s-master, k8s-node1, k8s-node2)

1. Disable swap

# Disable swap immediately
swapoff -a

# Keep swap disabled across reboots by commenting out the swap entry in /etc/fstab
sed -i 's/^\/swap/#\/swap/g' /etc/fstab
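
To confirm swap is actually off, you can check that no swap devices remain active:

# Prints nothing if swap is fully disabled
swapon --show

# The Swap line should show 0B everywhere
free -h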

 

2. Kernel modules and sysctl settings

# Load the required kernel modules
cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

modprobe overlay
modprobe br_netfilter

# Set the required sysctl parameters
cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply the sysctl parameters without a reboot
sysctl --system
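
You can verify that the modules are loaded and the parameters took effect:

# Both modules should be listed
lsmod | grep -E 'overlay|br_netfilter'

# All three values should print as 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward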

 

3. Install a container runtime

(I simply installed Docker. These days Kubernetes uses containerd, which is pulled in along with Docker.)

# Install Docker (containerd comes with it)
apt install -y docker.io

# Generate the default containerd configuration file
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml

# Change SystemdCgroup from false to true in the containerd configuration
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

# Restart the containerd service
systemctl restart containerd.service
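
If you want to double-check that the cgroup driver change was picked up, something like this should do:

# Should now print "SystemdCgroup = true"
grep SystemdCgroup /etc/containerd/config.toml

# containerd should report "active"
systemctl is-active containerd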

 

4. Install Kubernetes

# Install the packages needed to use the Kubernetes apt repository
apt install -y apt-transport-https ca-certificates curl

# Download the public signing key for the Kubernetes package repositories
# (I went with version 1.29; I don't recommend the very latest release)
mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# Add the matching v1.29 apt repository
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list

# Refresh the apt package index
apt update

# Install the Kubernetes components
apt install -y kubelet kubeadm kubectl

# Pin the versions so they are not upgraded accidentally
apt-mark hold kubelet kubeadm kubectl
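
As a quick sanity check, you can confirm the installed versions and that the packages are pinned:

# Check the installed versions
kubeadm version
kubectl version --client

# kubelet, kubeadm and kubectl should all be listed as held
apt-mark showhold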

 

 

2. Per-node Kubernetes setup

1. Master node setup (target node: k8s-master)

# Initialize the cluster and set the pod network CIDR
kubeadm init --apiserver-advertise-address=192.168.0.205 --pod-network-cidr=10.1.0.0/16

# Configure kubeconfig on the master node, as instructed in the output of the command above
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
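
At this point kubectl on the master should be able to reach the cluster. Note that the node will report NotReady until a pod network add-on is installed (Calico, in step 3 below):

# The master shows up in NotReady state until the CNI is deployed
kubectl get nodes

# CoreDNS pods remain Pending for the same reason
kubectl get pods -n kube-system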

 

2. Worker node setup (target nodes: k8s-node1, k8s-node2)

# Run the join command printed by kubeadm init on the master
# (kubeadm init --apiserver-advertise-address=192.168.0.205 --pod-network-cidr=10.1.0.0/16)
kubeadm join 192.168.0.205:6443 --token rv50n5.cj04tpgfr9eqm9x4 \
--discovery-token-ca-cert-hash sha256:cef9cb71a8fa8ac2943f6252b0f7c3777cbbd2a5b4134e96755c9f2cf3c5a250
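
The bootstrap token above is only an example and expires after 24 hours by default. If the token has expired or you have lost the join command, you can generate a fresh one on the master:

# Prints a complete, ready-to-run kubeadm join command with a new token
kubeadm token create --print-join-command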

 


Reference 1: kubeadm init output on the master node

root@k8s-master:~# kubeadm init --apiserver-advertise-address=192.168.0.205 --pod-network-cidr=10.1.0.0/16
I1113 00:17:29.745593    5069 version.go:256] remote version is much newer: v1.31.2; falling back to: stable-1.29
[init] Using Kubernetes version: v1.29.10
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W1113 00:17:46.621371    5069 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.205]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.0.205 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.0.205 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 5.501103 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: rv50n5.cj04tpgfr9eqm9x4
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.205:6443 --token rv50n5.cj04tpgfr9eqm9x4 \
--discovery-token-ca-cert-hash sha256:cef9cb71a8fa8ac2943f6252b0f7c3777cbbd2a5b4134e96755c9f2cf3c5a250 
root@k8s-master:~#

Reference 2: kubeadm join output on node1 and node2

root@k8s-node1:~# kubeadm join 192.168.0.205:6443 --token rv50n5.cj04tpgfr9eqm9x4 \
        --discovery-token-ca-cert-hash sha256:cef9cb71a8fa8ac2943f6252b0f7c3777cbbd2a5b4134e96755c9f2cf3c5a250
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

root@k8s-node1:~#

 

3. Install Calico (target node: k8s-master)

See the official Calico documentation below.

https://docs.tigera.io/calico/latest/getting-started/kubernetes/quickstart

 


kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.0/manifests/tigera-operator.yaml
wget https://raw.githubusercontent.com/projectcalico/calico/v3.29.0/manifests/custom-resources.yaml

# Change the default CIDR in the manifest to the --pod-network-cidr value used during init:
# kubeadm init --apiserver-advertise-address=192.168.0.205 --pod-network-cidr=10.1.0.0/16
sed -i 's/192.168.0.0\/16/10.1.0.0\/16/g' ./custom-resources.yaml

kubectl create -f ./custom-resources.yaml
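
Once the Calico pods come up, all nodes should switch to Ready. You can watch the rollout and then confirm the node status:

# Wait until every calico-system pod is Running
watch kubectl get pods -n calico-system

# All three nodes should eventually report Ready
kubectl get nodes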

 
