Install a Kubernetes Cluster on Ubuntu 20.04 Using kubeadm


Kubernetes is a tool for orchestrating and managing Docker containers at scale, on on-premises servers or across hybrid cloud environments.
Kubeadm is a tool shipped with Kubernetes that helps users install a production-ready Kubernetes cluster following best practices.
This tutorial demonstrates how to install a Kubernetes cluster on Ubuntu 20.04 using kubeadm.

A Kubernetes cluster deployment uses two server types:

Master: the Kubernetes master is the node that executes the control API calls for the pods, replication controllers, services, nodes, and other components of the cluster.
Node: a node provides the runtime environment for containers. A set of container pods can span multiple nodes.

The minimum requirements for a viable setup are:

Memory: 2 GiB or more of RAM per machine
CPU: at least 2 CPUs on control plane machines
Internet connectivity for pulling container images (a private registry can also be used)
Full network connectivity between all machines in the cluster, whether the network is private or public

Installing a Kubernetes Cluster on Ubuntu 20.04

My lab setup consists of three servers: one control plane machine and two nodes for running containerized workloads.
You can add more nodes to suit your use case and load, for example three control plane nodes for HA.

Server Type   Server Hostname              Specification
Master        k8s-master01.theitroad.com   4GB RAM, 2 vCPUs
Worker        k8s-worker01.theitroad.com   4GB RAM, 2 vCPUs
Worker        k8s-worker02.theitroad.com   4GB RAM, 2 vCPUs

Step 1: Install Kubernetes Servers

Provision the Ubuntu 20.04 servers to be used in the Kubernetes deployment.
The setup process will vary depending on the virtualization or cloud environment you are using.
Once the servers are ready, update them.

sudo apt update
sudo apt -y upgrade && sudo systemctl reboot

Step 2: Install kubelet, kubeadm and kubectl

Once the servers have rebooted, add the Kubernetes repository for Ubuntu 20.04 on all of the servers.

sudo apt update
sudo apt -y install curl apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

Then install the required packages.

sudo apt update
sudo apt -y install vim git curl wget kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

Confirm the installation by checking the versions of kubectl and kubeadm.

$kubectl version --client && kubeadm version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2017-05-20T12:52:00Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2017-05-20T12:49:29Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

Step 3: Disable Swap

Turn off swap.

sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
sudo swapoff -a
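
You can confirm that swap is now disabled; the Swap line should show all zeros:

free -h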

Load the required kernel modules and configure sysctl.

sudo modprobe overlay
sudo modprobe br_netfilter
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system

Step 4: Install a Container Runtime

Kubernetes uses a container runtime to run containers in pods. The supported container runtimes are:

Docker
CRI-O
Containerd

Note: you must choose one runtime at a time.

Install the Docker runtime:

# Add repo and Install packages
sudo apt update
sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update
sudo apt install -y containerd.io docker-ce docker-ce-cli
# Create required directories
sudo mkdir -p /etc/systemd/system/docker.service.d
# Create daemon json config file
sudo tee /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
# Start and enable Services
sudo systemctl daemon-reload 
sudo systemctl restart docker
sudo systemctl enable docker

Install CRI-O:

# Ensure you load modules
sudo modprobe overlay
sudo modprobe br_netfilter
# Set up required sysctl params
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Reload sysctl
sudo sysctl --system
# Add repo
. /etc/os-release
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list"
wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/xUbuntu_${VERSION_ID}/Release.key -O- | sudo apt-key add -
sudo apt update
# Install CRI-O
sudo apt install cri-o-1.17
# Start and enable Service
sudo systemctl daemon-reload
sudo systemctl start crio
sudo systemctl enable crio

Install Containerd:

# Configure persistent loading of modules
sudo tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
# Load at runtime
sudo modprobe overlay
sudo modprobe br_netfilter
# Ensure sysctl params are set
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Reload configs
sudo sysctl --system
# Install required packages
sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates

# Add Docker repo
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add 
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# Install containerd
sudo apt update
sudo apt install -y containerd.io
# Configure containerd and start service
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
# restart containerd
sudo systemctl restart containerd
sudo systemctl enable containerd

To use the systemd cgroup driver, set plugins.cri.systemd_cgroup = true in /etc/containerd/config.toml.
When using kubeadm, also manually configure the cgroup driver for kubelet.
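
A minimal sketch of both changes, assuming the generated config still contains the default systemd_cgroup = false entry; the kubeadm-config.yaml file name and its contents are illustrative (the API versions shown match Kubernetes v1.18):

# Flip the cgroup driver flag in the generated containerd config
# (assumes the default "systemd_cgroup = false" line is present)
sudo sed -i 's/systemd_cgroup = false/systemd_cgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
# Pass the matching cgroup driver to kubelet through a kubeadm config file (hypothetical file name)
cat <<EOF | sudo tee kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.3
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
# Then initialize with: sudo kubeadm init --config kubeadm-config.yaml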

Step 5: Initialize the Master Node

Log in to the server that will be used as the master and make sure the br_netfilter module is loaded:

$lsmod | grep br_netfilter
br_netfilter           22256  0 
bridge                151336  2 br_netfilter,ebtable_broute

Enable the kubelet service.

sudo systemctl enable kubelet

Now initialize the machine that will run the control plane components, which include etcd (the cluster database) and the API server. Pull the container images:

$sudo kubeadm config images pull
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.18.3
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.18.3
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.18.3
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.18.3
[config/images] Pulled k8s.gcr.io/pause:3.2
[config/images] Pulled k8s.gcr.io/etcd:3.4.3-0
[config/images] Pulled k8s.gcr.io/coredns:1.6.7

These are the basic kubeadm init options used to bootstrap the cluster.

--control-plane-endpoint : set the shared endpoint for all control-plane nodes. Can be a DNS name or an IP address
--pod-network-cidr : used to set the Pod network add-on CIDR
--cri-socket : use if you have more than one container runtime, to set the runtime socket path
--apiserver-advertise-address : set the advertise address for this particular control-plane node's API server

Set a DNS name for the cluster endpoint, or add a record to the /etc/hosts file.

$sudo vim /etc/hosts
172.29.20.5 k8s-cluster.theitroad.com

Create the cluster:

sudo kubeadm init \
  --pod-network-cidr=192.168.0.0/16 \
  --control-plane-endpoint=k8s-cluster.theitroad.com

Note: if 192.168.0.0/16 is already in use within your network, you must select a different pod network CIDR and replace 192.168.0.0/16 in the command above.
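
For example, a sketch with an alternative range (10.10.0.0/16 here is only illustrative; choose a range that is unused in your network):

sudo kubeadm init \
  --pod-network-cidr=10.10.0.0/16 \
  --control-plane-endpoint=k8s-cluster.theitroad.com

Note that the Calico manifest applied later defaults to 192.168.0.0/16, so changing the CIDR may also require adjusting the network plugin's configuration.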

The default socket paths for the supported container runtimes are:

Runtime       Socket path
Docker        /var/run/docker.sock
containerd    /run/containerd/containerd.sock
CRI-O         /var/run/crio/crio.sock

Depending on your runtime, you can optionally pass the socket path and an advertise address to kubeadm, as in the sketch below.
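
A sketch combining the options above, assuming containerd as the runtime and reusing the endpoint IP from the /etc/hosts entry as the advertise address (adapt both to your environment):

sudo kubeadm init \
  --pod-network-cidr=192.168.0.0/16 \
  --control-plane-endpoint=k8s-cluster.theitroad.com \
  --cri-socket /run/containerd/containerd.sock \
  --apiserver-advertise-address=172.29.20.5

Here is the output of my initialization command: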

....
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
	[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0611 22:34:23.276374    4726 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0611 22:34:23.278380    4726 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 8.008181 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master01.theitroad.com as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master01.theitroad.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: zoy8cq.6v349sx9ass8dzyj
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
  kubeadm join k8s-cluster.theitroad.com:6443 --token sr4l2l.2kvot0pfalh5o4ik \
    --discovery-token-ca-cert-hash sha256:c692fb047e15883b575bd6710779dc2c5af8073f7cab460abd181fd3ddb29a18 \
    --control-plane 
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join k8s-cluster.theitroad.com:6443 --token sr4l2l.2kvot0pfalh5o4ik \
    --discovery-token-ca-cert-hash sha256:c692fb047e15883b575bd6710779dc2c5af8073f7cab460abd181fd3ddb29a18

Configure kubectl using the commands in the output:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
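
Alternatively, if you are the root user, you can point kubectl at the admin kubeconfig directly:

export KUBECONFIG=/etc/kubernetes/admin.conf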

Check the cluster status:

$kubectl cluster-info
Kubernetes master is running at https://k8s-cluster.theitroad.com:6443
KubeDNS is running at https://k8s-cluster.theitroad.com:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Additional control plane nodes can be added using the command from the installation output (certificate authorities and service account keys must first be copied to each node, as noted in that output):

kubeadm join k8s-cluster.theitroad.com:6443 --token sr4l2l.2kvot0pfalh5o4ik \
    --discovery-token-ca-cert-hash sha256:c692fb047e15883b575bd6710779dc2c5af8073f7cab460abd181fd3ddb29a18 \
    --control-plane

Step 6: Install a Network Plugin on the Master

In this tutorial we will use Calico.
You can choose any other supported network plugin.

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

You should see output similar to the following.

configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

Confirm that all of the pods are running:

$watch kubectl get pods --all-namespaces
NAMESPACE     NAME                                                         READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-76d4774d89-nfqrr                     1/1     Running   0          2m52s
kube-system   calico-node-kpprr                                            1/1     Running   0          2m52s
kube-system   coredns-66bff467f8-9bxgm                                     1/1     Running   0          7m43s
kube-system   coredns-66bff467f8-jgwln                                     1/1     Running   0          7m43s
kube-system   etcd-k8s-master01.theitroad.com                      1/1     Running   0          7m58s
kube-system   kube-apiserver-k8s-master01.theitroad.com            1/1     Running   0          7m58s
kube-system   kube-controller-manager-k8s-master01.theitroad.com   1/1     Running   0          7m58s
kube-system   kube-proxy-bt7ff                                             1/1     Running   0          7m43s
kube-system   kube-scheduler-k8s-master01.theitroad.com            1/1     Running   0          7m58s

Confirm that the master node is ready:

$kubectl get nodes -o wide
NAME           STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION     CONTAINER-RUNTIME
k8s-master01   Ready    master   64m   v1.18.3   135.181.28.113   <none>        Ubuntu 20.04 LTS   5.4.0-37-generic   docker://19.3.11

Step 7: Add Worker Nodes

With the control plane ready, you can add worker nodes to the cluster for running scheduled workloads.
If the endpoint address is not in DNS, add a record to /etc/hosts.

$sudo vim /etc/hosts
172.29.20.5 k8s-cluster.theitroad.com

The join command given in the init output is used to add a worker node to the cluster.

kubeadm join k8s-cluster.theitroad.com:6443 \
  --token sr4l2l.2kvot0pfalh5o4ik \
  --discovery-token-ca-cert-hash sha256:c692fb047e15883b575bd6710779dc2c5af8073f7cab460abd181fd3ddb29a18

Output:

[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run the following command on the control plane to see whether the node joined the cluster.

$kubectl get nodes
NAME                                 STATUS   ROLES    AGE   VERSION
k8s-master01.theitroad.com   Ready    master   10m   v1.18.3
k8s-worker01.theitroad.com   Ready    <none>   50s   v1.18.3
k8s-worker02.theitroad.com   Ready    <none>   12s   v1.18.3
$kubectl get nodes -o wide

If the join token has expired, refer to our tutorial on joining worker nodes:
Join New Kubernetes Worker Nodes to an Existing Cluster
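
In short, a replacement join command can be generated on the control plane at any time:

# Creates a fresh bootstrap token and prints the full kubeadm join command
sudo kubeadm token create --print-join-command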

Step 8: Deploy an Application on the Cluster

Verify that the cluster is working by deploying an application.

kubectl apply -f https://k8s.io/examples/pods/commands.yaml

Check whether the pod started:

$kubectl get pods
NAME           READY   STATUS      RESTARTS   AGE
command-demo   0/1     Completed   0          16s

Step 9: Install the Kubernetes Dashboard (Optional)

The Kubernetes Dashboard can be used to deploy containerized applications to the Kubernetes cluster, troubleshoot those applications, and manage cluster resources.
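
As a starting point, the dashboard can be deployed from its recommended manifest; the v2.0.0 tag below is an assumption suited to Kubernetes v1.18, so check the dashboard releases page for the version matching your cluster:

# Deploy the Kubernetes Dashboard (version tag is an assumption; verify compatibility first)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml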