Install a Kubernetes Cluster on CentOS 7 Using kubeadm
This tutorial shows how to deploy a minimal Kubernetes cluster on CentOS 7 using the kubeadm tool.
kubeadm is a command-line tool designed to help users bootstrap a Kubernetes cluster that follows best practices.
The tool also supports cluster lifecycle functions such as bootstrap tokens and cluster upgrades.
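For instance, once a cluster is up, kubeadm exposes these lifecycle features directly from the command line. A brief sketch only; the upgrade target version below is an example, not a recommendation:

# Manage bootstrap tokens on the control-plane node
sudo kubeadm token list
sudo kubeadm token create

# Plan and apply a cluster upgrade (target version is an example)
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.18.4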
Installing a Kubernetes Cluster on CentOS 7
The next section walks through the process of deploying a minimal Kubernetes cluster on CentOS 7 servers.
This installation is for a single control-plane cluster.
We have other tutorials on deploying highly available Kubernetes clusters with RKE and Kubespray.
Step 1: Prepare the Kubernetes Servers
The minimum requirements for the servers used in the cluster are:
2 GiB or more of RAM per machine; any less leaves little room for your applications.
At least 2 CPUs on the machine used as a control-plane node.
Full network connectivity between all machines in the cluster; a private or public network is fine.
This setup is for development purposes. My servers have the following details:
Server Type | Server Hostname | Specifications |
Master | k8s-master01.theitroad.com | 4GB RAM, 2 vCPUs |
Worker | k8s-worker01.theitroad.com | 4GB RAM, 2 vCPUs |
Worker | k8s-worker02.theitroad.com | 4GB RAM, 2 vCPUs |
Log in to all the servers and update the operating system.
sudo yum -y update && sudo systemctl reboot
Step 2: Install kubelet, kubeadm and kubectl
After the servers reboot, add the Kubernetes repository for CentOS 7 to all of the servers.
sudo tee /etc/yum.repos.d/kubernetes.repo<<EOF
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Then install the required packages.
sudo yum -y install epel-release vim git curl wget kubelet kubeadm kubectl --disableexcludes=kubernetes
Confirm the installation by checking the kubectl version.
$kubectl version --client
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:52:00Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Step 3: Disable SELinux and Swap
If SELinux is in enforcing mode, turn it off or switch it to permissive mode.
sudo setenforce 0
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config
Turn off swap.
sudo sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
sudo swapoff -a
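Before moving on, you can optionally confirm that swap is really disabled:

# Swap should report 0B used and 0B total
free -h

# Should print nothing when no swap devices are active
sudo swapon --show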
Configure sysctl.
sudo modprobe overlay
sudo modprobe br_netfilter

sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system
Step 4: Install a Container Runtime
Kubernetes uses a container runtime to run containers in Pods.
The supported container runtimes are Docker, CRI-O, and containerd. Note: choose only one runtime.
Install the Docker runtime:
# Install packages
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum update -y && sudo yum install -y containerd.io-1.2.13 docker-ce-19.03.8 docker-ce-cli-19.03.8

# Create required directories
sudo mkdir /etc/docker
sudo mkdir -p /etc/systemd/system/docker.service.d

# Create daemon json config file
sudo tee /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

# Start and enable services
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker
Install CRI-O:
# Ensure you load modules
sudo modprobe overlay
sudo modprobe br_netfilter

# Set up required sysctl params
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Reload sysctl
sudo sysctl --system

# Add repos
sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/CentOS_7/devel:kubic:libcontainers:stable.repo
sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:1.18.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:1.18/CentOS_7/devel:kubic:libcontainers:stable:cri-o:1.18.repo

# Install CRI-O
sudo yum install -y cri-o

# Start and enable service
sudo systemctl daemon-reload
sudo systemctl start crio
sudo systemctl enable crio
Install containerd:
# Configure persistent loading of modules
sudo tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF

# Load at runtime
sudo modprobe overlay
sudo modprobe br_netfilter

# Ensure sysctl params are set
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Reload configs
sudo sysctl --system

# Install required packages
sudo yum install -y yum-utils device-mapper-persistent-data lvm2

# Add Docker repo
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# Install containerd
sudo yum update -y && sudo yum install -y containerd.io

# Configure containerd and start the service
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

# Restart and enable containerd
sudo systemctl restart containerd
sudo systemctl enable containerd
To use the systemd cgroup driver, set plugins.cri.systemd_cgroup = true in /etc/containerd/config.toml.
When using kubeadm, configure the cgroup driver for the kubelet manually as well; a sketch of both follows.
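As a rough sketch only: the exact TOML key and its location differ between containerd versions, and the kubeadm config file name below is hypothetical. The containerd setting can be flipped in place, and the kubelet driver declared through a KubeletConfiguration passed to kubeadm init:

# containerd: enable the systemd cgroup driver (older CRI plugin key shown;
# newer containerd releases use SystemdCgroup = true under the runc options table)
sudo sed -i 's/systemd_cgroup = false/systemd_cgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd

# kubelet: declare the matching cgroup driver in a kubeadm config file
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.3
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF

# Use the file when initializing the control plane in Step 6:
#   sudo kubeadm init --config kubeadm-config.yaml
# Note: kubeadm does not allow mixing --config with most command-line flags,
# so values such as the Pod CIDR and control-plane endpoint would then go into ClusterConfiguration.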
Step 5: Configure the Firewall
If you have an active firewall service, a number of ports need to be opened.
Master server ports:
sudo firewall-cmd --add-port={6443,2379-2380,10250,10251,10252,179,5473}/tcp --permanent
sudo firewall-cmd --add-port={4789,8285,8472}/udp --permanent
sudo firewall-cmd --reload
Worker node ports:
sudo firewall-cmd --add-port={10250,30000-32767,179,5473}/tcp --permanent
sudo firewall-cmd --add-port={4789,8285,8472}/udp --permanent
sudo firewall-cmd --reload
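To confirm the rules were applied on a node, you can list the ports that are now open:

# Show the ports now open in the default zone
sudo firewall-cmd --list-ports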
Step 6: Initialize the Control-Plane Node
Log in to the server that will be used as the master and make sure the br_netfilter module is loaded:
$lsmod | grep br_netfilter
br_netfilter           22256  0
bridge                151336  2 br_netfilter,ebtable_broute
Enable the kubelet service.
sudo systemctl enable kubelet
Now initialize the machine that will run the control-plane components, which include etcd (the cluster database) and the API Server. Pull the container images:
$sudo kubeadm config images pull
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.18.3
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.18.3
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.18.3
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.18.3
[config/images] Pulled k8s.gcr.io/pause:3.2
[config/images] Pulled k8s.gcr.io/etcd:3.4.3-0
[config/images] Pulled k8s.gcr.io/coredns:1.6.7
These are the basic kubeadm init options used to bootstrap the cluster.
--control-plane-endpoint : sets the shared endpoint for all control-plane nodes; can be a DNS name or an IP
--pod-network-cidr : sets the Pod network add-on CIDR
--cri-socket : sets the runtime socket path; use it if you have more than one container runtime
--apiserver-advertise-address : sets the advertise address for this particular control-plane node's API server
Set a DNS name for the cluster endpoint or add a record to the /etc/hosts file.
$sudo vim /etc/hosts

172.29.20.5 k8s-cluster.theitroad.com
Create the cluster:
sudo kubeadm init \
  --pod-network-cidr=192.168.0.0/16 \
  --control-plane-endpoint=k8s-cluster.theitroad.com
Note: If 192.168.0.0/16 is already in use on your network, you must choose a different Pod network CIDR and replace 192.168.0.0/16 in the command above.
Container Runtime | Socket Path |
Docker | /var/run/docker.sock |
containerd | /run/containerd/containerd.sock |
CRI-O | /var/run/crio/crio.sock |
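If more than one runtime is installed, or you want the API server to advertise a specific address, those values can be passed explicitly using the socket paths from the table above. A sketch only; the CRI-O socket and the IP shown here are examples, not required values:

sudo kubeadm init \
  --pod-network-cidr=192.168.0.0/16 \
  --control-plane-endpoint=k8s-cluster.theitroad.com \
  --cri-socket /var/run/crio/crio.sock \
  --apiserver-advertise-address 172.29.20.5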
As shown in the sketch above, the runtime socket file and the advertise address can be passed as needed. Here is the output of my initialization command:
....
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
	[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0611 22:34:23.276374    4726 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0611 22:34:23.278380    4726 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 8.008181 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master01.theitroad.com as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master01.theitroad.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: zoy8cq.6v349sx9ass8dzyj
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join k8s-cluster.theitroad.com:6443 --token zoy8cq.6v349sx9ass8dzyj \
    --discovery-token-ca-cert-hash sha256:14a6e33ca8dc9998f984150bc8780ddf0c3ff9cf6a3848f49825e53ef1374e24 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-cluster.theitroad.com:6443 --token zoy8cq.6v349sx9ass8dzyj \
    --discovery-token-ca-cert-hash sha256:14a6e33ca8dc9998f984150bc8780ddf0c3ff9cf6a3848f49825e53ef1374e24
Configure kubectl using the commands from the output:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check the cluster status:
$kubectl cluster-info
Kubernetes master is running at https://k8s-cluster.theitroad.com:6443
KubeDNS is running at https://k8s-cluster.theitroad.com:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Additional master nodes can be added using the command from the installation output:
kubeadm join k8s-cluster.theitroad.com:6443 \
  --token zoy8cq.6v349sx9ass8dzyj \
  --discovery-token-ca-cert-hash sha256:14a6e33ca8dc9998f984150bc8780ddf0c3ff9cf6a3848f49825e53ef1374e24 \
  --control-plane
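Note that the init output above skipped the certificate upload phase ("[upload-certs] Skipping phase. Please see --upload-certs"), so joining another control-plane node also needs a certificate key. A sketch of the usual flow; the key placeholder must be replaced with the value the first command prints:

# On the existing control plane: re-upload control-plane certificates and print the key
sudo kubeadm init phase upload-certs --upload-certs

# On the new control-plane node: add --certificate-key to the join command
sudo kubeadm join k8s-cluster.theitroad.com:6443 \
  --token zoy8cq.6v349sx9ass8dzyj \
  --discovery-token-ca-cert-hash sha256:14a6e33ca8dc9998f984150bc8780ddf0c3ff9cf6a3848f49825e53ef1374e24 \
  --control-plane \
  --certificate-key <key-printed-above>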
Step 7: Install a Network Plugin
In this tutorial we will use Calico.
You can choose any other supported network plugin.
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
You should see output similar to the following.
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Confirm that all Pods are running:
$kubectl get pods --all-namespaces
NAMESPACE     NAME                                                  READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-76d4774d89-nfqrr              1/1     Running   0          2m52s
kube-system   calico-node-kpprr                                     1/1     Running   0          2m52s
kube-system   coredns-66bff467f8-9bxgm                              1/1     Running   0          7m43s
kube-system   coredns-66bff467f8-jgwln                              1/1     Running   0          7m43s
kube-system   etcd-k8s-master01.theitroad.com                       1/1     Running   0          7m58s
kube-system   kube-apiserver-k8s-master01.theitroad.com             1/1     Running   0          7m58s
kube-system   kube-controller-manager-k8s-master01.theitroad.com    1/1     Running   0          7m58s
kube-system   kube-proxy-bt7ff                                      1/1     Running   0          7m43s
kube-system   kube-scheduler-k8s-master01.theitroad.com             1/1     Running   0          7m58s
Confirm that the master node is ready:
$kubectl get nodes -o wide
NAME                         STATUS   ROLES    AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8s-master01.theitroad.com   Ready    master   8m38s   v1.18.3   95.217.235.35   <none>        CentOS Linux 7 (Core)   3.10.0-1127.10.1.el7.x86_64   docker://19.3.8
Step 8: Add Worker Nodes
With the control plane ready, you can add worker nodes to the cluster to run scheduled workloads.
If the endpoint address is not in DNS, add a record to /etc/hosts.
$sudo vim /etc/hosts

172.29.20.5 k8s-cluster.theitroad.com
Use the join command that was given in the init output to add worker nodes to the cluster.
kubeadm join k8s-cluster.theitroad.com:6443 \
  --token zoy8cq.6v349sx9ass8dzyj \
  --discovery-token-ca-cert-hash sha256:14a6e33ca8dc9998f984150bc8780ddf0c3ff9cf6a3848f49825e53ef1374e24
Output:
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run the following command on the control plane to check whether the node has joined the cluster.
$kubectl get nodes
NAME                         STATUS   ROLES    AGE   VERSION
k8s-master01.theitroad.com   Ready    master   18m   v1.18.3
k8s-worker01.theitroad.com   Ready    <none>   98s   v1.18.3
If the join token has expired, refer to our tutorial on joining worker nodes: Joining a new Kubernetes worker node to an existing cluster.
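If you only need a fresh join command, one can usually be generated on the control-plane node; a quick sketch:

# Prints a complete 'kubeadm join' command with a new token and the CA cert hash
sudo kubeadm token create --print-join-command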
Step 9: Deploy an Application on the Cluster
Verify that the cluster is working by deploying a test application.
kubectl apply -f https://k8s.io/examples/pods/commands.yaml
Check whether the pod has started:
$kubectl get pods
NAME           READY   STATUS      RESTARTS   AGE
command-demo   0/1     Completed   0          40s
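The command-demo pod runs a single command and then exits, so Completed is the expected status. As an optional follow-up you can read its output and remove it:

# Show what the command in the pod printed
kubectl logs command-demo

# Clean up the test pod
kubectl delete pod command-demo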
Step 10: Install the Kubernetes Dashboard (Optional)
The Kubernetes Dashboard can be used to deploy containerized applications to a Kubernetes cluster, troubleshoot them, and manage cluster resources.
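A sketch of a typical Dashboard installation follows; the manifest URL pins v2.0.0 and is an assumption here, so check the Dashboard releases for a version matching your cluster:

# Deploy the Dashboard manifests (v2.0.0 assumed)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

# Start a local proxy, then open:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
kubectl proxy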