Deploy a Kubernetes Cluster on CentOS 7 / CentOS 8 with Ansible and Calico CNI
Do you want to set up a three-node Kubernetes cluster on CentOS 7 / CentOS 8 for your development projects, with one master node and two or more worker nodes?
This tutorial walks you through the steps of setting up a Kubernetes cluster with Ansible and Calico CNI on CentOS 8 / CentOS 7 Linux machines that have a firewall running and configured.
Kubernetes (K8s) is an open-source system for automating the deployment, scaling, and management of containerized applications.
Similar Kubernetes deployment tutorials:
Install a production Kubernetes cluster with Rancher RKE
How to deploy a lightweight Kubernetes cluster in 5 minutes with K3s
Deploy a production-ready Kubernetes cluster with Ansible & Kubespray
My lab is based on the following environment:
Machine type | Hostname | IP address |
Control node | k8smaster01.theitroad.com | 192.168.122.10 |
Worker node 1 | k8snode01.theitroad.com | 192.168.122.11 |
Worker node 2 | k8snode02.theitroad.com | 192.168.122.12 |
First, make sure the systems are updated and all dependencies are installed, including a container runtime and the Kubernetes packages, and that the firewall is configured for k8s.
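The Ansible role in Step 1 opens the required firewall ports automatically. For reference, a manual equivalent with firewalld might look roughly like the sketch below; the port list follows the kubeadm requirements for this Kubernetes release plus the Calico ports opened in the play output later, so treat it as a guide rather than a definitive list.

# On the control plane node: API server, etcd, kubelet, kube-scheduler, kube-controller-manager
sudo firewall-cmd --permanent --add-port={6443,2379-2380,10250,10251,10252}/tcp
# Calico networking on all nodes: BGP and Typha (TCP), VXLAN (UDP)
sudo firewall-cmd --permanent --add-port={179,5473}/tcp
sudo firewall-cmd --permanent --add-port=4789/udp
# On worker nodes: kubelet and the NodePort service range
sudo firewall-cmd --permanent --add-port={10250,30000-32767}/tcp
sudo firewall-cmd --reload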
Step 1: Set up the standard requirements
I wrote an Ansible role that performs the standard Kubernetes node preparation work.
The role performs the following tasks:
Install the required basic packages
Set standard system requirements: disable swap, modify sysctl settings, disable SELinux
Install and configure the container runtime of your choice: cri-o, Docker, or containerd
Install the Kubernetes packages: kubelet, kubeadm, and kubectl
Configure the firewall on the Kubernetes master and worker nodes
Visit my GitHub page for the setup: https://github.com/jmutai/k8s-pre-bootstrap
Here is the output from a recent run (a sketch of how the role is invoked follows the output):
TASK [kubernetes-bootstrap : Open flannel ports on the firewall] ***************
skipping: [k8smaster01] => (item=8285)
skipping: [k8smaster01] => (item=8472)
skipping: [k8snode01] => (item=8285)
skipping: [k8snode01] => (item=8472)
skipping: [k8snode02] => (item=8285)
skipping: [k8snode02] => (item=8472)

TASK [kubernetes-bootstrap : Open calico UDP ports on the firewall] ************
ok: [k8snode01] => (item=4789)
ok: [k8smaster01] => (item=4789)
ok: [k8snode02] => (item=4789)

TASK [kubernetes-bootstrap : Open calico TCP ports on the firewall] ************
ok: [k8snode02] => (item=5473)
ok: [k8snode01] => (item=5473)
ok: [k8smaster01] => (item=5473)
ok: [k8snode01] => (item=179)
ok: [k8snode02] => (item=179)
ok: [k8smaster01] => (item=179)
ok: [k8snode02] => (item=5473)
ok: [k8snode01] => (item=5473)
ok: [k8smaster01] => (item=5473)

TASK [kubernetes-bootstrap : Reload firewalld] *********************************
changed: [k8smaster01]
changed: [k8snode01]
changed: [k8snode02]

PLAY RECAP **********************************************************************
k8smaster01    : ok=23   changed=3    unreachable=0    failed=0    skipped=11   rescued=0    ignored=0
k8snode01      : ok=23   changed=3    unreachable=0    failed=0    skipped=11   rescued=0    ignored=0
k8snode02      : ok=23   changed=3    unreachable=0    failed=0    skipped=11   rescued=0    ignored=0
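For reference, the role is applied with a small inventory and playbook. A minimal sketch is shown below; the file and group names are illustrative, so check the repository README for the exact layout and the variables the role expects.

# hosts - inventory listing the lab machines above
[k8s-nodes]
k8smaster01.theitroad.com
k8snode01.theitroad.com
k8snode02.theitroad.com

# k8s-prep.yml - playbook that applies the kubernetes-bootstrap role
- name: Prepare Kubernetes nodes
  hosts: k8s-nodes
  become: yes
  roles:
    - kubernetes-bootstrap

# Run the play against the inventory
ansible-playbook -i hosts k8s-prep.yml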
Step 2: Initialize the single-node control plane
This deployment uses a single control plane node with integrated etcd.
If you want to run multiple control plane nodes (three for HA), see the official tutorial on creating highly available clusters with kubeadm.
We will use kubeadm to bootstrap a minimum viable Kubernetes cluster that conforms to best practices.
A benefit of kubeadm is that it also supports other cluster lifecycle functions, such as upgrades, downgrades, and managing bootstrap tokens.
Requirements for bootstrapping a single control plane node:
The default IP address of the control plane machine
A DNS name or load balancer IP if you intend to add more control plane nodes later
SSH access to the control plane node as root, or as a user with sudo
Log in to the control plane node:
$ssh Hyman@theitroad
Check the parameters available for initializing the Kubernetes cluster:
$kubeadm init --help
The standard parameters we will use are:
--pod-network-cidr: specifies the IP address range for the pod network.
--apiserver-advertise-address: the IP address the API server will advertise it is listening on.
--control-plane-endpoint: specifies a stable IP address or DNS name for the control plane.
--upload-certs: uploads the control plane certificates to the kubeadm-certs Secret.
If using Calico, the recommended pod network is 192.168.0.0/16. For Flannel, the recommended pod network is 10.244.0.0/16. In my case, I will run the command:
sudo kubeadm init \
  --apiserver-advertise-address=192.168.122.10 \
  --pod-network-cidr 192.168.0.0/16 \
  --upload-certs
To be able to upgrade a single control plane kubeadm cluster to high availability later, you should specify --control-plane-endpoint to set a shared endpoint for all control plane nodes.
Such an endpoint can be the DNS name or the IP address of a load balancer.
kubeadm init \
  --apiserver-advertise-address=192.168.122.227 \
  --pod-network-cidr 192.168.0.0/16 \
  --control-plane-endpoint <DNS-End-Point> \
  --upload-certs
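The <DNS-End-Point> placeholder must resolve on all nodes. If a load balancer is not available yet, one common approach is to use a DNS name you control that initially resolves to the first control plane node and is later re-pointed at a load balancer. A minimal sketch, using a hypothetical name k8s-api.theitroad.com and /etc/hosts entries (replace with a name you actually control):

# Hypothetical endpoint name mapped to the first control plane node's IP;
# add this on every node so the endpoint resolves before running kubeadm.
echo "192.168.122.10  k8s-api.theitroad.com" | sudo tee -a /etc/hosts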
Here is the output from my installation:
W0109 20:27:51.787966   18069 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0109 20:27:51.788126   18069 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8smaster01.theitroad.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.122.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8smaster01.theitroad.com localhost] and IPs [192.168.122.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8smaster01.theitroad.com localhost] and IPs [192.168.122.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0109 20:32:51.776569   18069 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0109 20:32:51.777334   18069 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 15.507327 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key: bce5c1ad320f4c64e42688e25526615d2ffd7efad3e749bc0c632b3a7834752d
[mark-control-plane] Marking the node k8smaster01.theitroad.com as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8smaster01.theitroad.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: nx1jjq.u42y27ip3bhmj8vj
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.122.10:6443 --token nx1jjq.u42y27ip3bhmj8vj \
    --discovery-token-ca-cert-hash sha256:c6de85f6c862c0d58cc3d10fd199064ff25c4021b6e88475822d6163a25b4a6c
Copy the kubectl configuration file:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
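At this point kubectl should be able to reach the cluster. A quick sanity check with standard kubectl commands (the master will show NotReady until the pod network is deployed in the next step):

kubectl cluster-info
kubectl get nodes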
Also check out: Easily manage multiple Kubernetes clusters with kubectl and kubectx
Deploy a pod network to the cluster
I will use Calico, but you are free to use any other pod network plugin of your choice.
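Our cluster was initialized with Calico's default pod CIDR (192.168.0.0/16), so the hosted manifest can be applied directly with the command below. If you had chosen a different --pod-network-cidr, you would download the manifest first and point Calico's IPv4 pool at your CIDR. A minimal sketch, using a hypothetical 10.10.0.0/16 pod network:

# Download the manifest so the IP pool can be edited before applying
curl -O https://docs.projectcalico.org/v3.8/manifests/calico.yaml
# Point CALICO_IPV4POOL_CIDR at the CIDR passed to kubeadm init.
# 10.10.0.0/16 is only an example; inspect the CALICO_IPV4POOL_CIDR entry
# first, since its exact form differs between Calico versions.
sed -i 's|192.168.0.0/16|10.10.0.0/16|g' calico.yaml
kubectl apply -f calico.yaml

With the default CIDR, apply the hosted manifest directly: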
kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
As shown in the output below, this creates a number of resources.
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Confirm that all pods are running with the following command:
watch kubectl get pods --all-namespaces
Once everything is up and running, the output will look like this:
NAMESPACE     NAME                                                 READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-5c45f5bd9f-c8mwx             1/1     Running   0          3m45s
kube-system   calico-node-m5qmb                                    1/1     Running   0          3m45s
kube-system   coredns-6955765f44-cz65r                             1/1     Running   0          9m43s
kube-system   coredns-6955765f44-mtch2                             1/1     Running   0          9m43s
kube-system   etcd-k8smaster01.theitroad.com                       1/1     Running   0          9m59s
kube-system   kube-apiserver-k8smaster01.theitroad.com             1/1     Running   0          9m59s
kube-system   kube-controller-manager-k8smaster01.theitroad.com    1/1     Running   0          9m59s
kube-system   kube-proxy-bw494                                     1/1     Running   0          9m43s
kube-system   kube-scheduler-k8smaster01.theitroad.com             1/1     Running   0          9m59s
Note that each pod has a STATUS of Running.
See the Calico documentation for more details.
Step 3: Join worker nodes to the cluster
Now that the control plane node is ready, we can add new nodes where workloads (containers, pods, etc.) will run.
This must be done on every machine that will be used to run pods. SSH to the node:
$ssh Hyman@theitroad
Run the join command printed in the kubeadm init output.
For example:
sudo kubeadm join 192.168.122.10:6443 --token nx1jjq.u42y27ip3bhmj8vj \
    --discovery-token-ca-cert-hash sha256:c6de85f6c862c0d58cc3d10fd199064ff25c4021b6e88475822d6163a25b4a6c
If the token has expired, you can generate a new one with the following command:
kubeadm token create
To list the tokens:
kubeadm token list
You can get the value of --discovery-token-ca-cert-hash with the following command:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
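Alternatively, kubeadm can create a fresh token and print the complete join command in one step, which avoids assembling the hash manually:

kubeadm token create --print-join-command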
Here is the output of a join command:
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Run the same join command on all the other worker nodes, then list the nodes that have joined the cluster with:
$kubectl get nodes
NAME                        STATUS   ROLES    AGE     VERSION
k8smaster01.theitroad.com   Ready    master   26m     v1.17.0
k8snode01.theitroad.com     Ready    <none>   4m35s   v1.17.0
k8snode02.theitroad.com     Ready    <none>   2m4s    v1.17.0
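The worker nodes show <none> under ROLES, which is purely cosmetic. If you prefer them labeled, a worker role label can optionally be added from the control plane with the standard kubectl label command:

kubectl label node k8snode01.theitroad.com node-role.kubernetes.io/worker=worker
kubectl label node k8snode02.theitroad.com node-role.kubernetes.io/worker=worker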
Step 4: Deploy Metrics Server to the Kubernetes cluster
Metrics Server is a cluster-wide aggregator of resource usage data.
It collects metrics from the Summary API, which is exposed by the kubelet on each node.
Use the tutorial below to deploy it: How to deploy Metrics Server to a Kubernetes cluster
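Once Metrics Server is deployed and serving, node and pod resource usage can be queried directly with kubectl:

kubectl top nodes
kubectl top pods --all-namespaces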