Install Istio Service Mesh on an EKS Kubernetes Cluster
The job of the Istio service mesh is to provide the services in a Kubernetes cluster with access control, traffic monitoring, security, discovery, load balancing and many other useful features. We get all of this without any changes to our code; Istio does the work for us. In this tutorial we will look at how to install Istio Service Mesh on an EKS Kubernetes cluster.
In short, Istio deploys a proxy (called a sidecar) next to each service deployed in a namespace that is part of the mesh. Any traffic destined for a service has to pass through the sidecar proxy, and Istio policies are then used to route that traffic to the service. With Istio we can also apply DevOps techniques such as circuit breaking, canary deployments and fault injection.
For this installation we need a few things:
- A working EKS Kubernetes cluster deployed in AWS
- Access to the cluster as a user with administrative privileges
- A Route53 hosted zone, if we want to use Gateways and Virtual Services with domain names
Install istioctl on your local machine / bastion
Where to install istioctl depends on where kubectl is installed, since istioctl should be on the same machine. That can be your local workstation if the API server is reachable from it. For a private EKS cluster deployed in AWS, it will be the bastion server.
Download and extract istioctl. This works for both Linux and macOS. We will install version 1.6.8:
cd ~/
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.6.8 sh -
Configure the istioctl client tool for your workstation:
sudo cp istio-1.6.8/bin/istioctl /usr/local/bin/
sudo chmod +x /usr/local/bin/istioctl
Confirm the istioctl version:
$istioctl version
1.6.8
Enable istioctl shell completion
# --- Bash ---
mkdir -p ~/completions && istioctl collateral --bash -o ~/completions
source ~/completions/istioctl.bash
echo "source ~/completions/istioctl.bash" >> ~/.bashrc

# --- Zsh ---
mkdir -p ~/completions && istioctl collateral --zsh -o ~/completions
source ~/completions/_istioctl
echo "source ~/completions/_istioctl" >> ~/.zshrc
Verify that autocompletion works:
$istioctl <TAB>
analyze          dashboard     install      operator      proxy-status  validate
authz            deregister    kube-inject  profile       register      verify-install
convert-ingress  experimental  manifest     proxy-config  upgrade       version
Create the Istio namespace
Create a namespace into which all Istio-related services will be deployed:
$kubectl create namespace istio-system
namespace/istio-system created
Create the required secrets
It is a good idea to install Grafana, Kiali and Jaeger as part of the Istio installation. In our setup, each of these components needs credentials, which must be provided as Secrets.
Let's create these secrets in the istio-system namespace.
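The `data` fields of a Kubernetes Secret must hold base64-encoded values, which is what the `echo -n ... | base64` pattern in the commands below produces. A quick local sketch of what those variables end up containing (note that base64 is an encoding, not encryption):

```shell
# Kubernetes Secret "data" values must be base64-encoded.
# echo -n matters: without -n, a trailing newline would be encoded into the value.
echo -n "grafana" | base64
# -> Z3JhZmFuYQ==

# Decoding recovers the original value; base64 offers no secrecy on its own.
echo -n "Z3JhZmFuYQ==" | base64 --decode
# -> grafana
```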
Create the Grafana secret:
GRAFANA_USERNAME=$(echo -n "grafana" | base64)
GRAFANA_PASSPHRASE=$(echo -n "theitroad@localhost" | base64) # Replace theitroad@localhost with your password
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: grafana
  namespace: istio-system
  labels:
    app: grafana
type: Opaque
data:
  username: $GRAFANA_USERNAME
  passphrase: $GRAFANA_PASSPHRASE
EOF
Create the Kiali secret:
KIALI_USERNAME=$(echo -n "kiali" | base64)
KIALI_PASSPHRASE=$(echo -n "theitroad@localhost" | base64) # Replace theitroad@localhost with your password
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: kiali
  namespace: istio-system
  labels:
    app: kiali
type: Opaque
data:
  username: $KIALI_USERNAME
  passphrase: $KIALI_PASSPHRASE
EOF
Create the Jaeger secret:
JAEGER_USERNAME=$(echo -n "jaeger" | base64)
JAEGER_PASSPHRASE=$(echo -n "theitroad@localhost" | base64) # Replace theitroad@localhost with your password
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: jaeger
  namespace: istio-system
  labels:
    app: jaeger
type: Opaque
data:
  username: $JAEGER_USERNAME
  passphrase: $JAEGER_PASSPHRASE
EOF
List the created secrets:
$kubectl get secret -n istio-system
NAME                  TYPE                                  DATA   AGE
default-token-kwrcj   kubernetes.io/service-account-token   3      16m
grafana               Opaque                                2      4m59s
jaeger                Opaque                                2      47s
kiali                 Opaque                                2      3m7s
Create the Istio control plane configuration
Now that we have successfully created the required secrets, we can create the Istio control plane configuration file.
The file is named istio-control-plane-eks.yml. It will contain the Istio control plane details used to configure Istio.
$vim istio-control-plane-eks.yml
Its contents are below. See the Global Mesh Options reference for the available settings.
apiVersion: install.istio.io/v1alpha2
kind: IstioControlPlane
spec:
  profile: default
  values:
    meshConfig:
      disablePolicyChecks: false
      # File address for the proxy access log (e.g. /dev/stdout).
      accessLogFile: "/dev/stdout"
      # Set the default behavior of the sidecar for handling outbound traffic from the application
      outboundTrafficPolicy:
        mode: "ALLOW_ANY"
      # Enable mutual TLS automatically for service to service communication within the mesh
      enableAutoMtls: false
    gateways:
      # Enable egress gateway
      istio-egressgateway:
        enabled: true
        autoscaleEnabled: true
      # Enable Ingress gateway
      istio-ingressgateway:
        enabled: true
        autoscaleEnabled: true
    global:
      # Ensure that the Istio pods are only scheduled to run on Linux nodes
      defaultNodeSelector:
        beta.kubernetes.io/os: linux
      # Enable mutual TLS for the control plane
      controlPlaneSecurityEnabled: true
    grafana:
      # Enable Grafana deployment for analytics and monitoring dashboards
      enabled: true
      security:
        # Enable authentication for Grafana
        enabled: true
    kiali:
      # Enable the Kiali deployment for a service mesh observability dashboard
      enabled: true
    tracing:
      # Enable the Jaeger deployment for tracing
      enabled: true
      provider: jaeger # zipkin/jaeger
Validate the configuration by performing a dry run:
$istioctl manifest apply -f istio-control-plane-eks.yml --dry-run
✔ Istio core installed
✔ Istiod installed
✔ Ingress gateways installed
✔ Addons installed
- Pruning removed resources
......
Install Istio with the following command:
$istioctl manifest apply -f istio-control-plane-eks.yml
✔ Istio core installed
✔ Istiod installed
✔ Ingress gateways installed
✔ Addons installed
✔ Installation complete
Check the deployed pods to confirm that they are in a running state:
$kubectl get pods -n istio-system
NAME                                    READY   STATUS    RESTARTS   AGE
grafana-86897cb4f5-wg29n                1/1     Running   0          3h34m
istio-egressgateway-8667d76d75-2t96d    1/1     Running   0          51s
istio-ingressgateway-5d78f74886-8xpx5   1/1     Running   0          3h35m
istio-tracing-57d7cfd779-xbtd8          1/1     Running   0          3h34m
istiod-58f84ffddc-khncg                 1/1     Running   0          3h35m
kiali-7c974669b4-ckfh4                  1/1     Running   0          3h34m
prometheus-6946fd87b4-ldzt2             2/2     Running   0          3h34m
We can list the service endpoints with:
$kubectl get svc -n istio-system
Add annotations to the ingress Service to get an AWS load balancer. The annotations to add are:
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
Add the annotations with the kubectl command:
kubectl annotate svc istio-ingressgateway service.beta.kubernetes.io/aws-load-balancer-type="nlb" -n istio-system
kubectl annotate svc istio-ingressgateway service.beta.kubernetes.io/aws-load-balancer-internal="0.0.0.0/0" -n istio-system
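If you manage the ingress gateway Service declaratively (for example via GitOps) rather than with kubectl annotate, the same annotations can be kept in the Service manifest instead; a sketch of the relevant metadata:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  annotations:
    # Annotations read by the AWS cloud provider when it provisions the load balancer
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
```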
Confirm that the load balancer was created:
$kubectl get svc istio-ingressgateway -n istio-system
NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP                                                                     PORT(S)                                                      AGE
istio-ingressgateway   LoadBalancer   10.100.49.28   a75fa02249f79436290b35e8a00a00b5-8e63bc91906eba93.elb.eu-west-1.amazonaws.com   15021:31022/TCP,80:32766/TCP,443:32512/TCP,15443:31919/TCP   3h31m
Access the dashboards:
# Grafana
$istioctl dashboard grafana

# Kiali
$istioctl dashboard kiali

# Jaeger
$istioctl dashboard jaeger

# Prometheus
$istioctl dashboard prometheus

# Zipkin
$istioctl dashboard zipkin

# Envoy
$istioctl dashboard envoy <pod-name>.<namespace>
Configure Route53 DNS
I will delegate the subdomain cloud.hirebestengineers.com to AWS Route53 for use with the Istio Gateway.
If you have not yet created a hosted zone, visit the Route53 console.
Click Create hosted zone to add the domain to Route53.
We will be given DNS server entries to update at the registrar so that the domain's DNS records resolve and are managed on Route53.
Since I use Cloudflare to manage my DNS, I need to update the settings accordingly from its management console. Note that I used a subdomain on Route53, not the actual domain name.
The record type to add is NS. Once all the entries have been added, it will look like this:
Confirm DNS propagation after the update. For some registrars it can take up to 24 hours for the updates to be pushed:
$dig NS cloud.hirebestengineers.com +short
ns-1335.awsdns-38.org.
ns-1879.awsdns-42.co.uk.
ns-454.awsdns-56.com.
ns-643.awsdns-16.net.
Create a record on Route53 that points to the load balancer used by the Istio ingress. For me, the record will be *.cloud.hirebestengineers.com
Click Create record > Simple routing > Define simple record and set:
- Record name
- Value/Route traffic to: choose Network Load Balancer, then set the region and the load balancer ID
- Record type: A
When done, click the Define simple record button.
Verify the details, then click Create records.
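If you prefer to script this step instead of using the console, the same wildcard alias record can be expressed as a Route53 change batch (a sketch; the hosted zone ID and load balancer DNS name below are placeholders you must replace with your own values):

```json
{
  "Comment": "Wildcard record pointing at the Istio ingress NLB",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "*.cloud.hirebestengineers.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "<NLB_HOSTED_ZONE_ID>",
          "DNSName": "<LOAD_BALANCER_DNS_NAME>",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```

Saved as record.json, this can be applied with `aws route53 change-resource-record-sets --hosted-zone-id <YOUR_ZONE_ID> --change-batch file://record.json`.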
Enable automatic sidecar injection for a namespace
Sidecars can be added automatically to eligible Kubernetes pods using the mutating webhook admission controller provided by Istio.
We will create a demo namespace for this project:
$kubectl create ns demo
namespace/demo created
Enable automatic sidecar injection by adding the istio-injection=enabled label to the namespace:
$kubectl label namespace demo istio-injection=enabled
namespace/demo labeled
Confirm that the label has been added to the namespace:
$kubectl get namespace demo -L istio-injection
NAME   STATUS   AGE     ISTIO-INJECTION
demo   Active   2m20s   enabled
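The label can also be set declaratively if you manage namespaces from manifests; a sketch of an equivalent namespace definition:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo
  labels:
    # Tells Istio's mutating webhook to inject sidecars into pods in this namespace
    istio-injection: enabled
```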
Deploy a test application with an Istio Gateway
We will use the sample Istio Bookinfo application. This example deploys a sample application composed of four separate microservices, used to demonstrate various Istio features.
Download the application manifest file:
wget https://raw.githubusercontent.com/istio/istio/master/samples/bookinfo/platform/kube/bookinfo.yaml
Deploy the application with the kubectl command:
$kubectl apply -f bookinfo.yaml -n demo
service/details created
serviceaccount/bookinfo-details created
deployment.apps/details-v1 created
service/ratings created
serviceaccount/bookinfo-ratings created
deployment.apps/ratings-v1 created
service/reviews created
serviceaccount/bookinfo-reviews created
deployment.apps/reviews-v1 created
deployment.apps/reviews-v2 created
deployment.apps/reviews-v3 created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created
Confirm that the pods are running:
$kubectl get pods -n demo
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-5974b67c8-tqsj9        2/2     Running   0          86s
productpage-v1-64794f5db4-hg7n6   2/2     Running   0          76s
ratings-v1-c6cdf8d98-4dl8h        2/2     Running   0          84s
reviews-v1-7f6558b974-64wrw       2/2     Running   0          81s
reviews-v2-6cb6ccd848-fp2tl       2/2     Running   0          80s
reviews-v3-cc56b578-dpgh2         2/2     Running   0          79s
Confirm that all the services are correctly defined:
$kubectl get svc -n demo
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
details       ClusterIP   10.100.229.76    <none>        9080/TCP   4m28s
productpage   ClusterIP   10.100.23.164    <none>        9080/TCP   4m18s
ratings       ClusterIP   10.100.172.229   <none>        9080/TCP   4m26s
reviews       ClusterIP   10.100.18.183    <none>        9080/TCP   4m23s
To confirm that the Bookinfo application is running, send a request to it with a curl command from one of the pods, for example from the ratings pod:
kubectl -n demo exec "$(kubectl get pod -n demo -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -c ratings -- curl productpage:9080/productpage | grep -o "<title>.*</title>"
Expected command output:
<title>Simple Bookstore App</title>
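As an aside, the `grep -o` flag used in the check above prints only the text that matches the pattern rather than the whole line, which is why the output is just the `<title>` element. A quick local illustration:

```shell
# grep -o prints only the matching part of each input line
html='<html><head><title>Simple Bookstore App</title></head></html>'
echo "$html" | grep -o "<title>.*</title>"
# -> <title>Simple Bookstore App</title>
```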
Download the gateway file:
wget https://raw.githubusercontent.com/istio/istio/master/samples/bookinfo/networking/bookinfo-gateway.yaml
Edit it to set the hosts values:
$vim bookinfo-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "bookinfo.cloud.hirebestengineers.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "bookinfo.cloud.hirebestengineers.com"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage
        port:
          number: 9080
Define the ingress gateway for the application:
$kubectl apply -f ./bookinfo-gateway.yaml -n demo
gateway.networking.istio.io/bookinfo-gateway created
virtualservice.networking.istio.io/bookinfo created
Test access to the application using curl or a web browser.
With curl:
$curl -s http://bookinfo.cloud.hirebestengineers.com/productpage | grep -o "<title>.*</title>"
<title>Simple Bookstore App</title>
From a web browser:
We can use this sample application to try out Istio features such as traffic routing, fault injection, rate limiting and more. Check out the Istio Tasks to learn about them. If you are a beginner, Configuring Request Routing is also a good starting point.
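As a taste of what those features look like, here is a sketch (not part of this tutorial's setup) of Istio fault injection: a VirtualService that delays half of the requests to the ratings service by five seconds, useful for testing how its callers cope with latency. The resource name ratings-delay is an arbitrary choice for this example:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings-delay
  namespace: demo
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        # Inject a fixed 5 second delay into 50% of requests
        percentage:
          value: 50
        fixedDelay: 5s
    route:
    - destination:
        host: ratings
```

After applying it with kubectl apply -f, reloading /productpage repeatedly should make the reviews section noticeably slower for roughly half of the requests.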
Clean up the Bookinfo application:
$kubectl delete -f ./bookinfo-gateway.yaml -n demo
gateway.networking.istio.io "bookinfo-gateway" deleted
virtualservice.networking.istio.io "bookinfo" deleted

$kubectl delete -f ./bookinfo.yaml -n demo
service "details" deleted
serviceaccount "bookinfo-details" deleted
deployment.apps "details-v1" deleted
service "ratings" deleted
serviceaccount "bookinfo-ratings" deleted
deployment.apps "ratings-v1" deleted
service "reviews" deleted
serviceaccount "bookinfo-reviews" deleted
deployment.apps "reviews-v1" deleted
deployment.apps "reviews-v2" deleted
deployment.apps "reviews-v3" deleted
service "productpage" deleted
serviceaccount "bookinfo-productpage" deleted
deployment.apps "productpage-v1" deleted

$kubectl get all -n demo
No resources found in demo namespace.

$kubectl delete ns demo
namespace "demo" deleted