How to Set Up a Local OpenShift Origin (OKD) Cluster on CentOS 7

Published: 2020-02-23 14:31:50  Source: igfitidea

In a recent article we covered the new features of OpenShift 4, the Kubernetes distribution everyone has been waiting for. OpenShift gives us a self-service platform to create, modify, and deploy containerized applications on demand. This tutorial walks through installing OpenShift Origin (OKD) 3.x on a CentOS 7 VM.

The OpenShift development team deserves credit for simplifying cluster setup: a single command gets you a running local OKD cluster.

For Ubuntu, see: How to Set Up OpenShift Origin (OKD) on Ubuntu

My setup was done on a virtual machine with the following hardware configuration: 8 vCPUs, 32 GB RAM, 50 GB of free disk space, and CentOS 7 as the OS.

Follow the steps in the sections below to deploy a local OpenShift Origin cluster on a CentOS 7 virtual machine.

Update the CentOS 7 system

Let's start by updating our CentOS 7 machine.

sudo yum -y update

Install and configure Docker

OpenShift requires a Docker engine on the host to run containers. Install Docker on CentOS 7 with the following commands:

sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y  docker-ce docker-ce-cli containerd.io

Add your standard user account to the docker group.

sudo usermod -aG docker $USER
newgrp docker

After installing Docker, configure the Docker daemon to treat the 172.30.0.0/16 range as an insecure registry:

sudo mkdir -p /etc/docker /etc/containers

sudo tee /etc/containers/registries.conf <<EOF
[registries.insecure]
registries = ['172.30.0.0/16']
EOF

sudo tee /etc/docker/daemon.json <<EOF
{
   "insecure-registries": [
     "172.30.0.0/16"
   ]
}
EOF
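A malformed daemon.json will stop the Docker daemon from starting at all, so it can be worth validating the file before restarting. A minimal sketch using python3's built-in JSON checker (it writes to a temp directory so it can run unprivileged; on the real host the file lives at /etc/docker/daemon.json):

```shell
# Validate a candidate daemon.json before installing it (illustrative paths).
tmpdir=$(mktemp -d)
cat > "$tmpdir/daemon.json" <<'EOF'
{
   "insecure-registries": [
     "172.30.0.0/16"
   ]
}
EOF

# python3 -m json.tool exits non-zero on invalid JSON
if python3 -m json.tool "$tmpdir/daemon.json" > /dev/null 2>&1; then
  echo "daemon.json is valid JSON"
else
  echo "daemon.json is INVALID - fix it before restarting docker" >&2
fi
rm -rf "$tmpdir"
```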

After editing the configuration, reload systemd and restart the Docker daemon.

sudo systemctl daemon-reload
sudo systemctl restart docker

Enable Docker to start at boot.

$sudo systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

Then enable IP forwarding on the system.

echo "net.ipv4.ip_forward = 1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
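To confirm that the kernel actually picked up the setting, you can read the value back from /proc (a quick Linux-only check):

```shell
# Read the live ip_forward value straight from the kernel.
val=$(cat /proc/sys/net/ipv4/ip_forward)
if [ "$val" = "1" ]; then
  echo "IP forwarding is enabled"
else
  echo "IP forwarding is disabled (value: $val)"
fi
```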

Configure the firewall.

Make sure your firewall allows containers to reach the OpenShift master API (8443/tcp) and DNS (53/udp) endpoints.

DOCKER_BRIDGE=`docker network inspect -f "{{range .IPAM.Config }}{{ .Subnet }}{{end}}" bridge`
sudo firewall-cmd --permanent --new-zone dockerc
sudo firewall-cmd --permanent --zone dockerc --add-source $DOCKER_BRIDGE
sudo firewall-cmd --permanent --zone dockerc --add-port={80,443,8443}/tcp
sudo firewall-cmd --permanent --zone dockerc --add-port={53,8053}/udp
sudo firewall-cmd --reload

Download the Linux oc binary

In this step we download the Linux oc binary from the openshift-origin-client-tools-VERSION-linux-64bit.tar.gz release archive and place it on the PATH.

wget https://github.com/openshift/origin/releases/download/v3.11.0/openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit.tar.gz
tar xvf openshift-origin-client-tools*.tar.gz
cd openshift-origin-client*/
sudo mv  oc kubectl  /usr/local/bin/
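A quick sanity check that both binaries actually landed on the PATH can save confusion later. A small helper sketch (the check_bins function name is just for illustration):

```shell
# Report whether each given command is resolvable on PATH.
check_bins() {
  local status=0
  for bin in "$@"; do
    if command -v "$bin" > /dev/null 2>&1; then
      echo "found: $bin at $(command -v "$bin")"
    else
      echo "missing: $bin" >&2
      status=1
    fi
  done
  return $status
}

check_bins oc kubectl || echo "one or more client binaries are missing - re-check the install step" >&2
```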

Verify the installation of the OpenShift client utilities.

$oc version
oc v3.11.0+0cbc58b
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO

Start the OpenShift Origin (OKD) local cluster

Now bootstrap a local single-server OpenShift Origin cluster by running the following command:

$oc cluster up

The command above will:

- Start an OKD cluster listening on the local interface, 127.0.0.1:8443
- Start a web console listening on all interfaces at /console (127.0.0.1:8443)
- Start Kubernetes system components
- Provision a registry, a router, initial templates, and a default project

The OpenShift cluster will run as an all-in-one container on the Docker host.

Many options can be applied when setting up OpenShift Origin; view them with:

$oc cluster up --help

After a successful installation, you should get output similar to the following.

Login to server …
Creating initial project "myproject" …
Server Information …
OpenShift server started.
The server is accessible via web console at:
     https://127.0.0.1:8443
You are logged in as:
     User:     developer
     Password: <any value>
To login as administrator:
     oc login -u system:admin

The example below uses custom options.

$oc cluster up --routing-suffix=<ServerPublicIP>.xip.io \
 --public-hostname=<ServerPublicDNSName>

Example:

$oc cluster up --public-hostname=okd.example.com --routing-suffix='services.example.com'
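If you restart the cluster often, the custom options can live in a small wrapper script so they stay consistent between runs. A sketch (start-okd.sh, with a placeholder hostname and routing suffix that you would substitute with your own):

```shell
#!/usr/bin/env bash
# start-okd.sh - start the local OKD cluster with a fixed set of options (sketch).
PUBLIC_HOSTNAME="okd.example.com"        # placeholder: your server's DNS name
ROUTING_SUFFIX="services.example.com"    # placeholder: your wildcard DNS suffix

CMD="oc cluster up --public-hostname=${PUBLIC_HOSTNAME} --routing-suffix=${ROUTING_SUFFIX}"
echo "Running: $CMD"

if command -v oc > /dev/null 2>&1; then
  $CMD
else
  echo "oc not found on PATH - install the client tools first" >&2
fi
```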

The OpenShift Origin cluster configuration files will be located in the openshift.local.clusterup/ directory.

If the cluster installation succeeded, the following command should give positive output.

$oc cluster status
Web console URL: https://okd.example.com:8443/console/

Config is at host directory 
Volumes are at host directory 
Persistent volumes are at host directory /home/dev/openshift.local.clusterup/openshift.local.pv
Data will be discarded when cluster is destroyed

Using the OpenShift Origin cluster

To log in as the cluster administrator, use:

$oc login -u system:admin
Logged into "https://127.0.0.1:8443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * default
    kube-dns
    kube-proxy
    kube-public
    kube-system
    myproject
    openshift
    openshift-apiserver
    openshift-controller-manager
    openshift-core-operators
    openshift-infra
    openshift-node
    openshift-service-cert-signer
    openshift-web-console
    testproject

Using project "default".

As the system admin user, we can view cluster-level information such as node status.

$oc get nodes
NAME        STATUS    ROLES     AGE       VERSION
localhost   Ready     <none>    1h        v1.11.0+d4cacc0

$oc get nodes -o wide

To get more details about a specific node, including the reason for its current condition:

$oc describe node <node>

To display a summary of the resources we have created:

$oc status
In project default on server https://127.0.0.1:8443

svc/docker-registry - 172.30.1.1:5000
  dc/docker-registry deploys docker.io/openshift/origin-docker-registry:v3.11 
    deployment #1 deployed 2 hours ago - 1 pod

svc/kubernetes - 172.30.0.1:443 -> 8443

svc/router - 172.30.235.156 ports 80, 443, 1936
  dc/router deploys docker.io/openshift/origin-haproxy-router:v3.11 
    deployment #1 deployed 2 hours ago - 1 pod

View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.

To return to the regular "developer" user, log in as that user:

$oc login
Authentication required for https://127.0.0.1:8443 (openshift)
Username: developer
Password: developer
Login successful.

Confirm that the login succeeded.

$oc whoami
developer

Let's create a test project using the oc new-project command.

$oc new-project dev --display-name="Project1 - Dev" \
   --description="My Dev Project"

Now using project "dev" on server "https://127.0.0.1:8443".

Using the OKD administration console

OKD includes a web console for creation and management operations. The web console is accessible over https on the server's IP/hostname at port 8443.

https://<IP|Hostname>:8443/console

You should see an OpenShift Origin window with username and password fields, similar to this:

Log in with:

Username: developer
Password: developer

You should then see a dashboard similar to the one below.

If you get redirected to https://127.0.0.1:8443/ when trying to access the OpenShift web console, do the following:

1. Stop the OpenShift cluster:

$oc cluster down

2. Edit the OCP configuration file:

$vi ./openshift.local.clusterup/openshift-controller-manager/openshift-master.kubeconfig

Find the line server: https://127.0.0.1:8443 and replace it with:

server: https://serverip:8443
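The same edit can also be done non-interactively with sed. A sketch (SERVER_IP is a placeholder for your host's address; the guard lets it fail gracefully if run outside the cluster directory):

```shell
# Swap the loopback API address in the kubeconfig for the server's public address (sketch).
SERVER_IP="192.168.1.10"   # placeholder: your server's IP or DNS name
KUBECONFIG_FILE="./openshift.local.clusterup/openshift-controller-manager/openshift-master.kubeconfig"

if [ -f "$KUBECONFIG_FILE" ]; then
  sed -i "s|server: https://127.0.0.1:8443|server: https://${SERVER_IP}:8443|" "$KUBECONFIG_FILE"
  grep "server:" "$KUBECONFIG_FILE"
else
  echo "kubeconfig not found - run this from the directory where 'oc cluster up' was executed" >&2
fi
```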

3. Then start the cluster:

$oc cluster up

Deploy a test application

Now we can deploy a test application in the cluster.

1. Log in to the OpenShift cluster:

$oc login 
Authentication required for https://127.0.0.1:8443 (openshift)
Username: developer 
Password: developer
Login successful.

You don't have any projects. You can try to create a new project, by running

    oc new-project

2. Create a test project.

$oc new-project test-project

3. Tag the application image from the Docker Hub registry.

$oc tag --source=docker openshift/deployment-example:v2 deployment-example:latest 
Tag deployment-example:latest set to openshift/deployment-example:v2.

4. Deploy the application to OpenShift.

$oc new-app deployment-example 
--> Found image da61bb2 (3 years old) in image stream "test-project/deployment-example" under tag "latest" for "deployment-example"

    * This image will be deployed in deployment config "deployment-example"
    * Port 8080/tcp will be load balanced by service "deployment-example"
      * Other containers can access this service through the hostname "deployment-example"
    * WARNING: Image "test-project/deployment-example:latest" runs as the 'root' user which may not be permitted by your cluster administrator

--> Creating resources ...
    deploymentconfig.apps.openshift.io "deployment-example" created
    service "deployment-example" created
--> Success
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/deployment-example'
    Run 'oc status' to view your app.

5. Show the application deployment status.

$oc status
In project test-project on server https://127.0.0.1:8443

svc/deployment-example - 172.30.15.201:8080
  dc/deployment-example deploys istag/deployment-example:latest 
    deployment #1 deployed about a minute ago - 1 pod

2 infos identified, use 'oc status --suggest' to see details.

6. Get the service details.

$oc get svc
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
deployment-example   ClusterIP   172.30.15.201   <none>        8080/TCP   18m

$oc describe svc deployment-example
Name:              deployment-example
Namespace:         test-project
Labels:            app=deployment-example
Annotations:       openshift.io/generated-by=OpenShiftNewApp
Selector:          app=deployment-example,deploymentconfig=deployment-example
Type:              ClusterIP
IP:                172.30.15.201
Port:              8080-tcp  8080/TCP
TargetPort:        8080/TCP
Endpoints:         172.17.0.12:8080
Session Affinity:  None
Events:            <none>

7. Test local access to the application.

curl http://172.30.15.201:8080
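Right after a deployment the pod may still be starting, so a single curl can fail even though everything is healthy. A small retry-loop sketch (the service address is the ClusterIP:port reported by 'oc get svc'):

```shell
# Poll the service until it responds or we give up (sketch).
SERVICE_URL="http://172.30.15.201:8080"   # ClusterIP:port from 'oc get svc'
for attempt in 1 2 3; do
  if curl -s -o /dev/null --max-time 2 "$SERVICE_URL"; then
    echo "service is responding"
    break
  fi
  echo "attempt $attempt: service not ready yet, retrying..."
  sleep 1
done
```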

8. Show the pod status.

$oc get pods
NAME                         READY     STATUS    RESTARTS   AGE
deployment-example-1-vmf7t   1/1       Running   0          21m

9. Allow external access to the application.

$oc expose service/deployment-example
route.route.openshift.io/deployment-example exposed

$oc get routes
NAME                 HOST/PORT                                                       PATH      SERVICES             PORT       TERMINATION   WILDCARD
deployment-example   deployment-example-testproject.services.theitroad.local             deployment-example   8080-tcp                 None

10. Test external access to the application.

Open the displayed URL in a browser.

Note that I have a wildcard DNS record for *.services.theitroad.local which points to the OpenShift Origin server IP address.

11. Delete the test application.

$oc delete all -l app=deployment-example 
pod "deployment-example-1-8n8sd" deleted
replicationcontroller "deployment-example-1" deleted
service "deployment-example" deleted
deploymentconfig.apps.openshift.io "deployment-example" deleted
route.route.openshift.io "deployment-example" deleted

$oc get pods
No resources found.