Setting up GlusterFS Storage with Heketi on CentOS 8/CentOS 7
In this tutorial, we will learn how to install and configure GlusterFS storage with Heketi on CentOS 8/CentOS 7.
GlusterFS is a software-defined, scale-out storage solution designed to provide affordable and flexible storage for unstructured data.
GlusterFS lets us unify infrastructure and data storage while improving availability, performance, and data manageability.
GlusterFS storage can be deployed in a private cloud, in a data center, or on premises.
This is done entirely on commodity server and storage hardware, resulting in a powerful, massively scalable, and highly available NAS environment.
Heketi
Heketi provides a RESTful management interface that can be used to manage the lifecycle of GlusterFS storage volumes.
This makes it easy to integrate GlusterFS with cloud services such as OpenShift, OpenStack Manila, and Kubernetes for dynamic volume provisioning.
Heketi automatically determines the location of bricks across the cluster and makes sure that bricks and their replicas are placed in different failure domains.
Environment Setup
Our GlusterFS setup on CentOS 8/CentOS 7 systems will consist of the following:
- CentOS 8/CentOS 7 Linux servers
- GlusterFS 6 software release
- Three GlusterFS servers
- Three disks on each server (@10GB)
- DNS resolution – we can use the /etc/hosts file if there is no DNS server
- A user account with sudo or root access
- Heketi will be installed on one of the GlusterFS nodes
In each server's /etc/hosts file, I have:
$ sudo vim /etc/hosts
10.10.1.168 gluster01
10.10.1.179 gluster02
10.10.1.64 gluster03
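You can quickly verify that the names resolve on each node; getent reads /etc/hosts through the normal resolver path:

getent hosts gluster01 gluster02 gluster03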
Step 1: Update all servers
Make sure all the servers that will be part of the GlusterFS storage cluster are updated.
sudo yum -y update
Since there may be a kernel update, it is recommended to reboot the systems.
sudo reboot
Step 2: Configure NTP time synchronization
We need to synchronize time across all GlusterFS storage servers using the Network Time Protocol (NTP) or the chrony daemon.
Refer to the tutorial below.
Setting up time synchronization on CentOS
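If you only need the short version, a minimal sketch using the chrony daemon (assuming the default pool servers in /etc/chrony.conf are reachable) looks like this:

# Install, enable, and verify chrony on each node
sudo yum -y install chrony
sudo systemctl enable --now chronyd
chronyc sources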
Step 3: Add the GlusterFS repository
Download the GlusterFS repository on all servers.
We will use GlusterFS 6 for this setup since it is the latest stable release.
CentOS 8:
sudo yum -y install wget
sudo wget -O /etc/yum.repos.d/glusterfs-rhel8.repo https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/CentOS/glusterfs-rhel8.repo
CentOS 7:
sudo yum -y install centos-release-gluster6
After adding the repository, update the YUM package index.
sudo yum makecache
Step 4: Install GlusterFS on CentOS 8/CentOS 7
Installing GlusterFS on CentOS 8 differs slightly from installing it on CentOS 7.
Install GlusterFS on CentOS 8
Enable the PowerTools repository, then install the GlusterFS server package:
sudo dnf -y install dnf-utils
sudo yum-config-manager --enable PowerTools
sudo dnf -y install glusterfs-server
Install GlusterFS on CentOS 7
Run the command below on all nodes to install the latest GlusterFS on CentOS 7.
sudo yum -y install glusterfs-server
Confirm the version of the installed package.
$ rpm -qi glusterfs-server
Name        : glusterfs-server
Version     : 6.5
Release     : 2.el8
Architecture: x86_64
Install Date: Tue 29 Oct 2019 06:58:16 PM EAT
Group       : Unspecified
Size        : 6560178
License     : GPLv2 or LGPLv3+
Signature   : RSA/SHA256, Wed 28 Aug 2019 03:39:40 PM EAT, Key ID 43607f0dc2f8238c
Source RPM  : glusterfs-6.5-2.el8.src.rpm
Build Date  : Wed 28 Aug 2019 03:27:19 PM EAT
Build Host  : buildhw-09.phx2.fedoraproject.org
Relocations : (not relocatable)
Packager    : Fedora Project
Vendor      : Fedora Project
URL         : http://docs.gluster.org/
Bug URL     : https://bugz.fedoraproject.org/glusterfs
Summary     : Distributed file-system server
We can also check the version using the gluster command.
$ gluster --version
glusterfs 6.5
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2015 Red Hat, Inc.
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.

$ glusterfsd --version
Step 5: Start the GlusterFS service on CentOS 8/CentOS 7
After installing GlusterFS on CentOS 8/CentOS 7, start and enable the service.
sudo systemctl enable --now glusterd.service
Load all the kernel modules required by Heketi.
for i in dm_snapshot dm_mirror dm_thin_pool; do
  sudo modprobe $i
done
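These modules are not loaded automatically after a reboot. To make them persistent, one option is a modules-load.d drop-in (the file name heketi.conf below is arbitrary):

# Load the device-mapper modules needed by Heketi at every boot
cat <<EOF | sudo tee /etc/modules-load.d/heketi.conf
dm_snapshot
dm_mirror
dm_thin_pool
EOF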
If you have an active firewalld service, allow the ports used by GlusterFS.
sudo firewall-cmd --add-service=glusterfs --permanent
sudo firewall-cmd --reload
Check the service status on all nodes.
$ systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-10-29 19:10:08 EAT; 3min 1s ago
     Docs: man:glusterd(8)
 Main PID: 32027 (glusterd)
    Tasks: 9 (limit: 11512)
   Memory: 3.9M
   CGroup: /system.slice/glusterd.service
           └─32027 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO

Oct 29 19:10:08 gluster01.novalocal systemd[1]: Starting GlusterFS, a clustered file-system server...
Oct 29 19:10:08 gluster01.novalocal systemd[1]: Started GlusterFS, a clustered file-system server.

$ systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-10-29 19:10:13 EAT; 3min 51s ago
     Docs: man:glusterd(8)
 Main PID: 3706 (glusterd)
    Tasks: 9 (limit: 11512)
   Memory: 3.8M
   CGroup: /system.slice/glusterd.service
           └─3706 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO

Oct 29 19:10:13 gluster02.novalocal systemd[1]: Starting GlusterFS, a clustered file-system server...
Oct 29 19:10:13 gluster02.novalocal systemd[1]: Started GlusterFS, a clustered file-system server.

$ systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-10-29 19:10:15 EAT; 4min 24s ago
     Docs: man:glusterd(8)
 Main PID: 3716 (glusterd)
    Tasks: 9 (limit: 11512)
   Memory: 3.8M
   CGroup: /system.slice/glusterd.service
           └─3716 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO

Oct 29 19:10:15 gluster03.novalocal systemd[1]: Starting GlusterFS, a clustered file-system server...
Oct 29 19:10:15 gluster03.novalocal systemd[1]: Started GlusterFS, a clustered file-system server.
From one of the nodes, probe the other nodes in the cluster.
[root@gluster01 ~]# gluster peer probe gluster02
peer probe: success.
[root@gluster01 ~]# gluster peer probe gluster03
peer probe: success.
[root@gluster01 ~]# gluster peer status
Number of Peers: 2

Hostname: gluster02
Uuid: ebfdf84f-3d66-4f98-93df-a6442b5466ed
State: Peer in Cluster (Connected)

Hostname: gluster03
Uuid: 98547ab1-9565-4f71-928c-8e4e13eb61c3
State: Peer in Cluster (Connected)
Step 6: Install Heketi on one of the nodes
I will use the gluster01 node to run the Heketi service.
Download the latest Heketi server and client archives from the GitHub releases page.
curl -s https://api.github.com/repos/heketi/heketi/releases/latest \
  | grep browser_download_url \
  | grep linux.amd64 \
  | cut -d '"' -f 4 \
  | wget -qi -
Extract the downloaded heketi archives.
for i in `ls | grep heketi | grep .tar.gz`; do tar xvf $i; done
Copy the heketi and heketi-cli binaries.
sudo cp heketi/{heketi,heketi-cli} /usr/local/bin
Confirm that they are available in your PATH:
$ heketi --version
Heketi v9.0.0

$ heketi-cli --version
heketi-cli v9.0.0
Step 7: Configure the Heketi server
Add a heketi system user.
sudo groupadd --system heketi
sudo useradd -s /sbin/nologin --system -g heketi heketi
Create the heketi configuration and data paths.
sudo mkdir -p /var/lib/heketi /etc/heketi /var/log/heketi
Copy the heketi configuration file to the /etc/heketi directory.
sudo cp heketi/heketi.json /etc/heketi
Edit the Heketi configuration file.
sudo vim /etc/heketi/heketi.json
Set the service port:
"port": "8080"
Set the admin and user secrets.
"_jwt": "Private keys for access", "jwt": { "_admin": "Admin has access to all APIs", "admin": { "key": "ivd7dfORN7QNeKVO" }, "_user": "User only has access to /volumes endpoint", "user": { "key": "gZPgdZ8NtBNj6jfp" } },
Configure the glusterfs executor:
"_sshexec_comment": "SSH username and private key file information",
"sshexec": {
  "keyfile": "/etc/heketi/heketi_key",
  "user": "root",
  "port": "22",
  "fstab": "/etc/fstab",
  ......
},
If you are using a user other than root, make sure that user has passwordless sudo privilege escalation.
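For example, if Heketi connects as a hypothetical user named heketi-user, a sudoers drop-in like the sketch below would grant that (run on every GlusterFS node):

# Allow heketi-user to escalate without a password (hypothetical username)
echo "heketi-user ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/heketi-user
sudo chmod 440 /etc/sudoers.d/heketi-user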
Confirm that the database path is set correctly:
"_db_comment": "Database file name", "db": "/var/lib/heketi/heketi.db", },
Below is my complete modified configuration file.
{ "_port_comment": "Heketi Server Port Number", "port": "8080", "_enable_tls_comment": "Enable TLS in Heketi Server", "enable_tls": false, "_cert_file_comment": "Path to a valid certificate file", "cert_file": "", "_key_file_comment": "Path to a valid private key file", "key_file": "", "_use_auth": "Enable JWT authorization. Please enable for deployment", "use_auth": false, "_jwt": "Private keys for access", "jwt": { "_admin": "Admin has access to all APIs", "admin": { "key": "ivd7dfORN7QNeKVO" }, "_user": "User only has access to /volumes endpoint", "user": { "key": "gZPgdZ8NtBNj6jfp" } }, "_backup_db_to_kube_secret": "Backup the heketi database to a Kubernetes secret when running in Kubernetes. Default is off.", "backup_db_to_kube_secret": false, "_profiling": "Enable go/pprof profiling on the /debug/pprof endpoints.", "profiling": false, "_glusterfs_comment": "GlusterFS Configuration", "glusterfs": { "_executor_comment": [ "Execute plugin. Possible choices: mock, ssh", "mock: This setting is used for testing and development.", " It will not send commands to any node.", "ssh: This setting will notify Heketi to ssh to the nodes.", " It will need the values in sshexec to be configured.", "kubernetes: Communicate with GlusterFS containers over", " Kubernetes exec api." ], "executor": "mock", "_sshexec_comment": "SSH username and private key file information", "sshexec": { "keyfile": "/etc/heketi/heketi_key", "user": "cloud-user", "port": "22", "fstab": "/etc/fstab" }, "_db_comment": "Database file name", "db": "/var/lib/heketi/heketi.db", "_refresh_time_monitor_gluster_nodes": "Refresh time in seconds to monitor Gluster nodes", "refresh_time_monitor_gluster_nodes": 120, "_start_time_monitor_gluster_nodes": "Start time in seconds to monitor Gluster nodes when the heketi comes up", "start_time_monitor_gluster_nodes": 10, "_loglevel_comment": [ "Set log level. Choices are:", " none, critical, error, warning, info, debug", "Default is warning" ], "loglevel" : "debug", "_auto_create_block_hosting_volume": "Creates Block Hosting volumes automatically if not found or exsisting volume exhausted", "auto_create_block_hosting_volume": true, "_block_hosting_volume_size": "New block hosting volume will be created in size mentioned, This is considered only if auto-create is enabled.", "block_hosting_volume_size": 500, "_block_hosting_volume_options": "New block hosting volume will be created with the following set of options. Removing the group gluster-block option is NOT recommended. Additional options can be added next to it separated by a comma.", "block_hosting_volume_options": "group gluster-block", "_pre_request_volume_options": "Volume options that will be applied for all volumes created. Can be overridden by volume options in volume create request.", "pre_request_volume_options": "", "_post_request_volume_options": "Volume options that will be applied for all volumes created. To be used to override volume options in volume create request.", "post_request_volume_options": "" } }
Generate the Heketi SSH key.
sudo ssh-keygen -f /etc/heketi/heketi_key -t rsa -N ''
sudo chown heketi:heketi /etc/heketi/heketi_key*
Copy the generated public key to all GlusterFS nodes.
for i in gluster01 gluster02 gluster03; do
  ssh-copy-id -i /etc/heketi/heketi_key.pub root@$i
done
Alternatively, you can copy the contents of /etc/heketi/heketi_key.pub and append them to each server's ~/.ssh/authorized_keys file. A manual sketch of that approach follows, where the <contents of heketi_key.pub> placeholder stands in for the actual key text:
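# On the Heketi node, print the public key:
sudo cat /etc/heketi/heketi_key.pub
# On each GlusterFS node, append the printed key for the user Heketi connects as:
echo "<contents of heketi_key.pub>" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

Confirm that you can access the GlusterFS nodes using the Heketi private key: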
$ ssh -i /etc/heketi/heketi_key root@gluster02
The authenticity of host 'gluster02 (10.10.1.179)' can't be established.
ECDSA key fingerprint is SHA256:GXNdsSxmp2O104rPB4RmYsH73nTa5U10cw3LG22sANc.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'gluster02,10.10.1.179' (ECDSA) to the list of known hosts.
Activate the web console with: systemctl enable --now cockpit.socket
Last login: Tue Oct 29 20:11:32 2019 from 10.10.1.168
[root@gluster02 ~]#
Create a systemd unit file for Heketi.
$ sudo vim /etc/systemd/system/heketi.service
[Unit]
Description=Heketi Server

[Service]
Type=simple
WorkingDirectory=/var/lib/heketi
EnvironmentFile=-/etc/heketi/heketi.env
User=heketi
ExecStart=/usr/local/bin/heketi --config=/etc/heketi/heketi.json
Restart=on-failure
StandardOutput=syslog
StandardError=syslog

[Install]
WantedBy=multi-user.target
Also download Heketi's sample environment file.
sudo wget -O /etc/heketi/heketi.env https://raw.githubusercontent.com/heketi/heketi/master/extras/systemd/heketi.env
Set proper ownership on all the directories.
sudo chown -R heketi:heketi /var/lib/heketi /var/log/heketi /etc/heketi
Before starting the Heketi service, set SELinux to permissive mode.
sudo setenforce 0
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config
Then reload systemd and start the Heketi service.
sudo systemctl daemon-reload
sudo systemctl enable --now heketi
Confirm that the service is running.
$ systemctl status heketi
● heketi.service - Heketi Server
   Loaded: loaded (/etc/systemd/system/heketi.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-10-29 20:29:23 EAT; 4s ago
 Main PID: 2166 (heketi)
    Tasks: 5 (limit: 11512)
   Memory: 8.7M
   CGroup: /system.slice/heketi.service
           └─2166 /usr/local/bin/heketi --config=/etc/heketi/heketi.json

Oct 29 20:29:23 gluster01.novalocal heketi[2166]: Heketi v9.0.0
Oct 29 20:29:23 gluster01.novalocal heketi[2166]: [heketi] INFO 2019/10/29 20:29:23 Loaded mock executor
Oct 29 20:29:23 gluster01.novalocal heketi[2166]: [heketi] INFO 2019/10/29 20:29:23 Volumes per cluster limit is set to default value of 1000
Oct 29 20:29:23 gluster01.novalocal heketi[2166]: [heketi] INFO 2019/10/29 20:29:23 Block: Auto Create Block Hosting Volume set to true
Oct 29 20:29:23 gluster01.novalocal heketi[2166]: [heketi] INFO 2019/10/29 20:29:23 Block: New Block Hosting Volume size 500 GB
Oct 29 20:29:23 gluster01.novalocal heketi[2166]: [heketi] INFO 2019/10/29 20:29:23 Block: New Block Hosting Volume Options: group gluster-block
Oct 29 20:29:23 gluster01.novalocal heketi[2166]: [heketi] INFO 2019/10/29 20:29:23 GlusterFS Application Loaded
Oct 29 20:29:23 gluster01.novalocal heketi[2166]: [heketi] INFO 2019/10/29 20:29:23 Started background pending operations cleaner
Oct 29 20:29:23 gluster01.novalocal heketi[2166]: [heketi] INFO 2019/10/29 20:29:23 Started Node Health Cache Monitor
Oct 29 20:29:23 gluster01.novalocal heketi[2166]: Listening on port 8080
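You can also confirm that the REST API responds; Heketi exposes a /hello endpoint that should return a short greeting similar to:

$ curl http://localhost:8080/hello
Hello from Heketi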
Step 8: Create the Heketi topology file
I created an Ansible playbook that generates and updates the topology file.
Editing the JSON file by hand can be stressful.
Automating it also makes scaling easy.
Install Ansible locally – refer to the Ansible installation documentation.
For CentOS:
sudo yum -y install epel-release
sudo yum -y install ansible
For Ubuntu:
sudo apt update
sudo apt install software-properties-common
sudo apt-add-repository --yes --update ppa:ansible/ansible
sudo apt install ansible
After installing Ansible, create the project folder structure.
mkdir -p ~/projects/ansible/roles/heketi/{tasks,templates,defaults}
Create the Heketi topology Jinja2 template.
$ vim ~/projects/ansible/roles/heketi/templates/topology.json.j2
{
  "clusters": [
    {
      "nodes": [
        {% if gluster_servers is defined and gluster_servers is iterable %}
        {% for item in gluster_servers %}
        {
          "node": {
            "hostnames": {
              "manage": [
                "{{ item.servername }}"
              ],
              "storage": [
                "{{ item.serverip }}"
              ]
            },
            "zone": {{ item.zone }}
          },
          "devices": [
            "{{ item.disks | list | join ("\",\"") }}"
          ]
        }{% if not loop.last %},{% endif %}
        {% endfor %}
        {% endif %}
      ]
    }
  ]
}
Define the variables – set the values to match your environment.
$ vim ~/projects/ansible/roles/heketi/defaults/main.yml
---
# GlusterFS nodes
gluster_servers:
  - servername: gluster01
    serverip: 10.10.1.168
    zone: 1
    disks:
      - /dev/vdc
      - /dev/vdd
      - /dev/vde
  - servername: gluster02
    serverip: 10.10.1.179
    zone: 1
    disks:
      - /dev/vdc
      - /dev/vdd
      - /dev/vde
  - servername: gluster03
    serverip: 10.10.1.64
    zone: 1
    disks:
      - /dev/vdc
      - /dev/vdd
      - /dev/vde
Create the Ansible tasks.
$ vim ~/projects/ansible/roles/heketi/tasks/main.yml
---
- name: Copy heketi topology file
  template:
    src: topology.json.j2
    dest: /etc/heketi/topology.json

- name: Set proper file ownership
  file:
    path: /etc/heketi/topology.json
    owner: heketi
    group: heketi
Create the playbook and inventory files.
$ vim ~/projects/ansible/heketi.yml
---
- name: Generate Heketi topology file and copy to Heketi Server
  hosts: gluster01
  become: yes
  become_method: sudo
  roles:
    - heketi

$ vim ~/projects/ansible/hosts
gluster01
This is what everything looks like:
$ cd ~/projects/ansible/
$ tree
.
├── heketi.yml
├── hosts
└── roles
    └── heketi
        ├── defaults
        │   └── main.yml
        ├── tasks
        │   └── main.yml
        └── templates
            └── topology.json.j2

5 directories, 5 files
Run the playbook:
$ cd ~/projects/ansible
$ ansible-playbook -i hosts --user myuser --ask-pass --ask-become-pass heketi.yml

# For key-based SSH and passwordless sudo/root, use:
$ ansible-playbook -i hosts --user myuser heketi.yml
Confirm the contents of the generated topology file.
$ cat /etc/heketi/topology.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "gluster01"
              ],
              "storage": [
                "10.10.1.168"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/vdc","/dev/vdd","/dev/vde"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "gluster02"
              ],
              "storage": [
                "10.10.1.179"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/vdc","/dev/vdd","/dev/vde"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "gluster03"
              ],
              "storage": [
                "10.10.1.64"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/vdc","/dev/vdd","/dev/vde"
          ]
        }
      ]
    }
  ]
}
Step 9: Load the Heketi topology file
If everything went fine, load the topology file.
# heketi-cli topology load --user admin --secret heketi_admin_secret --json=/etc/heketi/topology.json
In my setup, I will run:
# heketi-cli topology load --user admin --secret ivd7dfORN7QNeKVO --json=/etc/heketi/topology.json
Creating cluster ... ID: dda582cc3bd943421d57f4e78585a5a9
    Allowing file volumes on cluster.
    Allowing block volumes on cluster.
    Creating node gluster01 ... ID: 0c349dcaec068d7a78334deaef5cbb9a
        Adding device /dev/vdc ... OK
        Adding device /dev/vdd ... OK
        Adding device /dev/vde ... OK
    Creating node gluster02 ... ID: 48d7274f325f3d59a3a6df80771d5aed
        Adding device /dev/vdc ... OK
        Adding device /dev/vdd ... OK
        Adding device /dev/vde ... OK
    Creating node gluster03 ... ID: 4d6a24b992d5fe53ed78011e0ab76ead
        Adding device /dev/vdc ... OK
        Adding device /dev/vdd ... OK
        Adding device /dev/vde ... OK
Step 10: Confirm the GlusterFS/Heketi setup
Add the Heketi access credentials to your ~/.bashrc file.
$ vim ~/.bashrc
export HEKETI_CLI_SERVER=http://heketiserverip:8080
export HEKETI_CLI_USER=admin
export HEKETI_CLI_KEY="AdminPass"
Source the file.
source ~/.bashrc
After loading the topology file, run the command below to list your clusters.
# heketi-cli cluster list
Clusters:
Id:dda582cc3bd943421d57f4e78585a5a9 [file][block]
List the nodes available in the cluster:
# heketi-cli node list
Id:0c349dcaec068d7a78334deaef5cbb9a	Cluster:dda582cc3bd943421d57f4e78585a5a9
Id:48d7274f325f3d59a3a6df80771d5aed	Cluster:dda582cc3bd943421d57f4e78585a5a9
Id:4d6a24b992d5fe53ed78011e0ab76ead	Cluster:dda582cc3bd943421d57f4e78585a5a9
Execute the following command to check the details of a specific node:
# heketi-cli node info ID
# heketi-cli node info 0c349dcaec068d7a78334deaef5cbb9a
Node Id: 0c349dcaec068d7a78334deaef5cbb9a
State: online
Cluster Id: dda582cc3bd943421d57f4e78585a5a9
Zone: 1
Management Hostname: gluster01
Storage Hostname: 10.10.1.168
Devices:
Id:0f26bd867f2bd8bc126ff3193b3611dc   Name:/dev/vdd   State:online   Size (GiB):500   Used (GiB):0   Free (GiB):10   Bricks:0
Id:29c34e25bb30db68d70e5fd3afd795ec   Name:/dev/vdc   State:online   Size (GiB):500   Used (GiB):0   Free (GiB):10   Bricks:0
Id:feb55e58d07421c422a088576b42e5ff   Name:/dev/vde   State:online   Size (GiB):500   Used (GiB):0   Free (GiB):10   Bricks:0
Now let's create a Gluster volume to verify that Heketi and GlusterFS are working.
# heketi-cli volume create --size=1
Name: vol_7e071706e1c22052e5121c29966c3803
Size: 1
Volume Id: 7e071706e1c22052e5121c29966c3803
Cluster Id: dda582cc3bd943421d57f4e78585a5a9
Mount: 10.10.1.168:vol_7e071706e1c22052e5121c29966c3803
Mount Options: backup-volfile-servers=10.10.1.179,10.10.1.64
Block: false
Free Size: 0
Reserved Size: 0
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: replicate
Distribute Count: 1
Replica Count: 3

# heketi-cli volume list
Id:7e071706e1c22052e5121c29966c3803	Cluster:dda582cc3bd943421d57f4e78585a5a9	Name:vol_7e071706e1c22052e5121c29966c3803
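As an optional extra check, the new volume can be mounted from any client that has the GlusterFS fuse client installed, using the Mount target printed above. This sketch assumes the ssh executor is in use so the volume was actually provisioned on the nodes, and /mnt/gluster-test is an arbitrary mount point:

# Install the fuse client, create a mount point, and mount the volume
sudo yum -y install glusterfs-fuse
sudo mkdir -p /mnt/gluster-test
sudo mount -t glusterfs 10.10.1.168:vol_7e071706e1c22052e5121c29966c3803 /mnt/gluster-test
df -hT /mnt/gluster-test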
To view the topology, run:
heketi-cli topology info
The gluster command can also be used to check the servers in the cluster.
gluster pool list
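The output should list the two peers plus the local node as Connected, with UUIDs matching those reported by gluster peer status earlier, e.g. (the local node's UUID shown as a placeholder):

UUID					Hostname 	State
ebfdf84f-3d66-4f98-93df-a6442b5466ed	gluster02	Connected
98547ab1-9565-4f71-928c-8e4e13eb61c3	gluster03	Connected
<uuid-of-local-node>			localhost	Connected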