Install a Single-Node TiDB Database Cluster on CentOS 8

Published: 2020-02-23 14:31:16  Source: igfitidea

This guide explains how to install a single-node TiDB database cluster on a CentOS 8 Linux server. TiDB is a MySQL-compatible, open-source NewSQL database that supports Hybrid Transactional and Analytical Processing (HTAP) workloads. Its key features are high availability, horizontal scalability, and strong consistency. The database solution covers OLTP (Online Transactional Processing), OLAP (Online Analytical Processing), and HTAP services.

This setup runs on a single node instance and is suitable for lab and development environments. Do not follow this guide for production environments, which require a highly available cluster with at least three machines. For production setup requirements and recommendations, consult the official TiDB documentation pages. Check the release notes to learn about all new software features.


This setup was performed on a server with the following hardware and software requirements:

Operating system: CentOS 8 (64-bit)
Memory: 16 GB
CPU: 8 cores or more
Disk space: 50 GB or more
root SSH access to the server
Internet access on the server
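A quick way to compare a machine against these minimums is with standard coreutils/procps commands; this is only an informational pre-flight check, not part of the TiDB tooling:

```shell
# Informational pre-flight check against the stated minimums.
echo "CPU cores : $(nproc)"                                # want 8+
echo "Memory GB : $(free -g | awk '/^Mem:/{print $2}')"    # want 16+
echo "Disk free : $(df -h --output=avail / | tail -n 1)"   # want 50G+
```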

These minimums may not be sufficient if you run other components such as PD, TiKV, TiFlash, TiCDC, and Monitor heavily on the same host. Pay close attention to the recommendations in the documentation before committing to a specific component.

Update the server

Before starting the TiDB database installation on CentOS 8, log in to the machine and perform a system update.

sudo dnf -y update

Reboot the system after the upgrade.

sudo systemctl reboot

Disable system swap and firewalld

TiDB needs sufficient memory for its operations, so swapping is not recommended. It is therefore advisable to disable system swap permanently.

echo "vm.swappiness = 0" | sudo tee -a /etc/sysctl.conf
sudo swapoff -a && sudo swapon -a
sudo sysctl -p
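You can confirm the change took effect; after swapoff, the Swap line reported by free should show 0B in use:

```shell
# Verify the result: swappiness should be 0 and swap usage should be 0B
cat /proc/sys/vm/swappiness
free -h | grep -i '^swap'
```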

In a TiDB cluster, the access ports between nodes must be open so that information such as read/write requests and data heartbeats can be transmitted. For this lab setup, I recommend disabling firewalld.

sudo firewall-cmd --state
sudo systemctl status firewalld.service
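The commands above only report firewalld's state; to actually stop and disable it for this lab, the standard systemctl commands apply (skip this if you prefer to keep the firewall and open ports instead):

```shell
# Stop firewalld for the current session and keep it off across reboots
sudo systemctl stop firewalld.service
sudo systemctl disable firewalld.service
sudo systemctl status firewalld.service   # should now report inactive (dead)
```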

If you would rather open the ports in the firewall, consult the "Network Port Requirements" documentation.
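For reference, a sketch of opening the ports this particular topology uses; the port list below is taken from the cluster display output later in this guide, so verify it against the official Network Port Requirements document before relying on it:

```shell
# Open the TCP ports used by this single-node topology
# (TiDB, PD, TiKV, TiFlash, Prometheus, Grafana, exporters).
for port in 4000 10080 2379 2380 20150 20151 20152 20160 20161 20162 \
            9000 8123 3930 20292 8234 9090 3000 9100 9115; do
  sudo firewall-cmd --permanent --add-port=${port}/tcp
done
sudo firewall-cmd --reload
```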

Download and install TiUP

Next, download the TiUP installer script to the CentOS 8 machine.

curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh -o tiup_installer.sh

Make the script executable.

chmod +x tiup_installer.sh

Make sure the tar package is installed.

sudo yum -y install tar

Run the script to start the installation.

sudo ./tiup_installer.sh

Execution output:

WARN: adding root certificate via internet: https://tiup-mirrors.pingcap.com/root.json
You can revoke this by remove /root/.tiup/bin/7b8e153f2e2d0928.root.json
Set mirror to https://tiup-mirrors.pingcap.com success
Detected shell: bash
Shell profile:  /root/.bash_profile
/root/.bash_profile has been modified to add tiup to PATH
open a new terminal or source /root/.bash_profile to use it
Installed path: /root/.tiup/bin/tiup
===============================================
Have a try:     tiup playground
===============================================

Source the updated bash profile.

source /root/.bash_profile
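You can then verify that tiup is on the PATH and check which version was installed:

```shell
# Confirm the tiup binary is reachable and report its version
which tiup
tiup --version
```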

The next step is to install TiUP's cluster component:

# tiup cluster
The component `cluster` is not installed; downloading from repository.
download https://tiup-mirrors.pingcap.com/cluster-v1.1.2-linux-amd64.tar.gz 9.87 MiB/9.87 MiB 100.00% 9.28 MiB p/s
Starting component `cluster`:
Deploy a TiDB cluster for production

If the TiUP cluster component is already installed on the machine, update it to the latest version:

# tiup update --self && tiup update cluster
download https://tiup-mirrors.pingcap.com/tiup-v1.1.2-linux-amd64.tar.gz 4.32 MiB/4.32 MiB 100.00% 4.91 MiB p/s
Updated successfully!
component cluster version v1.1.2 is already installed
Updated successfully!

Create and start a local TiDB cluster

Since TiUP needs to simulate a deployment across multiple machines, it is recommended to increase the connection limit of the sshd service.

# vi /etc/ssh/sshd_config
MaxSessions 30
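If you prefer not to edit the file interactively, the same change can be scripted; this assumes the stock commented-out #MaxSessions 10 line that ships in a default sshd_config:

```shell
# Replace "#MaxSessions 10" (or an existing MaxSessions line) with MaxSessions 30.
# Uses GNU sed; a backup of the original file is kept as sshd_config.bak.
sudo sed -i.bak 's/^#\?MaxSessions.*/MaxSessions 30/' /etc/ssh/sshd_config
grep '^MaxSessions' /etc/ssh/sshd_config
```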

Restart the sshd service after making the change.

sudo systemctl restart sshd

Create a topology configuration file named tidb-topology.yaml.

cat >tidb-topology.yaml<<EOF
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
 user: "tidb"
 ssh_port: 22
 deploy_dir: "/tidb-deploy"
 data_dir: "/tidb-data"

# # Monitored variables are applied to all the machines.
monitored:
 node_exporter_port: 9100
 blackbox_exporter_port: 9115

server_configs:
 tidb:
   log.slow-threshold: 300
 tikv:
   readpool.storage.use-unified-pool: false
   readpool.coprocessor.use-unified-pool: true
 pd:
   replication.enable-placement-rules: true
 tiflash:
   logger.level: "info"

pd_servers:
 - host: 127.0.0.1 # Replace with the server IP address you want to use

tidb_servers:
 - host: 127.0.0.1 # Replace with the server IP address you want to use

tikv_servers:
 - host: 127.0.0.1 # Replace with the server IP address you want to use
   port: 20150
   status_port: 20160

 - host: 127.0.0.1 # Replace with the server IP address you want to use
   port: 20151
   status_port: 20161

 - host: 127.0.0.1 # Replace with the server IP address you want to use
   port: 20152
   status_port: 20162

tiflash_servers:
 - host: 127.0.0.1 # Replace with the server IP address you want to use

monitoring_servers:
 - host: 127.0.0.1 # Replace with the server IP address you want to use

grafana_servers:
 - host: 127.0.0.1 # Replace with the server IP address you want to use
EOF

Where:
user: "tidb": the cluster's internal management is performed as the tidb system user, which is created automatically during deployment. Port 22 is used by default to log in to the target machine via SSH.
replication.enable-placement-rules: this PD parameter is set to ensure that TiFlash runs normally.
host: the IP address of the target machine.

Run the cluster deployment command:

tiup cluster deploy <cluster-name> <tidb-version> ./tidb-topology.yaml --user root -p

Replace <cluster-name> with the name you want to give the cluster, and <tidb-version> with the TiDB cluster version. Get all supported TiDB versions with:

# tiup list tidb

Then use the latest version returned by the command above:

# tiup cluster deploy local-tidb  v4.0.6 ./tidb-topology.yaml --user root -p

Press the y key and provide the root user's password to complete the deployment:

Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]:  y
Input SSH password:
+ Generate SSH keys ... Done
+ Download TiDB components
......

You should see the TiDB components being downloaded.

Input SSH password:
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:v4.0.6 (linux/amd64) ... Done
  - Download tikv:v4.0.6 (linux/amd64) ... Done
  - Download tidb:v4.0.6 (linux/amd64) ... Done
  - Download tiflash:v4.0.6 (linux/amd64) ... Done
  - Download prometheus:v4.0.6 (linux/amd64) ... Done
  - Download grafana:v4.0.6 (linux/amd64) ... Done
  - Download node_exporter:v0.17.0 (linux/amd64) ... Done
  - Download blackbox_exporter:v0.12.0 (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 127.0.0.1:22 ... Done
+ Copy files
  - Copy pd -> 127.0.0.1 ... Done
  - Copy tikv -> 127.0.0.1 ... Done
  - Copy tikv -> 127.0.0.1 ... Done
  - Copy tikv -> 127.0.0.1 ... Done
  - Copy tidb -> 127.0.0.1 ... Done
  - Copy tiflash -> 127.0.0.1 ... Done
  - Copy prometheus -> 127.0.0.1 ... Done
  - Copy grafana -> 127.0.0.1 ... Done
  - Copy node_exporter -> 127.0.0.1 ... Done
  - Copy blackbox_exporter -> 127.0.0.1 ... Done
+ Check status
Deployed cluster `local-tidb` successfully, you can start the cluster via `tiup cluster start local-tidb`

Start the cluster:

# tiup cluster start local-tidb

Sample output:

....
Starting component pd
	Starting instance pd 127.0.0.1:2379
	Start pd 127.0.0.1:2379 success
Starting component node_exporter
	Starting instance 127.0.0.1
	Start 127.0.0.1 success
Starting component blackbox_exporter
	Starting instance 127.0.0.1
	Start 127.0.0.1 success
Starting component tikv
	Starting instance tikv 127.0.0.1:20152
	Starting instance tikv 127.0.0.1:20150
	Starting instance tikv 127.0.0.1:20151
	Start tikv 127.0.0.1:20151 success
	Start tikv 127.0.0.1:20152 success
	Start tikv 127.0.0.1:20150 success
Starting component tidb
	Starting instance tidb 127.0.0.1:4000
	Start tidb 127.0.0.1:4000 success
....

Access the TiDB cluster

To view the list of currently deployed clusters:

# tiup cluster list
Starting component `cluster`: /root/.tiup/components/cluster/v1.1.2/tiup-cluster list
Name        User  Version  Path                                             PrivateKey
----        ----  -------  ----                                             ---------
local-tidb  tidb  v4.0.6   /root/.tiup/storage/cluster/clusters/local-tidb  /root/.tiup/storage/cluster/clusters/local-tidb/ssh/id_rsa

To view the cluster topology and status:

# tiup cluster display local-tidb
Starting component `cluster`: /root/.tiup/components/cluster/v1.1.2/tiup-cluster display local-tidb
tidb Cluster: local-tidb
tidb Version: v4.0.6
ID               Role        Host       Ports                            OS/Arch       Status    Data Dir                    Deploy Dir
--               ----        ----       -----                            -------       ------    --------                    ---------
127.0.0.1:3000   grafana     127.0.0.1  3000                             linux/x86_64  inactive  -                           /tidb-deploy/grafana-3000
127.0.0.1:2379   pd          127.0.0.1  2379/2380                        linux/x86_64  Up|L|UI   /tidb-data/pd-2379          /tidb-deploy/pd-2379
127.0.0.1:9090   prometheus  127.0.0.1  9090                             linux/x86_64  inactive  /tidb-data/prometheus-9090  /tidb-deploy/prometheus-9090
127.0.0.1:4000   tidb        127.0.0.1  4000/10080                       linux/x86_64  Up        -                           /tidb-deploy/tidb-4000
127.0.0.1:9000   tiflash     127.0.0.1  9000/8123/3930/20160/20292/8234  linux/x86_64  N/A       /tidb-data/tiflash-9000     /tidb-deploy/tiflash-9000
127.0.0.1:20150  tikv        127.0.0.1  20150/20160                      linux/x86_64  Up        /tidb-data/tikv-20150       /tidb-deploy/tikv-20150
127.0.0.1:20151  tikv        127.0.0.1  20151/20161                      linux/x86_64  Up        /tidb-data/tikv-20151       /tidb-deploy/tikv-20151
127.0.0.1:20152  tikv        127.0.0.1  20152/20162                      linux/x86_64  Up        /tidb-data/tikv-20152       /tidb-deploy/tikv-20152

Once the cluster is started, you can access it with the mysql command-line client.

# yum install mariadb -y
# mysql -h 127.0.0.1 -P 4000 -u root
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.25-TiDB-v4.0.6 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> SELECT VERSION();
+--------------------+
| VERSION()          |
+--------------------+
| 5.7.25-TiDB-v4.0.6 |
+--------------------+
1 row in set (0.001 sec)

MySQL [(none)]> EXIT

Dashboard access: the Grafana monitoring dashboard is available at http://{grafana-ip}:3000; the default username and password are both "admin". The TiDB Dashboard is available at http://{pd-ip}:2379/dashboard; the default username is "root" with an empty password.