Deploying a Ceph cluster with cephadm

Introduction

Documentation:

https://access.redhat.com/documentation/zh-cn/red_hat_ceph_storage/5/html/architecture_guide/index

http://docs.ceph.org.cn/

Storage interfaces Ceph can provide:

Block storage: presents storage that behaves like an ordinary hard disk, giving the consumer a "disk".

File system storage: shared in a way similar to NFS, giving the consumer a shared folder.

Object storage: similar to a cloud drive such as Baidu Netdisk; it is accessed through a dedicated client or API.

Ceph is also a distributed storage system and is very flexible: to scale out, simply add more servers to the cluster. Ceph stores data as multiple replicas; in production every object should be kept in at least three copies, and three replicas is also Ceph's default.
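
Once the cluster built later in this guide is running, that replication default can be confirmed from any admin node. A minimal check, assuming the cephfs_data pool created further below:

# Default replica count applied to newly created pools
ceph config get mon osd_pool_default_size
# Replica count of an existing pool (cephfs_data is created later in this article)
ceph osd pool get cephfs_data size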

Ceph components

Ceph OSD daemons: Ceph OSDs store the data. They also use the CPU, memory, and network of the Ceph node to perform data replication, erasure coding, rebalancing, recovery, monitoring, and reporting. A storage node runs one OSD daemon per data disk it contributes to the cluster.

Ceph Mon (monitor): the monitors maintain the master copy of the Ceph cluster maps and the current state of the storage cluster. Monitors require high consistency and must agree on the state of the cluster. They maintain the maps that describe that state, including the monitor map, the OSD map, the placement group (PG) map, and the CRUSH map.

MDS: the Ceph Metadata Server stores metadata for the Ceph File System (CephFS).

RGW: the object storage gateway. It provides the API endpoint through which applications access Ceph object storage.
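
After the cluster below is deployed, each of these components can be matched against the daemons actually running; a brief sketch using commands that are introduced later in this guide:

ceph orch ls                     # one line per service type (mon, mgr, osd, mds, rgw, ...)
ceph orch ps --daemon-type osd   # one OSD daemon per data disk on each storage node
ceph osd tree                    # how OSDs map to hosts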

Installation

Configure IP addresses

# Configure IP addresses
ssh root@192.168.1.154 "nmcli con mod ens18 ipv4.addresses 192.168.1.25/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns 8.8.8.8; nmcli con up ens18"
ssh root@192.168.1.179 "nmcli con mod ens18 ipv4.addresses 192.168.1.26/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns 8.8.8.8; nmcli con up ens18"
ssh root@192.168.1.181 "nmcli con mod ens18 ipv4.addresses 192.168.1.27/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns 8.8.8.8; nmcli con up ens18"
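
Optionally, verify that the new addresses, gateway, and DNS took effect; a small check, assuming the same ens18 interface name on every node:

for ip in 192.168.1.25 192.168.1.26 192.168.1.27; do
  ssh root@$ip "hostname; ip -4 addr show ens18 | grep inet; ping -c1 -W2 8.8.8.8 >/dev/null && echo 'internet OK'"
done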

Configure the base environment

# Set the hostname (run the matching line on each node)
hostnamectl set-hostname ceph-1
hostnamectl set-hostname ceph-2
hostnamectl set-hostname ceph-3

# Update all packages
yum update -y


# Disable SELinux
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

# Disable the firewall
systemctl disable --now firewalld

# Set up passwordless SSH (run on ceph-1)
ssh-keygen -f /root/.ssh/id_rsa -P ''
ssh-copy-id -o StrictHostKeyChecking=no 192.168.1.25
ssh-copy-id -o StrictHostKeyChecking=no 192.168.1.26
ssh-copy-id -o StrictHostKeyChecking=no 192.168.1.27


# Check the disks
[root@ceph-1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 100G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 99G 0 part
├─cs-root 253:0 0 61.2G 0 lvm /
├─cs-swap 253:1 0 7.9G 0 lvm [SWAP]
└─cs-home 253:2 0 29.9G 0 lvm /home
sdb 8:16 0 100G 0 disk
[root@ceph-1 ~]#

# Configure /etc/hosts
cat > /etc/hosts <<EOF
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6


192.168.1.25 ceph-1
192.168.1.26 ceph-2
192.168.1.27 ceph-3
EOF
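
Before continuing, it is worth confirming that SELinux, the firewall, passwordless SSH, and the spare disk look right on all three nodes; a minimal check, assuming the hostnames above resolve via /etc/hosts:

for h in ceph-1 ceph-2 ceph-3; do
  echo "== $h =="
  ssh -o BatchMode=yes root@$h "getenforce; systemctl is-enabled firewalld; lsblk -dn -o NAME,SIZE /dev/sdb"
done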

Install time synchronization and Docker

# Install the required packages
yum install epel* -y
yum install -y ceph-mon ceph-osd ceph-mds ceph-radosgw

# NTP server (ceph-1)
yum install chrony -y
cat > /etc/chrony.conf << EOF
pool ntp.aliyun.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 192.168.1.0/24
local stratum 10
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF

systemctl restart chronyd ; systemctl enable chronyd

# NTP clients (ceph-2 and ceph-3)
yum install chrony -y
cat > /etc/chrony.conf << EOF
pool 192.168.1.25 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF

systemctl restart chronyd ; systemctl enable chronyd

# Verify from a client
chronyc sources -v

# Install Docker
curl -sSL https://get.daocloud.io/docker | sh
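
The Docker convenience script installs the engine but does not always start or enable the service, so it helps to enable it explicitly and confirm that both chrony and Docker are healthy on every node; a short check:

systemctl enable --now docker
docker --version
chronyc tracking | grep -E 'Reference ID|System time'   # NTP source and current offset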


Install the cluster


# Install the cluster (run on ceph-1)

yum install -y python3
# Download the cephadm tool
curl --silent --remote-name --location https://mirrors.chenby.cn/https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm
chmod +x cephadm

# Set up the repository
./cephadm add-repo --release 17.2.5
sed -i 's#download.ceph.com#mirrors.ustc.edu.cn/ceph#' /etc/yum.repos.d/ceph.repo
./cephadm install


# Bootstrap a new cluster
[root@ceph-1 ~]# cephadm bootstrap --mon-ip 192.168.1.25
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
docker (/usr/bin/docker) is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: 976e04fe-9315-11ed-a275-e29e49e9189c
Verifying IP 192.168.1.25 port 3300 ...
Verifying IP 192.168.1.25 port 6789 ...
Mon IP `192.168.1.25` is in CIDR network `192.168.1.0/24`
Mon IP `192.168.1.25` is in CIDR network `192.168.1.0/24`
Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image quay.io/ceph/ceph:v17...
Ceph version: ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network to 192.168.1.0/24
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 9283 ...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr not available, waiting (4/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 4...
mgr epoch 4 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host ceph-1...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for mgr epoch 8...
mgr epoch 8 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
Ceph Dashboard is now available at:

URL: https://ceph-1:8443/
User: admin
Password: dsvi6yiat7

Enabling client.admin keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/976e04fe-9315-11ed-a275-e29e49e9189c/config directory
Enabling autotune for osd_memory_target
You can access the Ceph CLI as following in case of multi-cluster or non-default config:

sudo /usr/sbin/cephadm shell --fsid 976e04fe-9315-11ed-a275-e29e49e9189c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Or, if you are only running a single cluster on this host:

sudo /usr/sbin/cephadm shell

Please consider enabling telemetry to help improve Ceph:

ceph telemetry on

For more information see:

https://docs.ceph.com/docs/master/mgr/telemetry/

Bootstrap complete.
[root@ceph-1 ~]#
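
Right after bootstrap, cephadm's own inventory shows which daemons it deployed on this host, and the containerized CLI can already be queried even though ceph-common is not installed yet; a quick check:

cephadm ls                       # daemons deployed on this host, with their state and version
cephadm shell -- ceph health     # HEALTH_WARN is expected until OSDs are added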

View the containers


[root@ceph-1 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.io/ceph/ceph v17 cc65afd6173a 2 months ago 1.36GB
quay.io/ceph/ceph-grafana 8.3.5 dad864ee21e9 9 months ago 558MB
quay.io/prometheus/prometheus v2.33.4 514e6a882f6e 10 months ago 204MB
quay.io/prometheus/node-exporter v1.3.1 1dbe0e931976 13 months ago 20.9MB
quay.io/prometheus/alertmanager v0.23.0 ba2b418f427c 16 months ago 57.5MB
[root@ceph-1 ~]#


[root@ceph-1 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
41a980ad57b6 quay.io/ceph/ceph-grafana:8.3.5 "/bin/sh -c 'grafana…" 32 seconds ago Up 31 seconds ceph-976e04fe-9315-11ed-a275-e29e49e9189c-grafana-ceph-1
c1d92377e2f2 quay.io/prometheus/alertmanager:v0.23.0 "/bin/alertmanager -…" 33 seconds ago Up 32 seconds ceph-976e04fe-9315-11ed-a275-e29e49e9189c-alertmanager-ceph-1
9262faff37be quay.io/prometheus/prometheus:v2.33.4 "/bin/prometheus --c…" 42 seconds ago Up 41 seconds ceph-976e04fe-9315-11ed-a275-e29e49e9189c-prometheus-ceph-1
2601411f95a6 quay.io/prometheus/node-exporter:v1.3.1 "/bin/node_exporter …" About a minute ago Up About a minute ceph-976e04fe-9315-11ed-a275-e29e49e9189c-node-exporter-ceph-1
a6ca018a7620 quay.io/ceph/ceph "/usr/bin/ceph-crash…" 2 minutes ago Up 2 minutes ceph-976e04fe-9315-11ed-a275-e29e49e9189c-crash-ceph-1
f9e9de110612 quay.io/ceph/ceph:v17 "/usr/bin/ceph-mgr -…" 3 minutes ago Up 3 minutes ceph-976e04fe-9315-11ed-a275-e29e49e9189c-mgr-ceph-1-svfnsm
cac707c88b83 quay.io/ceph/ceph:v17 "/usr/bin/ceph-mon -…" 3 minutes ago Up 3 minutes ceph-976e04fe-9315-11ed-a275-e29e49e9189c-mon-ceph-1
[root@ceph-1 ~]#
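
Each of these containers is also wrapped in a systemd unit named after the cluster fsid, so the daemons can be inspected and restarted with systemctl as well; a short sketch using the fsid from the bootstrap above:

systemctl list-units 'ceph-976e04fe*'
systemctl status ceph-976e04fe-9315-11ed-a275-e29e49e9189c@mon.ceph-1.service
journalctl -u ceph-976e04fe-9315-11ed-a275-e29e49e9189c@mon.ceph-1.service -n 20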

Use the cephadm shell


[root@ceph-1 ~]# cephadm shell # enter the containerized ceph CLI environment
Inferring fsid 976e04fe-9315-11ed-a275-e29e49e9189c
Inferring config /var/lib/ceph/976e04fe-9315-11ed-a275-e29e49e9189c/mon.ceph-1/config
Using ceph image with id 'cc65afd6173a' and tag 'v17' created on 2022-10-18 07:41:41 +0800 CST
quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45
[ceph: root@ceph-1 /]#
[ceph: root@ceph-1 /]#
[ceph: root@ceph-1 /]#
[ceph: root@ceph-1 /]# ceph -s
cluster:
id: 976e04fe-9315-11ed-a275-e29e49e9189c
health: HEALTH_WARN
OSD count 0 < osd_pool_default_size 3

services:
mon: 1 daemons, quorum ceph-1 (age 4m)
mgr: ceph-1.svfnsm(active, since 2m)
osd: 0 osds: 0 up, 0 in

data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:

[ceph: root@ceph-1 /]#

[ceph: root@ceph-1 /]# ceph orch ps # list the components currently running in the cluster (including other nodes)
NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
alertmanager.ceph-1 ceph-1 *:9093,9094 running (2m) 2m ago 4m 15.1M - ba2b418f427c c1d92377e2f2
crash.ceph-1 ceph-1 running (4m) 2m ago 4m 6676k - 17.2.5 cc65afd6173a a6ca018a7620
grafana.ceph-1 ceph-1 *:3000 running (2m) 2m ago 3m 39.1M - 8.3.5 dad864ee21e9 41a980ad57b6
mgr.ceph-1.svfnsm ceph-1 *:9283 running (5m) 2m ago 5m 426M - 17.2.5 cc65afd6173a f9e9de110612
mon.ceph-1 ceph-1 running (5m) 2m ago 5m 29.0M 2048M 17.2.5 cc65afd6173a cac707c88b83
node-exporter.ceph-1 ceph-1 *:9100 running (3m) 2m ago 3m 13.2M - 1dbe0e931976 2601411f95a6
prometheus.ceph-1 ceph-1 *:9095 running (3m) 2m ago 3m 34.4M - 514e6a882f6e 9262faff37be
[ceph: root@ceph-1 /]#
[ceph: root@ceph-1 /]#
[ceph: root@ceph-1 /]# ceph orch ps --daemon-type mon # check the status of one component type
NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
mon.ceph-1 ceph-1 running (5m) 2m ago 5m 29.0M 2048M 17.2.5 cc65afd6173a cac707c88b83
[ceph: root@ceph-1 /]#
[ceph: root@ceph-1 /]# exit # leave the cephadm shell
exit
[root@ceph-1 ~]#


# A second way to run ceph commands
[root@ceph-1 ~]# cephadm shell -- ceph -s
Inferring fsid 976e04fe-9315-11ed-a275-e29e49e9189c
Inferring config /var/lib/ceph/976e04fe-9315-11ed-a275-e29e49e9189c/mon.ceph-1/config
Using ceph image with id 'cc65afd6173a' and tag 'v17' created on 2022-10-18 07:41:41 +0800 CST
quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45
cluster:
id: 976e04fe-9315-11ed-a275-e29e49e9189c
health: HEALTH_WARN
OSD count 0 < osd_pool_default_size 3

services:
mon: 1 daemons, quorum ceph-1 (age 6m)
mgr: ceph-1.svfnsm(active, since 4m)
osd: 0 osds: 0 up, 0 in

data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:

[root@ceph-1 ~]#
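
The cephadm shell -- <command> form works for any one-off command, not only ceph -s, and is handy until ceph-common is installed natively in the next step; a few examples:

cephadm shell -- ceph health detail
cephadm shell -- ceph orch ls
cephadm shell -- rados df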

Install the ceph-common package

# Install the ceph-common package
[root@ceph-1 ~]# cephadm install ceph-common
Installing packages ['ceph-common']...
[root@ceph-1 ~]#

[root@ceph-1 ~]# ceph -v
ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)
[root@ceph-1 ~]#

# Distribute the cluster's SSH public key to the other nodes
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-2
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-3
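
With ceph-common installed, the native ceph CLI should talk to the cluster directly and report cephadm as the orchestrator backend; a quick check:

ceph health
ceph orch status    # expected: Backend: cephadm, Available: Yes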

Create the mon and mgr daemons


# Create mon and mgr
ceph orch host add ceph-2
ceph orch host add ceph-3

# List the hosts currently managed by the cluster
[root@ceph-1 ~]# ceph orch host ls
HOST ADDR LABELS STATUS
ceph-1 192.168.1.25 _admin
ceph-2 192.168.1.26
ceph-3 192.168.1.27
3 hosts in cluster
[root@ceph-1 ~]#

# By default a Ceph cluster allows up to 5 mons and 2 mgrs; the placement can be changed manually with: ceph orch apply mon --placement="3 node1 node2 node3"

[root@ceph-1 ~]# ceph orch apply mon --placement="3 ceph-1 ceph-2 ceph-3"
Scheduled mon update...
[root@ceph-1 ~]#
[root@ceph-1 ~]# ceph orch apply mgr --placement="3 ceph-1 ceph-2 ceph-3"
Scheduled mgr update...
[root@ceph-1 ~]#

[root@ceph-1 ~]# ceph orch ls
NAME PORTS RUNNING REFRESHED AGE PLACEMENT
alertmanager ?:9093,9094 1/1 30s ago 17m count:1
crash 3/3 4m ago 17m *
grafana ?:3000 1/1 30s ago 17m count:1
mgr 3/3 4m ago 46s ceph-1;ceph-2;ceph-3;count:3
mon 3/3 4m ago 118s ceph-1;ceph-2;ceph-3;count:3
node-exporter ?:9100 3/3 4m ago 17m *
prometheus ?:9095 1/1 30s ago 17m count:1
[root@ceph-1 ~]#
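
After the placement change, confirm that all three monitors formed quorum and that one mgr is active with two standbys; a short verification:

ceph mon stat                    # e.g. 3 mons, quorum ceph-1,ceph-2,ceph-3
ceph quorum_status -f json-pretty | grep -A4 quorum_names
ceph -s | grep -E 'mon:|mgr:'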

Create the OSDs

# Create the OSDs

[root@ceph-1 ~]# ceph orch daemon add osd ceph-1:/dev/sdb
Created osd(s) 0 on host 'ceph-1'
[root@ceph-1 ~]# ceph orch daemon add osd ceph-2:/dev/sdb
Created osd(s) 1 on host 'ceph-2'
[root@ceph-1 ~]# ceph orch daemon add osd ceph-3:/dev/sdb
Created osd(s) 2 on host 'ceph-3'
[root@ceph-1 ~]#
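
Adding disks one by one is the explicit route; cephadm can also consume every eligible unused device automatically. A sketch of the alternative and of the follow-up checks, assuming every blank disk should become an OSD:

ceph orch device ls                           # devices cephadm considers available
# Alternative to the per-disk commands above:
# ceph orch apply osd --all-available-devices
ceph osd tree                                 # expect 3 OSDs, one per host, all up
ceph df                                       # raw capacity should reflect the new disks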

Create the MDS daemons

# Create the MDS daemons

# First create the pools for CephFS; if pg_num is not specified it is tuned automatically by the autoscaler
[root@ceph-1 ~]# ceph osd pool create cephfs_data
pool 'cephfs_data' created
[root@ceph-1 ~]# ceph osd pool create cephfs_metadata
pool 'cephfs_metadata' created
[root@ceph-1 ~]# ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 3 and data pool 2
[root@ceph-1 ~]#

# Deploy the MDS service. cephfs is the file system name; --placement specifies how many MDS daemons the cluster needs, followed by the host names
[root@ceph-1 ~]# ceph orch apply mds cephfs --placement="3 ceph-1 ceph-2 ceph-3"
Scheduled mds.cephfs update...
[root@ceph-1 ~]#

# Check that the MDS containers have started on each node; ceph orch ps can also list the containers running on a given node
[root@ceph-1 ~]# ceph orch ps --daemon-type mds
NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
mds.cephfs.ceph-1.zgcgrw ceph-1 running (52s) 44s ago 52s 17.0M - 17.2.5 cc65afd6173a aba28ef97b9a
mds.cephfs.ceph-2.vvpuyk ceph-2 running (51s) 45s ago 51s 14.1M - 17.2.5 cc65afd6173a 940a019d4c75
mds.cephfs.ceph-3.afnozf ceph-3 running (54s) 45s ago 54s 14.2M - 17.2.5 cc65afd6173a bd17d6414aa9
[root@ceph-1 ~]#
[root@ceph-1 ~]#
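
To actually use the new file system, a client needs a CephX identity and a mount. A minimal sketch using the kernel client from one of the nodes, where client.cephfs-demo is a hypothetical client name chosen for this example:

# Authorize a client for read/write access to the root of cephfs and save its keyring
ceph fs authorize cephfs client.cephfs-demo / rw | tee /etc/ceph/ceph.client.cephfs-demo.keyring

# Mount with the kernel client (any monitor address works)
mkdir -p /mnt/cephfs
mount -t ceph 192.168.1.25:6789:/ /mnt/cephfs -o name=cephfs-demo,secret=$(ceph auth get-key client.cephfs-demo)
df -h /mnt/cephfs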

Create the RGW daemons


# Create the RGW daemons

# First create a realm
[root@ceph-1 ~]# radosgw-admin realm create --rgw-realm=myorg --default
{
"id": "a6607d08-ac44-45f0-95b0-5435acddfba2",
"name": "myorg",
"current_period": "16769237-0ed5-4fad-8822-abc444292d0b",
"epoch": 1
}
[root@ceph-1 ~]#

# Create a zonegroup
[root@ceph-1 ~]# radosgw-admin zonegroup create --rgw-zonegroup=default --master --default
{
"id": "4d978fe1-b158-4b3a-93f7-87fbb31f6e7a",
"name": "default",
"api_name": "default",
"is_master": "true",
"endpoints": [],
"hostnames": [],
"hostnames_s3website": [],
"master_zone": "",
"zones": [],
"placement_targets": [],
"default_placement": "",
"realm_id": "a6607d08-ac44-45f0-95b0-5435acddfba2",
"sync_policy": {
"groups": []
}
}
[root@ceph-1 ~]#

# Create a zone
[root@ceph-1 ~]# radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=cn-east-1 --master --default
{
"id": "5ac7f118-a69c-4dec-b174-f8432e7115b7",
"name": "cn-east-1",
"domain_root": "cn-east-1.rgw.meta:root",
"control_pool": "cn-east-1.rgw.control",
"gc_pool": "cn-east-1.rgw.log:gc",
"lc_pool": "cn-east-1.rgw.log:lc",
"log_pool": "cn-east-1.rgw.log",
"intent_log_pool": "cn-east-1.rgw.log:intent",
"usage_log_pool": "cn-east-1.rgw.log:usage",
"roles_pool": "cn-east-1.rgw.meta:roles",
"reshard_pool": "cn-east-1.rgw.log:reshard",
"user_keys_pool": "cn-east-1.rgw.meta:users.keys",
"user_email_pool": "cn-east-1.rgw.meta:users.email",
"user_swift_pool": "cn-east-1.rgw.meta:users.swift",
"user_uid_pool": "cn-east-1.rgw.meta:users.uid",
"otp_pool": "cn-east-1.rgw.otp",
"system_key": {
"access_key": "",
"secret_key": ""
},
"placement_pools": [
{
"key": "default-placement",
"val": {
"index_pool": "cn-east-1.rgw.buckets.index",
"storage_classes": {
"STANDARD": {
"data_pool": "cn-east-1.rgw.buckets.data"
}
},
"data_extra_pool": "cn-east-1.rgw.buckets.non-ec",
"index_type": 0
}
}
],
"realm_id": "a6607d08-ac44-45f0-95b0-5435acddfba2",
"notif_pool": "cn-east-1.rgw.log:notif"
}
[root@ceph-1 ~]#

# Deploy the radosgw daemons for the given realm and zone
[root@ceph-1 ~]# ceph orch apply rgw myorg cn-east-1 --placement="3 ceph-1 ceph-2 ceph-3"
Scheduled rgw.myorg update...
[root@ceph-1 ~]#

# Verify that the RGW containers have started on each node
[root@ceph-1 ~]# ceph orch ps --daemon-type rgw
NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
rgw.myorg.ceph-1.tzzauo ceph-1 *:80 running (60s) 50s ago 60s 18.6M - 17.2.5 cc65afd6173a 2ce31e5c9d35
rgw.myorg.ceph-2.zxwpfj ceph-2 *:80 running (61s) 51s ago 61s 20.0M - 17.2.5 cc65afd6173a a334e346ae5c
rgw.myorg.ceph-3.bvsydw ceph-3 *:80 running (58s) 51s ago 58s 18.6M - 17.2.5 cc65afd6173a 97b09ba01821
[root@ceph-1 ~]#
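
To exercise the gateway, create an S3 user and hit the endpoint on port 80; a minimal sketch, where test-user is a hypothetical account name:

# Create an S3 user; note the access_key and secret_key in the output
radosgw-admin user create --uid=test-user --display-name="Test User"

# An anonymous request should return a ListAllMyBucketsResult XML document
curl http://ceph-1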

Install the ceph-common package on all nodes


# Install ceph-common on all nodes
scp /etc/yum.repos.d/ceph.repo ceph-2:/etc/yum.repos.d/ # sync the ceph repo from the primary node to the other nodes
scp /etc/yum.repos.d/ceph.repo ceph-3:/etc/yum.repos.d/ # sync the ceph repo from the primary node to the other nodes
yum -y install ceph-common # run on each node; ceph-common provides the ceph command and creates the /etc/ceph directory
scp /etc/ceph/ceph.conf ceph-2:/etc/ceph/ # copy ceph.conf to each node
scp /etc/ceph/ceph.conf ceph-3:/etc/ceph/ # copy ceph.conf to each node
scp /etc/ceph/ceph.client.admin.keyring ceph-2:/etc/ceph/ # copy the admin keyring to each node
scp /etc/ceph/ceph.client.admin.keyring ceph-3:/etc/ceph/ # copy the admin keyring to each node
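
A quick loop confirms that ceph-2 and ceph-3 can now query the cluster with their local config and keyring:

for h in ceph-2 ceph-3; do
  ssh root@$h "hostname; ceph health"
done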

Test

# Test (here run from ceph-3)
[root@ceph-3 ~]# ceph -s
cluster:
id: 976e04fe-9315-11ed-a275-e29e49e9189c
health: HEALTH_OK

services:
mon: 3 daemons, quorum ceph-1,ceph-2,ceph-3 (age 17m)
mgr: ceph-1.svfnsm(active, since 27m), standbys: ceph-2.zuetkd, ceph-3.vntnlf
mds: 1/1 daemons up, 2 standby
osd: 3 osds: 3 up (since 8m), 3 in (since 8m)
rgw: 3 daemons active (3 hosts, 1 zones)

data:
volumes: 1/1 healthy
pools: 7 pools, 177 pgs
objects: 226 objects, 585 KiB
usage: 108 MiB used, 300 GiB / 300 GiB avail
pgs: 177 active+clean

[root@ceph-3 ~]#
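
Beyond ceph -s, a short RADOS benchmark is an easy way to confirm that the OSDs accept I/O; a throwaway sketch using a temporary pool named bench-test (deleting pools requires mon_allow_pool_delete to be enabled):

ceph osd pool create bench-test
rados bench -p bench-test 10 write --no-cleanup   # 10-second write test
rados bench -p bench-test 10 rand                 # 10-second random-read test
rados -p bench-test cleanup
ceph config set mon mon_allow_pool_delete true
ceph osd pool rm bench-test bench-test --yes-i-really-really-mean-it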

Access the web interfaces

# Web access
https://192.168.1.25:8443   # Ceph Dashboard
http://192.168.1.25:9095/   # Prometheus
https://192.168.1.25:3000/  # Grafana

User: admin
Password: dsvi6yiat7
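
The generated dashboard password can be replaced after the first login, or from the CLI; a small sketch with a placeholder password:

echo 'MyNewP@ssw0rd' > /tmp/dashboard-pass
ceph dashboard ac-user-set-password admin -i /tmp/dashboard-pass
rm -f /tmp/dashboard-pass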

Common commands

ceph orch ls    # list the services running in the cluster
ceph orch host ls # list the hosts in the cluster
ceph orch ps # list detailed information about the containers in the cluster
ceph orch apply mon --placement="3 node1 node2 node3" # adjust the number of daemons for a service
ceph orch ps --daemon-type rgw # --daemon-type: limit the listing to one daemon type
ceph orch host label add node1 mon # add a label to a host
ceph orch apply mon label:mon # tell cephadm to place mons by label; afterwards only hosts labeled mon become mons, although mons that are already running are not stopped immediately
ceph orch device ls # list the storage devices in the cluster
# For example, to deploy a second monitor on newhost1 at IP address 10.1.2.123 and a third monitor on newhost2 in the 10.1.2.0/24 network:
ceph orch apply mon --unmanaged # disable automatic mon deployment
ceph orch daemon add mon newhost1:10.1.2.123
ceph orch daemon add mon newhost2:10.1.2.0/24

About

https://www.oiox.cn/

https://www.oiox.cn/index.php/start-page.html

CSDN, GitHub, 51CTO, Zhihu, OSChina, SegmentFault, Juejin, Jianshu, Huawei Cloud, Alibaba Cloud, Tencent Cloud, Bilibili, Toutiao, Sina Weibo, personal blog

Search for "小陈运维" anywhere on the web.

Articles are published mainly on the WeChat official account.