Object Storage Scale-Out

This example creates 12 OSDs (one per data disk) and 12 RGW instances on each newly added node.

Adding Monitors on the New Nodes

To add Monitors, edit "/etc/ceph/ceph.conf" on ceph1, adding ceph4 and ceph5 to "mon_initial_members" and their IP addresses to "mon_host".
- Edit ceph.conf.

cd /etc/ceph/
vim ceph.conf
- Change "mon_initial_members=ceph1,ceph2,ceph3" to "mon_initial_members=ceph1,ceph2,ceph3,ceph4,ceph5".
- Change "mon_host=192.168.3.156,192.168.3.157,192.168.3.158" to "mon_host=192.168.3.156,192.168.3.157,192.168.3.158,192.168.3.197,192.168.3.198".
- Push ceph.conf from ceph1 to every node.

ceph-deploy --overwrite-conf admin ceph1 ceph2 ceph3 ceph4 ceph5
- Create the Monitors from ceph1.

ceph-deploy mon create ceph4 ceph5
- Check the Monitor status.

ceph mon stat

If the new machines appear in the "mon stat" output, the Monitors on the added servers were created successfully.
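The check above can also be scripted. A minimal sketch, assuming a quorum of five monitors; the sample output string below is illustrative only, and on a live cluster you would capture the real output with `STAT=$(ceph mon stat)` instead:

```shell
# Illustrative "ceph mon stat" output; on a live cluster use:
#   STAT=$(ceph mon stat)
STAT='e5: 5 mons at {ceph1=192.168.3.156:6789/0,ceph2=192.168.3.157:6789/0,ceph3=192.168.3.158:6789/0,ceph4=192.168.3.197:6789/0,ceph5=192.168.3.198:6789/0}'

# The expansion succeeded only if both new hosts appear as monitors.
for m in ceph4 ceph5; do
  if printf '%s' "$STAT" | grep -q "$m"; then
    echo "$m: monitor present"
  else
    echo "$m: monitor missing"
  fi
done
```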
(Optional) Removing Monitors

Removing a Monitor has a significant impact on the cluster and should be planned in advance; skip this step if it is not needed.
Taking the removal of Monitors ceph2 and ceph3 as an example, edit the ceph2 and ceph3 entries in "/etc/ceph/ceph.conf" on ceph1, then push ceph.conf to every node.
- Edit ceph.conf.

cd /etc/ceph/
vim ceph.conf

- Change "mon_initial_members=ceph1,ceph2,ceph3,ceph4,ceph5" to "mon_initial_members=ceph1,ceph4,ceph5".
- Change "mon_host=192.168.3.156,192.168.3.157,192.168.3.158,192.168.3.197,192.168.3.198" to "mon_host=192.168.3.156,192.168.3.197,192.168.3.198".
- Push ceph.conf from ceph1 to every node.

ceph-deploy --overwrite-conf admin ceph1 ceph2 ceph3 ceph4 ceph5
- Remove Monitors ceph2 and ceph3.

ceph-deploy mon destroy ceph2 ceph3
Preparing the ceph.conf File

- Add the port configuration for the rgw instances to ceph.conf; edit it on ceph1.

vim /etc/ceph/ceph.conf
[global]
fsid = 4f238985-ad0a-4fc3-944b-da59ea3e65d7
mon_initial_members = ceph1,ceph2,ceph3,ceph4,ceph5
mon_host = 192.168.3.156,192.168.3.157,192.168.3.158,192.168.3.197,192.168.3.198
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 192.168.3.0/24
cluster_network = 192.168.3.0/24

[mon]
mon_allow_pool_delete = true

[client.rgw.bucket1]
rgw_frontends = civetweb port=10001
log file = /var/log/ceph/client.rgw.bucket1.log

[client.rgw.bucket2]
rgw_frontends = civetweb port=10002
log file = /var/log/ceph/client.rgw.bucket2.log

[client.rgw.bucket3]
rgw_frontends = civetweb port=10003
log file = /var/log/ceph/client.rgw.bucket3.log

[client.rgw.bucket4]
rgw_frontends = civetweb port=10004
log file = /var/log/ceph/client.rgw.bucket4.log

[client.rgw.bucket5]
rgw_frontends = civetweb port=10005
log file = /var/log/ceph/client.rgw.bucket5.log

[client.rgw.bucket6]
rgw_frontends = civetweb port=10006
log file = /var/log/ceph/client.rgw.bucket6.log

[client.rgw.bucket7]
rgw_frontends = civetweb port=10007
log file = /var/log/ceph/client.rgw.bucket7.log

[client.rgw.bucket8]
rgw_frontends = civetweb port=10008
log file = /var/log/ceph/client.rgw.bucket8.log

[client.rgw.bucket9]
rgw_frontends = civetweb port=10009
log file = /var/log/ceph/client.rgw.bucket9.log

[client.rgw.bucket10]
rgw_frontends = civetweb port=10010
log file = /var/log/ceph/client.rgw.bucket10.log

[client.rgw.bucket11]
rgw_frontends = civetweb port=10011
log file = /var/log/ceph/client.rgw.bucket11.log

[client.rgw.bucket12]
rgw_frontends = civetweb port=10012
log file = /var/log/ceph/client.rgw.bucket12.log

[client.rgw.bucket13]
rgw_frontends = civetweb port=10013
log file = /var/log/ceph/client.rgw.bucket13.log

[client.rgw.bucket14]
rgw_frontends = civetweb port=10014
log file = /var/log/ceph/client.rgw.bucket14.log

[client.rgw.bucket15]
rgw_frontends = civetweb port=10015
log file = /var/log/ceph/client.rgw.bucket15.log

[client.rgw.bucket16]
rgw_frontends = civetweb port=10016
log file = /var/log/ceph/client.rgw.bucket16.log

[client.rgw.bucket17]
rgw_frontends = civetweb port=10017
log file = /var/log/ceph/client.rgw.bucket17.log

[client.rgw.bucket18]
rgw_frontends = civetweb port=10018
log file = /var/log/ceph/client.rgw.bucket18.log

[client.rgw.bucket19]
rgw_frontends = civetweb port=10019
log file = /var/log/ceph/client.rgw.bucket19.log

[client.rgw.bucket20]
rgw_frontends = civetweb port=10020
log file = /var/log/ceph/client.rgw.bucket20.log

[client.rgw.bucket21]
rgw_frontends = civetweb port=10021
log file = /var/log/ceph/client.rgw.bucket21.log

[client.rgw.bucket22]
rgw_frontends = civetweb port=10022
log file = /var/log/ceph/client.rgw.bucket22.log

[client.rgw.bucket23]
rgw_frontends = civetweb port=10023
log file = /var/log/ceph/client.rgw.bucket23.log

[client.rgw.bucket24]
rgw_frontends = civetweb port=10024
log file = /var/log/ceph/client.rgw.bucket24.log

[client.rgw.bucket25]
rgw_frontends = civetweb port=10025
log file = /var/log/ceph/client.rgw.bucket25.log

[client.rgw.bucket26]
rgw_frontends = civetweb port=10026
log file = /var/log/ceph/client.rgw.bucket26.log

[client.rgw.bucket27]
rgw_frontends = civetweb port=10027
log file = /var/log/ceph/client.rgw.bucket27.log

[client.rgw.bucket28]
rgw_frontends = civetweb port=10028
log file = /var/log/ceph/client.rgw.bucket28.log

[client.rgw.bucket29]
rgw_frontends = civetweb port=10029
log file = /var/log/ceph/client.rgw.bucket29.log

[client.rgw.bucket30]
rgw_frontends = civetweb port=10030
log file = /var/log/ceph/client.rgw.bucket30.log

[client.rgw.bucket31]
rgw_frontends = civetweb port=10031
log file = /var/log/ceph/client.rgw.bucket31.log

[client.rgw.bucket32]
rgw_frontends = civetweb port=10032
log file = /var/log/ceph/client.rgw.bucket32.log

[client.rgw.bucket33]
rgw_frontends = civetweb port=10033
log file = /var/log/ceph/client.rgw.bucket33.log

[client.rgw.bucket34]
rgw_frontends = civetweb port=10034
log file = /var/log/ceph/client.rgw.bucket34.log

[client.rgw.bucket35]
rgw_frontends = civetweb port=10035
log file = /var/log/ceph/client.rgw.bucket35.log

[client.rgw.bucket36]
rgw_frontends = civetweb port=10036
log file = /var/log/ceph/client.rgw.bucket36.log

[client.rgw.bucket37]
rgw_frontends = civetweb port=10037
log file = /var/log/ceph/client.rgw.bucket37.log

[client.rgw.bucket38]
rgw_frontends = civetweb port=10038
log file = /var/log/ceph/client.rgw.bucket38.log

[client.rgw.bucket39]
rgw_frontends = civetweb port=10039
log file = /var/log/ceph/client.rgw.bucket39.log

[client.rgw.bucket41]
rgw_frontends = civetweb port=10041
log file = /var/log/ceph/client.rgw.bucket41.log

[client.rgw.bucket42]
rgw_frontends = civetweb port=10042
log file = /var/log/ceph/client.rgw.bucket42.log

[client.rgw.bucket43]
rgw_frontends = civetweb port=10043
log file = /var/log/ceph/client.rgw.bucket43.log

[client.rgw.bucket44]
rgw_frontends = civetweb port=10044
log file = /var/log/ceph/client.rgw.bucket44.log

[client.rgw.bucket45]
rgw_frontends = civetweb port=10045
log file = /var/log/ceph/client.rgw.bucket45.log

[client.rgw.bucket46]
rgw_frontends = civetweb port=10046
log file = /var/log/ceph/client.rgw.bucket46.log

[client.rgw.bucket47]
rgw_frontends = civetweb port=10047
log file = /var/log/ceph/client.rgw.bucket47.log

[client.rgw.bucket48]
rgw_frontends = civetweb port=10048
log file = /var/log/ceph/client.rgw.bucket48.log

[client.rgw.bucket49]
rgw_frontends = civetweb port=10049
log file = /var/log/ceph/client.rgw.bucket49.log

[client.rgw.bucket50]
rgw_frontends = civetweb port=10050
log file = /var/log/ceph/client.rgw.bucket50.log

[client.rgw.bucket51]
rgw_frontends = civetweb port=10051
log file = /var/log/ceph/client.rgw.bucket51.log

[client.rgw.bucket52]
rgw_frontends = civetweb port=10052
log file = /var/log/ceph/client.rgw.bucket52.log

[client.rgw.bucket53]
rgw_frontends = civetweb port=10053
log file = /var/log/ceph/client.rgw.bucket53.log

[client.rgw.bucket54]
rgw_frontends = civetweb port=10054
log file = /var/log/ceph/client.rgw.bucket54.log

[client.rgw.bucket55]
rgw_frontends = civetweb port=10055
log file = /var/log/ceph/client.rgw.bucket55.log

[client.rgw.bucket56]
rgw_frontends = civetweb port=10056
log file = /var/log/ceph/client.rgw.bucket56.log

[client.rgw.bucket57]
rgw_frontends = civetweb port=10057
log file = /var/log/ceph/client.rgw.bucket57.log

[client.rgw.bucket58]
rgw_frontends = civetweb port=10058
log file = /var/log/ceph/client.rgw.bucket58.log

[client.rgw.bucket59]
rgw_frontends = civetweb port=10059
log file = /var/log/ceph/client.rgw.bucket59.log

[client.rgw.bucket60]
rgw_frontends = civetweb port=10060
log file = /var/log/ceph/client.rgw.bucket60.log
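The repetitive [client.rgw.bucketN] sections above can be generated with a loop instead of being typed by hand. A minimal sketch; the output file name rgw-sections.conf is an arbitrary scratch file, and the loop reproduces the numbering above, which skips bucket40:

```shell
# Generate the per-instance RGW sections; the numbering follows the
# configuration above, which has no bucket40, so it is skipped here too.
for i in $(seq 1 39) $(seq 41 60); do
  printf '[client.rgw.bucket%d]\nrgw_frontends = civetweb port=%d\nlog file = /var/log/ceph/client.rgw.bucket%d.log\n\n' \
    "$i" "$((10000 + i))" "$i"
done > rgw-sections.conf
```

The generated file can then be reviewed and appended to ceph.conf.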
- Synchronize the configuration file to all cluster nodes; on ceph1 run:

ceph-deploy --overwrite-conf admin ceph1 ceph2 ceph3 ceph4 ceph5
Adding RGW Instances

- Create the RGW instances; on the primary node ceph1 run:

for i in {37..48};do ceph-deploy rgw create ceph4:bucket$i;done
for i in {49..60};do ceph-deploy rgw create ceph5:bucket$i;done
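The placement the two loops produce can be previewed with echo before touching the cluster. A dry-run sketch; the file rgw-plan.txt is a hypothetical scratch name:

```shell
# Preview which bucket instance lands on which new host (12 per node).
for i in {37..48}; do echo "ceph4:bucket$i"; done >  rgw-plan.txt
for i in {49..60}; do echo "ceph5:bucket$i"; done >> rgw-plan.txt
cat rgw-plan.txt
```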
- Check whether all 60 RGW processes are online.

ceph -s

- Verify access with curl or by logging in from a web node.

The gateway services have now been created successfully.
Deploying MGR

Create MGRs for the newly added nodes ceph4 and ceph5.

ceph-deploy mgr create ceph4 ceph5
Deploying OSDs

Create OSDs on the newly added servers. Each server has 12 disks, so run:

for i in {a..l}
do
ceph-deploy osd create ceph4 --data /dev/sd${i}
done
for i in {a..l}
do
ceph-deploy osd create ceph5 --data /dev/sd${i}
done
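Before running ceph-deploy, it can help to enumerate exactly which host/device pairs the loops above will initialize. A dry-run sketch; osd-plan.txt is a hypothetical scratch file:

```shell
# Enumerate the host/device pairs the OSD-creation loops target
# (12 disks, /dev/sda through /dev/sdl, on each of the two new nodes).
for host in ceph4 ceph5; do
  for i in {a..l}; do
    echo "$host /dev/sd${i}"
  done
done > osd-plan.txt
cat osd-plan.txt
```

Each listed device must be an unused block device on the target host before OSD creation.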
Configuring the Storage Pools

- View the storage pool information.

ceph osd lspools
- Adjust the corresponding "pg_num" and "pgp_num" values.

The PG count is calculated as follows, with the result rounded up to the nearest power of two:

Total PGs = ((Total_number_of_OSD * 100) / max_replication_count) / pool_count

For this environment, set "pg_num" and "pgp_num" as follows:

ceph osd pool set default.rgw.buckets.data pg_num 2048
ceph osd pool set default.rgw.buckets.data pgp_num 2048
ceph osd pool set default.rgw.buckets.index pg_num 256
ceph osd pool set default.rgw.buckets.index pgp_num 256
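The 2048 figure for the data pool can be reproduced from the formula. A sketch with assumed inputs not stated in the text: 60 OSDs after expansion (5 nodes with 12 disks each), a replica count of 3, and the data pool treated as the dominant pool (so pool_count is not applied):

```shell
# PG sizing sketch; the inputs below are assumptions, not values
# taken from the cluster.
osds=60
replicas=3
total_pgs=$((osds * 100 / replicas))   # 2000

# Round up to the next power of two, as is conventional for pg_num.
pg=1
while [ "$pg" -lt "$total_pgs" ]; do
  pg=$((pg * 2))
done
echo "$pg"   # prints 2048
```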
Verifying the Expansion

After the expansion, Ceph rebalances the data by migrating some PGs from the existing OSDs to the newly added ones.

- Confirm that the cluster returns to a healthy state once data migration completes.

ceph -s

- Confirm that the cluster's storage capacity has increased.

ceph osd df