Reduced Ceph storage pool size


We are currently running a 7-node Ceph cluster used as file system storage.

Two dedicated nodes run only the mon+mgr+mds services, and the other five nodes are storage nodes; one of the storage nodes additionally runs a set of mon+mgr+mds services.

node1: mon+mgr+mds
node2: mon+mgr+mds
osd1: mon+mgr+mds+hdd*12 ssd*1
osd2: hdd*12 ssd*1
osd3: hdd*12 ssd*1
osd4: hdd*12 ssd*1
osd5: hdd*12 ssd*1

ceph version: 15.2.10

The storage pool is an erasure-coded pool (k=2, m=1) backed by the HDD devices.
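
For context, my understanding is that an erasure-coded pool with k=2, m=1 should expose roughly k/(k+m) = 2/3 of the raw HDD capacity as usable space. A minimal sketch of that arithmetic, using a purely hypothetical raw-capacity figure:

    # Rough usable-capacity expectation for this EC profile (my own sketch, not Ceph output).
    k, m = 2, 1                 # erasure-code profile of the data pool
    raw_hdd_tb = 600.0          # hypothetical raw HDD capacity in TB, not this cluster's real figure
    usable_tb = raw_hdd_tb * k / (k + m)
    print(f"expected usable capacity: {usable_tb:.1f} TB")  # -> expected usable capacity: 400.0 TB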

The SSD is used as the cache tier for the HDD storage pool, and the HDDs share the NVMe SSD, each using a separate partition of it as its block.db.

We are currently expanding the storage nodes of this cluster, adding only one OSD at a time, but after each addition the total size of the existing storage pool is reduced.

For example, the pool showed 320 TB before the expansion but only 300 TB afterwards.
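
In case it is useful for answering, below is a sketch of how the pool's reported size can be read programmatically from ceph df; the JSON field names (max_avail, stored) are my assumption based on Octopus-era output and may differ:

    # Sketch: print each pool's reported sizes from `ceph df` JSON output.
    # The field names max_avail and stored are assumed from Octopus (15.2.x) and may differ.
    import json
    import subprocess

    report = json.loads(subprocess.check_output(["ceph", "df", "--format", "json"]))

    for pool in report["pools"]:
        stats = pool["stats"]
        print(pool["name"],
              "max_avail:", round(stats["max_avail"] / 1e12, 1), "TB,",
              "stored:", round(stats["stored"] / 1e12, 1), "TB")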

Has anyone encountered a similar problem? I would like to know what causes it.
