
Changing the number of data copies Ceph stores


Currently I am using replication for data placement with what I believe is the default of three copies. How do I change the Ceph configuration to store 4 copies on different nodes in different chassis? Also, would this change impact anything already stored in Ceph?

Thanks, Kampton


To increase the number of replicas, set the pool size according to your requirements (the command takes the pool name as an argument):

ceph osd pool set <pool-name> size 4
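
For example, for the iscsi-pool pool used further down, a minimal sketch (the min_size value of 2 is only an assumption, pick what fits your redundancy requirements):

# set the replica count for a specific pool and verify it
ceph osd pool set iscsi-pool size 4
ceph osd pool get iscsi-pool size

# optionally adjust min_size, the minimum number of replicas required to serve I/O
ceph osd pool set iscsi-pool min_size 2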

The placement of the copies (e.g. one copy per chassis) is called the failure domain. It is configured in the CRUSH rule the pool uses, and you can change that rule for a given pool. First, inspect the current rule:

# get current ruleset for given pool
ceph osd pool get iscsi-pool crush_rule 
crush_rule: replicated_rule

# dump ruleset
ceph osd crush rule dump replicated_rule
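
If your CRUSH hierarchy already contains chassis buckets, a new replicated rule with chassis as the failure domain can be created and assigned to the pool. This is only a sketch: the rule name replicated_chassis is arbitrary, and default is assumed to be your CRUSH root.

# confirm that chassis buckets exist in the CRUSH hierarchy
ceph osd crush tree

# create a replicated rule with chassis as the failure domain
ceph osd crush rule create-replicated replicated_chassis default chassis

# assign the new rule to the pool
ceph osd pool set iscsi-pool crush_rule replicated_chassis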

The docs also describe how to change a CRUSH rule and modify the CRUSH map. Changing the data placement will cause a remapping of the affected PGs; depending on your Ceph version, at most around 5% of the PGs are remapped (misplaced) at a time. The remapping process can be controlled with these OSD config settings:

osd_recovery_max_active
osd_max_backfills

Set them to higher values to increase the recovery speed, but set them back to the defaults after you're finished.
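
A sketch of doing that with the centralized config database (Nautilus or later); the values below are examples only:

# temporarily allow more parallel backfill/recovery work per OSD (example values)
ceph config set osd osd_max_backfills 3
ceph config set osd osd_recovery_max_active 5

# watch the remapping progress
ceph -s

# revert to the defaults once the remapping has finished
ceph config rm osd osd_max_backfills
ceph config rm osd osd_recovery_max_active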
