PG Autoscaler breaks when using custom CRUSH rules

Arti Zirk
2025-07-31 13:11:22 +03:00
parent 2570047748
commit 61e3d8a847


@@ -56,6 +56,11 @@
# ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
ceph osd crush rule create-replicated replicated_nvme default host nvme
ceph osd crush rule create-replicated replicated_hdd default host hdd
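# Optional sanity check: confirm each rule picked up the intended root,
# failure domain and device class (both are standard Ceph CLI commands)
ceph osd crush rule ls
ceph osd crush rule dump replicated_nvme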
> **NB**: Using the default `replicated_rule` for **ANY** CEPH Pool will result in
> the Placement Group (PG) Autoscaler not working, as it can't properly calculate
> how much space is available in CEPH when multiple device classes are in use
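Once pools exist, the autoscaler's view can be checked from the CLI; a pool that never scales, or a health warning about overlapping roots, points at the problem described above. A minimal sketch using standard Ceph commands (the pool name `vm-nvme` is a placeholder):

```sh
# Per-pool autoscaler report: sizes, target ratios, autoscale mode
ceph osd pool autoscale-status

# Re-point a pool that was created on the default rule at a
# device-class-specific rule ("vm-nvme" is a placeholder pool name)
ceph osd pool set vm-nvme crush_rule replicated_nvme
```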
8. Create CEPH Pools for VM disk images
This is done in the individual node's Ceph -> Pools configuration; a CLI sketch of the same step follows below
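For reference, the same step can also be done from the shell. A minimal sketch, assuming a pool named `vm-nvme` that should live on the NVMe rule from step 7 (the PG counts are illustrative; the autoscaler adjusts them later):

```sh
# Create a replicated pool bound to the NVMe CRUSH rule
ceph osd pool create vm-nvme 128 128 replicated replicated_nvme

# Mark the pool for RBD so it can hold VM disk images
ceph osd pool application enable vm-nvme rbd
```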