PG Autoscaler breaks when using custom CRUSH rules
		@@ -56,6 +56,11 @@
       # ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
       ceph osd crush rule create-replicated replicated_nvme default host nvme
       ceph osd crush rule create-replicated replicated_hdd default host hdd
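Once the rules exist, it can be worth confirming that each one targets the intended device class before any pool uses it. A minimal check with the standard `ceph` CLI (requires a running cluster, so the exact output will vary):

```shell
# List all CRUSH rules known to the cluster; the two new
# rules should appear alongside the default replicated_rule.
ceph osd crush rule ls

# Dump one rule and inspect its steps: the "take" step should
# reference the device-class shadow root (e.g. default~nvme).
ceph osd crush rule dump replicated_nvme
```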
   > **NB**: Using the default `replicated_rule` for **ANY** CEPH Pool will result in
   > the Placement Group (PG) Autoscaler not working, as it can't properly calculate
   > how much space is available in CEPH due to the different device classes we are using.

8. Create CEPH Pools for VM disk images
    This is done in individual node Ceph -> Pools configuration
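
   The same can also be done from the CLI instead of the web UI. A sketch, assuming a hypothetical pool name `vm-nvme` (the PG count and application are illustrative, not prescribed by this guide):

   ```shell
   # Create a replicated pool bound to the NVMe CRUSH rule,
   # so the autoscaler sizes it against NVMe capacity only.
   ceph osd pool create vm-nvme 32 32 replicated replicated_nvme

   # Make sure the PG autoscaler is active for this pool.
   ceph osd pool set vm-nvme pg_autoscale_mode on

   # Tag the pool for RBD, which Proxmox uses for VM disk images.
   ceph osd pool application enable vm-nvme rbd
   ```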