Proxmox Virtual Environment
K-Space Hyperconverged CEPH setup
-
Configure a mesh network
ansible-playbook proxmox/ceph.yaml
This will configure the 40Gbit interfaces and the FRR daemon with OpenFabric routing. Our CEPH setup uses a private IPv6 subnet for intra-cluster communication.
fdcc:a182:4fed::/64
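For reference, the FRR configuration that ends up on each node looks roughly like the sketch below. The NET identifier, loopback address and interface names are illustrative assumptions, not the actual values from the playbook, and fabricd has to be enabled in /etc/frr/daemons for any of this to take effect.
# /etc/frr/frr.conf — minimal sketch, not the playbook's exact output
frr defaults traditional
ipv6 forwarding
!
interface lo
 ipv6 address fdcc:a182:4fed::1/128
 ipv6 router openfabric 1
 openfabric passive
exit
!
interface enp1s0f0
 ipv6 router openfabric 1
exit
!
interface enp1s0f1
 ipv6 router openfabric 1
exit
!
router openfabric 1
 net 49.0000.0000.0001.00
exit
Each node advertises its loopback /128 over both mesh links, so CEPH traffic keeps flowing over the remaining path if one direct link goes down.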
-
Set up CEPH packages on all nodes
pveceph install --repository no-subscription --version squid
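The command has to be repeated on every node. If running it from a single shell is more convenient, a loop along these lines works; the node names are placeholders for the actual cluster members.
for node in pve1 pve2 pve3; do
    ssh root@"$node" pveceph install --repository no-subscription --version squid
done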
-
CEPH init
pveceph init --network fdcc:a182:4fed::/64
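This writes the initial /etc/pve/ceph.conf. The relevant part should look roughly like the excerpt below; the ms_bind_* options are an assumption for an IPv6-only cluster rather than something pveceph is guaranteed to add, so check they are present.
# /etc/pve/ceph.conf (illustrative excerpt)
[global]
        cluster_network = fdcc:a182:4fed::/64
        public_network = fdcc:a182:4fed::/64
        ms_bind_ipv6 = true
        ms_bind_ipv4 = false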
-
Create CEPH monitors on each node
pveceph mon create
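Once every node has its monitor, it is worth confirming that they have all joined quorum:
ceph mon stat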
-
Also create CEPH managers on each node
pveceph mgr create
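A quick sanity check at this point shows the monitor quorum and which manager is active, with the rest on standby:
ceph -s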
-
Create OSD daemons for each disk on all nodes
NVMe drives will get 2 OSD daemons per disk for better IOPS
pveceph osd create /dev/nvme0n1 --crush-device-class nvme --osds-per-device 2
HDDs will get just 1
pveceph osd create /dev/sdX --crush-device-class hdd
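Once the OSDs are up, the device classes and per-host layout that the CRUSH rules in the next step rely on can be verified with:
ceph osd tree
ceph osd df tree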
-
Create CRUSH rules
We want to separate out HDD and NVMe storage into different storage buckets.
The default replicated_rule would put data blocks on all of the available disks.
# ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
ceph osd crush rule create-replicated replicated_nvme default host nvme
ceph osd crush rule create-replicated replicated_hdd default host hdd
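The newly created rules can be inspected before any pool starts using them:
ceph osd crush rule ls
ceph osd crush rule dump replicated_nvme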
-
Create CEPH Pools for VM disk images
This is done under an individual node's Ceph -> Pools configuration.
NB: Under Advanced, select the correct CRUSH rule (replicated_nvme or replicated_hdd)
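The same can be done from the CLI on any node; the pool names below are only examples, and --add_storages also registers each pool as a Proxmox storage entry.
pveceph pool create vm-nvme --crush_rule replicated_nvme --add_storages
pveceph pool create vm-hdd --crush_rule replicated_hdd --add_storages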
-
Create CephFS Storage pool for ISO images
First create a metadata server on each node
pveceph mds create
Then, on one of the nodes, create a CephFS.
After that is done you can, under Pools, change the cephfs_data and cephfs_metadata CRUSH rules to use the NVMe drives.
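If you prefer the CLI over the GUI for that last step, the equivalent commands are as follows, cephfs_data and cephfs_metadata being the default pool names Proxmox creates for a CephFS called cephfs:
ceph osd pool set cephfs_data crush_rule replicated_nvme
ceph osd pool set cephfs_metadata crush_rule replicated_nvme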