Add Proxmox Ceph mesh network playbook
proxmox/README.md (new file)
# Proxmox Virtual Environment

## K-Space hyper-converged Ceph setup
1. Configure a mesh network

       ansible-playbook proxmox/ceph.yaml

This configures the 40 Gbit interfaces and the FRR daemon with OpenFabric routing.
Our Ceph setup uses a private IPv6 subnet for intra-cluster communication:

    fdcc:a182:4fed::/64
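For orientation, the FRR configuration this ends up with on each node looks roughly like the sketch below. This is an illustrative assumption, not the playbook's literal output: the hostname, the interface names (`en05`, `en06`), the loopback address and the OpenFabric NET are placeholders that differ per node.

```
# /etc/frr/frr.conf : illustrative sketch only, the ansible playbook manages this.
# Hostname, interface names, loopback address and NET are per-node placeholders.
frr defaults traditional
hostname pve1
!
interface lo
 ipv6 address fdcc:a182:4fed::1/128
 ipv6 router openfabric 1
 openfabric passive
!
interface en05
 ipv6 router openfabric 1
!
interface en06
 ipv6 router openfabric 1
!
router openfabric 1
 net 49.0000.0000.0001.00
 lsp-gen-interval 1
```

With `fabricd` enabled in `/etc/frr/daemons` and FRR running on every node, `vtysh -c "show openfabric topology"` should list the other nodes' loopback prefixes.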
2. Set up the Ceph packages on all nodes

       pveceph install --repository no-subscription --version squid
3. Initialize Ceph

       pveceph init --network fdcc:a182:4fed::/64
4. Create Ceph monitors on each node

       pveceph mon create
5. Also create Ceph managers on each node

       pveceph mgr create
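At this point the cluster should already have quorum. A quick sanity check with stock Ceph tooling:

```sh
# Expect one monitor and one manager per node in the output;
# health will stay HEALTH_WARN until OSDs are added in the next step.
ceph -s
```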
6. Create OSD daemons for each disk on all nodes

NVMe drives get 2 OSD daemons per disk for better IOPS:

    pveceph osd create /dev/nvme0n1 --crush-device-class nvme --osds-per-device 2

HDDs get just one:

    pveceph osd create /dev/sdX --crush-device-class hdd
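Before creating CRUSH rules it is worth confirming that every OSD came up with the intended device class. These are stock Ceph commands:

```sh
# Lists OSDs per host with their device class (nvme / hdd) in the CLASS column.
ceph osd tree

# Same hierarchy with usage figures; useful later for spotting imbalance.
ceph osd df tree
```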
7. Create CRUSH rules

We want to keep HDD and NVMe storage separate. The default `replicated_rule` would
place data blocks on all of the available disks regardless of device class, so we
create one rule per class:

    # ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
    ceph osd crush rule create-replicated replicated_nvme default host nvme
    ceph osd crush rule create-replicated replicated_hdd default host hdd
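To double-check the result, the new rules and the per-class (shadow) hierarchies can be listed with stock Ceph commands:

```sh
# Should now list replicated_rule, replicated_nvme and replicated_hdd.
ceph osd crush rule ls

# Shows the device-class shadow trees that the class-bound rules select from.
ceph osd crush tree --show-shadow
```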
8. Create Ceph pools for VM disk images

This is done in the web UI under an individual node's Ceph -> Pools panel.

**NB:** Under Advanced, select the correct CRUSH rule (replicated_nvme or replicated_hdd).
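The same can be done from the command line; a sketch, assuming a pool called `vm-nvme` (the name is only an example) bound to the `replicated_nvme` rule from step 7:

```sh
# Create a replicated pool using the NVMe rule and register it as
# Proxmox storage in one go.
pveceph pool create vm-nvme --crush_rule replicated_nvme --add_storages
```

An analogous HDD-backed pool would use `replicated_hdd` instead.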
9. Create a CephFS storage pool for ISO images

First create a metadata server on each node:

    pveceph mds create

Then create a CephFS on one of the nodes.

Once that is done, change the CRUSH rules of the cephfs_data and cephfs_metadata pools
(under Pools) to use the NVMe drives.
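A rough command-line equivalent of the GUI steps above, assuming the default filesystem name `cephfs`:

```sh
# On one node: create the CephFS and add it as ISO/template storage.
pveceph fs create --name cephfs --add-storage

# Pin its data and metadata pools to the NVMe rule from step 7.
ceph osd pool set cephfs_data crush_rule replicated_nvme
ceph osd pool set cephfs_metadata crush_rule replicated_nvme
```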