Proxmox drive wear #160

Open
opened 2025-02-15 20:09:58 +00:00 by rasmus · 2 comments
Owner

The four Proxmox nodes have one (SAMSUNG MZ7LH480HAHQ 480G) SSD each with disk wear at 79, 63, 73, 72%. They are mostly consumed by (local storage) kube VMs.

  • Do we have any cold spares ready?
  • How does the replacement work? Wait for 99% wearout? Offline dd? Half-online zfs mirror?

  • Kube VMs with local storage seem unable to migrate to other nodes.
  • Kindly stop requiring local storage for kube applications :)

  • This should be automatically tracked.
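
A minimal sketch of what the automatic tracking could look like, assuming smartmontools >= 7 (for JSON output) and that these Samsung SATA drives expose Wear_Leveling_Count (attribute 177); the device path is a placeholder and would need adjusting per node:

```python
#!/usr/bin/env python3
"""Report SSD wearout the way the Proxmox UI does (100 - normalized
Wear_Leveling_Count), e.g. for a cron job that pushes it to monitoring."""
import json
import subprocess

DEVICE = "/dev/sda"  # placeholder, adjust per node


def wearout_percent(device: str) -> int:
    out = subprocess.run(["smartctl", "-j", "-A", device],
                         capture_output=True, text=True).stdout
    table = json.loads(out)["ata_smart_attributes"]["table"]
    for attr in table:
        if attr["id"] == 177:  # Wear_Leveling_Count on Samsung SATA SSDs
            return 100 - attr["value"]
    raise RuntimeError(f"no Wear_Leveling_Count attribute on {device}")


if __name__ == "__main__":
    print(f"{DEVICE}: {wearout_percent(DEVICE)}% worn")
```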
rasmus added this to the k-space.ee/todo project 2025-02-15 20:10:01 +00:00
rasmus added the upkeep label 2025-03-28 22:35:32 +00:00
Author
Owner

2025-04-05: 81 65 75 74 (+2% on each). Napkin math projection says about a year until replacements really need to be ordered (see the projection sketch below).

2025-05-24: 83 68 77 75

2025-06-28: 85 70 79 76

2025-07-21: 87 71 80 77
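
The napkin math made explicit: a quick sketch fitting the readings above and extrapolating when each drive crosses 100% (dates and values copied from this thread):

```python
from datetime import date, timedelta

readings = {
    date(2025, 2, 15): [79, 63, 73, 72],
    date(2025, 4, 5):  [81, 65, 75, 74],
    date(2025, 5, 24): [83, 68, 77, 75],
    date(2025, 6, 28): [85, 70, 79, 76],
    date(2025, 7, 21): [87, 71, 80, 77],
}

days = [(d - min(readings)).days for d in sorted(readings)]

for node in range(4):
    wear = [readings[d][node] for d in sorted(readings)]
    # endpoint slope: wear percent added per day
    rate = (wear[-1] - wear[0]) / (days[-1] - days[0])
    eta = max(readings) + timedelta(days=(100 - wear[-1]) / rate)
    print(f"node {node + 1}: ~{rate * 365:.0f}%/year, hits 100% around {eta}")
```

On that slope the fastest-wearing drive lands around late March 2026, which roughly matches the 2026-03-15 due date; the other three have another year or more of headroom.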

rasmus added the due date 2026-03-15 2025-04-05 20:02:04 +00:00
Owner

Probably would make sense to replace those drives with newer NVMe drives during the blade -> HP Gen9 pizza box migration
