RawFilePV

Kubernetes LocalPVs on Steroids

Prerequisite

  • Kubernetes: 1.21+

Install

helm install -n kube-system rawfile-csi ./deploy/charts/rawfile-csi/
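
Once the chart is installed, you can sanity-check that the driver came up. This is a minimal sketch; the exact pod names depend on the release name and namespace used above (rawfile-csi in kube-system here):

kubectl -n kube-system get pods | grep rawfile
kubectl get csidrivers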

Usage

Create a StorageClass with your desired options:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-sc
provisioner: rawfile.csi.openebs.io
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
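
A workload can then request a volume from this class with an ordinary PersistentVolumeClaim. The sketch below is illustrative (the claim name my-pvc and the 1Gi size are placeholders); with WaitForFirstConsumer, the volume is only provisioned once a pod that uses the claim is scheduled to a node:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: my-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi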

Features

  • Direct I/O: Near-zero disk performance overhead
  • Dynamic provisioning
  • Enforced volume size limit
  • Access Modes
    • ReadWriteOnce
    • ReadOnlyMany
    • ReadWriteMany
  • Volume modes
    • Filesystem mode
    • Block mode
  • Volume metrics
  • Supports fsTypes: ext4, btrfs, xfs (see the StorageClass sketch after this list)
  • Online expansion: If fs supports it (e.g. ext4, btrfs, xfs)
  • Online shrinking: If fs supports it (e.g. btrfs)
  • Offline expansion/shrinking
  • Ephemeral inline volume
  • Snapshots: If the fs supports it (e.g. btrfs)
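
As a sketch of selecting a non-default filesystem, the StorageClass below relies on the standard csi.storage.k8s.io/fstype parameter that the CSI provisioner passes through to the driver; the class name is illustrative, and the accepted values are assumed to be the fsTypes listed above:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-sc-btrfs
provisioner: rawfile.csi.openebs.io
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  csi.storage.k8s.io/fstype: btrfs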

Motivation

One might have a couple of reasons to consider using node-based (rather than network-based) storage solutions:

  • Performance: Almost no network-based storage solution can keep up with bare-metal disk performance in terms of IOPS, latency, and throughput combined. And you'd like to get the best out of the SSD you've got!
  • On-premises environment: You might not be able to afford the cost of upgrading all your networking infrastructure just to get the best out of a network-based storage solution.
  • Complexity: Network-based solutions are distributed systems, and distributed systems are not easy! You might want a system that is easier to understand and reason about. Also, with less complexity, you can track down unexpected issues more easily.

Node-based storage has come a long way since Kubernetes was born. Right now, OpenEBS's hostPath provisioner makes it pretty easy to automatically provision hostPath PVs and use them in your workloads. There are known limitations though:

  • You can't monitor volume usage: There are hacky workarounds that run “du” regularly, but that could prove to be a performance killer, since it can put a lot of burden on your CPU and cause your filesystem cache to fill up. Not really suitable for a production workload.
  • You can't enforce hard limits on a volume's size: Again, you can hack your way around it, with the same caveats.
  • You are stuck with whatever filesystem your kubelet node offers
  • You can't customize the filesystem

All these issues stem from the same root cause: hostPath/LocalPVs are simple bind-mounts from the host filesystem into the pod.

The idea here is to use a single file as the block device, using Linux's loop device, and create a volume based on it (a rough shell sketch of this follows the list below). That way:

  • You can monitor volume usage by running df, which is O(1), since each volume is a separately mounted device.
  • The size limit is enforced by the operating system, based on the backing file size.
  • Since volumes are backed by different files, each file could be formatted using different filesystems, and/or customized with different filesystem options.
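
As a rough illustration of that mechanism outside Kubernetes, the shell sketch below assumes a Linux host with root access; the paths and size are made up, and the driver's actual implementation differs in detail:

# Back the volume with a single (sparse) file -- illustrative path and size
truncate -s 1G /data/volumes/pvc-example.img
# Attach it to a free loop device; prints the device name, e.g. /dev/loop0
LOOP=$(losetup --find --show /data/volumes/pvc-example.img)
# Format and mount the loop device like any other block device
mkfs.ext4 "$LOOP"
mkdir -p /mnt/pvc-example
mount "$LOOP" /mnt/pvc-example
# Usage reporting is now a constant-time query against a dedicated mount
df -h /mnt/pvc-example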

License

RawFilePV is published under the Apache License 2.0; see the LICENSE file for details.