
RawFilePV

Kubernetes LocalPVs on Steroids

Prerequisites

  • Kubernetes: 1.21+

Install

helm install -n kube-system rawfile-csi ./deploy/charts/rawfile-csi/

Usage

Create a StorageClass with your desired options:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-sc
provisioner: rawfile.csi.openebs.io
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
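A workload can then consume the class through a PersistentVolumeClaim. A minimal sketch (the claim name and requested size below are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim          # placeholder name
spec:
  storageClassName: my-sc # the StorageClass defined above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi        # placeholder size; becomes the enforced volume limit
```

With volumeBindingMode: WaitForFirstConsumer, the backing file is only created on the node where the first consuming pod is scheduled.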

Features

  • Direct I/O: Near-zero disk performance overhead
  • Dynamic provisioning
  • Enforced volume size limit
  • Access Modes
    • ReadWriteOnce
    • ReadOnlyMany
    • ReadWriteMany
  • Volume modes
    • Filesystem mode
    • Block mode
  • Volume metrics
  • Supports fsTypes: ext4, btrfs, xfs
  • Online expansion: if the filesystem supports it (e.g. ext4, btrfs, xfs)
  • Online shrinking: if the filesystem supports it (e.g. btrfs)
  • Offline expansion/shrinking
  • Ephemeral inline volume
  • Filesystem-level snapshots: currently supported on btrfs
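The ephemeral inline volume feature above lets a pod request a scratch volume directly in its spec, without a PVC. A sketch of what such a pod could look like (the pod name and image are placeholders, and any volumeAttributes such as a size parameter are driver-specific; check the chart documentation for the exact attributes supported):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-pod             # placeholder name
spec:
  containers:
    - name: app
      image: busybox            # placeholder image
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: scratch
          mountPath: /data
  volumes:
    - name: scratch
      csi:                      # CSI ephemeral inline volume
        driver: rawfile.csi.openebs.io
        fsType: ext4
```

The volume lives and dies with the pod, which suits caches and other scratch data.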

Motivation

There are several reasons to consider using node-based (rather than network-based) storage solutions:

  • Performance: Almost no network-based storage solution can match bare-metal disk performance in IOPS, latency, and throughput combined. And you’d like to get the best out of the SSD you’ve got!
  • On-premise environment: You might not be able to afford upgrading your entire networking infrastructure just to get the most out of a network-based storage solution.
  • Complexity: Network-based solutions are distributed systems, and distributed systems are not easy! You might prefer a system that is easier to understand and reason about. With less complexity, unexpected issues are also easier to fix.

Node-based storage has come a long way since Kubernetes was born. Right now, OpenEBS’s hostPath provisioner makes it pretty easy to automatically provision hostPath PVs and use them in your workloads. There are known limitations though:

  • You can’t monitor volume usage: There are hacky workarounds that run “du” regularly, but that can be a performance killer, since it puts a heavy load on your CPU and fills up your filesystem cache. Not really suitable for a production workload.
  • You can’t enforce hard limits on a volume’s size: Again, you can hack your way around it, with the same caveats.
  • You are stuck with whatever filesystem your kubelet node offers.
  • You can’t customize the filesystem per volume (e.g. its mkfs or mount options).

All these issues stem from the same root cause: hostPath/LocalPVs are simple bind-mounts from the host filesystem into the pod.

The idea here is to use a single file as the block device, via a Linux loop device, and create a volume on top of it. That way:

  • You can monitor volume usage by running df in O(1), since each volume is a separately mounted device.
  • The size limit is enforced by the operating system, based on the backing file size.
  • Since volumes are backed by different files, each file could be formatted using different filesystems, and/or customized with different filesystem options.
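The backing-file mechanism can be illustrated outside Kubernetes with a sparse file (the file and mount paths here are illustrative, and attaching and formatting the loop device requires root):

```shell
# Create a 1 GiB sparse backing file: its apparent size is 1G,
# but it consumes almost no disk space until data is written.
truncate -s 1G disk.img
ls -lh disk.img   # apparent size: 1.0G
du -h disk.img    # actual blocks used: ~0

# With root privileges, it can be attached and used like any disk:
#   losetup --find --show disk.img   # -> /dev/loopN
#   mkfs.ext4 /dev/loopN
#   mount /dev/loopN /mnt/vol
#   df -h /mnt/vol                   # per-volume usage, read in O(1)
```

The filesystem inside the file can never grow beyond the file’s size, which is how the volume limit is enforced without any accounting on the driver’s side.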

License

RawFilePV is released under the Apache License 2.0 (see the LICENSE file).