# Kubernetes cluster manifests

These are the Kubernetes manifests of services running on k-space.ee domains.

Most endpoints are protected by OIDC authentication or Authelia SSO middleware.

## Cluster access

General discussion happens in the #kube Slack channel.

### Bootstrapping access

For bootstrap access, obtain `/etc/kubernetes/admin.conf` from one of the master nodes and place it under `~/.kube/config` on your machine.

Once Authelia is working, OIDC access for others can be enabled by running the following on the Kubernetes masters:

```bash
patch /etc/kubernetes/manifests/kube-apiserver.yaml - << EOF
@@ -23,6 +23,10 @@
     - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
     - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
     - --etcd-servers=
+    - --oidc-issuer-url=https://auth.k-space.ee
+    - --oidc-client-id=kubelogin
+    - --oidc-username-claim=preferred_username
+    - --oidc-groups-claim=groups
     - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
     - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
     - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
EOF
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```

The following can be used to talk to the Kubernetes cluster using OIDC credentials:

```bash
kubectl krew install oidc-login
mkdir -p ~/.kube
cat << EOF > ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    server: https://master.kube.k-space.ee:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: oidc
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: oidc
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://auth.k-space.ee
      - --oidc-client-id=kubelogin
      - --oidc-use-pkce
      - --oidc-extra-scope=profile,email,groups
      - --listen-address=
      command: kubectl
      env: null
      provideClusterInfo: false
EOF
```

For access control mapping see `cluster-role-bindings.yml`.
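As a sketch of what such a mapping looks like, a minimal ClusterRoleBinding granting an OIDC group read-only access could be written as below. The binding and group names here are illustrative assumptions, not copied from the actual file:

```yaml
# Hypothetical example: grant members of the OIDC group "k8s:view"
# (delivered via the --oidc-groups-claim=groups claim) read-only access.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-view                     # illustrative name
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: k8s:view                      # illustrative group name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view                          # built-in read-only ClusterRole
```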

### systemd-resolved issues on access

If you see an error like:

```
Unable to connect to the server: dial tcp: lookup master.kube.k-space.ee on no such host
```

configure the DNS settings of the VPN connection:

- Network → VPN → `IPv4` → Other nameservers (Muud nimeserverid): ``
- Network → VPN → `IPv6` → Other nameservers (Muud nimeserverid): `2001:bb8:4008:21::1`
- Network → VPN → `IPv4` → Search domains (Otsingudomeenid): `kube.k-space.ee`
- Network → VPN → `IPv6` → Search domains (Otsingudomeenid): `kube.k-space.ee`

## Technology mapping

Our self-hosted Kubernetes stack compared to AWS based deployments:

| Hipster startup | Self-hosted hackerspace | Purpose |
|-----------------|-------------------------|---------|
| AWS ALB | Traefik | Reverse proxy, also known as ingress controller in Kubernetes jargon |
| AWS AMP | Prometheus Operator | Monitoring and alerting |
| AWS CloudTrail | ECK Operator | Log aggregation |
| AWS DocumentDB | MongoDB Community Operator | Highly available NoSQL database |
| AWS EBS | Longhorn | Block storage for arbitrary applications needing persistent storage |
| AWS EC2 | Proxmox | Virtualization layer |
| AWS ECR | Harbor | Docker registry |
| AWS EKS | kubeadm | Provision Kubernetes master nodes |
| AWS NLB | MetalLB | L2/L3 level load balancing |
| AWS RDS for MySQL | MySQL Operator | Provision highly available relational databases |
| AWS Route53 | Bind and RFC2136 | DNS records and Let's Encrypt DNS validation |
| AWS S3 | Minio Operator | Highly available object storage |
| AWS VPC | Calico | Overlay network |
| Dex | Authelia | ACL mapping and OIDC provider which integrates with GitHub/Samba |
| GitHub Actions | Drone | Build Docker images |
| GitHub | Gitea | Source code management, issue tracking |
| GitHub OAuth2 | Samba (Active Directory compatible) | Source of truth for authentication and authorization |
| Gmail | Wildduck | E-mail |

External dependencies running as classic virtual machines:

- Samba as Authelia's source of truth
- Bind as DNS server

## Adding applications

Deploy applications via ArgoCD.

We use Traefik with Authelia for Ingress. Where possible and applicable, applications should use `Remote-User` authentication. This prevents application exposure on the public Internet. Otherwise use OpenID Connect for authentication; see Argo itself as an example of how that is done.

See `kspace-camtiler/ingress.yml` for a commented Ingress example.

Note that we do not use `IngressRoute` objects because they don't support `external-dns` out of the box. Do NOT add nginx annotations, we use Traefik. Do NOT manually add DNS records, they are added by `external-dns`. Do NOT manually create `Certificate` objects, these should be handled by the `tls:` section in the Ingress.
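A minimal sketch of an Ingress following these conventions might look like this. The hostname, service name, middleware annotation value, and TLS secret name are illustrative assumptions, not copied from the actual manifests:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example                       # illustrative name
  annotations:
    # Hypothetical Traefik middleware reference for Authelia SSO;
    # the real middleware name lives in the traefik/authelia manifests.
    traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
spec:
  rules:
  - host: example.k-space.ee          # external-dns creates the DNS record from this
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example             # illustrative service name
            port:
              number: 80
  tls:
  - hosts:
    - "*.k-space.ee"                  # wildcard certificate; no manual Certificate objects
    secretName: wildcard-tls          # illustrative secret name
```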

## Cluster formation

Created Ubuntu 22.04 VMs on Proxmox with local storage. Added some ARM64 workers by using Ubuntu 22.04 Server on Raspberry Pi.

After the machines have booted up and you can reach them via SSH:

```bash
# Enable required kernel modules
# (module list reconstructed: br_netfilter is needed for the bridge sysctls
# below, overlay for the container runtime)
cat > /etc/modules << EOF
overlay
br_netfilter
EOF
cat /etc/modules | xargs -L 1 -t modprobe

# Finetune sysctl:
cat > /etc/sysctl.d/99-k8s.conf << EOF
net.ipv4.conf.all.accept_redirects  = 0
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1

# Elasticsearch needs this
vm.max_map_count                    = 524288

# Bump inotify limits to make sure
EOF
sysctl --system

# Disable Ubuntu caching DNS resolver
systemctl disable systemd-resolved.service
systemctl stop systemd-resolved
rm -fv /etc/resolv.conf
cat > /etc/resolv.conf << EOF
# k-space nameserver, as referenced in the VPN DNS settings above
nameserver 2001:bb8:4008:21::1
EOF

# Disable multipathd as Longhorn handles that itself
systemctl mask multipathd snapd
systemctl disable --now multipathd snapd bluetooth ModemManager hciuart wpa_supplicant packagekit

# Permit root login
sed -i -e 's/PermitRootLogin no/PermitRootLogin without-password/' /etc/ssh/sshd_config
systemctl reload ssh
cat ~ubuntu/.ssh/authorized_keys > /root/.ssh/authorized_keys
userdel -f ubuntu
apt-get install -yqq linux-image-generic
apt-get remove -yq cloud-init linux-image-*-kvm
```

Install packages:

cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /
cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /

rm -fv /etc/apt/trusted.gpg
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | gpg --dearmor > /etc/apt/trusted.gpg.d/libcontainers.gpg
curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | gpg --dearmor > /etc/apt/trusted.gpg.d/libcontainers-cri-o.gpg
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg > /etc/apt/trusted.gpg.d/packages-cloud-google.gpg
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list

apt-get update
apt-get install -yqq --allow-change-held-packages apt-transport-https curl cri-o cri-o-runc kubelet=1.24.10-00 kubectl=1.24.10-00 kubeadm=1.24.10-00

cat << \EOF > /etc/containers/registries.conf
unqualified-search-registries = ["docker.io"]
# To pull Docker images from a mirror uncomment following
#prefix = "docker.io"
#location = "mirror.gcr.io"
sudo systemctl restart crio
sudo systemctl daemon-reload
sudo systemctl enable crio --now
apt-mark hold kubelet kubeadm kubectl

On the master:

```bash
kubeadm init --token-ttl=120m --pod-network-cidr= --control-plane-endpoint "master.kube.k-space.ee:6443" --upload-certs --apiserver-cert-extra-sans master.kube.k-space.ee --node-name master1.kube.k-space.ee
```

For the `kubeadm join` command specify the FQDN via `--node-name $(hostname -f)`.

Set AZ labels:

```bash
for j in $(seq 1 9); do
  for t in master mon worker storage; do
    kubectl label nodes ${t}${j}.kube.k-space.ee topology.kubernetes.io/zone=node${j}
  done
done
```

After forming the cluster add taints:

```bash
for j in $(seq 1 9); do
  kubectl label nodes worker${j}.kube.k-space.ee node-role.kubernetes.io/worker=''
done

for j in $(seq 1 4); do
  kubectl taint nodes mon${j}.kube.k-space.ee dedicated=monitoring:NoSchedule
  kubectl label nodes mon${j}.kube.k-space.ee dedicated=monitoring
done

for j in $(seq 1 4); do
  kubectl taint nodes storage${j}.kube.k-space.ee dedicated=storage:NoSchedule
  kubectl label nodes storage${j}.kube.k-space.ee dedicated=storage
done
```

For arm64 nodes, add a suitable taint to prevent scheduling non-multiarch images on them:

```bash
kubectl taint nodes worker9.kube.k-space.ee arch=arm64:NoSchedule
```
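Workloads built as multi-arch images can then opt in to these nodes with a matching toleration; a minimal sketch (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: multiarch-example             # illustrative name
spec:
  containers:
  - name: app
    image: docker.io/library/alpine:3.17   # multi-arch image
    command: ["sleep", "infinity"]
  tolerations:
  - key: arch                         # matches the arch=arm64:NoSchedule taint above
    operator: Equal
    value: arm64
    effect: NoSchedule
```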

For door controllers:

```bash
for j in ground front back; do
  kubectl taint nodes door-${j}.kube.k-space.ee dedicated=door:NoSchedule
  kubectl label nodes door-${j}.kube.k-space.ee dedicated=door
  kubectl taint nodes door-${j}.kube.k-space.ee arch=arm64:NoSchedule
done
```

To reduce wear on storage:

```bash
echo StandardOutput=null >> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
systemctl restart kubelet
```