Kubernetes manifests of services running on k-space.ee domains (mirrored to https://gitlab.com/k-space/kube)

Kubernetes cluster manifests

Introduction

These are the Kubernetes manifests of the services running on k-space.ee domains.

Most endpoints are protected by OIDC authentication or Authelia SSO middleware.

Cluster access

General discussion is happening in the #kube Slack channel.

Bootstrapping access

For bootstrap access, obtain `/etc/kubernetes/admin.conf` from one of the master nodes and place it under `~/.kube/config` on your machine.

Once Authelia is working, OIDC access for others can be enabled by running the following on the Kubernetes masters:

patch /etc/kubernetes/manifests/kube-apiserver.yaml - << EOF
@@ -23,6 +23,10 @@
     - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
     - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
     - --etcd-servers=https://127.0.0.1:2379
+    - --oidc-issuer-url=https://auth2.k-space.ee/
+    - --oidc-client-id=kubelogin
+    - --oidc-username-claim=sub
+    - --oidc-groups-claim=groups
     - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
     - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
     - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
EOF
systemctl daemon-reload
systemctl restart kubelet
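After the restart, a quick sanity check can confirm the flags landed in the static pod manifest. This is a sketch: the helper function name is ours, and the path in the usage comment is the standard kubeadm location for the manifest patched above.

```shell
# Hypothetical helper: count the --oidc-* flags present in an apiserver
# manifest. All four flags from the patch above should be present.
check_oidc_flags() {
  grep -c -- '--oidc-' "$1"
}

# Usage: check_oidc_flags /etc/kubernetes/manifests/kube-apiserver.yaml
# Expect 4 after the patch has been applied.
```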

The following can be used to talk to the Kubernetes cluster using OIDC credentials:

kubectl krew install oidc-login
mkdir -p ~/.kube
cat << EOF > ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1EVXdNakEzTXpVMU1Wb1hEVE15TURReU9UQTNNelUxTVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS2J2CjY3UFlXVHJMc3ZCQTZuWHUvcm55SlVhNnppTnNWTVN6N2w4ekhxM2JuQnhqWVNPUDJhN1RXTnpUTmZDanZBWngKTmlNbXJya1hpb2dYQWpVVkhSUWZlYm81TFIrb0JBOTdLWlcrN01UMFVJRXBuWVVaaTdBRHlaS01vcEJFUXlMNwp1SlU5UDhnNUR1T29FRHZieGJSMXFuV1JZRXpteFNmSFpocllpMVA3bFd4emkxR243eGRETFZaMjZjNm0xR3Y1CnViRjZyaFBXK1JSVkhiQzFKakJGeTBwRXdhYlUvUTd0Z2dic0JQUjk5NVZvMktCeElBelRmbHhVanlYVkJ3MjEKU2d3ZGI1amlpemxEM0NSbVdZZ0ZrRzd0NTVZeGF3ZmpaQjh5bW4xYjhUVjkwN3dRcG8veU8zM3RaaEE3L3BFUwpBSDJYeDk5bkpMbFVGVUtSY1A4Q0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZKNnZKeVk1UlJ1aklQWGxIK2ZvU3g2QzFRT2RNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQ04zcGtCTVM3ekkrbUhvOWdTZQp6SzdXdjl3bXlCTVE5Q3crQXBSNnRBQXg2T1VIN0d1enc5TTV2bXNkYjkrYXBKMHBlZFB4SUg3YXZ1aG9SUXNMCkxqTzRSVm9BMG9aNDBZV3J3UStBR0dvdkZuaWNleXRNcFVSNEZjRXc0ZDRmcGl6V3d0TVNlRlRIUXR6WG84V2MKNFJGWC9xUXNVR1NWa01PaUcvcVVrSFpXQVgyckdhWXZ1Tkw2eHdSRnh5ZHpsRTFSUk56TkNvQzVpTXhjaVRNagpackEvK0pqVEFWU2FuNXZnODFOSmthZEphbmNPWmEwS3JEdkZzd1JJSG5CMGpMLzh3VmZXSTV6czZURU1VZUk1ClF6dU01QXUxUFZ4VXZJUGhlMHl6UXZjWDV5RlhnMkJGU3MzKzJBajlNcENWVTZNY2dSSTl5TTRicitFTUlHL0kKY0pjPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://master.kube.k-space.ee:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: oidc
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: oidc
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://auth2.k-space.ee/
      - --oidc-client-id=oidc-gateway-kubelogin
      - --oidc-use-pkce
      - --oidc-extra-scope=profile,email,groups
      - --listen-address=127.0.0.1:27890
      command: kubectl
      env: null
      provideClusterInfo: false
EOF

For access control mapping see cluster-role-bindings.yml
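An entry in that file might look roughly like the following; this is a hedged illustration, and the group name is hypothetical rather than copied from the actual file:

```yaml
# Hypothetical example: grant cluster-admin to an OIDC group.
# The group names come from the --oidc-groups-claim=groups apiserver flag.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-cluster-admins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: k-space:kubernetes-admins   # hypothetical group name
```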

systemd-resolved issues on access

If kubectl fails with an error like:

Unable to connect to the server: dial tcp: lookup master.kube.k-space.ee on 127.0.0.53:53: no such host

configure the VPN connection's nameservers and search domains in the network settings (Estonian UI labels in parentheses):
Network → VPN → `IPv4` → Other nameservers (Muud nimeserverid): `172.21.0.1`
Network → VPN → `IPv6` → Other nameservers (Muud nimeserverid): `2001:bb8:4008:21::1`
Network → VPN → `IPv4` → Search domains (Otsingudomeenid): `kube.k-space.ee`
Network → VPN → `IPv6` → Search domains (Otsingudomeenid): `kube.k-space.ee`

Technology mapping

Our self-hosted Kubernetes stack compared to AWS based deployments:

Hipster startup   | Self-hosted hackerspace             | Purpose
AWS ALB           | Traefik                             | Reverse proxy, also known as ingress controller in Kubernetes jargon
AWS AMP           | Prometheus Operator                 | Monitoring and alerting
AWS CloudTrail    | ECK Operator                        | Log aggregation
AWS DocumentDB    | MongoDB Community Operator          | Highly available NoSQL database
AWS EBS           | Longhorn                            | Block storage for arbitrary applications needing persistent storage
AWS EC2           | Proxmox                             | Virtualization layer
AWS ECR           | Harbor                              | Docker registry
AWS EKS           | kubeadm                             | Provision Kubernetes master nodes
AWS NLB           | MetalLB                             | L2/L3 level load balancing
AWS RDS for MySQL | MySQL Operator                      | Provision highly available relational databases
AWS Route53       | Bind and RFC2136                    | DNS records and Let's Encrypt DNS validation
AWS S3            | Minio Operator                      | Highly available object storage
AWS VPC           | Calico                              | Overlay network
Dex               | Authelia                            | ACL mapping and OIDC provider which integrates with GitHub/Samba
GitHub Actions    | Drone                               | Build Docker images
GitHub            | Gitea                               | Source code management, issue tracking
GitHub OAuth2     | Samba (Active Directory compatible) | Source of truth for authentication and authorization
Gmail             | Wildduck                            | E-mail

External dependencies running as classic virtual machines:

  • Samba as Authelia's source of truth
  • Bind as DNS server

Adding applications

Deploy applications via ArgoCD
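A minimal ArgoCD Application manifest could look roughly like this; the application name and path are hypothetical, and the repository URL is this repo's mirror:

```yaml
# Hypothetical ArgoCD Application pointing at one directory of this repo.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: whoami              # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitlab.com/k-space/kube.git
    targetRevision: HEAD
    path: whoami            # hypothetical directory in this repo
  destination:
    server: https://kubernetes.default.svc
    namespace: whoami
  syncPolicy:
    automated: {}
```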

We use Traefik with Authelia for Ingress. Where possible and applicable, applications should use Remote-User header authentication, which avoids exposing the application itself on the public Internet. Otherwise use OpenID Connect for authentication; see Argo itself for an example of how that is done.

See kspace-camtiler/ingress.yml for commented Ingress example.

Note that we do not use IngressRoute objects because they do not support external-dns out of the box. Do NOT add nginx annotations; we use Traefik. Do NOT manually add DNS records; they are added by external-dns. Do NOT manually create Certificate objects; these are handled by the tls: section in the Ingress.
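Following those rules, a bare-bones Ingress might look like this sketch; the hostname, Service name, port and Secret name are hypothetical (see kspace-camtiler/ingress.yml for the commented real-world example):

```yaml
# Hypothetical Ingress: external-dns creates the DNS record from the host,
# and the certificate is requested via the tls: section, not a manual
# Certificate object.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  rules:
  - host: example.k-space.ee         # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example            # hypothetical Service
            port:
              number: 80
  tls:
  - hosts:
    - example.k-space.ee
    secretName: example-tls          # hypothetical; filled by cert-manager
```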

Cluster formation

Created Ubuntu 22.04 VMs on Proxmox with local storage. Added some ARM64 workers using Ubuntu 22.04 Server on Raspberry Pi.

After machines have booted up and you can reach them via SSH:

# Disable Ubuntu caching DNS resolver
systemctl disable systemd-resolved.service
systemctl stop systemd-resolved
rm -fv /etc/resolv.conf
cat > /etc/resolv.conf << EOF
nameserver 1.1.1.1
nameserver 8.8.8.8
EOF

# Disable multipathd, as Longhorn handles multipathing itself,
# along with other services not needed on cluster nodes
systemctl mask multipathd snapd
systemctl disable --now multipathd snapd bluetooth ModemManager hciuart wpa_supplicant packagekit

# Permit root login
sed -i -e 's/PermitRootLogin no/PermitRootLogin without-password/' /etc/ssh/sshd_config
systemctl reload ssh
cat ~ubuntu/.ssh/authorized_keys > /root/.ssh/authorized_keys
userdel -f ubuntu
# Switch to the generic kernel and drop cloud-init
apt-get install -yqq linux-image-generic
apt-get remove -yq cloud-init linux-image-*-kvm

On master:

kubeadm init --token-ttl=120m --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint "master.kube.k-space.ee:6443" --upload-certs --apiserver-cert-extra-sans master.kube.k-space.ee --node-name master1.kube.k-space.ee

For the kubeadm join command, specify the FQDN via --node-name $(hostname -f).

Set AZ labels:

for j in $(seq 1 9); do
  for t in master mon worker storage; do
    kubectl label nodes ${t}${j}.kube.k-space.ee topology.kubernetes.io/zone=node${j}
  done
done
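These zone labels let workloads spread replicas across the physical nodes, for example via a topology spread constraint. A sketch of such a pod spec fragment (the app label is hypothetical):

```yaml
# Hypothetical pod spec fragment: spread replicas evenly across the
# topology.kubernetes.io/zone labels set above.
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: example          # hypothetical app label
```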

After forming the cluster, add labels and taints:

for j in $(seq 1 9); do
  kubectl label nodes worker${j}.kube.k-space.ee node-role.kubernetes.io/worker=''
done

for j in $(seq 1 4); do
  kubectl taint nodes mon${j}.kube.k-space.ee dedicated=monitoring:NoSchedule
  kubectl label nodes mon${j}.kube.k-space.ee dedicated=monitoring
done

for j in $(seq 1 4); do
  kubectl taint nodes storage${j}.kube.k-space.ee dedicated=storage:NoSchedule
  kubectl label nodes storage${j}.kube.k-space.ee dedicated=storage
done

For arm64 nodes, add a suitable taint to prevent scheduling non-multiarch images on them:

kubectl taint nodes worker9.kube.k-space.ee arch=arm64:NoSchedule

For door controllers:

for j in ground front back; do
  kubectl taint nodes door-${j}.kube.k-space.ee dedicated=door:NoSchedule
  kubectl label nodes door-${j}.kube.k-space.ee dedicated=door
  kubectl taint nodes door-${j}.kube.k-space.ee arch=arm64:NoSchedule
done
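Workloads meant for those tainted nodes need a matching toleration and nodeSelector in their pod spec, roughly like this sketch (shown for the dedicated=monitoring case above):

```yaml
# Hypothetical pod spec fragment for scheduling onto the mon* nodes:
# the toleration permits the dedicated=monitoring:NoSchedule taint,
# and the nodeSelector targets the dedicated=monitoring label.
spec:
  nodeSelector:
    dedicated: monitoring
  tolerations:
  - key: dedicated
    operator: Equal
    value: monitoring
    effect: NoSchedule
```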

To reduce wear on storage:

echo StandardOutput=null >> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
systemctl restart kubelet