
Kubernetes cluster manifests

Introduction

These are the Kubernetes manifests of services running on k-space.ee domains. The applications are listed at https://auth2.k-space.ee for authenticated users.

Cluster access

General discussion is happening in the #kube Slack channel.

Bootstrapping access

For bootstrap access, obtain `/etc/kubernetes/admin.conf` from one of the master nodes and place it at `~/.kube/config` on your machine.

Once Passmower is working, OIDC access for others can be enabled by running the following on the Kubernetes masters:

```sh
patch /etc/kubernetes/manifests/kube-apiserver.yaml - << EOF
@@ -23,6 +23,10 @@
     - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
     - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
     - --etcd-servers=https://127.0.0.1:2379
+    - --oidc-issuer-url=https://auth.k-space.ee/
+    - --oidc-client-id=oidc-gateway.kubelogin
+    - --oidc-username-claim=sub
+    - --oidc-groups-claim=groups
     - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
     - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
     - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
EOF
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```

The following can be used to talk to the Kubernetes cluster using OIDC credentials:

```sh
kubectl krew install oidc-login
mkdir -p ~/.kube
cat << EOF > ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1EVXdNakEzTXpVMU1Wb1hEVE15TURReU9UQTNNelUxTVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS2J2CjY3UFlXVHJMc3ZCQTZuWHUvcm55SlVhNnppTnNWTVN6N2w4ekhxM2JuQnhqWVNPUDJhN1RXTnpUTmZDanZBWngKTmlNbXJya1hpb2dYQWpVVkhSUWZlYm81TFIrb0JBOTdLWlcrN01UMFVJRXBuWVVaaTdBRHlaS01vcEJFUXlMNwp1SlU5UDhnNUR1T29FRHZieGJSMXFuV1JZRXpteFNmSFpocllpMVA3bFd4emkxR243eGRETFZaMjZjNm0xR3Y1CnViRjZyaFBXK1JSVkhiQzFKakJGeTBwRXdhYlUvUTd0Z2dic0JQUjk5NVZvMktCeElBelRmbHhVanlYVkJ3MjEKU2d3ZGI1amlpemxEM0NSbVdZZ0ZrRzd0NTVZeGF3ZmpaQjh5bW4xYjhUVjkwN3dRcG8veU8zM3RaaEE3L3BFUwpBSDJYeDk5bkpMbFVGVUtSY1A4Q0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZKNnZKeVk1UlJ1aklQWGxIK2ZvU3g2QzFRT2RNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQ04zcGtCTVM3ekkrbUhvOWdTZQp6SzdXdjl3bXlCTVE5Q3crQXBSNnRBQXg2T1VIN0d1enc5TTV2bXNkYjkrYXBKMHBlZFB4SUg3YXZ1aG9SUXNMCkxqTzRSVm9BMG9aNDBZV3J3UStBR0dvdkZuaWNleXRNcFVSNEZjRXc0ZDRmcGl6V3d0TVNlRlRIUXR6WG84V2MKNFJGWC9xUXNVR1NWa01PaUcvcVVrSFpXQVgyckdhWXZ1Tkw2eHdSRnh5ZHpsRTFSUk56TkNvQzVpTXhjaVRNagpackEvK0pqVEFWU2FuNXZnODFOSmthZEphbmNPWmEwS3JEdkZzd1JJSG5CMGpMLzh3VmZXSTV6czZURU1VZUk1ClF6dU01QXUxUFZ4VXZJUGhlMHl6UXZjWDV5RlhnMkJGU3MzKzJBajlNcENWVTZNY2dSSTl5TTRicitFTUlHL0kKY0pjPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://master.kube.k-space.ee:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: oidc
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: oidc
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://auth.k-space.ee/
      - --oidc-client-id=passmower.kubelogin
      - --oidc-use-pkce
      - --oidc-extra-scope=profile,email,groups
      - --listen-address=127.0.0.1:27890
      command: kubectl
      env: null
      provideClusterInfo: false
EOF
```

For access control mapping see `cluster-role-bindings.yml`.
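With the kubeconfig above in place, a quick sanity check of the OIDC flow could look like this (a sketch; the first command opens a browser window for the Passmower login, after which oidc-login caches the token):

```shell
# First invocation triggers the browser-based OIDC login;
# succeeding confirms the API server accepts your identity.
kubectl get nodes

# Show what the cluster role bindings grant to your account.
kubectl auth can-i --list
```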

systemd-resolved issues on access

```
Unable to connect to the server: dial tcp: lookup master.kube.k-space.ee on 127.0.0.53:53: no such host
```

Configure the VPN connection's nameservers and search domains:

  • Network → VPN → `IPv4` → Other nameservers (Muud nimeserverid): `172.21.0.1`
  • Network → VPN → `IPv6` → Other nameservers (Muud nimeserverid): `2001:bb8:4008:21::1`
  • Network → VPN → `IPv4` → Search domains (Otsingudomeenid): `kube.k-space.ee`
  • Network → VPN → `IPv6` → Search domains (Otsingudomeenid): `kube.k-space.ee`
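The same settings can be applied from the command line with `resolvectl` (a sketch; `tun0` is an assumed VPN interface name, adjust to your connection):

```shell
# Send DNS queries for the cluster to the k-space resolvers over the VPN link
sudo resolvectl dns tun0 172.21.0.1 2001:bb8:4008:21::1
# Route lookups under kube.k-space.ee through that link
sudo resolvectl domain tun0 kube.k-space.ee
```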

Technology mapping

Our self-hosted Kubernetes stack compared to AWS-based deployments:

| Hipster startup | Self-hosted hackerspace | Purpose |
|---|---|---|
| AWS ALB | Traefik | Reverse proxy, also known as ingress controller in Kubernetes jargon |
| AWS AMP | Prometheus Operator | Monitoring and alerting |
| AWS CloudTrail | ECK Operator | Log aggregation |
| AWS DocumentDB | MongoDB Community Operator | Highly available NoSQL database |
| AWS EBS | Longhorn | Block storage for arbitrary applications needing persistent storage |
| AWS EC2 | Proxmox | Virtualization layer |
| AWS ECR | Harbor | Docker registry |
| AWS EKS | kubeadm | Provision Kubernetes master nodes |
| AWS NLB | MetalLB | L2/L3 level load balancing |
| AWS RDS for MySQL | MySQL Operator | Provision highly available relational databases |
| AWS Route53 | Bind and RFC2136 | DNS records and Let's Encrypt DNS validation |
| AWS S3 | Minio Operator | Highly available object storage |
| AWS VPC | Calico | Overlay network |
| Dex | Passmower | ACL mapping and OIDC provider which integrates with GitHub/Samba |
| GitHub Actions | Drone | Build Docker images |
| GitHub | Gitea | Source code management, issue tracking |
| GitHub OAuth2 | Samba (Active Directory compatible) | Source of truth for authentication and authorization |
| Gmail | Wildduck | E-mail |

External dependencies running as classic virtual machines:

  • Bind as DNS server

Adding applications

Deploy applications via ArgoCD

We use Traefik with Passmower for Ingress. Where possible and applicable, applications should use Remote-User authentication, as this avoids exposing the application itself on the public Internet. Otherwise use OpenID Connect for authentication; see ArgoCD itself as an example of how that is done.

See `camtiler/ingress.yml` for a commented Ingress example.

Note that we do not use IngressRoute objects because they don't support external-dns out of the box.

  • Do NOT add nginx annotations; we use Traefik.
  • Do NOT manually add DNS records; they are added by external-dns.
  • Do NOT manually create Certificate objects; these should be handled by the `tls:` section in Ingress.
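A minimal sketch of an Ingress following these rules (hostname, names, and the ingress class are hypothetical; the authoritative, commented example lives in camtiler/ingress.yml):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  # No nginx annotations, no manual DNS records, no manual Certificates:
  # external-dns picks the hostname up from the rule below, and the
  # certificate is provisioned from the tls: section.
spec:
  ingressClassName: traefik          # assumption: the class name used in this cluster
  rules:
    - host: example.k-space.ee       # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example        # hypothetical Service
                port:
                  number: 80
  tls:
    - hosts:
        - example.k-space.ee
      secretName: example-tls
```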

Cluster formation

We created Ubuntu 22.04 VMs on Proxmox with local storage, and added some arm64 workers running Ubuntu 22.04 Server on Raspberry Pis.

After machines have booted up and you can reach them via SSH:

```sh
# Disable Ubuntu caching DNS resolver
systemctl disable systemd-resolved.service
systemctl stop systemd-resolved
rm -fv /etc/resolv.conf
cat > /etc/resolv.conf << EOF
nameserver 1.1.1.1
nameserver 8.8.8.8
EOF

# Disable multipathd as Longhorn handles that itself; mask other unneeded services too
systemctl mask multipathd snapd
systemctl disable --now multipathd snapd bluetooth ModemManager hciuart wpa_supplicant packagekit

# Permit root login
sed -i -e 's/PermitRootLogin no/PermitRootLogin without-password/' /etc/ssh/sshd_config
systemctl reload ssh
cat ~ubuntu/.ssh/authorized_keys > /root/.ssh/authorized_keys
userdel -f ubuntu
apt-get install -yqq linux-image-generic
apt-get remove -yq cloud-init linux-image-*-kvm
```

On the first master:

```sh
kubeadm init --token-ttl=120m --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint "master.kube.k-space.ee:6443" --upload-certs --apiserver-cert-extra-sans master.kube.k-space.ee --node-name master1.kube.k-space.ee
```

For the `kubeadm join` command, specify the FQDN via `--node-name $(hostname -f)`.
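The join command printed by `kubeadm init` then looks roughly like this (`<token>` and `<hash>` are placeholders; copy the real values from the init output):

```shell
# Placeholders: <token> and <hash> come from the `kubeadm init` output.
kubeadm join master.kube.k-space.ee:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --node-name $(hostname -f)
```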

Set AZ labels:

```sh
for j in $(seq 1 9); do
  for t in master mon worker storage; do
    kubectl label nodes ${t}${j}.kube.k-space.ee topology.kubernetes.io/zone=node${j}
  done
done
```

After forming the cluster, add labels and taints:

```sh
for j in $(seq 1 9); do
  kubectl label nodes worker${j}.kube.k-space.ee node-role.kubernetes.io/worker=''
done

for j in $(seq 1 4); do
  kubectl taint nodes mon${j}.kube.k-space.ee dedicated=monitoring:NoSchedule
  kubectl label nodes mon${j}.kube.k-space.ee dedicated=monitoring
done

for j in $(seq 1 4); do
  kubectl taint nodes storage${j}.kube.k-space.ee dedicated=storage:NoSchedule
  kubectl label nodes storage${j}.kube.k-space.ee dedicated=storage
done
```

For arm64 nodes, add a suitable taint to prevent scheduling non-multiarch images on them:

```sh
kubectl taint nodes worker9.kube.k-space.ee arch=arm64:NoSchedule
```
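Multi-arch workloads that should still be allowed onto those nodes need a matching toleration in the pod spec (a sketch; the pod name and image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: multiarch-example   # hypothetical
spec:
  tolerations:
    # Permits scheduling onto nodes tainted arch=arm64:NoSchedule
    - key: arch
      operator: Equal
      value: arm64
      effect: NoSchedule
  containers:
    - name: app
      image: docker.io/library/alpine:3  # a multi-arch image
      command: ["sleep", "infinity"]
```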

For door controllers:

```sh
for j in ground front back; do
  kubectl taint nodes door-${j}.kube.k-space.ee dedicated=door:NoSchedule
  kubectl label nodes door-${j}.kube.k-space.ee dedicated=door
  kubectl taint nodes door-${j}.kube.k-space.ee arch=arm64:NoSchedule
done
```

To reduce wear on storage:

```sh
echo StandardOutput=null >> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
systemctl restart kubelet
```