Created Ubuntu 22.04 VMs on Proxmox with local storage.
Added some ARM64 workers by running Ubuntu 22.04 Server on Raspberry Pis.
After the machines have booted up and you can reach them via SSH:
```
# Disable Ubuntu caching DNS resolver
systemctl disable systemd-resolved.service
systemctl stop systemd-resolved
rm -fv /etc/resolv.conf
cat > /etc/resolv.conf << EOF
nameserver 1.1.1.1
nameserver 8.8.8.8
EOF
# Disable multipathd as Longhorn handles that itself
systemctl mask multipathd snapd
systemctl disable --now multipathd snapd bluetooth ModemManager hciuart wpa_supplicant packagekit
# Permit root login
sed -i -e 's/PermitRootLogin no/PermitRootLogin without-password/' /etc/ssh/sshd_config
systemctl reload ssh
cat ~ubuntu/.ssh/authorized_keys > /root/.ssh/authorized_keys
userdel -f ubuntu
apt-get install -yqq linux-image-generic
apt-get remove -yq cloud-init linux-image-*-kvm
```
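A quick sanity check before continuing (purely illustrative): with systemd-resolved gone, lookups should hit the servers in `/etc/resolv.conf` directly, and kubeadm later relies on the node's FQDN resolving correctly:
```
# Resolution now bypasses the disabled stub resolver
getent hosts k-space.ee
# kubeadm init/join below expect the FQDN to be set correctly
hostname -f
```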
On the first master:
```
kubeadm init --token-ttl=120m --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint "master.kube.k-space.ee:6443" --upload-certs --apiserver-cert-extra-sans master.kube.k-space.ee --node-name master1.kube.k-space.ee
```
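On success kubeadm prints the remaining steps; the usual follow-up to get a working `kubectl` on the master looks like this:
```
# Use the generated admin kubeconfig
mkdir -p "$HOME/.kube"
cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
kubectl get nodes
```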
For the `kubeadm join` command, specify the FQDN via `--node-name $(hostname -f)`:
```
# On a master:
kubeadm token create --print-join-command
# On the joining node:
<printed join command> --node-name "$(hostname -f)"
```
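To confirm the node registered under its FQDN:
```
# NAME column should show FQDNs such as worker1.kube.k-space.ee
kubectl get nodes -o wide
```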
Set AZ labels:
```
for j in $(seq 1 9); do
  for t in master mon worker storage; do
    kubectl label nodes ${t}${j}.kube.k-space.ee topology.kubernetes.io/zone=node${j}
  done
done
```
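To spot-check the zone assignment afterwards:
```
# Print the zone label next to every node
kubectl get nodes -L topology.kubernetes.io/zone
```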
Taint and label the dedicated nodes:
```
for j in $(seq 1 4); do
  kubectl taint nodes mon${j}.kube.k-space.ee dedicated=monitoring:NoSchedule
  kubectl label nodes mon${j}.kube.k-space.ee dedicated=monitoring
done
for j in $(seq 1 4); do
  kubectl taint nodes storage${j}.kube.k-space.ee dedicated=storage:NoSchedule
  kubectl label nodes storage${j}.kube.k-space.ee dedicated=storage
done
```
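Pods will only schedule onto these nodes if they carry a matching toleration together with a `dedicated=...` node selector. To verify what was applied (node name illustrative):
```
# Inspect taints on a dedicated node
kubectl describe node mon1.kube.k-space.ee | grep -A1 -i taints
# List nodes carrying a given dedicated label
kubectl get nodes -l dedicated=storage
```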
For `arm64` nodes, add a suitable taint to prevent scheduling non-multiarch images on them:
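The exact taint is not shown here; a minimal sketch, assuming a plain `arch=arm64:NoSchedule` convention (taint key and node name are assumptions, not the cluster's actual values):
```
# Assumed convention: keep non-multiarch images off the Raspberry Pis;
# only pods that tolerate arch=arm64 will schedule here
kubectl taint nodes worker9.kube.k-space.ee arch=arm64:NoSchedule
```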
To reduce wear on storage, stop the kubelet's stdout from being written to the journal:
```
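# Appends to the [Service] section of the kubeadm drop-in:
# kubelet stdout goes to /dev/null instead of the journal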
echo StandardOutput=null >> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
systemctl restart kubelet
```
## Technology mapping
Our self-hosted Kubernetes stack compared to AWS-based deployments: