106 Commits

Author SHA1 Message Date
5e04a1bd43 provision new worker nodes with ansible 2024-08-09 12:07:03 +03:00
8a1b0b52af add new worker9 2024-08-08 22:39:35 +03:00
6b24ede7ac Upgrade to Kubernetes 1.30 2024-08-08 19:45:46 +03:00
e0cf532e42 Upgrade to Kubernetes 1.29 2024-08-08 18:55:02 +03:00
Erki Aas
59373041cc passmower: run in 3 replicas 2024-08-08 15:53:53 +03:00
4e80899c77 Prepare for separation of ansible Git repo 2024-08-08 12:56:25 +03:00
Erki Aas
9c2b5c39ee fix/update harbor 2024-08-08 12:45:57 +03:00
d3eb888d58 doc: inventory: reference rosdump 2024-08-08 12:40:54 +03:00
3714b174e7 camtiler: disable, it's broken 2024-08-03 09:03:14 +03:00
a1acb06e12 traefik: publish services (for argo healthy) 2024-08-03 09:03:13 +03:00
0b6ab650a2 argo: add apps (already) in argo to git (config drift) 2024-08-03 09:03:11 +03:00
35404464f4 argo: strongarm autosync to prevent further config drift
Commenting empty syncPolicy, otherwise argocd sees it as diff
2024-08-03 08:01:55 +03:00
41da5931f9 auth migra: whoami 2024-08-03 06:04:27 +03:00
6879a4e5a5 argo: drone no longer exists 2024-08-03 06:04:27 +03:00
9b2c655a02 camtiler: unify to cam.k-space.ee 2024-08-03 06:04:27 +03:00
8876300dc4 argo config drift: camtiler 2024-08-03 06:04:24 +03:00
8199b3b732 argo config drift: wildduck
Change for apps/StatefulSet/wildduck/wildduck-operator
caused by 2d25377090 applied by ArgoCD:
-      serviceAccountName: codemowers-io-wildduck-operator
+      serviceAccountName: codemowers-cloud-wildduck-operator
2024-08-03 05:35:31 +03:00
43c9b3aa93 argo config drift: woodpecker 2024-08-03 05:35:31 +03:00
504bd3012e argo config drift: doorboy 2024-08-03 04:27:31 +03:00
75b5d39880 signs: deploy with argo 2024-08-03 04:27:31 +03:00
7377b62b3f doc: readme tip + todo for argo 'user-facing' doc 2024-08-03 04:27:31 +03:00
cd13de6cee doc: Reword backlink warning
we already got more broken links :/
I don't really want it to be an aggressive warning.
2024-08-03 04:27:31 +03:00
13da9a8877 Add redirects sign.k-space.ee, members.k-space.ee
There still are dead inventory links with members.k-space.ee
2024-08-03 04:27:31 +03:00
490770485d fixup auth2 → auth rename 2024-08-03 04:27:20 +03:00
ba48643a37 inventory: tls host is k-space.ee, not codemowers
seems like copy-pasta typo
2024-08-03 01:44:15 +03:00
Erki Aas
18a0079a21 chore: add eaas as contributor 2024-07-30 14:15:13 +03:00
Erki Aas
885b13ecd7 chore: move doorboy to hackerspace 2024-07-30 14:13:25 +03:00
Erki Aas
e17caa9c2d passmower: update login link template 2024-07-30 14:12:54 +03:00
Erki Aas
336ab2efa2 update readme 2024-07-30 12:40:01 +03:00
27a5fe14c7 docs: commit todo items 2024-07-30 11:03:00 +03:00
66034d2463 docs: mega refactor
Also bunch of edits at wiki.k-space.ee
2024-07-30 10:51:34 +03:00
186ea5d947 docs: hackerspace / Inventory-app 2024-07-30 10:33:25 +03:00
470d4f3459 docs: Slack bots 2024-07-30 10:32:57 +03:00
8ad6b989e5 Migrate signs.k-space.ee from GitLab to kube
copy from ripe87
2024-07-30 10:18:40 +03:00
b6bf3ab225 passmower users: list prefix before name 2024-07-30 08:00:14 +03:00
7cac31964d docs: camtiler & doors 2024-07-30 06:13:56 +03:00
a250363bb0 rm replaced-unused mysql-operator 2024-07-30 02:56:50 +03:00
Erki Aas
480ff4f426 update passmower deployment 2024-07-29 15:59:45 +03:00
b737d37b9c fmt ansible: compact and more readable 2024-07-28 22:28:30 +03:00
b4ad080e95 zrepl: enable prometheus for offsite 2024-07-28 21:46:26 +03:00
Simon
a5ad80d8cd Make login url clickable in emails 2024-07-28 18:42:38 +00:00
62be47c2e1 inventory: add ingress and other manifests 2024-07-28 20:58:25 +03:00
249ad2e9ed fix and update harbor install 2024-07-28 20:22:08 +03:00
0c38d2369b attempt to get kibana working 2024-07-28 20:22:08 +03:00
b07a5b9bc0 reconfigure grub only on x86 nodes 2024-07-28 20:22:08 +03:00
2d25377090 wildduck: migrate to dragonfly, disable network policies, upgrade wildduck-operator 2024-07-28 20:22:08 +03:00
73d185b2ee fix redirects 2024-07-28 20:22:08 +03:00
0eb2dc6503 deprecate crunchydata postgres operator 2024-07-28 20:22:08 +03:00
34f1b53544 zrepl: prometheus target 2024-07-28 20:00:51 +03:00
fd1aeaa1a3 Upgrade Calico 2024-07-28 10:38:25 +03:00
b8477de6a8 Upgrade cert-manager 2024-07-28 10:37:34 +03:00
2f712a935e fixup: nas root is not encrypted and failed 2024-07-28 03:32:11 +03:00
792ff38bea mv zrepl.yml to playbook.yml 2024-07-28 03:31:16 +03:00
e929b52e6d Fix ansible.cfg 2024-07-28 01:42:55 +03:00
b2b93879c2 mv to ansible/ 2024-07-27 23:55:16 +03:00
c222f22768 fix zrepl playbook 2024-07-27 23:54:29 +03:00
28ed62c40e migrate wildflock to new passmower 2024-07-27 23:51:04 +03:00
74600efb4c zrepl 2024-07-27 23:49:45 +03:00
79aaaf7498 add todo 2024-07-27 23:08:39 +03:00
f0b78f7b17 migrate grafana to new passmower and external db 2024-07-27 23:08:29 +03:00
ba520da57e update readme 2024-07-27 23:08:15 +03:00
30503ad121 update readme 2024-07-27 23:06:20 +03:00
fbe4a55251 migrate gitea to new passmower 2024-07-27 22:57:01 +03:00
37567eccf9 migrate wiki to new passmower 2024-07-27 22:57:01 +03:00
d3ba1cc05f add openebs-localpath 2024-07-27 22:57:01 +03:00
61b1b1d6ef migrate woodpecker to external mysql 2024-07-27 22:57:01 +03:00
1e8bccbfa3 migrate to new passmower 2024-07-27 22:57:01 +03:00
e89edca340 enable xfs quotas on worker node rootfs 2024-07-27 22:57:01 +03:00
2bb13ef505 manage kube-apiserver manifest with ansible 2024-07-27 22:57:01 +03:00
c44cfb8bc8 fix kubelogin 2024-07-27 22:57:01 +03:00
417f3ddcb8 Update storage nodes and readd Raspberry Pi 400 2024-07-27 22:11:38 +03:00
32fbd498cf Fix typo 2024-07-27 11:46:39 +03:00
97563e8092 Upgrade ECK operator 2024-07-27 10:50:17 +03:00
4141c6b8ae Add OpenSearch operator 2024-07-27 08:42:16 +03:00
bd26aa46b4 Upgrade Etherpad 2024-07-27 08:31:56 +03:00
92459ed68b Reorder SSH key update playbook 2024-07-27 08:30:53 +03:00
9cf57d8bc6 Upgrade MetalLB 2024-07-27 08:30:53 +03:00
af1c78dea6 deprecate members.k-space.ee 2024-07-27 03:17:24 +03:00
2e77813162 migrate to new passmower 2024-07-27 03:17:24 +03:00
ca623c11fd Update kubeadm, kubectl, kubelet deployment 2024-07-27 01:06:20 +03:00
047cbb5c6b traefik: upgrade to 3.1, migrate dashboard via ingressroute 2024-07-27 00:06:07 +03:00
3e52f37cde Add DragonflyDB operator 2024-07-26 17:46:45 +03:00
b955369e2a Upgrade CloudNativePG to 1.23.2 2024-07-26 17:35:42 +03:00
5e765e9788 Use Codemower's image for mikrotik-exporter 2024-07-26 14:15:18 +03:00
5d4f49409c Remove Keel annotations 2024-07-26 13:56:13 +03:00
de573721bd Deprecate Drone as its devs moved on to develop Gitness 2024-07-26 13:51:55 +03:00
c868a62ab7 Update to Woodpecker 2.7.0 2024-07-26 13:26:24 +03:00
7b6f6252a5 Update external-dns 2024-07-26 13:16:49 +03:00
9223c956c0 Update Bind 9.19 to 9.20 2024-07-26 13:16:22 +03:00
1d4e5051d8 Add Prusa 3D printer web endpoint 2024-07-26 13:03:20 +03:00
56bb5be8a9 grafana: Upgrade and fix ACL
2024-07-26 12:36:08 +03:00
d895360510 monitoring: Upgrade node-exporter 2024-07-25 19:17:24 +03:00
bc8de58ca8 monitoring: Upgrade blackbox-exporter 2024-07-25 19:17:24 +03:00
8d355ff9dc Update Prometheus operator 2024-07-25 19:17:24 +03:00
Erki Aas
dc2a08dc78 goredirect: fix mongo uri 2024-07-24 12:51:53 +03:00
19a0b70b9e woodpecker: fix agent 2024-07-19 19:49:32 +03:00
9c656b0ef9 woodpecker: restore storage from backup 2024-07-19 18:13:09 +03:00
278817249e Add Ansible tasks to update authorized SSH keys 2024-07-19 14:08:51 +03:00
cb5644c7f3 Ansible SSH multiplexing fixes 2024-07-19 12:55:40 +03:00
78ef148f83 Add Ansible playbook to update known_hosts and ssh_config 2024-07-19 11:49:47 +03:00
Erki Aas
c2b9ed0368 inventory: migrate to external mongo 2024-07-17 23:58:38 +03:00
Erki Aas
43abf125a9 pve: add pve-internal.k-space.ee for pve-csi in whitelisted codemowers.cloud cluster 2024-07-17 17:59:59 +03:00
Erki Aas
71d968a815 Upgrade longhorn to 1.6.2 2024-07-07 14:38:02 +03:00
Erki Aas
9b4976450f Upgrade longhorn to 1.5.5 2024-07-07 14:00:27 +03:00
27eb0aa6cc Bump Gitea to 1.22.1 2024-07-04 16:26:06 +03:00
f97a77e5aa rm dev.k-space.ee, VM deprecated 2024-06-20 17:27:35 +03:00
160 changed files with 17546 additions and 82186 deletions


@@ -1,10 +0,0 @@
---
kind: pipeline
type: kubernetes
name: gitleaks
steps:
- name: gitleaks
image: zricethezav/gitleaks
commands:
- gitleaks detect --source=/drone/src

.gitignore

@@ -1,3 +1,4 @@
*.keys
*secrets.yml
*secret.yml
*.swp

CLUSTER.md

@@ -0,0 +1,170 @@
# Kubernetes cluster
Kubernetes hosts run on [PVE Cluster](https://wiki.k-space.ee/en/hosting/proxmox). Hosts are listed in Ansible [inventory](ansible/inventory.yml).
## `kubectl`
- Authorization [ACLs](cluster-role-bindings.yml)
- [Troubleshooting `no such host`](#systemd-resolved-issues)
Authenticate to auth.k-space.ee:
```bash
kubectl krew install oidc-login
mkdir -p ~/.kube
cat << EOF > ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1EVXdNakEzTXpVMU1Wb1hEVE15TURReU9UQTNNelUxTVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS2J2CjY3UFlXVHJMc3ZCQTZuWHUvcm55SlVhNnppTnNWTVN6N2w4ekhxM2JuQnhqWVNPUDJhN1RXTnpUTmZDanZBWngKTmlNbXJya1hpb2dYQWpVVkhSUWZlYm81TFIrb0JBOTdLWlcrN01UMFVJRXBuWVVaaTdBRHlaS01vcEJFUXlMNwp1SlU5UDhnNUR1T29FRHZieGJSMXFuV1JZRXpteFNmSFpocllpMVA3bFd4emkxR243eGRETFZaMjZjNm0xR3Y1CnViRjZyaFBXK1JSVkhiQzFKakJGeTBwRXdhYlUvUTd0Z2dic0JQUjk5NVZvMktCeElBelRmbHhVanlYVkJ3MjEKU2d3ZGI1amlpemxEM0NSbVdZZ0ZrRzd0NTVZeGF3ZmpaQjh5bW4xYjhUVjkwN3dRcG8veU8zM3RaaEE3L3BFUwpBSDJYeDk5bkpMbFVGVUtSY1A4Q0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZKNnZKeVk1UlJ1aklQWGxIK2ZvU3g2QzFRT2RNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQ04zcGtCTVM3ekkrbUhvOWdTZQp6SzdXdjl3bXlCTVE5Q3crQXBSNnRBQXg2T1VIN0d1enc5TTV2bXNkYjkrYXBKMHBlZFB4SUg3YXZ1aG9SUXNMCkxqTzRSVm9BMG9aNDBZV3J3UStBR0dvdkZuaWNleXRNcFVSNEZjRXc0ZDRmcGl6V3d0TVNlRlRIUXR6WG84V2MKNFJGWC9xUXNVR1NWa01PaUcvcVVrSFpXQVgyckdhWXZ1Tkw2eHdSRnh5ZHpsRTFSUk56TkNvQzVpTXhjaVRNagpackEvK0pqVEFWU2FuNXZnODFOSmthZEphbmNPWmEwS3JEdkZzd1JJSG5CMGpMLzh3VmZXSTV6czZURU1VZUk1ClF6dU01QXUxUFZ4VXZJUGhlMHl6UXZjWDV5RlhnMkJGU3MzKzJBajlNcENWVTZNY2dSSTl5TTRicitFTUlHL0kKY0pjPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://master.kube.k-space.ee:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: oidc
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: oidc
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://auth.k-space.ee/
      - --oidc-client-id=passmower.kubelogin
      - --oidc-use-pkce
      - --oidc-extra-scope=profile,email,groups
      - --listen-address=127.0.0.1:27890
      command: kubectl
      env: null
      provideClusterInfo: false
EOF
# Test it:
kubectl get nodes # opens browser for authentication
```
### systemd-resolved issues
```sh
Unable to connect to the server: dial tcp: lookup master.kube.k-space.ee on 127.0.0.53:53: no such host
```
```
Network → VPN → `IPv4` → Other nameservers (Muud nimeserverid): `172.21.0.1`
Network → VPN → `IPv6` → Other nameservers (Muud nimeserverid): `2001:bb8:4008:21::1`
Network → VPN → `IPv4` → Search domains (Otsingudomeenid): `kube.k-space.ee`
Network → VPN → `IPv6` → Search domains (Otsingudomeenid): `kube.k-space.ee`
```
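The same nameserver and search-domain settings can be applied from a terminal with `resolvectl` (a sketch; the VPN interface name `tun0` is an assumption, check `resolvectl status` for the actual link name):
```bash
# Point the VPN link at the k-space resolvers and search domain
resolvectl dns tun0 172.21.0.1 2001:bb8:4008:21::1
resolvectl domain tun0 kube.k-space.ee
```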
## Cluster formation
Created Ubuntu 22.04 VMs on Proxmox with local storage.
Added some ARM64 workers by using Ubuntu 22.04 server on Raspberry Pi.
After machines have booted up and you can reach them via SSH:
```
# Disable Ubuntu caching DNS resolver
systemctl disable systemd-resolved.service
systemctl stop systemd-resolved
rm -fv /etc/resolv.conf
cat > /etc/resolv.conf << EOF
nameserver 1.1.1.1
nameserver 8.8.8.8
EOF
# Disable multipathd as Longhorn handles that itself
systemctl mask multipathd snapd
systemctl disable --now multipathd snapd bluetooth ModemManager hciuart wpa_supplicant packagekit
# Permit root login
sed -i -e 's/PermitRootLogin no/PermitRootLogin without-password/' /etc/ssh/sshd_config
systemctl reload ssh
cat ~ubuntu/.ssh/authorized_keys > /root/.ssh/authorized_keys
userdel -f ubuntu
apt-get install -yqq linux-image-generic
apt-get remove -yq cloud-init linux-image-*-kvm
```
On master:
```
kubeadm init --token-ttl=120m --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint "master.kube.k-space.ee:6443" --upload-certs --apiserver-cert-extra-sans master.kube.k-space.ee --node-name master1.kube.k-space.ee
```
For the `kubeadm join` command specify FQDN via `--node-name $(hostname -f)`.
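A sketch of the resulting join command (the token and CA hash are placeholders printed by `kubeadm init`, not real values):
```bash
kubeadm join master.kube.k-space.ee:6443 \
  --token <token-from-kubeadm-init> \
  --discovery-token-ca-cert-hash sha256:<hash-from-kubeadm-init> \
  --node-name $(hostname -f)
```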
Set AZ labels:
```
for j in $(seq 1 9); do
for t in master mon worker storage; do
kubectl label nodes ${t}${j}.kube.k-space.ee topology.kubernetes.io/zone=node${j}
done
done
```
After forming the cluster add taints:
```bash
for j in $(seq 1 9); do
kubectl label nodes worker${j}.kube.k-space.ee node-role.kubernetes.io/worker=''
done
for j in $(seq 1 4); do
kubectl taint nodes mon${j}.kube.k-space.ee dedicated=monitoring:NoSchedule
kubectl label nodes mon${j}.kube.k-space.ee dedicated=monitoring
done
for j in $(seq 1 4); do
kubectl taint nodes storage${j}.kube.k-space.ee dedicated=storage:NoSchedule
kubectl label nodes storage${j}.kube.k-space.ee dedicated=storage
done
```
For `arm64` nodes add suitable taint to prevent scheduling non-multiarch images on them:
```bash
kubectl taint nodes worker9.kube.k-space.ee arch=arm64:NoSchedule
```
For door controllers:
```
for j in ground front back; do
kubectl taint nodes door-${j}.kube.k-space.ee dedicated=door:NoSchedule
kubectl label nodes door-${j}.kube.k-space.ee dedicated=door
kubectl taint nodes door-${j}.kube.k-space.ee arch=arm64:NoSchedule
done
```
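Workloads meant to run on these tainted nodes must both tolerate the taint and select the node group. A minimal sketch (the pod name is illustrative; the image is one referenced elsewhere in this repo):
```bash
kubectl apply --dry-run=client -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: taint-demo   # hypothetical name, for illustration only
spec:
  nodeSelector:
    dedicated: door            # matches the label set above
  tolerations:
    - key: dedicated
      operator: Equal
      value: door
      effect: NoSchedule
    - key: arch                # needed on the arm64 door controllers
      operator: Equal
      value: arm64
      effect: NoSchedule
  containers:
    - name: demo
      image: harbor.k-space.ee/k-space/mjpg-streamer:latest
EOF
```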
To reduce wear on storage:
```
echo StandardOutput=null >> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
systemctl restart kubelet
```
## Technology mapping
Our self-hosted Kubernetes stack compared to AWS-based deployments:
| Hipster startup | Self-hosted hackerspace | Purpose |
|-------------------|-------------------------------------|---------------------------------------------------------------------|
| AWS ALB | Traefik | Reverse proxy also known as ingress controller in Kubernetes jargon |
| AWS AMP | Prometheus Operator | Monitoring and alerting |
| AWS CloudTrail | ECK Operator | Log aggregation |
| AWS DocumentDB | MongoDB Community Operator | Highly available NoSQL database |
| AWS EBS | Longhorn | Block storage for arbitrary applications needing persistent storage |
| AWS EC2 | Proxmox | Virtualization layer |
| AWS ECR | Harbor | Docker registry |
| AWS EKS | kubeadm | Provision Kubernetes master nodes |
| AWS NLB | MetalLB | L2/L3 level load balancing |
| AWS RDS for MySQL | MySQL Operator | Provision highly available relational databases |
| AWS Route53 | Bind and RFC2136 | DNS records and Let's Encrypt DNS validation |
| AWS S3 | Minio Operator | Highly available object storage |
| AWS VPC | Calico | Overlay network |
| Dex | Passmower | ACL mapping and OIDC provider which integrates with GitHub/Samba |
| GitHub Actions | Woodpecker | Build Docker images |
| GitHub | Gitea | Source code management, issue tracking |
| GitHub OAuth2 | Samba (Active Directory compatible) | Source of truth for authentication and authorization |
| Gmail | Wildduck | E-mail |


@@ -10,3 +10,4 @@ this Git repository happen:
* Song Meo <songmeo@k-space.ee>
* Rasmus Kallas <rasmus@k-space.ee>
* Kristjan Kuusk <kkuusk@k-space.ee>
* Erki Aas <eaas@k-space.ee>

README.md

@@ -1,230 +1,46 @@
# Kubernetes cluster manifests
# k-space.ee infrastructure
Kubernetes manifests, Ansible [playbooks](ansible/README.md), and documentation for K-SPACE services.
## Introduction
<!-- TODO: Docs for adding to ArgoCD (auto-)sync -->
- Repo is deployed with [ArgoCD](https://argocd.k-space.ee). For `kubectl` access, see [CLUSTER.md](CLUSTER.md#kubectl).
- Debugging Kubernetes [on Wiki](https://wiki.k-space.ee/en/hosting/debugging-kubernetes)
- Need help? → [`#kube`](https://k-space-ee.slack.com/archives/C02EYV1NTM2)
These are the Kubernetes manifests of services running on k-space.ee domains.
The applications are listed on https://auth2.k-space.ee for authenticated users.
Jump to docs: [inventory-app](hackerspace/README.md) / [cameras](camtiler/README.md) / [doors](https://wiki.k-space.ee/en/hosting/doors) / [list of apps](https://auth.k-space.ee) // [all infra](ansible/inventory.yml) / [network](https://wiki.k-space.ee/en/hosting/network/sensitive) / [retro](https://wiki.k-space.ee/en/hosting/retro) / [non-infra](https://wiki.k-space.ee)
Tip: Search the repo for `kind: xyz` for examples.
## Cluster access
## Supporting services
- Build [Git](https://git.k-space.ee) repositories with [Woodpecker](https://woodpecker.k-space.ee).
- Passmower: Authz with `kind: OIDCClient` (or `kind: OIDCMiddlewareClient`[^authz]).
- Traefik[^nonginx]: Expose services with `kind: Service` + `kind: Ingress` (TLS and DNS **included**).
General discussion is happening in the `#kube` Slack channel.
### Additional
- bind: Manage _additional_ DNS records with `kind: DNSEndpoint`.
- [Prometheus](https://wiki.k-space.ee/en/hosting/monitoring): Collect metrics with `kind: PodMonitor` (alerts with `kind: PrometheusRule`).
- [Slack bots](SLACK.md) and Kubernetes [CLUSTER.md](CLUSTER.md) itself.
<!-- TODO: Redirects: external-dns.alpha.kubernetes.io/hostname + in -extras.yaml: IngressRoute and Middleware -->
<details><summary>Bootstrapping access</summary>
For bootstrap access obtain `/etc/kubernetes/admin.conf` from one of the master
nodes and place it under `~/.kube/config` on your machine.
[^nonginx]: No nginx annotations! Use `kind: Ingress` instead. `IngressRoute` is not used as it doesn't support [`external-dns`](bind/README.md) out of the box.
[^authz]: Applications should use OpenID Connect (`kind: OIDCClient`) for authentication wherever possible. If not possible, use `kind: OIDCMiddlewareClient`, which provides authentication via a Traefik middleware (`traefik.ingress.kubernetes.io/router.middlewares: passmower-proxmox@kubernetescrd`). Sometimes you might use both for extra security.
Once Passmower is working, OIDC access for others can be enabled by
running the following on Kubernetes masters:
<!-- Linked to by https://wiki.k-space.ee/e/en/hosting/storage -->
### Databases / -stores:
- KeyDB: `kind: KeydbClaim` (replaces Redis[^redisdead])
- Dragonfly: `kind: Dragonfly` (replaces Redis[^redisdead])
- Longhorn: `storageClassName: longhorn` (filesystem storage)
- Mongo[^mongoproblems]: `kind: MongoDBCommunity` (NAS* `inventory-mongodb`)
- Minio S3: `kind: MinioBucketClaim` with `class: dedicated` (NAS*: `class: external`)
- MariaDB*: search for `mysql`, `mariadb`[^mariadb] (replaces MySQL)
- Postgres*: hardcoded to [harbor/application.yml](harbor/application.yml)
```bash
patch /etc/kubernetes/manifests/kube-apiserver.yaml - << EOF
@@ -23,6 +23,10 @@
- --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
- --etcd-servers=https://127.0.0.1:2379
+ - --oidc-issuer-url=https://auth2.k-space.ee/
+ - --oidc-client-id=oidc-gateway.kubelogin
+ - --oidc-username-claim=sub
+ - --oidc-groups-claim=groups
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
- --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
EOF
sudo systemctl daemon-reload
systemctl restart kubelet
```
</details>
\* External, hosted directly on [nas.k-space.ee](https://wiki.k-space.ee/en/hosting/storage)
The following can be used to talk to the Kubernetes cluster using OIDC credentials:
[^mariadb]: As of 2024-07-30 used by auth, authelia, bitwarden, etherpad, freescout, git, grafana, nextcloud, wiki, woodpecker
```bash
kubectl krew install oidc-login
mkdir -p ~/.kube
cat << EOF > ~/.kube/config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1EVXdNakEzTXpVMU1Wb1hEVE15TURReU9UQTNNelUxTVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS2J2CjY3UFlXVHJMc3ZCQTZuWHUvcm55SlVhNnppTnNWTVN6N2w4ekhxM2JuQnhqWVNPUDJhN1RXTnpUTmZDanZBWngKTmlNbXJya1hpb2dYQWpVVkhSUWZlYm81TFIrb0JBOTdLWlcrN01UMFVJRXBuWVVaaTdBRHlaS01vcEJFUXlMNwp1SlU5UDhnNUR1T29FRHZieGJSMXFuV1JZRXpteFNmSFpocllpMVA3bFd4emkxR243eGRETFZaMjZjNm0xR3Y1CnViRjZyaFBXK1JSVkhiQzFKakJGeTBwRXdhYlUvUTd0Z2dic0JQUjk5NVZvMktCeElBelRmbHhVanlYVkJ3MjEKU2d3ZGI1amlpemxEM0NSbVdZZ0ZrRzd0NTVZeGF3ZmpaQjh5bW4xYjhUVjkwN3dRcG8veU8zM3RaaEE3L3BFUwpBSDJYeDk5bkpMbFVGVUtSY1A4Q0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZKNnZKeVk1UlJ1aklQWGxIK2ZvU3g2QzFRT2RNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQ04zcGtCTVM3ekkrbUhvOWdTZQp6SzdXdjl3bXlCTVE5Q3crQXBSNnRBQXg2T1VIN0d1enc5TTV2bXNkYjkrYXBKMHBlZFB4SUg3YXZ1aG9SUXNMCkxqTzRSVm9BMG9aNDBZV3J3UStBR0dvdkZuaWNleXRNcFVSNEZjRXc0ZDRmcGl6V3d0TVNlRlRIUXR6WG84V2MKNFJGWC9xUXNVR1NWa01PaUcvcVVrSFpXQVgyckdhWXZ1Tkw2eHdSRnh5ZHpsRTFSUk56TkNvQzVpTXhjaVRNagpackEvK0pqVEFWU2FuNXZnODFOSmthZEphbmNPWmEwS3JEdkZzd1JJSG5CMGpMLzh3VmZXSTV6czZURU1VZUk1ClF6dU01QXUxUFZ4VXZJUGhlMHl6UXZjWDV5RlhnMkJGU3MzKzJBajlNcENWVTZNY2dSSTl5TTRicitFTUlHL0kKY0pjPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://master.kube.k-space.ee:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: oidc
name: default
current-context: default
kind: Config
preferences: {}
users:
- name: oidc
user:
exec:
apiVersion: client.authentication.k8s.io/v1beta1
args:
- oidc-login
- get-token
- --oidc-issuer-url=https://auth2.k-space.ee/
- --oidc-client-id=oidc-gateway.kubelogin
- --oidc-use-pkce
- --oidc-extra-scope=profile,email,groups
- --listen-address=127.0.0.1:27890
command: kubectl
env: null
provideClusterInfo: false
EOF
```
[^redisdead]: Redis has been replaced as redis-operator couldn't handle itself: it didn't reconcile after reboots, the master URI was empty, and clients complained about missing masters. ArgoCD still hosts its own Redis.
For access control mapping see [cluster-role-bindings.yml](cluster-role-bindings.yml)
[^mongoproblems]: Mongo problems: Incompatible with rawfile csi (wiredtiger.wt corrupts), complicated resizing (PVCs from statefulset PVC template).
### systemd-resolved issues on access
```sh
Unable to connect to the server: dial tcp: lookup master.kube.k-space.ee on 127.0.0.53:53: no such host
```
```
Network → VPN → `IPv4` → Other nameservers (Muud nimeserverid): `172.21.0.1`
Network → VPN → `IPv6` → Other nameservers (Muud nimeserverid): `2001:bb8:4008:21::1`
Network → VPN → `IPv4` → Search domains (Otsingudomeenid): `kube.k-space.ee`
Network → VPN → `IPv6` → Search domains (Otsingudomeenid): `kube.k-space.ee`
```
# Technology mapping
Our self-hosted Kubernetes stack compared to AWS-based deployments:
| Hipster startup | Self-hosted hackerspace | Purpose |
|-------------------|-------------------------------------|---------------------------------------------------------------------|
| AWS ALB | Traefik | Reverse proxy also known as ingress controller in Kubernetes jargon |
| AWS AMP | Prometheus Operator | Monitoring and alerting |
| AWS CloudTrail | ECK Operator | Log aggregation |
| AWS DocumentDB | MongoDB Community Operator | Highly available NoSQL database |
| AWS EBS | Longhorn | Block storage for arbitrary applications needing persistent storage |
| AWS EC2 | Proxmox | Virtualization layer |
| AWS ECR | Harbor | Docker registry |
| AWS EKS | kubeadm | Provision Kubernetes master nodes |
| AWS NLB | MetalLB | L2/L3 level load balancing |
| AWS RDS for MySQL | MySQL Operator | Provision highly available relational databases |
| AWS Route53 | Bind and RFC2136 | DNS records and Let's Encrypt DNS validation |
| AWS S3 | Minio Operator | Highly available object storage |
| AWS VPC | Calico | Overlay network |
| Dex | Passmower | ACL mapping and OIDC provider which integrates with GitHub/Samba |
| GitHub Actions | Drone | Build Docker images |
| GitHub | Gitea | Source code management, issue tracking |
| GitHub OAuth2 | Samba (Active Directory compatible) | Source of truth for authentication and authorization |
| Gmail | Wildduck | E-mail |
External dependencies running as classic virtual machines:
- Bind as DNS server
## Adding applications
Deploy applications via [ArgoCD](https://argocd.k-space.ee)
We use Traefik with Passmower for Ingress.
Where possible and applicable, applications should use `Remote-User`
authentication. This prevents exposing the application on the public Internet.
Otherwise use OpenID Connect for authentication,
see Argo itself as an example how that is done.
See `camtiler/ingress.yml` for commented Ingress example.
Note that we do not use IngressRoute objects because they don't
support `external-dns` out of the box.
Do NOT add nginx annotations, we use Traefik.
Do NOT manually add DNS records, they are added by `external-dns`.
Do NOT manually create Certificate objects,
these should be handled by `tls:` section in Ingress.
## Cluster formation
Created Ubuntu 22.04 VMs on Proxmox with local storage.
Added some ARM64 workers by using Ubuntu 22.04 server on Raspberry Pi.
After machines have booted up and you can reach them via SSH:
```
# Disable Ubuntu caching DNS resolver
systemctl disable systemd-resolved.service
systemctl stop systemd-resolved
rm -fv /etc/resolv.conf
cat > /etc/resolv.conf << EOF
nameserver 1.1.1.1
nameserver 8.8.8.8
EOF
# Disable multipathd as Longhorn handles that itself
systemctl mask multipathd snapd
systemctl disable --now multipathd snapd bluetooth ModemManager hciuart wpa_supplicant packagekit
# Permit root login
sed -i -e 's/PermitRootLogin no/PermitRootLogin without-password/' /etc/ssh/sshd_config
systemctl reload ssh
cat ~ubuntu/.ssh/authorized_keys > /root/.ssh/authorized_keys
userdel -f ubuntu
apt-get install -yqq linux-image-generic
apt-get remove -yq cloud-init linux-image-*-kvm
```
On master:
```
kubeadm init --token-ttl=120m --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint "master.kube.k-space.ee:6443" --upload-certs --apiserver-cert-extra-sans master.kube.k-space.ee --node-name master1.kube.k-space.ee
```
For the `kubeadm join` command specify FQDN via `--node-name $(hostname -f)`.
Set AZ labels:
```
for j in $(seq 1 9); do
for t in master mon worker storage; do
kubectl label nodes ${t}${j}.kube.k-space.ee topology.kubernetes.io/zone=node${j}
done
done
```
After forming the cluster add taints:
```bash
for j in $(seq 1 9); do
kubectl label nodes worker${j}.kube.k-space.ee node-role.kubernetes.io/worker=''
done
for j in $(seq 1 4); do
kubectl taint nodes mon${j}.kube.k-space.ee dedicated=monitoring:NoSchedule
kubectl label nodes mon${j}.kube.k-space.ee dedicated=monitoring
done
for j in $(seq 1 4); do
kubectl taint nodes storage${j}.kube.k-space.ee dedicated=storage:NoSchedule
kubectl label nodes storage${j}.kube.k-space.ee dedicated=storage
done
```
For `arm64` nodes add suitable taint to prevent scheduling non-multiarch images on them:
```bash
kubectl taint nodes worker9.kube.k-space.ee arch=arm64:NoSchedule
```
For door controllers:
```
for j in ground front back; do
kubectl taint nodes door-${j}.kube.k-space.ee dedicated=door:NoSchedule
kubectl label nodes door-${j}.kube.k-space.ee dedicated=door
kubectl taint nodes door-${j}.kube.k-space.ee arch=arm64:NoSchedule
done
```
To reduce wear on storage:
```
echo StandardOutput=null >> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
systemctl restart kubelet
```
***
_This page is referenced by wiki [front page](https://wiki.k-space.ee) as **the** technical documentation for infra._

SLACK.md

@@ -0,0 +1,28 @@
## Slack bots
### Doorboy3
https://api.slack.com/apps/A05NDB6FVJQ
Slack app author: rasmus
Managed by inventory-app:
- Incoming (open-commands) to `/api/slack/doorboy`; inventory-app authorizes based on the command originating from #members or #work-shop, and on the OIDC access group (floor, workshop).
- Posts logs to a private channel. Restricted to 193.40.103.0/24.
Secrets as `SLACK_DOORLOG_CALLBACK` and `SLACK_VERIFICATION_TOKEN`.
### oidc-gateway
https://api.slack.com/apps/A05DART9PP1
Slack app author: eaas
Managed by passmower:
- Links e-mail to slackId.
- Login via Slack (not enabled).
Secrets as `slackId` and `slack-client`.
### podi-podi uuenduste spämmikoobas
https://api.slack.com/apps/A033RE9TUFK
Slack app author: rasmus
Posts Prometheus alerts to a private channel.
Secret as `slack-secrets`.


@@ -1,81 +0,0 @@
---
- name: Reconfigure graceful shutdown for kubelet
hosts: kubernetes
tasks:
- name: Reconfigure shutdownGracePeriod
ansible.builtin.lineinfile:
path: /var/lib/kubelet/config.yaml
regexp: '^shutdownGracePeriod:'
line: 'shutdownGracePeriod: 5m'
- name: Reconfigure shutdownGracePeriodCriticalPods
ansible.builtin.lineinfile:
path: /var/lib/kubelet/config.yaml
regexp: '^shutdownGracePeriodCriticalPods:'
line: 'shutdownGracePeriodCriticalPods: 5m'
- name: Work around unattended-upgrades
ansible.builtin.lineinfile:
path: /lib/systemd/logind.conf.d/unattended-upgrades-logind-maxdelay.conf
regexp: '^InhibitDelayMaxSec='
line: 'InhibitDelayMaxSec=5m0s'
- name: Pin kube components
hosts: kubernetes
tasks:
- name: Pin packages
loop:
- kubeadm
- kubectl
- kubelet
ansible.builtin.copy:
dest: "/etc/apt/preferences.d/{{ item }}"
content: |
Package: {{ item }}
Pin: version 1.26.*
Pin-Priority: 1001
- name: Reset /etc/containers/registries.conf
hosts: kubernetes
tasks:
- name: Copy /etc/containers/registries.conf
ansible.builtin.copy:
content: "unqualified-search-registries = [\"docker.io\"]\n"
dest: /etc/containers/registries.conf
register: registries
- name: Restart CRI-O
service:
name: cri-o
state: restarted
when: registries.changed
- name: Reset /etc/modules
hosts: kubernetes
tasks:
- name: Copy /etc/modules
ansible.builtin.copy:
content: |
overlay
br_netfilter
dest: /etc/modules
register: kernel_modules
- name: Load kernel modules
ansible.builtin.shell: "cat /etc/modules | xargs -L 1 -t modprobe"
when: kernel_modules.changed
- name: Reset /etc/sysctl.d/99-k8s.conf
hosts: kubernetes
tasks:
- name: Copy /etc/sysctl.d/99-k8s.conf
ansible.builtin.copy:
content: |
net.ipv4.conf.all.accept_redirects = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
vm.max_map_count = 524288
fs.inotify.max_user_instances = 1280
fs.inotify.max_user_watches = 655360
dest: /etc/sysctl.d/99-k8s.conf
register: sysctl
- name: Reload sysctl config
ansible.builtin.shell: "sysctl --system"
when: sysctl.changed

ansible/README.md

@@ -0,0 +1,5 @@
# TODO:
- inventory
- running playbooks NB! about PWD
- ssh_config; updating
Include ssh_config (with known_hosts) to access all machines listed.
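A sketch of that include (the checkout path `~/kube` is an assumption, adjust to wherever this repo lives):
```bash
# Append to ~/.ssh/config
echo 'Include ~/kube/ansible/ssh_config' >> ~/.ssh/config
```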


@@ -1,12 +1,15 @@
[defaults]
ansible_managed = This file is managed by Ansible, manual changes will be overwritten.
inventory = inventory.yml
nocows = 1
pipelining = True
pattern =
pattern =
deprecation_warnings = False
fact_caching = jsonfile
fact_caching_connection = ~/.ansible/k-space-fact-cache
fact_caching_timeout = 7200
remote_user = root
[ssh_connection]
ssh_args = -F ssh_config
control_path = ~/.ssh/cm-%%r@%%h:%%p
ssh_args = -o ControlMaster=auto -o ControlPersist=8h -F ssh_config
pipelining = True
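With `ControlMaster`/`ControlPersist` enabled above, repeated runs reuse SSH connections. A usage sketch — run from the `ansible/` directory so the relative `ssh_config` and `inventory.yml` paths resolve:
```bash
cd ansible/
ansible-playbook update-ssh-config.yml   # regenerates ssh_config and known_hosts
```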


@@ -1,5 +1,7 @@
# ansible doors -m shell -a "ctr image pull harbor.k-space.ee/k-space/mjpg-streamer:latest"
# journalctl -u mjpg_streamer@video0.service -f
# Referenced/linked and documented by https://wiki.k-space.ee/en/hosting/doors
- name: Setup doors
hosts: doors
tasks:
@@ -8,7 +10,7 @@
name: containerd
state: present
- name: Copy systemd service for Doorboy controller
- name: Copy systemd service for Doorboy controller # https://git.k-space.ee/k-space/godoor
copy:
dest: /etc/systemd/system/godoor.service
content: |
@@ -34,7 +36,7 @@
daemon_reload: yes
name: godoor.service
- name: Copy systemd service for mjpg-streamer
- name: Copy systemd service for mjpg-streamer # https://git.k-space.ee/k-space/mjpg-steramer
copy:
dest: /etc/systemd/system/mjpg_streamer@.service
content: |

ansible/inventory.yml

@@ -0,0 +1,83 @@
# This file is linked from /README.md as 'all infra'.
##### Not otherwise linked:
# Homepage: https://git.k-space.ee/k-space/homepage (on GitLab)
# Slack: https://k-space-ee.slack.com
# Routers/Switches: https://git.k-space.ee/k-space/rosdump
all:
vars:
admins:
- lauri
- eaas
extra_admins: []
children:
# https://wiki.k-space.ee/en/hosting/storage
nasgroup:
hosts:
nas.k-space.ee: { ansible_host: 172.23.0.7 }
offsite:
ansible_host: 78.28.64.17
ansible_port: 10648
vars:
offsite_dataset: offsite/backup_zrepl
misc:
children:
nasgroup:
hosts:
# https://git.k-space.ee/k-space/kube: bind/README.md (primary DNS, PVE VM)
ns1.k-space.ee: { ansible_host: 172.20.0.2 }
# https://wiki.k-space.ee/hosting/proxmox (depends on nas.k-space.ee)
proxmox: # aka PVE, Proxmox Virtualization Environment
vars:
extra_admins:
- rasmus
hosts:
pve1: { ansible_host: 172.21.20.1 }
pve2: { ansible_host: 172.21.20.2 }
pve8: { ansible_host: 172.21.20.8 }
pve9: { ansible_host: 172.21.20.9 }
# https://git.k-space.ee/k-space/kube: README.md
# CLUSTER.md (PVE VMs + external nas.k-space.ee)
kubernetes:
children:
masters:
hosts:
master1.kube.k-space.ee: { ansible_host: 172.21.3.51 }
master2.kube.k-space.ee: { ansible_host: 172.21.3.52 }
master3.kube.k-space.ee: { ansible_host: 172.21.3.53 }
kubelets:
children:
mon: # they sit in a privileged VLAN
hosts:
mon1.kube.k-space.ee: { ansible_host: 172.21.3.61 }
mon2.kube.k-space.ee: { ansible_host: 172.21.3.62 }
mon3.kube.k-space.ee: { ansible_host: 172.21.3.63 }
storage: # longhorn, to be replaced with a more direct CSI
hosts:
storage1.kube.k-space.ee: { ansible_host: 172.21.3.71 }
storage2.kube.k-space.ee: { ansible_host: 172.21.3.72 }
storage3.kube.k-space.ee: { ansible_host: 172.21.3.73 }
storage4.kube.k-space.ee: { ansible_host: 172.21.3.74 }
workers:
hosts:
worker1.kube.k-space.ee: { ansible_host: 172.20.3.81 }
worker2.kube.k-space.ee: { ansible_host: 172.20.3.82 }
worker3.kube.k-space.ee: { ansible_host: 172.20.3.83 }
worker4.kube.k-space.ee: { ansible_host: 172.20.3.84 }
worker9.kube.k-space.ee: { ansible_host: 172.21.3.89 } # Nvidia Tegra Jetson-AGX
# https://wiki.k-space.ee/en/hosting/doors
# See also: https://git.k-space.ee/k-space/kube: camtiler/README.md
doors:
vars:
extra_admins:
- arti
hosts:
grounddoor: { ansible_host: 100.102.3.1 }
frontdoor: { ansible_host: 100.102.3.2 }
backdoor: { ansible_host: 100.102.3.3 }
workshopdoor: { ansible_host: 100.102.3.4 }
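Any group above can be targeted directly for ad-hoc commands (the `doors` example is taken verbatim from the doors playbook header):
```bash
# Run from the ansible/ directory so ansible.cfg picks up this inventory
ansible doors -m shell -a "ctr image pull harbor.k-space.ee/k-space/mjpg-streamer:latest"
ansible masters -m ping    # likewise workers, storage, proxmox, nasgroup, ...
```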

ansible/known_hosts

@@ -0,0 +1,27 @@
# Use `ansible-playbook update-ssh-config.yml` to update this file
100.102.3.3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN4SifLddYAz8CasmFwX5TQbiM8atAYMFuDQRchclHM0sq9Pi8wRxSZK8SHON4Y7YFsIY+cXnQ2Wx4FpzKmfJYE= # backdoor
100.102.3.2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE8/E7PDqTrTdU+MFurHkIPzTBTGcSJqXuv5n0Ugd/IlvOr2v+eYi3ma91pSBmF5Hjy9foWypCLZfH+vWMkV0gs= # frontdoor
100.102.3.1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFcH8D2AhnESw3uu2f4EHBhT9rORQQJJ3TlbwN+kro5tRZsZk4p3MKabBiuCSZw2KWjfu0MY4yHSCrUUQrggJDM= # grounddoor
172.21.3.51 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMYy07yLlOiFvXzmVDIULS9VDCMz7T+qOq4M+x8Lo3KEKamI6ZD737mvimPTW6K1FRBzzq67Mq495UnoFKVnQWE= # master1.kube.k-space.ee
172.21.3.52 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKRFfYDaTH58FUw+9stBVsyCviaPCGEbe9Y1a9WKvj98S7m+qU03YvtfPkRfEH/3iXHDvngEDVpJrTWW4y6e6MI= # master2.kube.k-space.ee
172.21.3.53 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIqIepuMkMo/KO3bb4X6lgb6YViAifPmgHXVrbtHwbOZLll5Qqr4pXdLDxkuZsmiE7iZBw2gSzZLcNMGdDEnWrY= # master3.kube.k-space.ee
172.21.3.61 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCJ9XgDz2NEzvjw/nDmRIKUJAmNqzsaXMJn4WFiWfTz1x2HrRcXgY3UXKWUxUvJO1jJ7hIvyE+V/8UtwYRDP1uY= # mon1.kube.k-space.ee
172.21.3.62 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLveng7H/2Gek+HYDYRWFD0Dy+4l/zjrbF2mnnkBI5CFOtqK0zwBh41IlizkpmmI5fqEIXwhLFHZEWXbUvev5oo= # mon2.kube.k-space.ee
172.21.3.63 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMMgOIL43dgCYlwAI2O269iHxo7ymweG7NoXjnk2F529G5mP+mp5We4lDZEJVyLYtemvhQ2hEHI/WVPWy3SNiuM= # mon3.kube.k-space.ee
172.23.0.7 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC15tWIbuBqd4UZLaRbpb6oTlwniS4cg2IYZYe5ys352azj2kzOnvtCGiPo0fynFadwfDHtge9JjK6Efwl87Wgc= # nas.k-space.ee
172.20.0.2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO77ffkJi903aA6cM7HnFfSyYbPP4jkydI/+/tIGeMv+c9BYOE27n+ylNERaEhYkyddIx93MB4M6GYRyQOjLWSc= # ns1.k-space.ee
[78.28.64.17]:10648 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE7J61p3YzsbRAYtXIrhQUeqc47LuVw1I38egHzi/kLG+CFPsyB9krd29yJMyLRjyM+m5qUjoxNiWK/x0g3jKOI= # offsite
172.21.20.1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHLHc3T/J5G1CIf33XeniJk5+D0cpaXe0OkHmpCQ3DoZC3KkFBpA+/U1mlo+qb8xf/GrMj6BMMMLXKSUxbEVGaU= # pve1
172.21.20.2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFGSRetFdHExRT69pHJAcuhqzAu+Xx4K2AEmWJhUZ2JYF7aa0JbltiYQs58Bpx9s9NA793tiHLZXABy56dI+D9Q= # pve2
172.21.20.8 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMzNvX3ga56EELcI9gV7moyFdKllSwb81V2tCWIjhFVSFTo3QKH/gX/MBnjcs+RxeVV3GF7zIIv8492bCvgiO9s= # pve8
172.21.20.9 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNU4YzKSzzUSnAgh4L1DF3dlC1VEaKVaIeTgsL5VJ0UMqjPr+8QMjIvo28cSLfIQYtfoQbt7ASVsm0uDQvKOldM= # pve9
172.21.3.71 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI2jy8EsMo7Voor4URCMdgiEzc0nmYDowV4gB2rZ6hnH7bcKGdaODsCyBH6nvbitgnESCC8136RmdxCnO9/TuJ0= # storage1.kube.k-space.ee
172.21.3.72 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKxa2PbOj7bV0AUkBZuPkQZ/3ZMeh1mUCD+rwB4+sXbvTc+ca+xgcPGdAozbY/cUA4GdaKelhjI9DEC46MeFymY= # storage2.kube.k-space.ee
172.21.3.73 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGYqNHAxwwoZqne/uv5syRb+tEwpbaGeK8oct4IjIHcmPdU32JlMiSqLX7d58t/b8tqE1z2rM4gCc4bpzvNrHMQ= # storage3.kube.k-space.ee
172.21.3.74 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI+FRuwbrUpMDg9gKf6AqcfovEkt8r5SgB4JXEuMD+I6pp+2PfbxMwrXQ8Xg3oHW+poG413KWw4FZOWv2gH4CEQ= # storage4.kube.k-space.ee
172.20.3.81 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPnmGiEWtWnNNcF872fhYKCD07QwOb75BDEwN3fC4QYmBAbiN0iX/UH96r02V5f7uga3a07/xxt5P0cfEOdtQwQ= # worker1.kube.k-space.ee
172.20.3.82 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBkSNAYeugxGvNmV3biY1s0BWPCEw3g3H0VWLomu/vPbg+GN10/A1pfgt62DHFCYDB6QZwkZM6HIFy8y0xhRl9g= # worker2.kube.k-space.ee
172.20.3.83 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBe+A9Bg54UwUvlPguKDyNAsX7mYbnfMOxhK2UP2YofPlzJ0KDUuH5mbmw76XWz0L6jhT6I7hyc0QsFBdO3ug68= # worker3.kube.k-space.ee
172.20.3.84 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKoNIL+kEYphi/yCdhIytxqRaucm2aTzFrmNN4gEjCrn4TK8A46fyqAuwmgyLQFm7RD5qcEKPWP57Cl0DhTU1T4= # worker4.kube.k-space.ee
172.21.3.89 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCoepYYnNMXkZ9dn4RSSMhFFsppPVkzmjkG3z9vK84454XkI4wizmhUlZ0p+Ovx2YbrjbKibfrrtk8RgWUMi0rY= # worker9.kube.k-space.ee
100.102.3.4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMpkSqEOyYrKXChxl6PAV+q0KypOPnKsXoXWO1JSZSIOwAs5YTzt8Q1Ryb+nQnAOlGj1AY1H7sRllTzdv0cA/EM= # workshopdoor

ansible/kubernetes.yml

@@ -0,0 +1,239 @@
---
# ansible-galaxy install -r requirements.yaml
- name: Install cri-o
hosts:
- worker9.kube.k-space.ee
vars:
CRIO_VERSION: "v1.30"
tasks:
- name: ensure curl is installed
ansible.builtin.apt:
name: curl
state: present
- name: Ensure /etc/apt/keyrings exists
ansible.builtin.file:
path: /etc/apt/keyrings
state: directory
# TODO: fix
# - name: add k8s repo apt key
# ansible.builtin.shell: "curl -fsSL https://pkgs.k8s.io/addons:/cri-o:/stable:/{{ CRIO_VERSION }}/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/cri-o-apt-keyring.gpg"
- name: add k8s repo
ansible.builtin.apt_repository:
repo: "deb [signed-by=/etc/apt/keyrings/cri-o-apt-keyring.gpg] https://pkgs.k8s.io/addons:/cri-o:/stable:/{{ CRIO_VERSION }}/deb/ /"
state: present
filename: cri-o
- name: check current crictl version
command: "/usr/bin/crictl --version"
failed_when: false
changed_when: false
register: crictl_version_check
- name: download crictl
unarchive:
src: "https://github.com/kubernetes-sigs/cri-tools/releases/download/{{ CRIO_VERSION }}/crictl-{{ CRIO_VERSION }}-linux-{{ 'arm64' if ansible_architecture == 'aarch64' else 'amd64' }}.tar.gz"
dest: /tmp
remote_src: true
when: >
crictl_version_check.stdout is not defined or CRIO_VERSION not in crictl_version_check.stdout
register: crictl_download_check
- name: move crictl binary into place
copy:
src: /tmp/crictl
dest: "/usr/bin/crictl"
when: >
crictl_download_check is changed
- name: ensure crio is installed
ansible.builtin.apt:
name: cri-o
state: present
- name: Reconfigure Kubernetes worker nodes
hosts:
- storage
- workers
tasks:
- name: Configure grub defaults
copy:
dest: "/etc/default/grub"
content: |
GRUB_DEFAULT=0
GRUB_TIMEOUT_STYLE=countdown
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash memhp_default_state=online"
GRUB_CMDLINE_LINUX="memhp_default_state=online rootflags=pquota"
register: grub_defaults
when: ansible_architecture == 'x86_64'
- name: Load grub defaults
ansible.builtin.shell: update-grub
when: grub_defaults.changed
- name: Ensure nfs-common is installed
ansible.builtin.apt:
name: nfs-common
state: present
- name: Reconfigure Kubernetes nodes
hosts: kubernetes
vars:
KUBERNETES_VERSION: v1.30.3
IP: "{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}"
tasks:
- name: Remove APT packages
ansible.builtin.apt:
name: "{{ item }}"
state: absent
loop:
- kubelet
- kubeadm
- kubectl
- name: Download kubectl, kubeadm, kubelet
ansible.builtin.get_url:
url: "https://cdn.dl.k8s.io/release/{{ KUBERNETES_VERSION }}/bin/linux/{{ 'arm64' if ansible_architecture == 'aarch64' else 'amd64' }}/{{ item }}"
dest: "/usr/bin/{{ item }}-{{ KUBERNETES_VERSION }}"
mode: '0755'
loop:
- kubelet
- kubectl
- kubeadm
- name: Create /etc/systemd/system/kubelet.service
ansible.builtin.copy:
content: |
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/home/
Wants=network-online.target
After=network-online.target
[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
dest: /etc/systemd/system/kubelet.service
register: kubelet_service
- name: Create symlinks for kubectl, kubeadm, kubelet
ansible.builtin.file:
src: "/usr/bin/{{ item }}-{{ KUBERNETES_VERSION }}"
dest: "/usr/bin/{{ item }}"
state: link
loop:
- kubelet
- kubectl
- kubeadm
register: kubelet
- name: Restart Kubelet
service:
name: kubelet
enabled: true
state: restarted
daemon_reload: true
when: kubelet.changed or kubelet_service.changed
- name: Ensure /var/lib/kubelet exists
ansible.builtin.file:
path: /var/lib/kubelet
state: directory
- name: Configure kubelet
ansible.builtin.template:
src: kubelet.j2
dest: /var/lib/kubelet/config.yaml
mode: 644
- name: Ensure /etc/systemd/system/kubelet.service.d/ exists
ansible.builtin.file:
path: /etc/systemd/system/kubelet.service.d
state: directory
- name: Configure kubelet service
ansible.builtin.template:
src: 10-kubeadm.j2
dest: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
mode: 644
# TODO: register new node if needed
- name: Disable unnecessary services
ignore_errors: true
loop:
- gdm3
- snapd
- bluetooth
- multipathd
- zram
service:
name: "{{item}}"
state: stopped
enabled: no
- name: Ensure /etc/containers exists
ansible.builtin.file:
path: /etc/containers
state: directory
- name: Reset /etc/containers/registries.conf
ansible.builtin.copy:
content: "unqualified-search-registries = [\"docker.io\"]\n"
dest: /etc/containers/registries.conf
register: registries
- name: Restart CRI-O
service:
name: cri-o
state: restarted
when: registries.changed
- name: Reset /etc/modules
ansible.builtin.copy:
content: |
overlay
br_netfilter
dest: /etc/modules
register: kernel_modules
- name: Load kernel modules
ansible.builtin.shell: "cat /etc/modules | xargs -L 1 -t modprobe"
when: kernel_modules.changed
- name: Reset /etc/sysctl.d/99-k8s.conf
ansible.builtin.copy:
content: |
net.ipv4.conf.all.accept_redirects = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
vm.max_map_count = 524288
fs.inotify.max_user_instances = 1280
fs.inotify.max_user_watches = 655360
dest: /etc/sysctl.d/99-k8s.conf
register: sysctl
- name: Reload sysctl config
ansible.builtin.shell: "sysctl --system"
when: sysctl.changed
- name: Reconfigure kube-apiserver to use Passmower OIDC endpoint
ansible.builtin.template:
src: kube-apiserver.j2
dest: /etc/kubernetes/manifests/kube-apiserver.yaml
mode: 600
register: apiserver
when:
- inventory_hostname in groups["masters"]
- name: Restart kube-apiserver
ansible.builtin.shell: "killall kube-apiserver"
when: apiserver.changed
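To apply this playbook, install the Galaxy requirements first (per the comment at the top) and optionally limit the run to a single node; a usage sketch:
```bash
ansible-galaxy install -r requirements.yaml
ansible-playbook kubernetes.yml --limit worker9.kube.k-space.ee
```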

ansible/ssh_config

@@ -0,0 +1,211 @@
# Use `ansible-playbook update-ssh-config.yml` to update this file
# Use `ssh -F ssh_config ...` to connect to target machine or
# Add `Include ~/path/to/this/kube/ssh_config` in your ~/.ssh/config
Host backdoor 100.102.3.3
User root
Hostname 100.102.3.3
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host frontdoor 100.102.3.2
User root
Hostname 100.102.3.2
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host grounddoor 100.102.3.1
User root
Hostname 100.102.3.1
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host master1.kube.k-space.ee 172.21.3.51
User root
Hostname 172.21.3.51
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host master2.kube.k-space.ee 172.21.3.52
User root
Hostname 172.21.3.52
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host master3.kube.k-space.ee 172.21.3.53
User root
Hostname 172.21.3.53
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host mon1.kube.k-space.ee 172.21.3.61
User root
Hostname 172.21.3.61
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host mon2.kube.k-space.ee 172.21.3.62
User root
Hostname 172.21.3.62
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host mon3.kube.k-space.ee 172.21.3.63
User root
Hostname 172.21.3.63
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host nas.k-space.ee 172.23.0.7
User root
Hostname 172.23.0.7
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host ns1.k-space.ee 172.20.0.2
User root
Hostname 172.20.0.2
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host offsite 78.28.64.17
User root
Hostname 78.28.64.17
Port 10648
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host pve1 172.21.20.1
User root
Hostname 172.21.20.1
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host pve2 172.21.20.2
User root
Hostname 172.21.20.2
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host pve8 172.21.20.8
User root
Hostname 172.21.20.8
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host pve9 172.21.20.9
User root
Hostname 172.21.20.9
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host storage1.kube.k-space.ee 172.21.3.71
User root
Hostname 172.21.3.71
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host storage2.kube.k-space.ee 172.21.3.72
User root
Hostname 172.21.3.72
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host storage3.kube.k-space.ee 172.21.3.73
User root
Hostname 172.21.3.73
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host storage4.kube.k-space.ee 172.21.3.74
User root
Hostname 172.21.3.74
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host worker1.kube.k-space.ee 172.20.3.81
User root
Hostname 172.20.3.81
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host worker2.kube.k-space.ee 172.20.3.82
User root
Hostname 172.20.3.82
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host worker3.kube.k-space.ee 172.20.3.83
User root
Hostname 172.20.3.83
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host worker4.kube.k-space.ee 172.20.3.84
User root
Hostname 172.20.3.84
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host worker9.kube.k-space.ee 172.21.3.89
User root
Hostname 172.21.3.89
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host workshopdoor 100.102.3.4
User root
Hostname 100.102.3.4
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h


@@ -0,0 +1,12 @@
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
StandardOutput=null


@@ -0,0 +1,132 @@
apiVersion: v1
kind: Pod
metadata:
annotations:
kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: {{ IP }}:6443
creationTimestamp: null
labels:
component: kube-apiserver
tier: control-plane
name: kube-apiserver
namespace: kube-system
spec:
containers:
- command:
- kube-apiserver
- --advertise-address={{ IP }}
- --allow-privileged=true
- --authorization-mode=Node,RBAC
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --enable-admission-plugins=NodeRestriction
- --enable-bootstrap-token-auth=true
- --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
- --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
- --etcd-servers=https://127.0.0.1:2379
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
- --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --oidc-client-id=passmower.kubelogin
- --oidc-groups-claim=groups
- --oidc-issuer-url=https://auth.k-space.ee/
- --oidc-username-claim=sub
- --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
- --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
- --requestheader-allowed-names=front-proxy-client
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --requestheader-group-headers=X-Remote-Group
- --requestheader-username-headers=X-Remote-User
- --secure-port=6443
- --service-account-issuer=https://kubernetes.default.svc.cluster.local
- --service-account-key-file=/etc/kubernetes/pki/sa.pub
- --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
- --service-cluster-ip-range=10.96.0.0/12
- --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
image: registry.k8s.io/kube-apiserver:{{ KUBERNETES_VERSION }}
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
httpGet:
host: {{ IP }}
path: /livez
port: 6443
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
name: kube-apiserver
readinessProbe:
failureThreshold: 3
httpGet:
host: {{ IP }}
path: /readyz
port: 6443
scheme: HTTPS
periodSeconds: 1
timeoutSeconds: 15
resources:
requests:
cpu: 250m
startupProbe:
failureThreshold: 24
httpGet:
host: {{ IP }}
path: /livez
port: 6443
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
volumeMounts:
- mountPath: /etc/ssl/certs
name: ca-certs
readOnly: true
- mountPath: /etc/ca-certificates
name: etc-ca-certificates
readOnly: true
- mountPath: /etc/pki
name: etc-pki
readOnly: true
- mountPath: /etc/kubernetes/pki
name: k8s-certs
readOnly: true
- mountPath: /usr/local/share/ca-certificates
name: usr-local-share-ca-certificates
readOnly: true
- mountPath: /usr/share/ca-certificates
name: usr-share-ca-certificates
readOnly: true
hostNetwork: true
priority: 2000001000
priorityClassName: system-node-critical
securityContext:
seccompProfile:
type: RuntimeDefault
volumes:
- hostPath:
path: /etc/ssl/certs
type: DirectoryOrCreate
name: ca-certs
- hostPath:
path: /etc/ca-certificates
type: DirectoryOrCreate
name: etc-ca-certificates
- hostPath:
path: /etc/pki
type: DirectoryOrCreate
name: etc-pki
- hostPath:
path: /etc/kubernetes/pki
type: DirectoryOrCreate
name: k8s-certs
- hostPath:
path: /usr/local/share/ca-certificates
type: DirectoryOrCreate
name: usr-local-share-ca-certificates
- hostPath:
path: /usr/share/ca-certificates
type: DirectoryOrCreate
name: usr-share-ca-certificates
status: {}


@@ -0,0 +1,43 @@
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
anonymous:
enabled: false
webhook:
cacheTTL: 0s
enabled: true
x509:
clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
mode: Webhook
webhook:
cacheAuthorizedTTL: 0s
cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
flushFrequency: 0
options:
json:
infoBufferSize: "0"
verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 5m
shutdownGracePeriodCriticalPods: 5m
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s


@@ -0,0 +1,72 @@
---
- name: Collect servers SSH public keys to known_hosts
hosts: localhost
connection: local
vars:
targets: "{{ hostvars[groups['all']] }}"
tasks:
- name: Generate ssh_config
ansible.builtin.copy:
dest: ssh_config
content: |
# Use `ansible-playbook update-ssh-config.yml` to update this file
# Use `ssh -F ssh_config ...` to connect to target machine or
# Add `Include ~/path/to/this/kube/ssh_config` in your ~/.ssh/config
{% for host in groups['all'] | sort %}
Host {{ [host, hostvars[host].get('ansible_host', host)] | unique | join(' ') }}
User root
Hostname {{ hostvars[host].get('ansible_host', host) }}
Port {{ hostvars[host].get('ansible_port', 22) }}
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
{% endfor %}
- name: Generate known_hosts
ansible.builtin.copy:
dest: known_hosts
content: |
# Use `ansible-playbook update-ssh-config.yml` to update this file
{% for host in groups['all'] | sort %}
{{ lookup('ansible.builtin.pipe', 'ssh-keyscan -p %d -t ecdsa %s' % (
hostvars[host].get('ansible_port', 22),
hostvars[host].get('ansible_host', host))) }} # {{ host }}
{% endfor %}
- name: Pull authorized keys from Gitea
hosts: localhost
connection: local
vars:
targets: "{{ hostvars[groups['all']] }}"
tasks:
- name: Download https://git.k-space.ee/user.keys
loop:
- arti
- eaas
- lauri
- rasmus
ansible.builtin.get_url:
url: https://git.k-space.ee/{{ item }}.keys
dest: "./{{ item }}.keys"
- name: Push authorized keys to targets
hosts:
- misc
- kubernetes
- doors
tasks:
- name: Generate /root/.ssh/authorized_keys
ansible.builtin.copy:
dest: "/root/.ssh/authorized_keys"
owner: root
group: root
mode: '0644'
content: |
# Use `ansible-playbook update-ssh-config.yml` from https://git.k-space.ee/k-space/kube/ to update this file
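# Note: only hardware-backed keys (lines starting with "sk-") are pushed; the loop below filters out everything else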
{% for user in admins + extra_admins | unique | sort %}
{% for line in lookup("ansible.builtin.file", user + ".keys").split("\n") %}
{% if line.startswith("sk-") %}
{{ line }} # {{ user }}
{% endif %}
{% endfor %}
{% endfor %}


@@ -0,0 +1,49 @@
# Referenced/linked and documented by https://wiki.k-space.ee/en/hosting/storage#zrepl
- name: zrepl
hosts: nasgroup
tasks:
- name: 'apt: zrepl gpg'
ansible.builtin.get_url:
url: 'https://zrepl.cschwarz.com/apt/apt-key.asc'
dest: /usr/share/keyrings/zrepl.asc
- name: 'apt: zrepl repo'
apt_repository:
repo: 'deb [arch=amd64 signed-by=/usr/share/keyrings/zrepl.asc] https://zrepl.cschwarz.com/apt/debian bookworm main'
- name: 'apt: ensure packages'
apt:
state: latest
pkg: zrepl
- name: 'zrepl: ensure config'
ansible.builtin.template:
src: "zrepl_{{ansible_hostname}}.yml.j2"
dest: /etc/zrepl/zrepl.yml
mode: 600
register: zreplconf
- name: 'zrepl: restart service after config change'
when: zreplconf.changed
service:
state: restarted
enabled: true
name: zrepl
- name: 'zrepl: ensure service'
when: not zreplconf.changed
service:
state: started
enabled: true
name: zrepl
# avoid accidental conflicts of changes on recv (would err 'will not overwrite without force')
- name: 'zfs: ensure recv mountpoint=off'
hosts: offsite
tasks:
- name: 'zfs: get mountpoint'
shell: zfs get mountpoint -H -o value {{offsite_dataset}}
register: result
changed_when: false
- when: result.stdout != "none"
name: 'zfs: ensure mountpoint=off'
changed_when: true
shell: zfs set mountpoint=none {{offsite_dataset}}
register: result

ansible/zrepl/prom.yaml

@@ -0,0 +1,23 @@
---
apiVersion: monitoring.coreos.com/v1
kind: Probe
metadata:
name: zrepl
spec:
scrapeTimeout: 30s
targets:
staticConfig:
static:
- nas.mgmt.k-space.ee:9811
# - offsite.k-space.ee:9811 # TODO: unreachable
relabelingConfigs:
- sourceLabels: [__param_target]
targetLabel: instance
- sourceLabels: [__param_target]
targetLabel: __address__
prober:
url: localhost
path: /metrics
metricRelabelings:
- sourceLabels: [__address__]
targetLabel: target


@@ -0,0 +1,47 @@
global:
logging:
- type: syslog
format: logfmt
level: warn
monitoring:
- type: prometheus
listen: ':9811'
jobs:
- name: k6zrepl
type: snap
# "<" aka recursive, https://zrepl.github.io/configuration/filter_syntax.html
filesystems:
'nas/k6<': true
snapshotting:
type: periodic
prefix: zrepl_
interval: 1h
pruning:
keep:
# Keep non-zrepl snapshots
- type: regex
negate: true
regex: '^zrepl_'
- type: last_n
regex: "^zrepl_.*"
count: 4
- type: grid
regex: "^zrepl_.*"
grid: 4x1h | 6x4h | 3x1d | 2x7d
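# grid buckets read newest to oldest: 4 hourly, 6 four-hourly, 3 daily, 2 weekly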
- name: k6zrepl_offsite_src
type: source
send:
encrypted: true # zfs native already-encrypted, filesystems not encrypted will log to error-level
serve:
type: tcp
listen: "{{ansible_host}}:35566" # NAT-ed to 193.40.103.250
clients: {
"78.28.64.17": "offsite.k-space.ee",
}
filesystems:
'nas/k6': true
snapshotting: # handled by above job, separated for security (isolation of domains)
type: manual


@@ -0,0 +1,41 @@
global:
logging:
- type: syslog
format: logfmt
level: warn
monitoring:
- type: prometheus
listen: ':9811'
jobs:
- name: k6zrepl_offsite_dest
type: pull
recv:
placeholder:
encryption: off # https://zrepl.github.io/configuration/sendrecvoptions.html#placeholders
# bandwidth_limit:
# max: 9 MiB # 75.5 Mbps
connect:
type: tcp
address: '193.40.103.250:35566' # firewall whitelisted to offsite
root_fs: {{offsite_dataset}}
interval: 10m # start interval, does nothing when no snapshots to recv
replication:
concurrency:
steps: 2
pruning:
keep_sender: # offsite does not dictate nas snapshot policy
- type: regex
regex: '.*'
keep_receiver:
# Keep non-zrepl snapshots
- negate: true
type: regex
regex: "^zrepl_"
- type: last_n
regex: "^zrepl_"
count: 4
- type: grid
regex: "^zrepl_"
grid: 4x1h | 6x4h | 3x1d | 2x7d


@@ -57,7 +57,7 @@ Delete any other SSH keys associated with Gitea user `argocd`.
To update apps:
```
for j in asterisk bind camtiler drone drone-execution etherpad freescout gitea grafana hackerspace nextcloud nyancat rosdump traefik wiki wildduck woodpecker; do
for j in asterisk bind camtiler etherpad freescout gitea grafana hackerspace nextcloud nyancat rosdump traefik wiki wildduck woodpecker; do
cat << EOF >> applications/$j.yaml
---
apiVersion: argoproj.io/v1alpha1
@@ -74,7 +74,11 @@ spec:
destination:
server: 'https://kubernetes.default.svc'
namespace: $j
syncPolicy: {}
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true
EOF
done
find applications -name "*.yaml" -exec kubectl apply -n argocd -f {} \;


@@ -1,6 +1,6 @@
---
apiVersion: codemowers.io/v1alpha1
kind: OIDCGWClient
apiVersion: codemowers.cloud/v1beta1
kind: OIDCClient
metadata:
name: argocd
namespace: argocd


@@ -2,16 +2,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: whoami-oidc
name: argocd-applications
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: whoami-oidc
path: argocd/applications
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: whoami-oidc
namespace: argocd
syncPolicy:
automated: {}
automated:
prune: false


@@ -13,4 +13,8 @@ spec:
destination:
server: 'https://kubernetes.default.svc'
namespace: asterisk
syncPolicy: {}
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@@ -13,4 +13,8 @@ spec:
destination:
server: 'https://kubernetes.default.svc'
namespace: bind
syncPolicy: {}
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@@ -1,16 +1,15 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: camtiler
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: camtiler
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: camtiler
syncPolicy: {}
# ---
# apiVersion: argoproj.io/v1alpha1
# kind: Application
# metadata:
# name: camtiler
# namespace: argocd
# spec:
# project: k-space.ee
# source:
# repoURL: 'git@git.k-space.ee:k-space/kube.git'
# path: camtiler
# targetRevision: HEAD
# destination:
# server: 'https://kubernetes.default.svc'
# namespace: camtiler


@@ -13,4 +13,8 @@ spec:
destination:
server: 'https://kubernetes.default.svc'
namespace: etherpad
syncPolicy: {}
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@@ -13,4 +13,8 @@ spec:
destination:
server: 'https://kubernetes.default.svc'
namespace: freescout
syncPolicy: {}
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@@ -13,4 +13,8 @@ spec:
destination:
server: 'https://kubernetes.default.svc'
namespace: gitea
syncPolicy: {}
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@@ -13,4 +13,8 @@ spec:
destination:
server: 'https://kubernetes.default.svc'
namespace: grafana
syncPolicy: {}
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@@ -13,4 +13,8 @@ spec:
destination:
server: 'https://kubernetes.default.svc'
namespace: hackerspace
syncPolicy: {}
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: kubernetes-dashboard
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: kubernetes-dashboard
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: kubernetes-dashboard
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@@ -2,15 +2,19 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: drone
name: logmower
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: drone
path: logmower
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: drone
syncPolicy: {}
namespace: logmower
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: minio-clusters
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: minio-clusters
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: minio-clusters
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: monitoring
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: monitoring
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: monitoring
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: mysql-clusters
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: mysql-clusters
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: mysql-clusters
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@@ -13,4 +13,8 @@ spec:
destination:
server: 'https://kubernetes.default.svc'
namespace: nextcloud
syncPolicy: {}
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@@ -13,4 +13,8 @@ spec:
destination:
server: 'https://kubernetes.default.svc'
namespace: nyancat
syncPolicy: {}
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: postgres-clusters
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: postgres-clusters
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: postgres-clusters
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: redis-clusters
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: redis-clusters
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: redis-clusters
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: reloader
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: reloader
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: reloader
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@@ -13,4 +13,8 @@ spec:
destination:
server: 'https://kubernetes.default.svc'
namespace: rosdump
syncPolicy: {}
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@@ -2,15 +2,19 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: drone-execution
name: signs
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: drone-execution
path: signs
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: drone-execution
syncPolicy: {}
namespace: signs
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@@ -13,4 +13,8 @@ spec:
destination:
server: 'https://kubernetes.default.svc'
namespace: traefik
syncPolicy: {}
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: whoami
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: whoami
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: whoami
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@@ -13,4 +13,8 @@ spec:
destination:
server: 'https://kubernetes.default.svc'
namespace: wiki
syncPolicy: {}
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@@ -13,4 +13,8 @@ spec:
destination:
server: 'https://kubernetes.default.svc'
namespace: wildduck
syncPolicy: {}
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@@ -13,4 +13,8 @@ spec:
destination:
server: 'https://kubernetes.default.svc'
namespace: woodpecker
syncPolicy: {}
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@@ -84,7 +84,7 @@ configs:
oidc.config: |
name: OpenID Connect
issuer: https://auth2.k-space.ee/
issuer: https://auth.k-space.ee/
clientID: $oidc-client-argocd-owner-secrets:OIDC_CLIENT_ID
cliClientID: $oidc-client-argocd-owner-secrets:OIDC_CLIENT_ID
clientSecret: $oidc-client-argocd-owner-secrets:OIDC_CLIENT_SECRET


@@ -1,3 +1,17 @@
#TODO:
- cert-manager talks to the master to add domain names, and does DNS-01 TLS validation through ns1.k-space.ee
  ^ cross-link this both ways with the cert-manager docs
- bind-services (zone transfer to HA replicas from ns1.k-space.ee)
### ns1.k-space.ee
Primary authoritative nameserver replica. Other replicas live on Kube nodes.
There is an idea to move it to Zone.
`dns.yaml` files add DNS records.
# Bind setup
The Bind primary resides outside Kubernetes at `193.40.103.2` and


@@ -50,7 +50,7 @@ spec:
emptyDir: {}
containers:
- name: bind-secondary
image: internetsystemsconsortium/bind9:9.19
image: internetsystemsconsortium/bind9:9.20
volumeMounts:
- mountPath: /run/named
name: run


@@ -16,7 +16,7 @@ spec:
serviceAccountName: external-dns
containers:
- name: external-dns
image: registry.k8s.io/external-dns/external-dns:v0.13.5
image: registry.k8s.io/external-dns/external-dns:v0.14.2
envFrom:
- secretRef:
name: tsig-secret


@@ -16,7 +16,7 @@ spec:
serviceAccountName: external-dns
containers:
- name: external-dns
image: registry.k8s.io/external-dns/external-dns:v0.13.5
image: registry.k8s.io/external-dns/external-dns:v0.14.2
envFrom:
- secretRef:
name: tsig-secret


@@ -16,7 +16,7 @@ spec:
serviceAccountName: external-dns
containers:
- name: external-dns
image: registry.k8s.io/external-dns/external-dns:v0.13.5
image: registry.k8s.io/external-dns/external-dns:v0.14.2
envFrom:
- secretRef:
name: tsig-secret


@@ -1,5 +1,56 @@
To apply changes:
# Cameras
Camtiler is the umbrella name for our homegrown camera surveillance system.
Everything besides [Camera](#camera)s is deployed with Kubernetes.
## Components
![cameras.graphviz.svg](cameras.graphviz.svg)
<!-- Manually rendered with https://dreampuf.github.io/GraphvizOnline
digraph G {
"camera-operator" -> "camera-motion-detect" [label="deploys"]
"camera-tiler" -> "cam.k-space.ee/tiled"
camera -> "camera-tiler"
camera -> "camera-motion-detect" -> mongo
"camera-motion-detect" -> "Minio S3"
"cam.k-space.ee" -> mongo [label="queries events", decorate=true]
mongo -> "camtiler-event-broker" [label="transforms object to add (signed) URL to S3", ]
"camtiler-event-broker" -> "cam.k-space.ee"
"Minio S3" -> "cam.k-space.ee" [label="using signed URL from camtiler-event-broker", decorate=true]
camera [label="📸 camera"]
}
-->
### 📸 Camera
Cameras are listed in [application.yml](application.yml) as `kind: Camera`.
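A minimal sketch of such a resource; the name and the `target` stream URL below are illustrative assumptions, see [application.yml](application.yml) for the real entries:
```
apiVersion: k-space.ee/v1alpha1
kind: Camera
metadata:
  name: example-door  # hypothetical
spec:
  target: http://example-camera.cam.k-space.ee:8080/?action=stream  # assumed mjpg-streamer URL
```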
Two types of camera hosts:
- GL-AR150 with [openwrt-camera-images](https://git.k-space.ee/k-space/openwrt-camera-image).
- [Doors](https://wiki.k-space.ee/e/en/hosting/doors) (Raspberry Pi) with mjpg-streamer.
### camera-tiler (cam.k-space.ee/tiled)
Out-of-band; connects to the cameras and streams to the web browser.
One instance per camera.
#### camera-operator
Operator/deployer for camera-tiler; functionally the same as a Kubernetes Deployment for it.
### camera-motion-detect
Connects to cameras, on motion writes events to Mongo and frames to S3.
### cam.k-space.ee (logmower)
Fetches motion-detect events from mongo and the referenced images from S3 (minio).
#### camtiler-event-broker
MitM between camera-motion-detect and mongo; appends (signed) S3 URLs to the responses.
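For illustration only, a hedged sketch of an event before and after the broker; the field names are assumptions, not the actual schema:
```
# event as written to mongo by camera-motion-detect (hypothetical fields)
camera: example-door
timestamp: 2024-08-01T12:00:00Z
frame: camtiler/example-door/frame.jpg
# same event as served via camtiler-event-broker: a presigned S3 URL is appended
url: https://minio.example.k-space.ee/camtiler/example-door/frame.jpg?X-Amz-Signature=...
```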
## Kubernetes commands
Apply changes:
```
kubectl apply -n camtiler \
-f application.yml \
@@ -13,14 +64,12 @@ kubectl apply -n camtiler \
-f networkpolicy-base.yml
```
To deploy changes:
Deploy changes:
```
kubectl -n camtiler rollout restart deployment.apps/camtiler
```
To initialize secrets:
Initialize secrets:
```
kubectl create secret generic -n camtiler mongodb-application-readwrite-password --from-literal="password=$(cat /dev/urandom | base64 | head -c 30)"
kubectl create secret generic -n camtiler mongodb-application-readonly-password --from-literal="password=$(cat /dev/urandom | base64 | head -c 30)"
@@ -32,8 +81,7 @@ kubectl -n camtiler create secret generic camera-secrets \
--from-literal=password=...
```
To restart all deployments:
Restart all deployments:
```
for j in $(kubectl get deployments -n camtiler -o name); do kubectl rollout restart -n camtiler $j; done
```


@@ -268,6 +268,7 @@ spec:
annotations:
summary: CPU limits are bottleneck
---
# Referenced/linked by README.md
apiVersion: k-space.ee/v1alpha1
kind: Camera
metadata:


@@ -0,0 +1,131 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<!-- Generated by graphviz version 2.40.1 (20161225.0304)
-->
<!-- Title: G Pages: 1 -->
<svg width="658pt" height="387pt" viewBox="0.00 0.00 658.36 386.80" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 382.8)">
<title>G</title>
<polygon fill="#ffffff" stroke="transparent" points="-4,4 -4,-382.8 654.3562,-382.8 654.3562,4 -4,4"/>
<!-- camera&#45;operator -->
<g id="node1" class="node">
<title>camera-operator</title>
<ellipse fill="none" stroke="#000000" cx="356.22" cy="-360.8" rx="74.095" ry="18"/>
<text text-anchor="middle" x="356.22" y="-356.6" font-family="Times,serif" font-size="14.00" fill="#000000">camera-operator</text>
</g>
<!-- camera&#45;motion&#45;detect -->
<g id="node2" class="node">
<title>camera-motion-detect</title>
<ellipse fill="none" stroke="#000000" cx="356.22" cy="-272" rx="95.5221" ry="18"/>
<text text-anchor="middle" x="356.22" y="-267.8" font-family="Times,serif" font-size="14.00" fill="#000000">camera-motion-detect</text>
</g>
<!-- camera&#45;operator&#45;&gt;camera&#45;motion&#45;detect -->
<g id="edge1" class="edge">
<title>camera-operator-&gt;camera-motion-detect</title>
<path fill="none" stroke="#000000" d="M356.22,-342.4006C356.22,-330.2949 356.22,-314.2076 356.22,-300.4674"/>
<polygon fill="#000000" stroke="#000000" points="359.7201,-300.072 356.22,-290.072 352.7201,-300.0721 359.7201,-300.072"/>
<text text-anchor="middle" x="377.9949" y="-312.2" font-family="Times,serif" font-size="14.00" fill="#000000">deploys</text>
</g>
<!-- mongo -->
<g id="node6" class="node">
<title>mongo</title>
<ellipse fill="none" stroke="#000000" cx="292.22" cy="-199" rx="37.7256" ry="18"/>
<text text-anchor="middle" x="292.22" y="-194.8" font-family="Times,serif" font-size="14.00" fill="#000000">mongo</text>
</g>
<!-- camera&#45;motion&#45;detect&#45;&gt;mongo -->
<g id="edge5" class="edge">
<title>camera-motion-detect-&gt;mongo</title>
<path fill="none" stroke="#000000" d="M340.3997,-253.9551C332.3383,-244.76 322.4178,-233.4445 313.6783,-223.476"/>
<polygon fill="#000000" stroke="#000000" points="316.2049,-221.0485 306.9807,-215.8365 310.9413,-225.6632 316.2049,-221.0485"/>
</g>
<!-- Minio S3 -->
<g id="node7" class="node">
<title>Minio S3</title>
<ellipse fill="none" stroke="#000000" cx="396.22" cy="-145" rx="47.0129" ry="18"/>
<text text-anchor="middle" x="396.22" y="-140.8" font-family="Times,serif" font-size="14.00" fill="#000000">Minio S3</text>
</g>
<!-- camera&#45;motion&#45;detect&#45;&gt;Minio S3 -->
<g id="edge6" class="edge">
<title>camera-motion-detect-&gt;Minio S3</title>
<path fill="none" stroke="#000000" d="M361.951,-253.804C368.6045,-232.6791 379.6542,-197.5964 387.4031,-172.9935"/>
<polygon fill="#000000" stroke="#000000" points="390.8337,-173.7518 390.4996,-163.1622 384.157,-171.6489 390.8337,-173.7518"/>
</g>
<!-- camera&#45;tiler -->
<g id="node3" class="node">
<title>camera-tiler</title>
<ellipse fill="none" stroke="#000000" cx="527.22" cy="-272" rx="57.8558" ry="18"/>
<text text-anchor="middle" x="527.22" y="-267.8" font-family="Times,serif" font-size="14.00" fill="#000000">camera-tiler</text>
</g>
<!-- cam.k&#45;space.ee/tiled -->
<g id="node4" class="node">
<title>cam.k-space.ee/tiled</title>
<ellipse fill="none" stroke="#000000" cx="527.22" cy="-199" rx="89.7229" ry="18"/>
<text text-anchor="middle" x="527.22" y="-194.8" font-family="Times,serif" font-size="14.00" fill="#000000">cam.k-space.ee/tiled</text>
</g>
<!-- camera&#45;tiler&#45;&gt;cam.k&#45;space.ee/tiled -->
<g id="edge2" class="edge">
<title>camera-tiler-&gt;cam.k-space.ee/tiled</title>
<path fill="none" stroke="#000000" d="M527.22,-253.9551C527.22,-245.8828 527.22,-236.1764 527.22,-227.1817"/>
<polygon fill="#000000" stroke="#000000" points="530.7201,-227.0903 527.22,-217.0904 523.7201,-227.0904 530.7201,-227.0903"/>
</g>
<!-- camera -->
<g id="node5" class="node">
<title>camera</title>
<ellipse fill="none" stroke="#000000" cx="513.22" cy="-360.8" rx="51.565" ry="18"/>
<text text-anchor="middle" x="513.22" y="-356.6" font-family="Times,serif" font-size="14.00" fill="#000000">📸 camera</text>
</g>
<!-- camera&#45;&gt;camera&#45;motion&#45;detect -->
<g id="edge4" class="edge">
<title>camera-&gt;camera-motion-detect</title>
<path fill="none" stroke="#000000" d="M485.8726,-345.3322C460.8217,-331.1633 423.4609,-310.0318 395.271,-294.0875"/>
<polygon fill="#000000" stroke="#000000" points="396.8952,-290.9851 386.4679,-289.1084 393.449,-297.078 396.8952,-290.9851"/>
</g>
<!-- camera&#45;&gt;camera&#45;tiler -->
<g id="edge3" class="edge">
<title>camera-&gt;camera-tiler</title>
<path fill="none" stroke="#000000" d="M516.1208,-342.4006C518.0482,-330.175 520.6159,-313.8887 522.7961,-300.0599"/>
<polygon fill="#000000" stroke="#000000" points="526.2706,-300.4951 524.3708,-290.072 519.356,-299.4049 526.2706,-300.4951"/>
</g>
<!-- camtiler&#45;event&#45;broker -->
<g id="node9" class="node">
<title>camtiler-event-broker</title>
<ellipse fill="none" stroke="#000000" cx="95.22" cy="-91" rx="95.4404" ry="18"/>
<text text-anchor="middle" x="95.22" y="-86.8" font-family="Times,serif" font-size="14.00" fill="#000000">camtiler-event-broker</text>
</g>
<!-- mongo&#45;&gt;camtiler&#45;event&#45;broker -->
<g id="edge8" class="edge">
<title>mongo-&gt;camtiler-event-broker</title>
<path fill="none" stroke="#000000" d="M254.6316,-196.5601C185.4398,-191.6839 43.6101,-179.7471 28.9976,-163 18.4783,-150.9441 20.8204,-140.7526 28.9976,-127 32.2892,-121.4639 36.7631,-116.7259 41.8428,-112.6837"/>
<polygon fill="#000000" stroke="#000000" points="43.9975,-115.4493 50.2411,-106.8896 40.0224,-109.6875 43.9975,-115.4493"/>
<text text-anchor="middle" x="153.8312" y="-140.8" font-family="Times,serif" font-size="14.00" fill="#000000">transforms object to add (signed) URL to S3</text>
</g>
<!-- cam.k&#45;space.ee -->
<g id="node8" class="node">
<title>cam.k-space.ee</title>
<ellipse fill="none" stroke="#000000" cx="292.22" cy="-18" rx="70.0229" ry="18"/>
<text text-anchor="middle" x="292.22" y="-13.8" font-family="Times,serif" font-size="14.00" fill="#000000">cam.k-space.ee</text>
</g>
<!-- Minio S3&#45;&gt;cam.k&#45;space.ee -->
<g id="edge10" class="edge">
<title>Minio S3-&gt;cam.k-space.ee</title>
<path fill="none" stroke="#000000" d="M394.7596,-126.8896C392.7231,-111.3195 387.8537,-88.922 376.22,-73 366.0004,-59.0134 351.0573,-47.5978 336.5978,-38.8647"/>
<polygon fill="#000000" stroke="#000000" points="338.1215,-35.7041 327.7038,-33.7748 334.6446,-41.7796 338.1215,-35.7041"/>
<text text-anchor="middle" x="521.2881" y="-86.8" font-family="Times,serif" font-size="14.00" fill="#000000">using signed URL from camtiler-event-broker</text>
<polyline fill="none" stroke="#000000" points="650.3562,-82.6 392.22,-82.6 392.9753,-115.8309 "/>
</g>
<!-- cam.k&#45;space.ee&#45;&gt;mongo -->
<g id="edge7" class="edge">
<title>cam.k-space.ee-&gt;mongo</title>
<path fill="none" stroke="#000000" d="M292.22,-36.2125C292.22,-67.8476 292.22,-133.1569 292.22,-170.7273"/>
<polygon fill="#000000" stroke="#000000" points="288.7201,-170.9833 292.22,-180.9833 295.7201,-170.9833 288.7201,-170.9833"/>
<text text-anchor="middle" x="332.0647" y="-86.8" font-family="Times,serif" font-size="14.00" fill="#000000">queries events</text>
<polyline fill="none" stroke="#000000" points="371.9094,-82.6 292.22,-82.6 292.22,-91.3492 "/>
</g>
<!-- camtiler&#45;event&#45;broker&#45;&gt;cam.k&#45;space.ee -->
<g id="edge9" class="edge">
<title>camtiler-event-broker-&gt;cam.k-space.ee</title>
<path fill="none" stroke="#000000" d="M138.9406,-74.7989C169.6563,-63.417 210.7924,-48.1737 242.716,-36.3441"/>
<polygon fill="#000000" stroke="#000000" points="244.1451,-39.5472 252.3059,-32.7905 241.7128,-32.9833 244.1451,-39.5472"/>
</g>
</g>
</svg>



@@ -1,11 +1,11 @@
---
apiVersion: codemowers.io/v1alpha1
kind: OIDCGWMiddlewareClient
apiVersion: codemowers.cloud/v1beta1
kind: OIDCMiddlewareClient
metadata:
name: sso
spec:
displayName: Cameras
uri: 'https://cams.k-space.ee/tiled'
uri: 'https://cam.k-space.ee/tiled'
allowedGroups:
- k-space:floor
- k-space:friends
@@ -17,21 +17,12 @@ metadata:
annotations:
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: camtiler-sso@kubernetescrd,camtiler-redirect@kubernetescrd
traefik.ingress.kubernetes.io/router.middlewares: camtiler-sso@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
external-dns.alpha.kubernetes.io/hostname: cams.k-space.ee,cam.k-space.ee
spec:
rules:
- host: cams.k-space.ee
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: logmower-frontend
port:
number: 8080
- host: cam.k-space.ee
http:
paths:
@@ -67,12 +58,28 @@ spec:
- hosts:
- "*.k-space.ee"
---
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: redirect
name: cams-redirect
spec:
redirectRegex:
regex: ^https://cams.k-space.ee/(.*)$
replacement: https://cam.k-space.ee/$1
permanent: false
permanent: true
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: cams
spec:
entryPoints:
- websecure
routes:
- match: Host(`cams.k-space.ee`)
kind: Rule
middlewares:
- name: cams-redirect
services:
- kind: TraefikService
name: api@internal


@@ -85,7 +85,7 @@ spec:
- ReadWriteOnce
resources:
requests:
storage: 200Mi
storage: 100Mi
- metadata:
name: journal-volume
labels:


@@ -152,3 +152,44 @@ spec:
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
# Config drift: Added by ArgoCD
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: minio
spec:
egress:
- ports:
- port: http
protocol: TCP
to:
- podSelector:
matchLabels:
app.kubernetes.io/name: minio
ingress:
- from:
- podSelector: {}
ports:
- port: http
protocol: TCP
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
podSelector:
matchLabels:
app.kubernetes.io/name: minio
policyTypes:
- Ingress
- Egress

cert-manager/.gitignore

@@ -0,0 +1 @@
cert-manager.yaml


@@ -5,13 +5,13 @@
Added manifest with:
```
curl -L https://github.com/jetstack/cert-manager/releases/download/v1.6.1/cert-manager.yaml -O
curl -L https://github.com/jetstack/cert-manager/releases/download/v1.15.1/cert-manager.yaml -O
```
To update the certificate issuer:
```
kubectl apply -f namespace.yml -f cert-manager.yaml
kubectl apply -f cert-manager.yaml
kubectl apply -f issuer.yml
kubectl -n cert-manager create secret generic tsig-secret \
--from-literal=TSIG_SECRET=<secret>

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -3,6 +3,6 @@
To deploy:
```
wget https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.20/releases/cnpg-1.20.2.yaml -O application.yml
kubectl apply -f application.yml
kubectl apply --server-side -f \
https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.23/releases/cnpg-1.23.2.yaml
```

File diff suppressed because it is too large


@@ -1,100 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: discourse
labels:
app: discourse
spec:
replicas: 1
selector:
matchLabels:
app: discourse
template:
metadata:
labels:
app: discourse
spec:
securityContext:
fsGroup: 0
fsGroupChangePolicy: Always
supplementalGroups: []
sysctls: []
initContainers:
containers:
- name: discourse
image: docker.io/bitnami/discourse:3.2.2-debian-12-r0
imagePullPolicy: "IfNotPresent"
securityContext:
allowPrivilegeEscalation: false
privileged: false
readOnlyRootFilesystem: false
runAsGroup: 0
runAsNonRoot: false
runAsUser: 0
seLinuxOptions: {}
seccompProfile:
type: RuntimeDefault
command:
- /bin/bash
args:
- -c
- |
/opt/bitnami/scripts/discourse/entrypoint.sh /opt/bitnami/scripts/discourse/run.sh
env:
- name: BITNAMI_DEBUG
value: "true"
- name: DISCOURSE_PORT_NUMBER
value: "8080"
- name: DISCOURSE_EXTERNAL_HTTP_PORT_NUMBER
value: "80"
- name: DISCOURSE_DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: postgresdatabase-discourse-owner-secrets
key: PGPASSWORD
- name: DISCOURSE_DATABASE_HOST
valueFrom:
secretKeyRef:
name: postgresdatabase-discourse-owner-secrets
key: PGHOST
- name: DISCOURSE_DATABASE_USER
valueFrom:
secretKeyRef:
name: postgresdatabase-discourse-owner-secrets
key: PGUSER
- name: DISCOURSE_DATABASE_NAME
valueFrom:
secretKeyRef:
name: postgresdatabase-discourse-owner-secrets
key: PGDATABASE
ports:
- name: http
containerPort: 8080
protocol: TCP
livenessProbe:
tcpSocket:
port: http
initialDelaySeconds: 500
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
readinessProbe:
httpGet:
path: /srv/status
port: http
initialDelaySeconds: 180
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
resources:
limits:
cpu: "3.0"
ephemeral-storage: 1024Mi
memory: 6144Mi
requests:
cpu: "1.5"
ephemeral-storage: 50Mi
memory: 4096Mi


@@ -0,0 +1,5 @@
# Dragonfly Operator
```
kubectl apply -f https://raw.githubusercontent.com/dragonflydb/dragonfly-operator/v1.1.6/manifests/dragonfly-operator.yaml
```
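Once the operator is installed, a cluster is declared with a `Dragonfly` resource. A minimal sketch; the name, namespace and sizing are illustrative assumptions, not taken from this repo:
```
apiVersion: dragonflydb.io/v1alpha1
kind: Dragonfly
metadata:
  name: example
  namespace: redis-clusters  # hypothetical placement
spec:
  replicas: 3  # one master, two replicas
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
```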


@@ -1,13 +0,0 @@
To deploy:
```
kubectl apply -n drone-execution -f application.yml
```
To bootstrap secrets:
```
kubectl create secret generic -n drone-execution application-secrets \
--from-literal=DRONE_RPC_SECRET=$(kubectl get secret -n drone application-secrets -o jsonpath="{.data.DRONE_RPC_SECRET}" | base64 -d) \
--from-literal=DRONE_SECRET_PLUGIN_TOKEN=$(cat /dev/urandom | base64 | head -c 30)
```


@@ -1,177 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: drone-runner-kube
---
apiVersion: v1
kind: ConfigMap
metadata:
name: application-config
data:
DRONE_DEBUG: "false"
DRONE_TRACE: "false"
DRONE_NAMESPACE_DEFAULT: "drone-execution"
DRONE_RPC_HOST: "drone.k-space.ee"
DRONE_RPC_PROTO: "https"
PLUGIN_MTU: "1300"
DRONE_SECRET_PLUGIN_ENDPOINT: "http://secrets:3000"
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: drone-runner-kube
namespace: "drone-execution"
labels:
app: drone-runner-kube
rules:
- apiGroups:
- ""
resources:
- secrets
verbs:
- create
- delete
- apiGroups:
- ""
resources:
- pods
- pods/log
verbs:
- get
- create
- delete
- list
- watch
- update
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: drone-runner-kube
namespace: drone-execution
labels:
app: drone-runner-kube
subjects:
- kind: ServiceAccount
name: drone-runner-kube
namespace: drone-execution
roleRef:
kind: Role
name: drone-runner-kube
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Service
metadata:
name: drone-runner-kube
labels:
app: drone-runner-kube
spec:
type: ClusterIP
ports:
- port: 3000
targetPort: http
protocol: TCP
name: http
selector:
app: drone-runner-kube
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: drone-runner-kube
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
keel.sh/pollSchedule: "@midnight"
spec:
replicas: 1
selector:
matchLabels:
app: drone-runner-kube
template:
metadata:
labels:
app: drone-runner-kube
spec:
serviceAccountName: drone-runner-kube
terminationGracePeriodSeconds: 3600
containers:
- name: server
securityContext:
{}
image: drone/drone-runner-kube
imagePullPolicy: Always
ports:
- name: http
containerPort: 3000
protocol: TCP
envFrom:
- configMapRef:
name: application-config
- secretRef:
name: application-secrets
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: drone-kubernetes-secrets
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
keel.sh/pollSchedule: "@midnight"
spec:
replicas: 1
selector:
matchLabels:
app: drone-kubernetes-secrets
template:
metadata:
labels:
app: drone-kubernetes-secrets
spec:
containers:
- name: secrets
image: drone/kubernetes-secrets
imagePullPolicy: Always
ports:
- containerPort: 3000
env:
- name: SECRET_KEY
valueFrom:
secretKeyRef:
name: application-secrets
key: DRONE_SECRET_PLUGIN_TOKEN
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: drone-kubernetes-secrets
spec:
podSelector:
matchLabels:
app: drone-kubernetes-secrets
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
app: drone-runner-kube
ports:
- port: 3000
---
# Following should block access to pods in other namespaces, but should permit
# Git checkout, pip install, talking to Traefik via public IP etc
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: drone-runner-kube
spec:
podSelector: {}
policyTypes:
- Egress
egress:
- to:
- ipBlock:
cidr: 0.0.0.0/0


@@ -1 +0,0 @@
../shared/networkpolicy-base.yml


@@ -1,25 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
# Chart dirs/files
docs/
ci/


@@ -1,155 +0,0 @@
# Deployment
To deploy:
```
kubectl apply -n drone -f application.yml
```
To bootstrap secrets:
```
kubectl create secret generic -n drone application-secrets \
--from-literal=DRONE_GITEA_CLIENT_ID=... \
--from-literal=DRONE_GITEA_CLIENT_SECRET=... \
--from-literal=DRONE_RPC_SECRET=$(cat /dev/urandom | base64 | head -c 30)
```
# Integrating with Docker registry
We use harbor.k-space.ee to host our own images.
Set up robot account `robot$k-space+drone` in Harbor first.
In Drone, associate the `docker_username` and `docker_password` secrets with the `k-space` organization.
Instead of a click marathon you can also pull the CLI configuration for Drone from https://drone.k-space.ee/account:
```
drone orgsecret add k-space docker_username 'robot$k-space+drone'
drone orgsecret add k-space docker_password '...'
```
# Integrating with e-mail
To (re)set e-mail credentials:
```
drone orgsecret add k-space email_password '...'
```
To issue a build, hit the button in the Drone web interface, or alternatively:
```
drone build create k-space/...
```
# Using templates
Templates unfortunately aren't pulled in from this Git repo.
The current `docker.yaml` template includes the following:
```
kind: pipeline
type: kubernetes
name: build-arm64
platform:
arch: arm64
os: linux
node_selector:
kubernetes.io/arch: arm64
tolerations:
- key: arch
operator: Equal
value: arm64
effect: NoSchedule
steps:
- name: submodules
image: alpine/git
commands:
- touch .gitmodules
- sed -i -e 's/git@git.k-space.ee:/https:\\/\\/git.k-space.ee\\//g' .gitmodules
- git submodule update --init --recursive
- echo "ENV GIT_COMMIT=$(git rev-parse HEAD)" >> Dockerfile
- echo "ENV GIT_COMMIT_TIMESTAMP=$(git log -1 --format=%cd --date=iso-strict)" >> Dockerfile
- cat Dockerfile
- name: docker
image: harbor.k-space.ee/k-space/drone-kaniko
settings:
repo: ${DRONE_REPO}
tags: latest-arm64
registry: harbor.k-space.ee
username:
from_secret: docker_username
password:
from_secret: docker_password
---
kind: pipeline
type: kubernetes
name: build-amd64
platform:
arch: amd64
os: linux
node_selector:
kubernetes.io/arch: amd64
steps:
- name: submodules
image: alpine/git
commands:
- touch .gitmodules
- sed -i -e 's/git@git.k-space.ee:/https:\\/\\/git.k-space.ee\\//g' .gitmodules
- git submodule update --init --recursive
- echo "ENV GIT_COMMIT=$(git rev-parse HEAD)" >> Dockerfile
- echo "ENV GIT_COMMIT_TIMESTAMP=$(git log -1 --format=%cd --date=iso-strict)" >> Dockerfile
- cat Dockerfile
- name: docker
image: harbor.k-space.ee/k-space/drone-kaniko
settings:
repo: ${DRONE_REPO}
tags: latest-amd64
registry: harbor.k-space.ee
storage_driver: vfs
username:
from_secret: docker_username
password:
from_secret: docker_password
---
kind: pipeline
type: kubernetes
name: manifest
steps:
- name: manifest
image: plugins/manifest
settings:
target: ${DRONE_REPO}:latest
template: ${DRONE_REPO}:latest-ARCH
platforms:
- linux/amd64
- linux/arm64
username:
from_secret: docker_username
password:
from_secret: docker_password
depends_on:
- build-amd64
- build-arm64
---
kind: pipeline
type: kubernetes
name: gitlint
steps:
- name: gitlint
image: harbor.k-space.ee/k-space/gitlint-bundle
# https://git.k-space.ee/k-space/gitlint-bundle
---
kind: pipeline
type: kubernetes
name: flake8
steps:
- name: flake8
image: harbor.k-space.ee/k-space/flake8-bundle
# https://git.k-space.ee/k-space/flake8-bundle
```


@@ -1,117 +0,0 @@
---
apiVersion: v1
kind: Service
metadata:
name: drone
spec:
type: ClusterIP
ports:
- port: 80
targetPort: http
protocol: TCP
name: http
selector:
app: drone
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: drone
annotations:
keel.sh/policy: minor
keel.sh/trigger: poll
keel.sh/pollSchedule: "@midnight"
spec:
serviceName: drone
replicas: 1
selector:
matchLabels:
app: drone
template:
metadata:
labels:
app: drone
spec:
automountServiceAccountToken: false
securityContext:
{}
containers:
- name: server
securityContext:
{}
image: drone/drone:2
ports:
- name: http
containerPort: 80
protocol: TCP
livenessProbe:
httpGet:
path: /
port: http
env:
- name: DRONE_GITEA_SERVER
value: https://git.k-space.ee
- name: DRONE_GIT_ALWAYS_AUTH
value: "false"
- name: DRONE_SERVER_HOST
value: drone.k-space.ee
- name: DRONE_SERVER_PROTO
value: https
- name: DRONE_USER_CREATE
value: username:lauri,admin:true
- name: DRONE_DEBUG
value: "true"
- name: DRONE_TRACE
value: "true"
envFrom:
- secretRef:
name: application-secrets
volumeMounts:
- name: drone-data
mountPath: /data
volumeClaimTemplates:
- metadata:
name: drone-data
spec:
storageClassName: longhorn
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi
---
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: redirect
spec:
redirectRegex:
regex: ^https://(.*)/register$
replacement: https://${1}/
permanent: false
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: drone
annotations:
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
traefik.ingress.kubernetes.io/router.middlewares: drone-redirect@kubernetescrd
spec:
tls:
- hosts:
- "*.k-space.ee"
rules:
- host: "drone.k-space.ee"
http:
paths:
- pathType: Prefix
path: /
backend:
service:
name: drone
port:
number: 80

elastic-system/.gitignore

@@ -0,0 +1,2 @@
crds.yaml
operator.yaml


@@ -1,7 +1,7 @@
# elastic-operator
```
wget https://download.elastic.co/downloads/eck/2.4.0/crds.yaml
wget https://download.elastic.co/downloads/eck/2.4.0/operator.yaml
wget https://download.elastic.co/downloads/eck/2.13.0/crds.yaml
wget https://download.elastic.co/downloads/eck/2.13.0/operator.yaml
kubectl apply -n elastic-system -f application.yml -f crds.yaml -f operator.yaml
```


@@ -5,7 +5,7 @@ metadata:
name: filebeat
spec:
type: filebeat
version: 8.4.3
version: 8.14.3
elasticsearchRef:
name: elasticsearch
config:
@@ -218,10 +218,12 @@ kind: Elasticsearch
metadata:
name: elasticsearch
spec:
version: 8.4.3
version: 8.14.3
nodeSets:
- name: default
count: 1
count: 2
config:
node.roles: [ "data_content", "data_hot", "ingest", "master", "remote_cluster_client", "data_cold", "remote_cluster_client" ]
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
@@ -232,17 +234,13 @@ spec:
requests:
storage: 5Gi
storageClassName: longhorn
http:
tls:
selfSignedCertificate:
disabled: true
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
name: kibana
spec:
version: 8.4.3
version: 8.14.3
count: 1
elasticsearchRef:
name: elasticsearch
@@ -254,23 +252,13 @@ spec:
server.publicBaseUrl: https://kibana.k-space.ee
xpack.reporting.enabled: false
xpack.apm.ui.enabled: false
xpack.security.authc.providers:
anonymous.anonymous1:
order: 0
credentials:
username: "elastic"
secureSettings:
- secretName: elasticsearch-es-elastic-user
entries:
- key: elastic
path: xpack.security.authc.providers.anonymous.anonymous1.credentials.password
podTemplate:
metadata:
annotations:
co.elastic.logs/enabled: 'false'
spec:
containers:
- name: kibana
- name: kibana
readinessProbe:
httpGet:
path: /app/home
@@ -329,3 +317,28 @@ spec:
app.kubernetes.io/name: elasticsearch-exporter
podMetricsEndpoints:
- port: exporter
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: kibana
annotations:
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: 'true'
spec:
tls:
- hosts:
- '*.k-space.ee'
rules:
- host: kibana.k-space.ee
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: kibana-kb-http
port:
number: 5601

File diff suppressed because it is too large


@@ -9,12 +9,13 @@ metadata:
# Source: eck-operator/templates/service-account.yaml
apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: true
metadata:
name: elastic-operator
namespace: elastic-system
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.4.0"
app.kubernetes.io/version: "2.13.0"
---
# Source: eck-operator/templates/webhook.yaml
apiVersion: v1
@@ -24,7 +25,7 @@ metadata:
namespace: elastic-system
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.4.0"
app.kubernetes.io/version: "2.13.0"
---
# Source: eck-operator/templates/configmap.yaml
apiVersion: v1
@@ -34,7 +35,7 @@ metadata:
namespace: elastic-system
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.4.0"
app.kubernetes.io/version: "2.13.0"
data:
eck.yaml: |-
log-verbosity: 0
@@ -45,6 +46,7 @@ data:
ca-cert-rotate-before: 24h
cert-validity: 8760h
cert-rotate-before: 24h
disable-config-watch: false
exposed-node-labels: [topology.kubernetes.io/.*,failure-domain.beta.kubernetes.io/.*]
set-default-security-context: auto-detect
kube-client-timeout: 60s
@@ -54,7 +56,11 @@ data:
validate-storage-class: true
enable-webhook: true
webhook-name: elastic-webhook.k8s.elastic.co
webhook-port: 9443
operator-namespace: elastic-system
enable-leader-election: true
elasticsearch-observation-interval: 10s
ubi-only: false
---
# Source: eck-operator/templates/cluster-roles.yaml
apiVersion: rbac.authorization.k8s.io/v1
@@ -63,7 +69,7 @@ metadata:
name: elastic-operator
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.4.0"
app.kubernetes.io/version: "2.13.0"
rules:
- apiGroups:
- "authorization.k8s.io"
@@ -151,6 +157,19 @@ rules:
- create
- update
- patch
- apiGroups:
- autoscaling.k8s.elastic.co
resources:
- elasticsearchautoscalers
- elasticsearchautoscalers/status
- elasticsearchautoscalers/finalizers # needed for ownerReferences with blockOwnerDeletion on OCP
verbs:
- get
- list
- watch
- create
- update
- patch
- apiGroups:
- kibana.k8s.elastic.co
resources:
@@ -229,6 +248,32 @@ rules:
- create
- update
- patch
- apiGroups:
- stackconfigpolicy.k8s.elastic.co
resources:
- stackconfigpolicies
- stackconfigpolicies/status
- stackconfigpolicies/finalizers # needed for ownerReferences with blockOwnerDeletion on OCP
verbs:
- get
- list
- watch
- create
- update
- patch
- apiGroups:
- logstash.k8s.elastic.co
resources:
- logstashes
- logstashes/status
- logstashes/finalizers # needed for ownerReferences with blockOwnerDeletion on OCP
verbs:
- get
- list
- watch
- create
- update
- patch
- apiGroups:
- storage.k8s.io
resources:
@@ -268,11 +313,14 @@ metadata:
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-admin: "true"
control-plane: elastic-operator
app.kubernetes.io/version: "2.4.0"
app.kubernetes.io/version: "2.13.0"
rules:
- apiGroups: ["elasticsearch.k8s.elastic.co"]
resources: ["elasticsearches"]
verbs: ["get", "list", "watch"]
- apiGroups: ["autoscaling.k8s.elastic.co"]
resources: ["elasticsearchautoscalers"]
verbs: ["get", "list", "watch"]
- apiGroups: ["apm.k8s.elastic.co"]
resources: ["apmservers"]
verbs: ["get", "list", "watch"]
@@ -291,6 +339,12 @@ rules:
- apiGroups: ["maps.k8s.elastic.co"]
resources: ["elasticmapsservers"]
verbs: ["get", "list", "watch"]
- apiGroups: ["stackconfigpolicy.k8s.elastic.co"]
resources: ["stackconfigpolicies"]
verbs: ["get", "list", "watch"]
- apiGroups: ["logstash.k8s.elastic.co"]
resources: ["logstashes"]
verbs: ["get", "list", "watch"]
---
# Source: eck-operator/templates/cluster-roles.yaml
apiVersion: rbac.authorization.k8s.io/v1
@@ -301,11 +355,14 @@ metadata:
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-admin: "true"
control-plane: elastic-operator
app.kubernetes.io/version: "2.4.0"
app.kubernetes.io/version: "2.13.0"
rules:
- apiGroups: ["elasticsearch.k8s.elastic.co"]
resources: ["elasticsearches"]
verbs: ["create", "delete", "deletecollection", "patch", "update"]
- apiGroups: ["autoscaling.k8s.elastic.co"]
resources: ["elasticsearchautoscalers"]
verbs: ["create", "delete", "deletecollection", "patch", "update"]
- apiGroups: ["apm.k8s.elastic.co"]
resources: ["apmservers"]
verbs: ["create", "delete", "deletecollection", "patch", "update"]
@@ -324,6 +381,12 @@ rules:
- apiGroups: ["maps.k8s.elastic.co"]
resources: ["elasticmapsservers"]
verbs: ["create", "delete", "deletecollection", "patch", "update"]
- apiGroups: ["stackconfigpolicy.k8s.elastic.co"]
resources: ["stackconfigpolicies"]
verbs: ["create", "delete", "deletecollection", "patch", "update"]
- apiGroups: ["logstash.k8s.elastic.co"]
resources: ["logstashes"]
verbs: ["create", "delete", "deletecollection", "patch", "update"]
---
# Source: eck-operator/templates/role-bindings.yaml
apiVersion: rbac.authorization.k8s.io/v1
@@ -332,7 +395,7 @@ metadata:
name: elastic-operator
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.4.0"
app.kubernetes.io/version: "2.13.0"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
@@ -350,7 +413,7 @@ metadata:
namespace: elastic-system
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.4.0"
app.kubernetes.io/version: "2.13.0"
spec:
ports:
- name: https
@@ -367,7 +430,7 @@ metadata:
namespace: elastic-system
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.4.0"
app.kubernetes.io/version: "2.13.0"
spec:
selector:
matchLabels:
@@ -380,21 +443,29 @@ spec:
# Rename the fields "error" to "error.message" and "source" to "event.source"
# This is to avoid a conflict with the ECS "error" and "source" documents.
"co.elastic.logs/raw": "[{\"type\":\"container\",\"json.keys_under_root\":true,\"paths\":[\"/var/log/containers/*${data.kubernetes.container.id}.log\"],\"processors\":[{\"convert\":{\"mode\":\"rename\",\"ignore_missing\":true,\"fields\":[{\"from\":\"error\",\"to\":\"_error\"}]}},{\"convert\":{\"mode\":\"rename\",\"ignore_missing\":true,\"fields\":[{\"from\":\"_error\",\"to\":\"error.message\"}]}},{\"convert\":{\"mode\":\"rename\",\"ignore_missing\":true,\"fields\":[{\"from\":\"source\",\"to\":\"_source\"}]}},{\"convert\":{\"mode\":\"rename\",\"ignore_missing\":true,\"fields\":[{\"from\":\"_source\",\"to\":\"event.source\"}]}}]}]"
"checksum/config": a99a5f63f628a1ca8df440c12506cdfbf17827a1175dc5765b05f22f92b12b95
"checksum/config": 8b10381ca4067cf2c56aecc94c799473b09486202e146d2d7e5d6714f4c2e533
labels:
control-plane: elastic-operator
spec:
terminationGracePeriodSeconds: 10
serviceAccountName: elastic-operator
automountServiceAccountToken: true
securityContext:
runAsNonRoot: true
containers:
- image: "docker.elastic.co/eck/eck-operator:2.4.0"
- image: "docker.elastic.co/eck/eck-operator:2.13.0"
imagePullPolicy: IfNotPresent
name: manager
args:
- "manager"
- "--config=/conf/eck.yaml"
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
env:
- name: OPERATOR_NAMESPACE
valueFrom:
@@ -440,10 +511,9 @@ metadata:
name: elastic-webhook.k8s.elastic.co
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.4.0"
app.kubernetes.io/version: "2.13.0"
webhooks:
- clientConfig:
caBundle: Cg==
service:
name: elastic-webhook-server
namespace: elastic-system
@@ -451,7 +521,7 @@ webhooks:
failurePolicy: Ignore
name: elastic-agent-validation-v1alpha1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1beta1]
admissionReviewVersions: [v1, v1beta1]
sideEffects: None
rules:
- apiGroups:
@@ -464,7 +534,6 @@ webhooks:
resources:
- agents
- clientConfig:
caBundle: Cg==
service:
name: elastic-webhook-server
namespace: elastic-system
@@ -472,7 +541,7 @@ webhooks:
failurePolicy: Ignore
name: elastic-apm-validation-v1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1beta1]
admissionReviewVersions: [v1, v1beta1]
sideEffects: None
rules:
- apiGroups:
@@ -485,7 +554,6 @@ webhooks:
resources:
- apmservers
- clientConfig:
caBundle: Cg==
service:
name: elastic-webhook-server
namespace: elastic-system
@@ -493,7 +561,7 @@ webhooks:
failurePolicy: Ignore
name: elastic-apm-validation-v1beta1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1beta1]
admissionReviewVersions: [v1, v1beta1]
sideEffects: None
rules:
- apiGroups:
@@ -506,7 +574,6 @@ webhooks:
resources:
- apmservers
- clientConfig:
caBundle: Cg==
service:
name: elastic-webhook-server
namespace: elastic-system
@@ -514,7 +581,7 @@ webhooks:
failurePolicy: Ignore
name: elastic-beat-validation-v1beta1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1beta1]
admissionReviewVersions: [v1, v1beta1]
sideEffects: None
rules:
- apiGroups:
@@ -527,7 +594,6 @@ webhooks:
resources:
- beats
- clientConfig:
caBundle: Cg==
service:
name: elastic-webhook-server
namespace: elastic-system
@@ -535,7 +601,7 @@ webhooks:
failurePolicy: Ignore
name: elastic-ent-validation-v1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1beta1]
admissionReviewVersions: [v1, v1beta1]
sideEffects: None
rules:
- apiGroups:
@@ -548,7 +614,6 @@ webhooks:
resources:
- enterprisesearches
- clientConfig:
caBundle: Cg==
service:
name: elastic-webhook-server
namespace: elastic-system
@@ -556,7 +621,7 @@ webhooks:
failurePolicy: Ignore
name: elastic-ent-validation-v1beta1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1beta1]
admissionReviewVersions: [v1, v1beta1]
sideEffects: None
rules:
- apiGroups:
@@ -569,7 +634,6 @@ webhooks:
resources:
- enterprisesearches
- clientConfig:
caBundle: Cg==
service:
name: elastic-webhook-server
namespace: elastic-system
@@ -577,7 +641,7 @@ webhooks:
failurePolicy: Ignore
name: elastic-es-validation-v1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1beta1]
admissionReviewVersions: [v1, v1beta1]
sideEffects: None
rules:
- apiGroups:
@@ -590,7 +654,6 @@ webhooks:
resources:
- elasticsearches
- clientConfig:
caBundle: Cg==
service:
name: elastic-webhook-server
namespace: elastic-system
@@ -598,7 +661,7 @@ webhooks:
failurePolicy: Ignore
name: elastic-es-validation-v1beta1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1beta1]
admissionReviewVersions: [v1, v1beta1]
sideEffects: None
rules:
- apiGroups:
@@ -611,7 +674,26 @@ webhooks:
resources:
- elasticsearches
- clientConfig:
caBundle: Cg==
service:
name: elastic-webhook-server
namespace: elastic-system
path: /validate-ems-k8s-elastic-co-v1alpha1-mapsservers
failurePolicy: Ignore
name: elastic-ems-validation-v1alpha1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1, v1beta1]
sideEffects: None
rules:
- apiGroups:
- maps.k8s.elastic.co
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- mapsservers
- clientConfig:
service:
name: elastic-webhook-server
namespace: elastic-system
@@ -619,7 +701,7 @@ webhooks:
failurePolicy: Ignore
name: elastic-kb-validation-v1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1beta1]
admissionReviewVersions: [v1, v1beta1]
sideEffects: None
rules:
- apiGroups:
@@ -632,7 +714,6 @@ webhooks:
resources:
- kibanas
- clientConfig:
caBundle: Cg==
service:
name: elastic-webhook-server
namespace: elastic-system
@@ -640,7 +721,7 @@ webhooks:
failurePolicy: Ignore
name: elastic-kb-validation-v1beta1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1beta1]
admissionReviewVersions: [v1, v1beta1]
sideEffects: None
rules:
- apiGroups:
@@ -652,4 +733,64 @@ webhooks:
- UPDATE
resources:
- kibanas
- clientConfig:
service:
name: elastic-webhook-server
namespace: elastic-system
path: /validate-autoscaling-k8s-elastic-co-v1alpha1-elasticsearchautoscaler
failurePolicy: Ignore
name: elastic-esa-validation-v1alpha1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1, v1beta1]
sideEffects: None
rules:
- apiGroups:
- autoscaling.k8s.elastic.co
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- elasticsearchautoscalers
- clientConfig:
service:
name: elastic-webhook-server
namespace: elastic-system
path: /validate-scp-k8s-elastic-co-v1alpha1-stackconfigpolicies
failurePolicy: Ignore
name: elastic-scp-validation-v1alpha1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1, v1beta1]
sideEffects: None
rules:
- apiGroups:
- stackconfigpolicy.k8s.elastic.co
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- stackconfigpolicies
- clientConfig:
service:
name: elastic-webhook-server
namespace: elastic-system
path: /validate-logstash-k8s-elastic-co-v1alpha1-logstash
failurePolicy: Ignore
name: elastic-logstash-validation-v1alpha1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1, v1beta1]
sideEffects: None
rules:
- apiGroups:
- logstash.k8s.elastic.co
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- logstashes


@@ -12,10 +12,6 @@ kind: StatefulSet
metadata:
name: etherpad
namespace: etherpad
annotations:
keel.sh/policy: minor
keel.sh/trigger: poll
keel.sh/pollSchedule: "@midnight"
spec:
# Etherpad does NOT support running multiple replicas due to
# in-application caching https://github.com/ether/etherpad-lite/issues/3680
@@ -31,7 +27,7 @@ spec:
spec:
containers:
- name: etherpad
image: etherpad/etherpad:1
image: etherpad/etherpad:2
securityContext:
# Etherpad writes session key during start
readOnlyRootFilesystem: false


@@ -1,6 +1,6 @@
---
-apiVersion: codemowers.io/v1alpha1
-kind: OIDCGWMiddlewareClient
+apiVersion: codemowers.cloud/v1beta1
+kind: OIDCMiddlewareClient
metadata:
name: freescout
spec:
@@ -14,8 +14,8 @@ spec:
name: Remote-Name
user: Remote-User
---
-apiVersion: codemowers.io/v1alpha1
-kind: OIDCGWClient
+apiVersion: codemowers.cloud/v1beta1
+kind: OIDCClient
metadata:
name: freescout
spec:
@@ -34,11 +34,77 @@ spec:
- openid
- profile
pkce: false
+secretRefreshPod:
+apiVersion: v1
+kind: Pod
+spec:
+volumes:
+- name: tmp
+emptyDir: {}
+initContainers:
+- name: jq
+image: >-
+alpine/k8s:1.24.16@sha256:06f8942d87fa17b40795bb9a8eff029a9be3fc3c9bcc13d62071de4cc3324153
+command:
+- /bin/bash
+- '-c'
+- >-
+rm -fv /tmp/update.sql; jq
+'{"name":"oauth.client_id","value":$ENV.OIDC_CLIENT_ID} | "UPDATE
+options SET value=\(.value|tostring|@sh) WHERE
+name=\(.name|tostring|@sh) LIMIT 1;"' -n -r >> /tmp/update.sql; jq
+'{"name":"oauth.client_secret","value":$ENV.OIDC_CLIENT_SECRET} |
+"UPDATE options SET value=\(.value|tostring|@sh) WHERE
+name=\(.name|tostring|@sh) LIMIT 1;"' -n -r >> /tmp/update.sql; jq
+'{"name":"oauth.auth_url","value":$ENV.OIDC_IDP_AUTH_URI} |
+"UPDATE options SET value=\(.value + "?scope=openid+profile"
+|tostring|@sh) WHERE name=\(.name|tostring|@sh) LIMIT 1;"' -n -r
+>> /tmp/update.sql; jq
+'{"name":"oauth.token_url","value":$ENV.OIDC_IDP_TOKEN_URI} |
+"UPDATE options SET value=\(.value|tostring|@sh) WHERE
+name=\(.name|tostring|@sh) LIMIT 1;"' -n -r >> /tmp/update.sql; jq
+'{"name":"oauth.user_url","value":$ENV.OIDC_IDP_USERINFO_URI}
+| "UPDATE options SET value=\(.value|tostring|@sh) WHERE
+name=\(.name|tostring|@sh) LIMIT 1;"' -n -r >> /tmp/update.sql;
+cat /tmp/update.sql
+envFrom:
+- secretRef:
+name: oidc-client-freescout-owner-secrets
+resources: {}
+volumeMounts:
+- name: tmp
+mountPath: /tmp
+terminationMessagePath: /dev/termination-log
+terminationMessagePolicy: File
+imagePullPolicy: IfNotPresent
+containers:
+- name: mysql
+image: mysql
+command:
+- /bin/bash
+- '-c'
+- >-
+mysql -u kspace_freescout kspace_freescout -h 172.20.36.1
+-p${MYSQL_PWD} < /tmp/update.sql
+env:
+- name: MYSQL_PWD
+valueFrom:
+secretKeyRef:
+name: freescout-secrets
+key: DB_PASS
+resources: {}
+volumeMounts:
+- name: tmp
+mountPath: /tmp
+terminationMessagePath: /dev/termination-log
+terminationMessagePolicy: File
+imagePullPolicy: IfNotPresent
+restartPolicy: OnFailure
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
-name: oidc-gateway
+name: freescout
annotations:
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
@@ -92,7 +158,7 @@ spec:
spec:
containers:
- name: freescout
-image: harbor.k-space.ee/k-space/freescout@sha256:de1a6c8bd1f285f6f6c61aa48921a884fe7a1496655b31c9536805397c01ee58
+image: harbor.k-space.ee/k-space/freescout
ports:
- containerPort: 8080
env:
@@ -153,7 +219,7 @@ spec:
spec:
containers:
- name: freescout-cron
-image: harbor.k-space.ee/k-space/freescout@sha256:de1a6c8bd1f285f6f6c61aa48921a884fe7a1496655b31c9536805397c01ee58
+image: harbor.k-space.ee/k-space/freescout
imagePullPolicy: Always
command:
- php


@@ -32,8 +32,8 @@ spec:
- key: secret
value: "%(plaintext)s"
---
-apiVersion: codemowers.io/v1alpha1
-kind: OIDCGWClient
+apiVersion: codemowers.cloud/v1beta1
+kind: OIDCClient
metadata:
name: gitea
spec:
@@ -53,6 +53,46 @@ spec:
- openid
- profile
pkce: false
+secretRefreshPod:
+apiVersion: v1
+kind: Pod
+metadata:
+name: reset-oidc-config
+spec:
+volumes:
+- name: tmp
+emptyDir: {}
+initContainers:
+- name: jq
+image: alpine/k8s:1.24.16@sha256:06f8942d87fa17b40795bb9a8eff029a9be3fc3c9bcc13d62071de4cc3324153
+imagePullPolicy: IfNotPresent
+volumeMounts:
+- mountPath: /tmp
+name: tmp
+envFrom:
+- secretRef:
+name: oidc-client-gitea-owner-secrets
+command:
+- /bin/bash
+- -c
+- jq '{"strategyKey":"OpenID","config":{"Provider":"openidConnect","ClientID":$ENV.OIDC_CLIENT_ID,"ClientSecret":$ENV.OIDC_CLIENT_SECRET,"OpenIDConnectAutoDiscoveryURL":"https://auth.k-space.ee/.well-known/openid-configuration","CustomURLMapping":null,"IconURL":"","Scopes":null,"RequiredClaimName":"","RequiredClaimValue":"","GroupClaimName":"","AdminGroup":"","GroupTeamMap":"","GroupTeamMapRemoval":false,"RestrictedGroup":""}} | "UPDATE login_source SET cfg=\(.config|tostring|@sh) WHERE name=\(.strategyKey|tostring|@sh) LIMIT 1"' -n -r > /tmp/update.sql
+containers:
+- name: mysql
+image: mysql
+imagePullPolicy: IfNotPresent
+volumeMounts:
+- mountPath: /tmp
+name: tmp
+env:
+- name: MYSQL_PWD
+valueFrom:
+secretKeyRef:
+name: gitea-secrets
+key: GITEA__DATABASE__PASSWD
+command:
+- /bin/bash
+- -c
+- mysql -u kspace_git kspace_git -h 172.20.36.1 -p${MYSQL_PWD} < /tmp/update.sql
---
apiVersion: apps/v1
kind: StatefulSet
@@ -80,7 +120,7 @@ spec:
runAsNonRoot: true
containers:
- name: gitea
-image: gitea/gitea:1.21.5-rootless
+image: gitea/gitea:1.22.1-rootless
imagePullPolicy: IfNotPresent
securityContext:
readOnlyRootFilesystem: true


@@ -1,6 +1,6 @@
---
-apiVersion: codemowers.io/v1alpha1
-kind: OIDCGWClient
+apiVersion: codemowers.cloud/v1beta1
+kind: OIDCClient
metadata:
name: grafana
spec:
@@ -31,6 +31,8 @@ data:
[server]
domain = grafana.k-space.ee
root_url = https://%(domain)s/
+[auth]
+oauth_allow_insecure_email_lookup=true
[auth.generic_oauth]
name = OAuth
icon = signin
@@ -38,7 +40,7 @@ data:
empty_scopes = false
allow_sign_up = true
use_pkce = true
-role_attribute_path = contains(groups[*], 'github.com:codemowers') && 'Admin' || 'Viewer'
+role_attribute_path = contains(groups[*], 'k-space:kubernetes:admins') && 'Admin' || 'Viewer'
[security]
disable_initial_admin_creation = true
---
@@ -63,7 +65,7 @@ spec:
fsGroup: 472
containers:
- name: grafana
-image: grafana/grafana:8.5.24
+image: grafana/grafana:11.1.0
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
@@ -73,7 +75,7 @@ spec:
valueFrom:
secretKeyRef:
name: oidc-client-grafana-owner-secrets
-key: OIDC_GATEWAY_URI
+key: OIDC_IDP_URI
- name: GF_AUTH_GENERIC_OAUTH_CLIENT_ID
valueFrom:
secretKeyRef:
@@ -93,17 +95,32 @@ spec:
valueFrom:
secretKeyRef:
name: oidc-client-grafana-owner-secrets
-key: OIDC_GATEWAY_AUTH_URI
+key: OIDC_IDP_AUTH_URI
- name: GF_AUTH_GENERIC_OAUTH_TOKEN_URL
valueFrom:
secretKeyRef:
name: oidc-client-grafana-owner-secrets
-key: OIDC_GATEWAY_TOKEN_URI
+key: OIDC_IDP_TOKEN_URI
- name: GF_AUTH_GENERIC_OAUTH_API_URL
valueFrom:
secretKeyRef:
name: oidc-client-grafana-owner-secrets
-key: OIDC_GATEWAY_USERINFO_URI
+key: OIDC_IDP_USERINFO_URI
+- name: GF_DATABASE_TYPE
+value: mysql
+- name: GF_DATABASE_HOST
+value: 172.20.36.1:3306
+- name: GF_DATABASE_SSL_MODE
+value: disable
+- name: GF_DATABASE_NAME
+value: kspace_grafana
+- name: GF_DATABASE_USER
+value: kspace_grafana
+- name: GF_DATABASE_PASSWORD
+valueFrom:
+secretKeyRef:
+name: grafana-database
+key: password
ports:
- containerPort: 3000
name: http-grafana

hackerspace/README.md Normal file

@@ -0,0 +1,8 @@
## inventory.k-space.ee
Reads and writes to mongo.
<!-- Referenced/linked by https://wiki.k-space.ee/en/hosting/doors -->
A component of inventory is 'doorboy' (https://wiki.k-space.ee/en/hosting/doors).
## k6.ee
Reads from mongo; HTTP-redirects to //inventory.k-space.ee/m/inventory/{uuid}/view
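For illustration, a minimal sketch of that redirect flow in Go, assuming `MONGO_URI` is injected as in the goredirect Deployment; the database, collection, and field names are hypothetical, not the actual goredirect schema:
```go
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"strings"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	// MONGO_URI comes from the inventory-mongodb secret, as in the Deployment.
	client, err := mongo.Connect(context.Background(),
		options.Client().ApplyURI(os.Getenv("MONGO_URI")))
	if err != nil {
		log.Fatal(err)
	}
	// Database, collection, and field names below are hypothetical.
	items := client.Database("inventory").Collection("items")

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		code := strings.Trim(r.URL.Path, "/") // short code from k6.ee/<code>
		var doc struct {
			UUID string `bson:"uuid"`
		}
		if err := items.FindOne(r.Context(), bson.M{"shortcode": code}).Decode(&doc); err != nil {
			http.NotFound(w, r)
			return
		}
		// Redirect to the canonical inventory view for this item.
		http.Redirect(w, r,
			"https://inventory.k-space.ee/m/inventory/"+doc.UUID+"/view",
			http.StatusFound)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```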


@@ -1,11 +1,9 @@
# Referenced/linked and documented by https://wiki.k-space.ee/en/hosting/doors
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: doorboy-proxy
-annotations:
-keel.sh/policy: force
-keel.sh/trigger: poll
spec:
revisionHistoryLimit: 0
replicas: 3
@@ -26,20 +24,30 @@ spec:
operator: In
values:
- doorboy-proxy
-topologyKey: kubernetes.io/hostname
+topologyKey: topology.kubernetes.io/zone
weight: 100
containers:
- name: doorboy-proxy
image: harbor.k-space.ee/k-space/doorboy-proxy:latest
envFrom:
+- secretRef:
+name: inventory-mongodb
- secretRef:
name: doorboy-api
env:
-- name: MONGO_URI
+- name: FLOOR_ACCESS_GROUP
+value: 'k-space:floor'
+- name: WORKSHOP_ACCESS_GROUP
+value: 'k-space:workshop'
+- name: CARD_URI
+value: 'https://inventory.k-space.ee/cards'
+- name: SWIPE_URI
+value: 'https://inventory.k-space.ee/m/doorboy/swipe'
+- name: INVENTORY_API_KEY
valueFrom:
secretKeyRef:
-name: mongo-application-readwrite
-key: connectionString.standard
+name: inventory-api-key
+key: INVENTORY_API_KEY
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true


@@ -37,8 +37,8 @@ spec:
- name: MONGO_URI
valueFrom:
secretKeyRef:
-key: connectionString.standard
-name: inventory-mongodb-application-readwrite
+key: MONGO_URI
+name: inventory-mongodb
name: goredirect
ports:
- containerPort: 8080


@@ -0,0 +1,25 @@
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: members-inventory-redirect
spec:
redirectRegex:
regex: ^https://members.k-space.ee/(.*)
replacement: https://inventory.k-space.ee/${1}
permanent: false
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: members-inventory
spec:
entryPoints:
- websecure
routes:
- match: Host(`members.k-space.ee`)
kind: Rule
middlewares:
- name: members-inventory-redirect
services:
- kind: TraefikService
name: api@internal


@@ -1,3 +1,4 @@
+---
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -22,17 +23,10 @@ spec:
value: PROD
- name: PYTHONUNBUFFERED
value: "1"
-- name: MEMBERS_HOST
-value: https://members.k-space.ee
- name: INVENTORY_ASSETS_BASE_URL
value: https://minio-cluster-shared.k-space.ee/inventory-5b342be1-60a1-4290-8061-e0b8fc17d40d/
- name: OIDC_USERS_NAMESPACE
-value: oidc-gateway
-- name: MONGO_URI
-valueFrom:
-secretKeyRef:
-key: connectionString.standard
-name: inventory-mongodb-application-readwrite
+value: passmower
- name: SECRET_KEY
valueFrom:
secretKeyRef:
@@ -58,6 +52,8 @@ spec:
name: miniobucket-inventory-owner-secrets
- secretRef:
name: oidc-client-inventory-app-owner-secrets
+- secretRef:
+name: inventory-mongodb
name: inventory
ports:
- containerPort: 5000
@@ -88,113 +84,92 @@ spec:
volumes:
- name: tmp
---
apiVersion: codemowers.cloud/v1beta1
kind: SecretClaim
apiVersion: v1
kind: Service
metadata:
name: inventory-mongodb-readwrite-password
name: inventory-app
labels:
app: inventory-app
spec:
size: 32
mapping:
- key: password
value: "%(plaintext)s"
selector:
app: inventory-app
ports:
- protocol: TCP
port: 5000
---
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: inventory-mongodb
name: inventory-app
annotations:
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
external-dns.alpha.kubernetes.io/hostname: members.k-space.ee,inventory.k-space.ee
spec:
agent:
logLevel: ERROR
maxLogFileDurationHours: 1
additionalMongodConfig:
systemLog:
quiet: true
members: 3
type: ReplicaSet
version: "6.0.3"
security:
authentication:
modes: ["SCRAM"]
users:
- name: readwrite
db: application
passwordSecretRef:
name: inventory-mongodb-readwrite-password
roles:
- name: readWrite
db: application
scramCredentialsSecretName: inventory-mongodb-readwrite
statefulSet:
spec:
logLevel: WARN
template:
spec:
containers:
- name: mongod
resources:
requests:
cpu: 100m
memory: 1Gi
limits:
cpu: 4000m
memory: 1Gi
volumeMounts:
- name: journal-volume
mountPath: /data/journal
- name: mongodb-agent
resources:
requests:
cpu: 1m
memory: 100Mi
limits: {}
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- inventory-mongodb-svc
topologyKey: kubernetes.io/hostname
nodeSelector:
dedicated: monitoring
tolerations:
- key: dedicated
operator: Equal
value: monitoring
effect: NoSchedule
volumeClaimTemplates:
- metadata:
name: logs-volume
labels:
usecase: logs
spec:
storageClassName: mongo
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
- metadata:
name: journal-volume
labels:
usecase: journal
spec:
storageClassName: mongo
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 512Mi
- metadata:
name: data-volume
labels:
usecase: data
spec:
storageClassName: mongo
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
ingressClassName: shared
rules:
- host: inventory.k-space.ee
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: inventory-app
port:
number: 5000
tls:
- hosts:
- "*.k-space.ee"
---
apiVersion: codemowers.cloud/v1beta1
kind: OIDCClient
metadata:
name: inventory-app
spec:
uri: 'https://inventory.k-space.ee'
redirectUris:
- 'https://inventory.k-space.ee/login-callback'
grantTypes:
- 'authorization_code'
responseTypes:
- 'code'
availableScopes:
- 'openid'
- 'profile'
tokenEndpointAuthMethod: 'client_secret_basic'
pkce: false
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: inventory
rules:
- verbs:
- get
- list
- watch
apiGroups:
- codemowers.cloud
resources:
- oidcusers
- oidcusers/status
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: inventory
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: inventory
subjects:
- kind: ServiceAccount
name: inventory
namespace: hackerspace
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: inventory


@@ -1,8 +1,14 @@
Deploy with:
```
-kubectl create namespace harbor
-kubectl apply -n harbor -f application.yml -f application-secrets.yml
+kubectl create namespace harbor-operator
+kubectl -n harbor-operator create secret generic harbor-minio-credentials --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=... --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=...
+kubectl -n harbor-operator create secret generic harbor-postgres-password --from-literal=password=...
+helm repo add harbor https://helm.goharbor.io
+helm template -n harbor-operator --release-name harbor harbor/harbor --include-crds -f harbor/values.yaml > harbor/application.yml
+kubectl apply -n harbor-operator -f harbor/application.yml -f harbor/application-extras.yml
```
After deployment, log in with the Harbor admin credentials and configure OIDC:
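The OIDC client itself is declared in application-extras.yml. As a sketch only, the matching settings could also be applied through Harbor's `/api/v2.0/configurations` endpoint; the provider name, scope, and claim values below are assumptions, and the client ID/secret come from the oidc-client-harbor-owner-secrets secret:
```
curl -su "admin:${HARBOR_ADMIN_PASSWORD}" -X PUT \
  -H "Content-Type: application/json" \
  https://harbor.k-space.ee/api/v2.0/configurations \
  -d '{
    "auth_mode": "oidc_auth",
    "oidc_name": "passmower",
    "oidc_endpoint": "https://auth.k-space.ee/",
    "oidc_client_id": "<OIDC_CLIENT_ID from oidc-client-harbor-owner-secrets>",
    "oidc_client_secret": "<OIDC_CLIENT_SECRET from the same secret>",
    "oidc_scope": "openid,profile",
    "oidc_auto_onboard": true,
    "oidc_user_claim": "sub"
  }'
```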


@@ -0,0 +1,57 @@
---
apiVersion: codemowers.cloud/v1beta1
kind: OIDCClient
metadata:
name: harbor
namespace: harbor-operator
spec:
displayName: Harbor
uri: https://harbor.k-space.ee
redirectUris:
- https://harbor.k-space.ee/c/oidc/callback
allowedGroups:
- k-space:floor
grantTypes:
- authorization_code
- refresh_token
responseTypes:
- code
availableScopes:
- openid
- profile
pkce: false
---
apiVersion: codemowers.cloud/v1beta1
kind: MinioBucketClaim
metadata:
name: harbor
namespace: harbor-operator
spec:
capacity: 1Ti
class: external
---
apiVersion: codemowers.cloud/v1beta1
kind: SecretClaim
metadata:
name: dragonfly-auth
spec:
size: 32
mapping:
- key: REDIS_PASSWORD
value: "%(plaintext)s"
- key: REDIS_URI
value: "redis://:%(plaintext)s@dragonfly"
---
apiVersion: dragonflydb.io/v1alpha1
kind: Dragonfly
metadata:
name: dragonfly
spec:
authentication:
passwordFromSecret:
key: REDIS_PASSWORD
name: dragonfly-auth
replicas: 3
resources:
limits:
memory: 5Gi

File diff suppressed because it is too large

harbor/values.yaml Normal file

@@ -0,0 +1,141 @@
expose:
type: ingress
tls:
enabled: true
ingress:
hosts:
core: harbor.k-space.ee
annotations:
cert-manager.io/cluster-issuer: default
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
labels: {}
externalURL: https://harbor.k-space.ee
# The persistence is enabled by default and a default StorageClass
# is needed in the k8s cluster to provision volumes dynamically.
# Specify another StorageClass in the "storageClass" or set "existingClaim"
# if you already have existing persistent volumes to use
#
# For storing images and charts, you can also use "azure", "gcs", "s3",
# "swift" or "oss". Set it in the "imageChartStorage" section
persistence:
enabled: true
# Define which storage backend the registry uses to store
# images and charts. Refer to
# https://github.com/distribution/distribution/blob/main/docs/content/about/configuration.md#storage
# for details.
persistentVolumeClaim:
jobservice:
jobLog:
existingClaim: ""
storageClass: "longhorn"
subPath: ""
accessMode: ReadWriteMany
size: 5Gi
annotations: {}
imageChartStorage:
# Specify whether to disable `redirect` for images and chart storage,
# for backends which do not support it (such as using MinIO for the
# `s3` storage type). To disable redirects, set `disableredirect` to `true`.
# Refer to
# https://github.com/distribution/distribution/blob/main/docs/configuration.md#redirect
# for details.
disableredirect: false
type: s3
s3:
# Set an existing secret for S3 accesskey and secretkey
# keys in the secret should be REGISTRY_STORAGE_S3_ACCESSKEY and REGISTRY_STORAGE_S3_SECRETKEY for registry
existingSecret: "harbor-minio-credentials"
region: us-east-1
bucket: harbor-operator-e60e5943-234a-496d-ae74-933f6a67c530
#accesskey: awsaccesskey
#secretkey: awssecretkey
regionendpoint: https://external.minio-clusters.k-space.ee
#encrypt: false
#keyid: mykeyid
#secure: true
#skipverify: false
#v4auth: true
#chunksize: "5242880"
#rootdirectory: /s3/object/name/prefix
#storageclass: STANDARD
#multipartcopychunksize: "33554432"
#multipartcopymaxconcurrency: 100
#multipartcopythresholdsize: "33554432"
# The initial password of the Harbor admin. Change it from the portal after
# launching Harbor, or give an existing secret for it; the key in the secret
# is given via existingSecretAdminPasswordKey (defaults to HARBOR_ADMIN_PASSWORD).
# existingSecretAdminPassword:
existingSecretAdminPasswordKey: HARBOR_ADMIN_PASSWORD
# debug, info, warning, error or fatal
logLevel: debug
# Run the migration job via helm hook
enableMigrateHelmHook: false
metrics:
enabled: true
core:
path: /metrics
port: 8001
registry:
path: /metrics
port: 8001
jobservice:
path: /metrics
port: 8001
exporter:
path: /metrics
port: 8001
serviceMonitor:
enabled: true
additionalLabels: {}
# Scrape interval. If not set, the Prometheus default scrape interval is used.
interval: ""
# Metric relabel configs to apply to samples before ingestion.
metricRelabelings:
[]
# - action: keep
# regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
# sourceLabels: [__name__]
# Relabel configs to apply to samples before ingestion.
relabelings:
[]
# - sourceLabels: [__meta_kubernetes_pod_node_name]
# separator: ;
# regex: ^(.*)$
# targetLabel: nodename
# replacement: $1
# action: replace
trivy:
enabled: false
database:
type: "external"
external:
host: "172.20.43.1"
port: "5432"
username: "kspace_harbor"
coreDatabase: "kspace_harbor"
existingSecret: "harbor-postgres-password"
sslmode: "disable"
redis:
type: external
external:
# support redis, redis+sentinel
# addr for redis: <host_redis>:<port_redis>
# addr for redis+sentinel: <host_sentinel1>:<port_sentinel1>,<host_sentinel2>:<port_sentinel2>,<host_sentinel3>:<port_sentinel3>
addr: "dragonfly:6379"
username: ""
password: "MvYcuU0RaIu1SX7fY1m1JrgLUSaZJjge"


@@ -1,38 +0,0 @@
all:
children:
bind:
hosts:
ns1.k-space.ee:
kubernetes:
children:
masters:
hosts:
master1.kube.k-space.ee:
master2.kube.k-space.ee:
master3.kube.k-space.ee:
kubelets:
children:
mon:
hosts:
mon1.kube.k-space.ee:
mon2.kube.k-space.ee:
mon3.kube.k-space.ee:
storage:
hosts:
storage1.kube.k-space.ee:
storage2.kube.k-space.ee:
storage3.kube.k-space.ee:
storage4.kube.k-space.ee:
workers:
hosts:
worker1.kube.k-space.ee:
worker2.kube.k-space.ee:
worker3.kube.k-space.ee:
worker4.kube.k-space.ee:
worker9.kube.k-space.ee:
doors:
hosts:
100.102.3.1:
100.102.3.2:
100.102.3.3:
100.102.3.4:


@@ -272,7 +272,7 @@ metadata:
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
-traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
+traefik.ingress.kubernetes.io/router.middlewares: kubernetes-dashboard-sso@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
spec:
rules:
@@ -289,3 +289,19 @@ spec:
tls:
- hosts:
- "*.k-space.ee"
+---
+apiVersion: codemowers.cloud/v1beta1
+kind: OIDCMiddlewareClient
+metadata:
+name: sso
+spec:
+displayName: Kubernetes dashboard
+uri: 'https://dashboard.k-space.ee'
+allowedGroups:
+- k-space:kubernetes:developers
+- k-space:kubernetes:admins
+headerMapping:
+email: Remote-Email
+groups: Remote-Groups
+name: Remote-Name
+user: Remote-Username


@@ -1,6 +1,6 @@
---
-apiVersion: codemowers.io/v1alpha1
-kind: OIDCGWMiddlewareClient
+apiVersion: codemowers.cloud/v1beta1
+kind: OIDCMiddlewareClient
metadata:
name: frontend
spec:


@@ -1,6 +1,6 @@
---
-apiVersion: codemowers.io/v1alpha1
-kind: OIDCGWMiddlewareClient
+apiVersion: codemowers.cloud/v1beta1
+kind: OIDCMiddlewareClient
metadata:
name: ui
spec:

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff