384 Commits

Author SHA1 Message Date
5e04a1bd43 provision new worker nodes with ansible 2024-08-09 12:07:03 +03:00
8a1b0b52af add new worker9 2024-08-08 22:39:35 +03:00
6b24ede7ac Upgrade to Kubernetes 1.30 2024-08-08 19:45:46 +03:00
e0cf532e42 Upgrade to Kubernetes 1.29 2024-08-08 18:55:02 +03:00
Erki Aas
59373041cc passmower: run in 3 replicas 2024-08-08 15:53:53 +03:00
4e80899c77 Prepare for separation of ansible Git repo 2024-08-08 12:56:25 +03:00
Erki Aas
9c2b5c39ee fix/update harbor 2024-08-08 12:45:57 +03:00
d3eb888d58 doc: inventory: reference rosdump 2024-08-08 12:40:54 +03:00
3714b174e7 camtiler: disable, it's broken 2024-08-03 09:03:14 +03:00
a1acb06e12 traefik: publish services (for argo healthy) 2024-08-03 09:03:13 +03:00
0b6ab650a2 argo: add apps (already) in argo to git (config drift) 2024-08-03 09:03:11 +03:00
35404464f4 argo: strongarm autosync to prevent further config drift
Commenting empty syncPolicy, otherwise argocd sees it as diff
2024-08-03 08:01:55 +03:00
41da5931f9 auth migra: whoami 2024-08-03 06:04:27 +03:00
6879a4e5a5 argo: drone no longer exists 2024-08-03 06:04:27 +03:00
9b2c655a02 camtiler: unify to cam.k-space.ee 2024-08-03 06:04:27 +03:00
8876300dc4 argo config drift: camtiler 2024-08-03 06:04:24 +03:00
8199b3b732 argo config drift: wildduck
Change for apps/StatefulSet/wildduck/wildduck-operator
caused by 2d25377090 applied by ArgoCD:
-      serviceAccountName: codemowers-io-wildduck-operator
+      serviceAccountName: codemowers-cloud-wildduck-operator
2024-08-03 05:35:31 +03:00
43c9b3aa93 argo config drift: woodpecker 2024-08-03 05:35:31 +03:00
504bd3012e argo config drift: doorboy 2024-08-03 04:27:31 +03:00
75b5d39880 signs: deploy with argo 2024-08-03 04:27:31 +03:00
7377b62b3f doc: readme tip + todo for argo 'user-facing' doc 2024-08-03 04:27:31 +03:00
cd13de6cee doc: Reword backlink warning
we already got more broken links :/
I don't really want it to be an aggressive warning.
2024-08-03 04:27:31 +03:00
13da9a8877 Add redirects sign.k-space.ee, members.k-space.ee
There still are dead inventory links with members.k-space.ee
2024-08-03 04:27:31 +03:00
490770485d fixup auth2 → auth rename 2024-08-03 04:27:20 +03:00
ba48643a37 inventory: tls host is k-space.ee, not codemowers
seems like copy-pasta typo
2024-08-03 01:44:15 +03:00
Erki Aas
18a0079a21 chore: add eaas as contributor 2024-07-30 14:15:13 +03:00
Erki Aas
885b13ecd7 chore: move doorboy to hackerspace 2024-07-30 14:13:25 +03:00
Erki Aas
e17caa9c2d passmower: update login link template 2024-07-30 14:12:54 +03:00
Erki Aas
336ab2efa2 update readme 2024-07-30 12:40:01 +03:00
27a5fe14c7 docs: commit todo items 2024-07-30 11:03:00 +03:00
66034d2463 docs: mega refactor
Also bunch of edits at wiki.k-space.ee
2024-07-30 10:51:34 +03:00
186ea5d947 docs: hackerspace / Inventory-app 2024-07-30 10:33:25 +03:00
470d4f3459 docs: Slack bots 2024-07-30 10:32:57 +03:00
8ad6b989e5 Migrate signs.k-space.ee from GitLab to kube
copy from ripe87
2024-07-30 10:18:40 +03:00
b6bf3ab225 passmower users: list prefix before name 2024-07-30 08:00:14 +03:00
7cac31964d docs: camtiler & doors 2024-07-30 06:13:56 +03:00
a250363bb0 rm replaced-unused mysql-operator 2024-07-30 02:56:50 +03:00
Erki Aas
480ff4f426 update passmower deployment 2024-07-29 15:59:45 +03:00
b737d37b9c fmt ansible: compact and more readable 2024-07-28 22:28:30 +03:00
b4ad080e95 zrepl: enable prometheus for offsite 2024-07-28 21:46:26 +03:00
Simon
a5ad80d8cd Make login url clickable in emails 2024-07-28 18:42:38 +00:00
62be47c2e1 inventory: add ingress and other manifests 2024-07-28 20:58:25 +03:00
249ad2e9ed fix and update harbor install 2024-07-28 20:22:08 +03:00
0c38d2369b attempt to get kibana working 2024-07-28 20:22:08 +03:00
b07a5b9bc0 reconfigure grub only on x86 nodes 2024-07-28 20:22:08 +03:00
2d25377090 wildduck: migrate to dragonfly, disable network policies, upgrade wildduck-operator 2024-07-28 20:22:08 +03:00
73d185b2ee fix redirects 2024-07-28 20:22:08 +03:00
0eb2dc6503 deprecate crunchydata postgres operator 2024-07-28 20:22:08 +03:00
34f1b53544 zrepl: prometheus target 2024-07-28 20:00:51 +03:00
fd1aeaa1a3 Upgrade Calico 2024-07-28 10:38:25 +03:00
b8477de6a8 Upgrade cert-manager 2024-07-28 10:37:34 +03:00
2f712a935e fixup: nas root is not encrypted and failed 2024-07-28 03:32:11 +03:00
792ff38bea mv zrepl.yml to playbook.yml 2024-07-28 03:31:16 +03:00
e929b52e6d Fix ansible.cfg 2024-07-28 01:42:55 +03:00
b2b93879c2 mv to ansible/ 2024-07-27 23:55:16 +03:00
c222f22768 fix zrepl playbook 2024-07-27 23:54:29 +03:00
28ed62c40e migrate wildflock to new passmower 2024-07-27 23:51:04 +03:00
74600efb4c zrepl 2024-07-27 23:49:45 +03:00
79aaaf7498 add todo 2024-07-27 23:08:39 +03:00
f0b78f7b17 migrate grafana to new passmower and external db 2024-07-27 23:08:29 +03:00
ba520da57e update readme 2024-07-27 23:08:15 +03:00
30503ad121 update readme 2024-07-27 23:06:20 +03:00
fbe4a55251 migrate gitea to new passmower 2024-07-27 22:57:01 +03:00
37567eccf9 migrate wiki to new passmower 2024-07-27 22:57:01 +03:00
d3ba1cc05f add openebs-localpath 2024-07-27 22:57:01 +03:00
61b1b1d6ef migrate woodpecker to external mysql 2024-07-27 22:57:01 +03:00
1e8bccbfa3 migrate to new passmower 2024-07-27 22:57:01 +03:00
e89edca340 enable xfs quotas on worker node rootfs 2024-07-27 22:57:01 +03:00
2bb13ef505 manage kube-apiserver manifest with ansible 2024-07-27 22:57:01 +03:00
c44cfb8bc8 fix kubelogin 2024-07-27 22:57:01 +03:00
417f3ddcb8 Update storage nodes and readd Raspberry Pi 400 2024-07-27 22:11:38 +03:00
32fbd498cf Fix typo 2024-07-27 11:46:39 +03:00
97563e8092 Upgrade ECK operator 2024-07-27 10:50:17 +03:00
4141c6b8ae Add OpenSearch operator 2024-07-27 08:42:16 +03:00
bd26aa46b4 Upgrade Etherpad 2024-07-27 08:31:56 +03:00
92459ed68b Reorder SSH key update playbook 2024-07-27 08:30:53 +03:00
9cf57d8bc6 Upgrade MetalLB 2024-07-27 08:30:53 +03:00
af1c78dea6 deprecate members.k-space.ee 2024-07-27 03:17:24 +03:00
2e77813162 migrate to new passmower 2024-07-27 03:17:24 +03:00
ca623c11fd Update kubeadm, kubectl, kubelet deployment 2024-07-27 01:06:20 +03:00
047cbb5c6b traefik: upgrade to 3.1, migrate dashboard via ingressroute 2024-07-27 00:06:07 +03:00
3e52f37cde Add DragonflyDB operator 2024-07-26 17:46:45 +03:00
b955369e2a Upgrade CloudNativePG to 1.23.2 2024-07-26 17:35:42 +03:00
5e765e9788 Use Codemower's image for mikrotik-exporter 2024-07-26 14:15:18 +03:00
5d4f49409c Remove Keel annotations 2024-07-26 13:56:13 +03:00
de573721bd Deprecate Drone as its devs moved on to develop Gitness 2024-07-26 13:51:55 +03:00
c868a62ab7 Update to Woodpecker 2.7.0 2024-07-26 13:26:24 +03:00
7b6f6252a5 Update external-dns 2024-07-26 13:16:49 +03:00
9223c956c0 Update Bind 9.19 to 9.20 2024-07-26 13:16:22 +03:00
1d4e5051d8 Add Prusa 3D printer web endpoint 2024-07-26 13:03:20 +03:00
56bb5be8a9 grafana: Upgrade and fix ACL
2024-07-26 12:36:08 +03:00
d895360510 monitoring: Upgrade node-exporter 2024-07-25 19:17:24 +03:00
bc8de58ca8 monitoring: Upgrade blackbox-exporter 2024-07-25 19:17:24 +03:00
8d355ff9dc Update Prometheus operator 2024-07-25 19:17:24 +03:00
Erki Aas
dc2a08dc78 goredirect: fix mongo uri 2024-07-24 12:51:53 +03:00
19a0b70b9e woodpecker: fix agent 2024-07-19 19:49:32 +03:00
9c656b0ef9 woodpecker: restore storage from backup 2024-07-19 18:13:09 +03:00
278817249e Add Ansible tasks to update authorized SSH keys 2024-07-19 14:08:51 +03:00
cb5644c7f3 Ansible SSH multiplexing fixes 2024-07-19 12:55:40 +03:00
78ef148f83 Add Ansible playbook to update known_hosts and ssh_config 2024-07-19 11:49:47 +03:00
Erki Aas
c2b9ed0368 inventory: migrate to external mongo 2024-07-17 23:58:38 +03:00
Erki Aas
43abf125a9 pve: add pve-internal.k-space.ee for pve-csi in whitelisted codemowers.cloud cluster 2024-07-17 17:59:59 +03:00
Erki Aas
71d968a815 Upgrade longhorn to 1.6.2 2024-07-07 14:38:02 +03:00
Erki Aas
9b4976450f Upgrade longhorn to 1.5.5 2024-07-07 14:00:27 +03:00
27eb0aa6cc Bump Gitea to 1.22.1 2024-07-04 16:26:06 +03:00
f97a77e5aa rm dev.k-space.ee, VM deprecated 2024-06-20 17:27:35 +03:00
73faa9f89c argocd: Update Helm values for new Helm chart 2024-05-21 13:00:18 +03:00
51808b3c6b Update ansible-kubernetes.yml 2024-05-02 12:46:20 +00:00
07af1aa0bd mirror.gcr.io for harbor 2024-04-28 05:09:50 +03:00
f3cceca1c3 use gcr mirror for images with full docker.io path
cluster constantly failing due to rate limits,
please find a better solution
2024-04-28 05:01:02 +03:00
87bc4f1077 fix(camtiler): increase minio bucket quota to 150Gi 2024-02-23 15:54:52 +02:00
aa4ffcd1ad fix(camtiler): add minio console ingress 2024-02-23 15:54:24 +02:00
80ffdbbb80 fix(camtiler): disable broken egress network policies 2024-02-22 12:43:20 +02:00
51895a2a2b fix(camtiler): increase minio bucket quota 2024-02-22 12:43:20 +02:00
c6ea938214 fix(camtiler): add temporary ingress for camtiler dedicated s3 2024-02-22 12:43:20 +02:00
d40f7d2681 mongoexpress: fix usage 2024-02-22 12:43:20 +02:00
b990861040 mongodb: use mirror.gcr.io 2024-02-19 05:24:09 +02:00
477ba83ba4 mon: dc1 is decommissioned 2024-02-19 00:10:19 +02:00
3672197944 debug 2024-02-12 09:29:00 +02:00
0e884305cc change image for whoami-oidc
how about not using custom-patched 3yo stuff
2024-02-12 08:13:51 +02:00
4eb3649649 doc: adding new argocd app 2024-02-12 08:13:51 +02:00
d29a1a3531 whoami-oidc 2024-02-12 08:13:51 +02:00
a055c739c1 Revert "Add GH to Woodpecker"
This reverts commit ab3815e653.

https://github.com/woodpecker-ci/woodpecker/issues/138
2024-02-12 06:59:24 +02:00
ab3815e653 Add GH to Woodpecker 2024-02-12 06:57:27 +02:00
8d2ec43101 update woodpecker 2024-02-12 06:33:40 +02:00
a95f00aaf2 gitea: try fixing registration 2024-02-12 05:41:17 +02:00
3bcaa09004 whoami: update to non-deprecated image 2024-02-12 05:41:17 +02:00
b88165d2b3 fixup: int must be str 2024-02-12 03:40:21 +02:00
13d1f7bd88 gitea: upgrade
rm ENABLE_XORM_LOG: effectively replaced by LOG_SQL
MAILER: follow env deprecation
2024-02-12 03:38:21 +02:00
a6b1fb0752 nextcloud: migration done 2024-02-05 00:50:17 +02:00
4aec3b54ab nextcloud: add default phone region
Nextcloud complains if it is missing.
2024-02-05 00:49:48 +02:00
109855231b nextcloud: continue with migration 2024-02-05 00:02:21 +02:00
0bff249397 nextcloud: prepare for migration from ad.k-space.ee 2024-02-04 23:48:07 +02:00
d2b362f57d nextcloud: enable oidc registration 2024-02-04 21:06:02 +02:00
d92522b8e4 nextcloud: disable skeleton files 2024-02-04 20:42:19 +02:00
5b75e489e7 texts: remove unneeded <br/>
Paragraphs induce spacing automatically.
2024-02-04 20:00:59 +02:00
29c56b8d26 texts: auth → auth2 2024-02-04 19:56:43 +02:00
cf0650db06 hotfix: double camtiler storage
might work, might not
2024-02-03 12:31:51 +02:00
b9f1c376af Revert "nextcloud: use DNS for minio"
This reverts commit 290d1176fe.
2024-02-03 12:07:45 +02:00
290d1176fe nextcloud: use DNS for minio 2024-02-03 11:46:00 +02:00
ab7e4d10e4 Update README: Cluster access OIDC Client ID 2024-02-01 19:38:47 +02:00
776535d6d5 freescout: update image 2024-02-01 15:33:28 +02:00
f5bfc1c908 Revert "Add k-space:non-floor-nextcloud"
This reverts commit e6456b202d.
2024-01-31 23:02:11 +02:00
80370d1034 oidc: update image 2024-01-31 16:24:35 +02:00
e6456b202d Add k-space:non-floor-nextcloud
Temporary™ workaround
2024-01-31 14:48:19 +02:00
15606ee465 oidc: make k-space:onboarding members Passmower admins 2023-11-19 16:46:17 +02:00
0a9985edcc ripe87: add ripe87.k-space.ee website 2023-11-19 16:45:51 +02:00
Arti Zirk
9bcffbaff3 Fix godoor service restart 2023-10-13 16:08:06 +03:00
3f8f141d94 metallb-system: Switch Elisa, Eenet to ARP 2023-10-09 18:36:09 +03:00
09ff829c50 asterisk: update network policy 2023-10-09 13:45:23 +03:00
a76cfca7f2 monitoring: add ping-exporter 2023-10-04 20:46:25 +03:00
1e0bdf0559 monitoring: Switch Prometheus to local path provisioner 2023-09-23 11:55:56 +03:00
6f6a132e97 camtiler: Switch to dedicated Minio 2023-09-22 23:16:30 +03:00
5cf7cbb450 monitoring: Add BIND secondaries 2023-09-22 09:35:14 +03:00
98707c0d1c freescout: Update image 2023-09-21 07:49:29 +03:00
f0db5849c8 etherpad: Add network policy 2023-09-20 15:08:03 +03:00
efc76d7a10 wildduck: Add network policies for ZoneMTA and webmail 2023-09-17 11:52:52 +03:00
a0d48d4243 wildduck: Make sure Haraka uses DH params as well 2023-09-17 11:51:26 +03:00
3f5b90a546 wildduck: Make sure Haraka, Wildduck and ZoneMTA are scheduled on same hosts for MetalLB 2023-09-17 10:22:25 +03:00
13a2430e9d wildduck: configure hostname for haraka 2023-09-16 17:19:28 +03:00
4b76181210 wildduck: Fix mail submission from Wildduck and webmail 2023-09-16 15:08:58 +03:00
473a81521c wildduck: Bump replica count to 4 2023-09-16 14:49:01 +03:00
9a92c83b5a wildduck: Bump replica count to 3 2023-09-16 14:14:00 +03:00
f05cb6f9de wildduck: Switch to operator managed Mongo 2023-09-15 18:09:17 +03:00
671348a674 asterisk: allow prometheus in network policy 2023-09-15 13:23:23 +03:00
8482f77a47 asterisk: add network policies 2023-09-15 11:41:00 +03:00
0eafcfea18 Add inventory and k6.ee redirector 2023-09-15 10:24:36 +03:00
f40a61946d Remove minio.infra.k-space.ee 2023-09-15 02:29:01 +03:00
6dd2d17298 etherpad: Remove SSO requirement 2023-09-13 12:22:32 +03:00
4e1dbab080 freescout: fix cronjob, update images 2023-09-05 20:04:17 +03:00
1995358e99 openebs: Fix rawfile provisioner image digest 2023-09-02 11:49:13 +03:00
2c5721d5cf mysql-clusters: Fix external cluster 2023-09-02 11:49:13 +03:00
abb25a7eb0 mysql-clusters: Refer to generated phpMyAdmin config 2023-09-02 11:49:13 +03:00
36932bfcaa asterisk: forward voice ports from service 2023-09-01 13:59:45 +03:00
b11ac8bcae Updates and cleanups 2023-08-29 09:29:36 +03:00
4fa554da57 gitea: Allow access for k-space:friends 2023-08-28 21:11:43 +03:00
78931bbb4b oidc-gateway: Cleanups 2023-08-28 21:10:28 +03:00
c6eacfc9f2 metallb-system: Add Wildduck IP 2023-08-28 21:08:37 +03:00
f217f8eae7 monitoring: Clean up blackbox-exporter 2023-08-28 20:57:06 +03:00
fc92b0ce75 kube-system: Remove noisy KubernetesJobSlowCompletion alert 2023-08-28 20:55:28 +03:00
ae00e766d7 Merge remote-tracking branch 'origin/master' 2023-08-28 20:11:47 +03:00
912d15a23b nextcloud: add cron via readinessProbe; block external webcron; run as UID 1000 2023-08-28 20:11:40 +03:00
48567f0630 wildduck: Clean up configs 2023-08-27 20:24:36 +03:00
40445c299d wildduck: ZoneMTA config fixes 2023-08-27 16:55:48 +03:00
54207c482c rosdump: Easier to navigate commit messages 2023-08-26 08:54:04 +03:00
09a9bc4115 wildduck: Use toml files for ZoneMTA config 2023-08-25 09:40:03 +03:00
eafae2af3b Setup godoor before mjpeg-streamer for door controllers 2023-08-25 09:00:36 +03:00
3b31b9c94c Make image pull fail gracefully on door controllers 2023-08-25 08:59:00 +03:00
bec78de2f3 Deploy godoor from Docker image 2023-08-24 22:42:38 +03:00
9b2631f16c wildduck: Add README 2023-08-24 20:45:43 +03:00
f10ff329b7 wildduck: Update dedicated Mongo for Wildduck 2023-08-24 20:04:32 +03:00
a3539de9e0 wildduck: Remove haraka's redis as it's not used 2023-08-24 19:55:10 +03:00
0ed3010fed Migrate the rest of Wildduck stack 2023-08-24 19:53:07 +03:00
b98f173441 wildduck: Add operator 2023-08-24 08:48:33 +03:00
2500342e47 wildduck: Add ClamAV 2023-08-24 08:34:30 +03:00
430f5b0f0f wildduck: Add rspamd 2023-08-24 08:34:21 +03:00
e6a903cfef wildduck: Use updated image for wildflock 2023-08-22 10:14:18 +03:00
6752ca55ae wildduck: Allow sending only from @k-space.ee address in webmail 2023-08-22 07:29:18 +03:00
820c954319 Update mjpg-streamer service 2023-08-21 10:41:15 +03:00
cc51f3731a Elaborate how to configure additional domains for Bind 2023-08-20 09:35:26 +03:00
9dae1a832b gitea: Set imagePullPolicy to IfNotPresent 2023-08-20 08:04:13 +03:00
883da46a3b Update whole Bind setup 2023-08-19 23:39:13 +03:00
aacbb20e13 camtiler: Namespace change related fixes 2023-08-19 10:20:37 +03:00
90076f2dde wildduck: Updates 2023-08-19 10:01:09 +03:00
06757a81e5 logmower: Namespace change fixes 2023-08-19 09:50:53 +03:00
f67bd391bc kube-system: Bump kube-state-metrics to 2.9.2 2023-08-19 09:37:57 +03:00
e5e72de45b monitoring: Namespace change fixes 2023-08-19 09:30:10 +03:00
2e67269b5b prometheus-operator: Bump to 0.67.1 2023-08-19 09:26:13 +03:00
6e2f353916 Move Prometheus instance to monitoring namespace 2023-08-19 09:24:48 +03:00
62661efc42 prometheus-operator: Drop mfp-cyber.pub.k-space.ee 2023-08-19 08:41:44 +03:00
8f07b2ef89 camtiler: Fix event broker config 2023-08-18 16:40:18 +03:00
b80d566927 asterisk: Add pod monitor and alerting rules 2023-08-18 08:45:04 +03:00
b0fd37de01 asterisk: Disable colored logging and add metrics container port 2023-08-18 07:42:37 +03:00
95597c3103 prometheus-operator: Pin SNMP exporter 2023-08-18 00:50:27 +03:00
3a69c1a210 camtiler: Use external bucket 2023-08-17 23:53:24 +03:00
1361c9ec22 Migrate Asterisk 2023-08-17 23:38:26 +03:00
c8a7aecc2f camtiler: Allow hackerspace friends to see cams 2023-08-17 11:59:49 +03:00
bc5dcce5f7 wildflock: Limit ACL-s 2023-08-17 11:58:38 +03:00
d56348f9a6 wildduck: Add Prometheus exporter 2023-08-17 11:57:32 +03:00
a828b602d6 freescout: Add Job for resetting OIDC config 2023-08-16 22:41:41 +03:00
14a5d703cb wikijs: Add Job for resetting OIDC config 2023-08-16 22:18:30 +03:00
4fa49dbf8a mysql-clusters: Rename phpMyAdmin manifest 2023-08-16 15:56:29 +03:00
ebd723d8fd Remove keel 2023-08-16 11:38:05 +03:00
d4d44bc6d3 rosdump: Fix NetworkPolicies for in-kube Gitea 2023-08-16 11:35:41 +03:00
d6a1d48c03 logmower: Cleanups 2023-08-16 10:45:55 +03:00
6adcb53e96 gitea: Disable releases and wiki 2023-08-16 10:41:43 +03:00
af83e1783b Clean up operatorlib related stuff 2023-08-16 10:39:20 +03:00
49412781ea openebs: Pin specific image 2023-08-16 10:35:57 +03:00
d419ac56e1 Add CloudNativePG 2023-08-16 10:29:09 +03:00
5df71506cf etherpad: Switch to Passmower 2023-08-16 10:11:05 +03:00
508c03268e woodpecker-agent: Drop privileges 2023-08-16 10:10:21 +03:00
3dce3d07fd Work around unattended-upgrades quirk
https://github.com/kubernetes/kubernetes/issues/107043#issuecomment-997769940
2023-08-15 22:41:54 +03:00
f9393fd0da Add Ansible task to configure graceful shutdown of Kubelet 2023-08-15 21:58:23 +03:00
4c5a58f67d logmower: Reduce mongo agent noise 2023-08-15 13:54:49 +03:00
2e49b842a9 camtiler: Fix spammy mongo agent 2023-08-15 11:23:43 +03:00
46677df2a3 gitea: Switch to rootless image 2023-08-15 08:08:46 +03:00
ca4ded3d0d gitea: Cleanup config and rotate secrets 2023-08-14 23:38:01 +03:00
f0c4be9b7d prometheus-operator: Remove cone nodes 2023-08-14 22:25:56 +03:00
ce7f5f51fb prometheus-operator: Fix alertmanager config 2023-08-14 19:03:04 +03:00
e02a10b192 external-dns: Migrate k6.ee and kspace.ee 2023-08-14 18:59:15 +03:00
4d2071a5bd Move Kubernetes cluster bootstrap partially to Ansible 2023-08-13 20:21:15 +03:00
ecf9111f8f wildduck: Add session secret for wildflock 2023-08-13 18:48:45 +03:00
14617aad39 wildduck: Make wildflock HA 2023-08-13 18:38:26 +03:00
a00c85d5f6 Move whoami 2023-08-13 18:35:25 +03:00
0fce65b6a5 camtiler: Update cameras config 2023-08-13 08:17:32 +03:00
2ef01e2b28 camtiler: Require floor access ACL 2023-08-13 08:16:26 +03:00
d492b400fa Add door controller setup 2023-08-12 13:20:03 +03:00
612e788d9b gitea: Disable third party OIDC login 2023-08-11 15:02:36 +03:00
b3fe86ea90 drone: Clean up configs 2023-08-11 14:26:55 +03:00
ade71fffad gitea: Bump to 1.20.2 2023-08-11 14:10:40 +03:00
7a92a18bba gitea: Fix HTTP to HTTPS redirect and Git URI format 2023-08-11 14:05:05 +03:00
fe25d03989 Add Ansible config 2023-08-10 19:35:17 +03:00
d0bfdf5147 wildduck: Add ACL-s for webmail 2023-08-04 18:09:16 +03:00
66f2a9ada0 wildduck: Use upstream image for Wildduck webmail 2023-08-04 18:08:36 +03:00
c338ca3bed grafana: Use direct link for OIDC app listing
2023-08-04 18:06:36 +03:00
a97b664485 Update Kube API OIDC configuration 2023-08-03 17:05:11 +03:00
603b237091 Add Wikijs 2023-08-03 08:41:51 +03:00
29be7832c7 freescout: Fix S3 URL-s 2023-08-02 09:06:13 +03:00
06de7c53ba minio-clusters: Clean up ingresses 2023-08-01 21:11:13 +03:00
79f9704cf5 freescout: add deadline as workaround 2023-08-01 15:41:23 +03:00
7e1c99f12d freescout: refactor deployment for custom image and s3 support 2023-08-01 14:50:09 +03:00
cf8ca7457b oidc: update CRDs 2023-08-01 02:02:07 +03:00
5680b4df49 oidc-gateway: Better visualization for broken users 2023-07-31 12:38:33 +03:00
b01f073ced Add Woodpecker to application listing 2023-07-30 21:04:20 +03:00
222ba974e6 Direct OIDC login link for Gitea 2023-07-30 20:59:13 +03:00
1bf85cfd7b Add Freescout 2023-07-30 20:59:13 +03:00
aba2327740 Switch to Vanilla Redis 2023-07-30 20:59:13 +03:00
19ad42bd2b Rename alias generator to wildflock 2023-07-30 20:59:13 +03:00
a3b2f76652 oidc: update CRDs 2023-07-30 12:34:42 +03:00
fb55cd2ac7 Add Wildduck mail alias generator 2023-07-30 00:23:29 +03:00
c5cae07624 Migrate Nextcloud to Kube 2023-07-30 00:14:56 +03:00
21b583dc5b Remove irrelevant group membership checks 2023-07-29 15:02:15 +03:00
fe662dc408 More Gitea cleanups 2023-07-29 10:51:18 +03:00
6a9254da33 Clean up Etherpad 2023-07-29 09:19:42 +03:00
5259a7df04 gitea: Restore s6 init because of git zombie processes 2023-07-29 09:13:27 +03:00
8712786cfe oidc: update CRDs 2023-07-28 20:45:37 +03:00
b56376624e Migrate Gitea 2023-07-28 18:00:48 +03:00
5c8a166218 Set up Longhorn backups to ZFS box 2023-07-28 13:06:00 +03:00
c90a5bbf5e Deprecate Authelia 2023-07-28 12:23:29 +03:00
1db064a38a woodpecker: Pin specific Woodpecker Docker image 2023-07-28 12:22:05 +03:00
36a7eaa805 Bump Kubernetes deployment to 1.25.12 2023-07-28 12:22:05 +03:00
5d8670104a Upgrade to Longhorn 1.5.1 2023-07-28 12:22:05 +03:00
0b5c14903a Remove Woodpecker org membership limit 2023-07-28 12:22:05 +03:00
8d61764893 Bump Traefik from 2.9 to 2.10 2023-07-28 12:22:05 +03:00
2f1c0c3cc8 Bump metallb operator from v0.13.4 to v0.13.11 2023-07-28 12:22:05 +03:00
9a2fd034bb oidc: revert to Docker Hub images 2023-07-26 20:31:16 +03:00
6afda40b93 oidc: update CRDs 2023-07-26 20:25:39 +03:00
dd1ab10624 Merge remote-tracking branch 'origin/master' 2023-06-29 15:30:51 +03:00
2493266aed oidc: fix deployment 2023-06-29 15:30:40 +03:00
5a0821da0d Migrate whoami to new OIDC gateway 2023-06-29 14:49:10 +03:00
be330ad121 oidc: require custom username 2023-06-27 22:24:30 +03:00
045a8bb574 oidc: add oidc-gateway manifests 2023-06-27 14:01:44 +03:00
1d3d58f1a0 Add Woodpecker CI 2023-05-27 10:09:15 +03:00
5dc6dca28e external-dns: Enable support for DNSEndpoint CRD-s 2023-05-18 23:15:58 +03:00
e82fd3f543 authelia: Switch to KeyDB 2023-05-18 23:15:14 +03:00
8b0719234c camtiler: Updates 2023-05-18 22:55:40 +03:00
7abac4db0a nyancat: Move to internal IP 2023-05-18 22:54:50 +03:00
f14d2933d0 Upgrade to Longhorn 1.4.2 2023-05-18 22:46:54 +03:00
b415b8ca56 Upgrade to Grafana 8.5.24 2023-05-18 22:44:55 +03:00
8e796361c3 logmower: Switch to images from Docker Hub 2023-03-11 11:01:12 +02:00
a8bf83f9e5 Add minio-clusters namespace 2023-03-02 08:46:05 +02:00
0b0d9046d8 Add redis-clusters namespace 2023-03-02 07:53:37 +02:00
2343edbe6b Add mysql-clusters namespace 2023-02-26 11:15:48 +02:00
41b7b509f4 Add Crunchydata PGO 2023-02-26 11:09:11 +02:00
a51b041621 Upgrade to Kubernetes 1.24 and Longhorn 1.4.0 2023-02-20 11:16:12 +02:00
1d6cf0a521 camtiler: Restore cams on members site 2023-01-25 09:55:04 +02:00
19d66801df prometheus-operator: Update node-exporter and add pve2 2023-01-07 10:27:05 +02:00
d2a719af43 README: Improve cluster formation docs
- Begin code block with sudo to remind that the following shall be run as root.
- Remove hardcoded key, instead copy from ubuntu user.
2023-01-03 16:09:17 +00:00
34369d211b Add nyancat server 2023-01-03 10:25:08 +02:00
cadb38126b prometheus-operator: Prevent scrape timeouts 2022-12-26 14:15:05 +02:00
414d044909 prometheus-operator: Less noisy alerting from node-exporter 2022-12-24 21:11:00 +02:00
ea23a52d6b prometheus-operator: Remove bundle.yml 2022-12-24 21:07:07 +02:00
3458cbd694 Update README 2022-12-24 21:01:49 +02:00
0a40686c16 logmower: Remove explicit command for event source 2022-12-24 00:02:01 +02:00
222fca8b8f camtiler: Fix scheduling issues 2022-12-23 23:32:18 +02:00
75df3e2a41 logmower: Fix Mongo affinity rules 2022-12-23 23:31:10 +02:00
5516ad195c Add descheduler 2022-12-23 23:30:39 +02:00
d0ac3b0361 prometheus-operator: Remove noisy kube-state-metrics alerts 2022-12-23 23:30:13 +02:00
c7daada4f4 Bump kube-state-metrics to v2.7.0 2022-12-22 20:04:05 +02:00
3a11207783 prometheus-operator: Remove useless KubernetesCronjobTooLong alert 2022-12-21 14:59:16 +02:00
3586309c4e prometheus-operator: Post only critical alerts to Slack 2022-12-21 14:13:57 +02:00
960103eb40 prometheus-operator: Bump bundle version 2022-12-21 14:08:23 +02:00
34b48308ff camtiler: Split up manifests 2022-12-18 16:28:45 +02:00
d8471da75f Migrate doorboy to Kubernetes 2022-12-17 17:49:57 +02:00
3dfa8e3203 camtiler: Clean ups 2022-12-14 19:50:55 +02:00
2a8c685345 camtiler: Scale down motion detectors 2022-12-14 18:58:32 +02:00
bccd2c6458 logmower: Updates 2022-12-14 18:56:08 +02:00
c65835c6a4 Update external-dns 2022-12-14 18:46:00 +02:00
76cfcd083b camtiler: Specify Mongo collection for event source 2022-12-13 13:10:11 +02:00
98ae369b41 camtiler: Fix event broker image name 2022-12-13 12:51:52 +02:00
4ccfd3d21a Replace old log viewer with Logmower + camera-event-broker 2022-12-13 12:43:38 +02:00
ea9b63b7cc camtiler: Dozen updates 2022-12-12 20:37:03 +02:00
b5ee891c97 Introduce separated storage classes per workload type 2022-12-06 09:06:07 +02:00
eccfb43aa1 Add rawfile-localpv 2022-12-02 00:10:04 +02:00
8f99b1b03d Source meta-operator from separate repo 2022-11-13 07:19:56 +02:00
024897a083 kube-system: Record pod labels with kube-state-metrics 2022-11-12 17:52:59 +02:00
18c4764687 prometheus-exporter: Fix antiaffinity rule for Mikrotik exporter 2022-11-12 16:50:31 +02:00
7b9cb6184b prometheus-operator: Reduce retention size 2022-11-12 16:07:42 +02:00
9dd32af3cb logmower: Update shipper arguments 2022-11-10 21:07:54 +02:00
a1cc066927 README: Bump sysctl limits 2022-11-10 07:56:13 +02:00
029572872e logmower: Update env vars 2022-11-09 11:49:13 +02:00
30f1c32815 harbor: Reduce logging verbosity 2022-11-05 22:43:00 +02:00
0c14283136 Add logmower 2022-11-05 20:55:52 +02:00
587748343d traefik: Namespace filtering breaks allowExternalNameServices 2022-11-04 12:20:30 +02:00
1bcfbed130 traefik: Bump version 2022-10-21 08:30:04 +03:00
3b1cda8a58 traefik: Pull resources only from trusted namespaces 2022-10-21 08:27:53 +03:00
2fd0112c28 elastic-system: Exclude logging ECK stack itself 2022-10-21 00:57:11 +03:00
9275f745ce elastic-system: Remove Filebeat's dependency on Kibana 2022-10-21 00:56:54 +03:00
3d86b6acde elastic-system: Bump to 8.4.3 2022-10-14 20:18:28 +03:00
4a94cd4af0 longhorn-system: Remove Prometheus annotation as we use PodMonitor already
2022-10-14 15:03:48 +03:00
a27f273c0b Add Grafana 2022-10-14 14:38:23 +03:00
4686108f42 Switch to wildcard *.k-space.ee certificate 2022-10-14 14:32:36 +03:00
30b7e50afb kube-system: Add metrics-server 2022-10-14 14:23:21 +03:00
e4c9675b99 tigera-operator: Remove unrelated files 2022-10-14 14:05:40 +03:00
017bdd9fd8 tigera-operator: Upgrade Calico 2022-10-14 14:03:34 +03:00
0fd0094ba0 playground: Initial commit 2022-10-14 00:14:35 +03:00
d20fdf350d drone: Switch templates to drone-kaniko plugin 2022-10-12 14:24:57 +03:00
bac5040d2a README: access/auth: collapse bootstrapping
For 'how to connect to cluster', server-side setup
is not needed by connecting clients.
Hiding the section makes the steps more concise.
2022-10-11 10:47:41 +03:00
Danyliuk
4d5851259d Update .gitignore file. Add IntelliJ IDEA part 2022-10-08 16:43:48 +00:00
8ee1896a55 harbor: Move to storage nodes 2022-10-04 13:39:25 +03:00
04b786b18d prometheus-operator: Bump blackbox exporter replica count to 3 2022-10-04 10:11:53 +03:00
1d1764093b prometheus-operator: Remove pulled UPS-es 2022-10-03 10:04:24 +03:00
df6e268eda elastic-system: Add PodMonitor for exporter 2022-09-30 10:33:41 +03:00
00f8bfef6c elastic-system: Update sharding, enable memory-mapped IO, move to Longhorn 2022-09-30 10:21:10 +03:00
109859e07b elastic-system: Reduce replica count for Kibana 2022-09-28 11:01:08 +03:00
7e518da638 elastic-system: Make Kibana healthcheck work with anonymous auth 2022-09-28 11:00:38 +03:00
5ef5e14866 prometheus-operator: Specify priorityClassName: system-node-critical for node-exporters 2022-09-28 10:33:44 +03:00
310b2faaef prometheus-operator: Add node label to node-exporters 2022-09-28 09:32:31 +03:00
6b65de65d4 Move kube-state-metrics 2022-09-26 15:50:58 +03:00
02d1236eba elastic-system: Add Syslog ingestion 2022-09-23 16:37:29 +03:00
610ce0d490 elastic-system: Bump version to 2.4.0 2022-09-23 16:16:22 +03:00
051e300359 Update tech mapping 2022-09-21 17:12:24 +03:00
5b11b7f3a6 phpmyadmin: Use 6446 for MySQL Operator instances 2022-09-21 11:38:13 +03:00
546dc71450 prometheus-operator: Fix SNMP for older HP printers 2022-09-20 23:26:09 +03:00
26a35cd0c3 prometheus-operator: Add snmp_ prefix 2022-09-20 17:09:26 +03:00
790ffa175b prometheus-operator: Fix Alertmanager integration
2022-09-20 12:22:49 +03:00
9a672d7ef3 logging: Bump ZincSearch memory limit 2022-09-18 10:05:54 +03:00
d1cb00ff83 Reduce Filebeat logging verbosity 2022-09-17 08:06:42 +03:00
9cc39fcd17 argocd: Add members repo 2022-09-17 08:06:19 +03:00
ae8d03ec03 argocd: Add elastic-system 2022-09-17 08:05:47 +03:00
bf9d063b2c mysql-operator: Bump to version 8.0.30-2.0.6 2022-09-16 08:41:07 +03:00
2efaf7b456 mysql-operator: Fix network policy 2022-09-16 08:40:31 +03:00
c4208037e2 logging: Replace Graylog with ZincSearch 2022-09-16 08:34:53 +03:00
edcb6399df elastic-system: Fixes and cleanups 2022-09-16 08:24:13 +03:00
262 changed files with 30904 additions and 74355 deletions


@@ -1,10 +0,0 @@
---
kind: pipeline
type: kubernetes
name: gitleaks
steps:
- name: gitleaks
  image: zricethezav/gitleaks
  commands:
  - gitleaks detect --source=/drone/src

5
.gitignore vendored

@@ -1,5 +1,10 @@
*.keys
*secrets.yml
*secret.yml
*.swp
*.save
*.1
### IntelliJ IDEA ###
.idea
*.iml

170
CLUSTER.md Normal file

@@ -0,0 +1,170 @@
# Kubernetes cluster
Kubernetes hosts run on [PVE Cluster](https://wiki.k-space.ee/en/hosting/proxmox). Hosts are listed in Ansible [inventory](ansible/inventory.yml).
## `kubectl`
- Authorization [ACLs](cluster-role-bindings.yml)
- [Troubleshooting `no such host`](#systemd-resolved-issues)
Authenticate to auth.k-space.ee:
```bash
kubectl krew install oidc-login
mkdir -p ~/.kube
cat << EOF > ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1EVXdNakEzTXpVMU1Wb1hEVE15TURReU9UQTNNelUxTVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS2J2CjY3UFlXVHJMc3ZCQTZuWHUvcm55SlVhNnppTnNWTVN6N2w4ekhxM2JuQnhqWVNPUDJhN1RXTnpUTmZDanZBWngKTmlNbXJya1hpb2dYQWpVVkhSUWZlYm81TFIrb0JBOTdLWlcrN01UMFVJRXBuWVVaaTdBRHlaS01vcEJFUXlMNwp1SlU5UDhnNUR1T29FRHZieGJSMXFuV1JZRXpteFNmSFpocllpMVA3bFd4emkxR243eGRETFZaMjZjNm0xR3Y1CnViRjZyaFBXK1JSVkhiQzFKakJGeTBwRXdhYlUvUTd0Z2dic0JQUjk5NVZvMktCeElBelRmbHhVanlYVkJ3MjEKU2d3ZGI1amlpemxEM0NSbVdZZ0ZrRzd0NTVZeGF3ZmpaQjh5bW4xYjhUVjkwN3dRcG8veU8zM3RaaEE3L3BFUwpBSDJYeDk5bkpMbFVGVUtSY1A4Q0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZKNnZKeVk1UlJ1aklQWGxIK2ZvU3g2QzFRT2RNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQ04zcGtCTVM3ekkrbUhvOWdTZQp6SzdXdjl3bXlCTVE5Q3crQXBSNnRBQXg2T1VIN0d1enc5TTV2bXNkYjkrYXBKMHBlZFB4SUg3YXZ1aG9SUXNMCkxqTzRSVm9BMG9aNDBZV3J3UStBR0dvdkZuaWNleXRNcFVSNEZjRXc0ZDRmcGl6V3d0TVNlRlRIUXR6WG84V2MKNFJGWC9xUXNVR1NWa01PaUcvcVVrSFpXQVgyckdhWXZ1Tkw2eHdSRnh5ZHpsRTFSUk56TkNvQzVpTXhjaVRNagpackEvK0pqVEFWU2FuNXZnODFOSmthZEphbmNPWmEwS3JEdkZzd1JJSG5CMGpMLzh3VmZXSTV6czZURU1VZUk1ClF6dU01QXUxUFZ4VXZJUGhlMHl6UXZjWDV5RlhnMkJGU3MzKzJBajlNcENWVTZNY2dSSTl5TTRicitFTUlHL0kKY0pjPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://master.kube.k-space.ee:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: oidc
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: oidc
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
        - oidc-login
        - get-token
        - --oidc-issuer-url=https://auth.k-space.ee/
        - --oidc-client-id=passmower.kubelogin
        - --oidc-use-pkce
        - --oidc-extra-scope=profile,email,groups
        - --listen-address=127.0.0.1:27890
      command: kubectl
      env: null
      provideClusterInfo: false
EOF
# Test it:
kubectl get nodes # opens browser for authentication
```
### systemd-resolved issues
```sh
Unable to connect to the server: dial tcp: lookup master.kube.k-space.ee on 127.0.0.53:53: no such host
```
```
Network → VPN → `IPv4` → Other nameservers (Muud nimeserverid): `172.21.0.1`
Network → VPN → `IPv6` → Other nameservers (Muud nimeserverid): `2001:bb8:4008:21::1`
Network → VPN → `IPv4` → Search domains (Otsingudomeenid): `kube.k-space.ee`
Network → VPN → `IPv6` → Search domains (Otsingudomeenid): `kube.k-space.ee`
```
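The same nameserver and search-domain settings can also be applied from a terminal; a minimal sketch assuming a NetworkManager-managed VPN connection (the connection name `VPN` is a placeholder):
```bash
# Placeholder connection name; list real ones with: nmcli connection show
nmcli connection modify VPN \
  ipv4.dns 172.21.0.1 ipv4.dns-search kube.k-space.ee \
  ipv6.dns "2001:bb8:4008:21::1" ipv6.dns-search kube.k-space.ee
nmcli connection up VPN  # re-activate to apply
```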
## Cluster formation
Created Ubuntu 22.04 VM-s on Proxmox with local storage.
Added some ARM64 workers by using Ubuntu 22.04 server on Raspberry Pi.
After machines have booted up and you can reach them via SSH:
```
# Disable Ubuntu caching DNS resolver
systemctl disable systemd-resolved.service
systemctl stop systemd-resolved
rm -fv /etc/resolv.conf
cat > /etc/resolv.conf << EOF
nameserver 1.1.1.1
nameserver 8.8.8.8
EOF
# Disable multipathd as Longhorn handles that itself
systemctl mask multipathd snapd
systemctl disable --now multipathd snapd bluetooth ModemManager hciuart wpa_supplicant packagekit
# Permit root login
sed -i -e 's/PermitRootLogin no/PermitRootLogin without-password/' /etc/ssh/sshd_config
systemctl reload ssh
cat ~ubuntu/.ssh/authorized_keys > /root/.ssh/authorized_keys
userdel -f ubuntu
apt-get install -yqq linux-image-generic
apt-get remove -yq cloud-init linux-image-*-kvm
```
On master:
```
kubeadm init --token-ttl=120m --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint "master.kube.k-space.ee:6443" --upload-certs --apiserver-cert-extra-sans master.kube.k-space.ee --node-name master1.kube.k-space.ee
```
For the `kubeadm join` command specify FQDN via `--node-name $(hostname -f)`.
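A join then looks roughly like the sketch below; the token and CA cert hash are placeholders printed by `kubeadm init` (or regenerated with `kubeadm token create --print-join-command`):
```bash
kubeadm join master.kube.k-space.ee:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --node-name $(hostname -f)
```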
Set AZ labels:
```
for j in $(seq 1 9); do
for t in master mon worker storage; do
kubectl label nodes ${t}${j}.kube.k-space.ee topology.kubernetes.io/zone=node${j}
done
done
```
After forming the cluster add taints:
```bash
for j in $(seq 1 9); do
kubectl label nodes worker${j}.kube.k-space.ee node-role.kubernetes.io/worker=''
done
for j in $(seq 1 4); do
kubectl taint nodes mon${j}.kube.k-space.ee dedicated=monitoring:NoSchedule
kubectl label nodes mon${j}.kube.k-space.ee dedicated=monitoring
done
for j in $(seq 1 4); do
kubectl taint nodes storage${j}.kube.k-space.ee dedicated=storage:NoSchedule
kubectl label nodes storage${j}.kube.k-space.ee dedicated=storage
done
```
For `arm64` nodes add suitable taint to prevent scheduling non-multiarch images on them:
```bash
kubectl taint nodes worker9.kube.k-space.ee arch=arm64:NoSchedule
```
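Multi-arch workloads can still opt in to the tainted node with a matching toleration; a minimal pod-spec fragment (hypothetical example, not tied to any specific app here):
```yaml
# Fragment of a pod spec tolerating the arm64 taint set above
tolerations:
  - key: arch
    operator: Equal
    value: arm64
    effect: NoSchedule
```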
For door controllers:
```
for j in ground front back; do
kubectl taint nodes door-${j}.kube.k-space.ee dedicated=door:NoSchedule
kubectl label nodes door-${j}.kube.k-space.ee dedicated=door
kubectl taint nodes door-${j}.kube.k-space.ee arch=arm64:NoSchedule
done
```
To reduce wear on storage:
```
echo StandardOutput=null >> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
systemctl restart kubelet
```
## Technology mapping
Our self-hosted Kubernetes stack compared to AWS based deployments:
| Hipster startup | Self-hosted hackerspace | Purpose |
|-------------------|-------------------------------------|---------------------------------------------------------------------|
| AWS ALB | Traefik | Reverse proxy also known as ingress controller in Kubernetes jargon |
| AWS AMP | Prometheus Operator | Monitoring and alerting |
| AWS CloudTrail | ECK Operator | Log aggregation |
| AWS DocumentDB | MongoDB Community Operator | Highly available NoSQL database |
| AWS EBS | Longhorn | Block storage for arbitrary applications needing persistent storage |
| AWS EC2 | Proxmox | Virtualization layer |
| AWS ECR | Harbor | Docker registry |
| AWS EKS | kubeadm | Provision Kubernetes master nodes |
| AWS NLB | MetalLB | L2/L3 level load balancing |
| AWS RDS for MySQL | MySQL Operator | Provision highly available relational databases |
| AWS Route53 | Bind and RFC2136 | DNS records and Let's Encrypt DNS validation |
| AWS S3 | Minio Operator | Highly available object storage |
| AWS VPC | Calico | Overlay network |
| Dex | Passmower | ACL mapping and OIDC provider which integrates with GitHub/Samba |
| GitHub Actions | Woodpecker | Build Docker images |
| GitHub | Gitea | Source code management, issue tracking |
| GitHub OAuth2 | Samba (Active Directory compatible) | Source of truth for authentication and authorization |
| Gmail | Wildduck | E-mail |


@@ -10,3 +10,4 @@ this Git repository happen:
* Song Meo <songmeo@k-space.ee>
* Rasmus Kallas <rasmus@k-space.ee>
* Kristjan Kuusk <kkuusk@k-space.ee>
* Erki Aas <eaas@k-space.ee>

280
README.md

@@ -1,258 +1,46 @@
# Kubernetes cluster manifests
# k-space.ee infrastructure
Kubernetes manifests, Ansible [playbooks](ansible/README.md), and documentation for K-SPACE services.
## Introduction
<!-- TODO: Docs for adding to ArgoCD (auto-)sync -->
- Repo is deployed with [ArgoCD](https://argocd.k-space.ee). For `kubectl` access, see [CLUSTER.md](CLUSTER.md#kubectl).
- Debugging Kubernetes [on Wiki](https://wiki.k-space.ee/en/hosting/debugging-kubernetes)
- Need help? → [`#kube`](https://k-space-ee.slack.com/archives/C02EYV1NTM2)
These are the Kubernetes manifests of services running on k-space.ee domains:
Jump to docs: [inventory-app](hackerspace/README.md) / [cameras](camtiler/README.md) / [doors](https://wiki.k-space.ee/en/hosting/doors) / [list of apps](https://auth.k-space.ee) // [all infra](ansible/inventory.yml) / [network](https://wiki.k-space.ee/en/hosting/network/sensitive) / [retro](https://wiki.k-space.ee/en/hosting/retro) / [non-infra](https://wiki.k-space.ee)
- [Authelia](https://auth.k-space.ee) for authentication
- [Drone.io](https://drone.k-space.ee) for building Docker images
- [Harbor](https://harbor.k-space.ee) for hosting Docker images
- [ArgoCD](https://argocd.k-space.ee) for deploying Kubernetes manifests and
Helm charts into the cluster
- [camtiler](https://cams.k-space.ee) for cameras
- [Longhorn Dashboard](https://longhorn.k-space.ee) for administering
Longhorn storage
- [Kubernetes Dashboard](https://kubernetes-dashboard.k-space.ee/) for read-only overview
of the Kubernetes cluster
- [Wildduck Webmail](https://webmail.k-space.ee/)
Tip: Search the repo for `kind: xyz` for examples.
Most endpoints are protected by OIDC authentication or Authelia SSO middleware.
## Supporting services
- Build [Git](https://git.k-space.ee) repositories with [Woodpecker](https://woodpecker.k-space.ee).
- Passmower: Authz with `kind: OIDCClient` (or `kind: OIDCMiddlewareClient`[^authz]).
- Traefik[^nonginx]: Expose services with `kind: Service` + `kind: Ingress` (TLS and DNS **included**).
### Additional
- bind: Manage _additional_ DNS records with `kind: DNSEndpoint` (see the sketch below).
- [Prometheus](https://wiki.k-space.ee/en/hosting/monitoring): Collect metrics with `kind: PodMonitor` (alerts with `kind: PrometheusRule`).
- [Slack bots](SLACK.md) and Kubernetes [CLUSTER.md](CLUSTER.md) itself.
<!-- TODO: Redirects: external-dns.alpha.kubernetes.io/hostname + in -extras.yaml: IngressRoute and Middleware -->
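For the `kind: DNSEndpoint` records mentioned above, a minimal sketch using the external-dns CRD (the record name and target IP are hypothetical):
```yaml
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: example
spec:
  endpoints:
    - dnsName: example.k-space.ee   # hypothetical record
      recordType: A
      recordTTL: 300
      targets:
        - 193.40.103.1              # placeholder target IP
```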
## Cluster access
[^nonginx]: No nginx annotations! Use `kind: Ingress` instead. `IngressRoute` is not used as it doesn't support [`external-dns`](bind/README.md) out of the box.
[^authz]: Applications should use OpenID Connect (`kind: OIDCClient`) for authentication, wherever possible. If not possible, use `kind: OIDCMiddlewareClient`, which provides authentication via a Traefik middleware (`traefik.ingress.kubernetes.io/router.middlewares: passmower-proxmox@kubernetescrd`). Sometimes you might use both for extra security.
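A minimal `kind: OIDCClient` sketch follows; the apiVersion and field names are assumptions from memory of Passmower's CRD and may not match the current schema, so search the repo for `kind: OIDCClient` for authoritative examples:
```yaml
apiVersion: codemowers.cloud/v1beta1  # assumed API group
kind: OIDCClient
metadata:
  name: example
spec:
  displayName: Example app           # shown in the auth.k-space.ee app listing
  uri: https://example.k-space.ee    # hypothetical application URL
  redirectUris:
    - https://example.k-space.ee/oidc/callback
  allowedGroups:
    - k-space:floor
  grantTypes:
    - authorization_code
  responseTypes:
    - code
```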
General discussion is happening in the `#kube` Slack channel.
<!-- Linked to by https://wiki.k-space.ee/e/en/hosting/storage -->
### Databases / -stores:
- KeyDB: `kind: KeydbClaim` (replaces Redis[^redisdead])
- Dragonfly: `kind: Dragonfly` (replaces Redis[^redisdead])
- Longhorn: `storageClassName: longhorn` (filesystem storage)
- Mongo[^mongoproblems]: `kind: MongoDBCommunity` (NAS* `inventory-mongodb`)
- Minio S3: `kind: MinioBucketClaim` with `class: dedicated` (NAS*: `class: external`); see the sketch after this list
- MariaDB*: search for `mysql`, `mariadb`[^mariadb] (replaces MySQL)
- Postgres*: hardcoded to [harbor/application.yml](harbor/application.yml)
For bootstrap access obtain `/etc/kubernetes/admin.conf` from one of the master
nodes and place it under `~/.kube/config` on your machine.
\* External, hosted directly on [nas.k-space.ee](https://wiki.k-space.ee/en/hosting/storage)
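For illustration, a `MinioBucketClaim` likely looks like the sketch below; the apiVersion and field names are assumed from Codemowers operator conventions, verify against in-repo examples:
```yaml
apiVersion: codemowers.cloud/v1beta1  # assumed API group
kind: MinioBucketClaim
metadata:
  name: example
spec:
  capacity: 10Gi      # quota for the bucket
  class: dedicated    # use `external` for buckets on nas.k-space.ee
```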
Once Authelia is working, OIDC access for others can be enabled by
running the following on Kubernetes masters:
[^mariadb]: As of 2024-07-30 used by auth, authelia, bitwarden, etherpad, freescout, git, grafana, nextcloud, wiki, woodpecker
```bash
patch /etc/kubernetes/manifests/kube-apiserver.yaml - << EOF
@@ -23,6 +23,10 @@
     - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
     - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
     - --etcd-servers=https://127.0.0.1:2379
+    - --oidc-issuer-url=https://auth.k-space.ee
+    - --oidc-client-id=kubelogin
+    - --oidc-username-claim=preferred_username
+    - --oidc-groups-claim=groups
     - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
     - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
     - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
EOF
sudo systemctl daemon-reload
systemctl restart kubelet
```
[^redisdead]: Redis has been replaced as the redis-operator couldn't manage itself: it didn't reconcile after reboots, the master URI was empty, and clients complained about missing masters. ArgoCD still hosts its own Redis.
Afterwards the following can be used to talk to the Kubernetes cluster using
OIDC credentials:
[^mongoproblems]: Mongo problems: Incompatible with rawfile csi (wiredtiger.wt corrupts), complicated resizing (PVCs from statefulset PVC template).
```bash
kubectl krew install oidc-login
mkdir -p ~/.kube
cat << EOF > ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1EVXdNakEzTXpVMU1Wb1hEVE15TURReU9UQTNNelUxTVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS2J2CjY3UFlXVHJMc3ZCQTZuWHUvcm55SlVhNnppTnNWTVN6N2w4ekhxM2JuQnhqWVNPUDJhN1RXTnpUTmZDanZBWngKTmlNbXJya1hpb2dYQWpVVkhSUWZlYm81TFIrb0JBOTdLWlcrN01UMFVJRXBuWVVaaTdBRHlaS01vcEJFUXlMNwp1SlU5UDhnNUR1T29FRHZieGJSMXFuV1JZRXpteFNmSFpocllpMVA3bFd4emkxR243eGRETFZaMjZjNm0xR3Y1CnViRjZyaFBXK1JSVkhiQzFKakJGeTBwRXdhYlUvUTd0Z2dic0JQUjk5NVZvMktCeElBelRmbHhVanlYVkJ3MjEKU2d3ZGI1amlpemxEM0NSbVdZZ0ZrRzd0NTVZeGF3ZmpaQjh5bW4xYjhUVjkwN3dRcG8veU8zM3RaaEE3L3BFUwpBSDJYeDk5bkpMbFVGVUtSY1A4Q0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZKNnZKeVk1UlJ1aklQWGxIK2ZvU3g2QzFRT2RNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQ04zcGtCTVM3ekkrbUhvOWdTZQp6SzdXdjl3bXlCTVE5Q3crQXBSNnRBQXg2T1VIN0d1enc5TTV2bXNkYjkrYXBKMHBlZFB4SUg3YXZ1aG9SUXNMCkxqTzRSVm9BMG9aNDBZV3J3UStBR0dvdkZuaWNleXRNcFVSNEZjRXc0ZDRmcGl6V3d0TVNlRlRIUXR6WG84V2MKNFJGWC9xUXNVR1NWa01PaUcvcVVrSFpXQVgyckdhWXZ1Tkw2eHdSRnh5ZHpsRTFSUk56TkNvQzVpTXhjaVRNagpackEvK0pqVEFWU2FuNXZnODFOSmthZEphbmNPWmEwS3JEdkZzd1JJSG5CMGpMLzh3VmZXSTV6czZURU1VZUk1ClF6dU01QXUxUFZ4VXZJUGhlMHl6UXZjWDV5RlhnMkJGU3MzKzJBajlNcENWVTZNY2dSSTl5TTRicitFTUlHL0kKY0pjPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://master.kube.k-space.ee:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: oidc
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: oidc
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
        - oidc-login
        - get-token
        - --oidc-issuer-url=https://auth.k-space.ee
        - --oidc-client-id=kubelogin
        - --oidc-use-pkce
        - --oidc-extra-scope=profile,email,groups
        - --listen-address=127.0.0.1:27890
      command: kubectl
      env: null
      provideClusterInfo: false
EOF
```
For access control mapping see [cluster-role-bindings.yml](cluster-role-bindings.yml)
# Technology mapping
Our self-hosted Kubernetes stack compared to AWS based deployments:
| Hipster startup | Self-hosted hackerspace | Purpose |
|-----------------|-------------------------------------|---------------------------------------------------------------------|
| AWS EC2 | Proxmox | Virtualization layer |
| AWS EKS | kubeadm | Provision Kubernetes master nodes |
| AWS EBS | Longhorn | Block storage for arbitrary applications needing persistent storage |
| AWS NLB | MetalLB | L2/L3 level load balancing |
| AWS ALB | Traefik | Reverse proxy also known as ingress controller in Kubernetes jargon |
| AWS ECR | Harbor | Docker registry |
| AWS DocumentDB | MongoDB | NoSQL database |
| AWS S3 | Minio | Object storage |
| GitHub OAuth2 | Samba (Active Directory compatible) | Source of truth for authentication and authorization |
| Dex | Authelia | ACL mapping and OIDC provider which integrates with GitHub/Samba |
| GitHub | Gitea | Source code management, issue tracking |
| GitHub Actions | Drone | Build Docker images |
| Gmail | Wildduck | E-mail |
| AWS Route53 | Bind and RFC2136 | DNS records and Let's Encrypt DNS validation |
| AWS VPC | Calico | Overlay network |
External dependencies running as classic virtual machines:
- Samba as Authelia's source of truth
- Bind as DNS server
## Adding applications
Deploy applications via [ArgoCD](https://argocd.k-space.ee)
We use Traefik with Authelia for Ingress.
Applications, where possible and where applicable, should use `Remote-User`
authentication. This prevents application exposure on the public Internet.
Otherwise use OpenID Connect for authentication;
see Argo itself as an example of how that is done.
See `kspace-camtiler/ingress.yml` for a commented Ingress example.
Note that we do not use IngressRoute objects because they don't
support `external-dns` out of the box.
Do NOT add nginx annotations, we use Traefik.
Do NOT manually add DNS records, they are added by `external-dns`.
Do NOT manually create Certificate objects,
these should be handled by `tls:` section in Ingress.
## Cluster formation
Create Ubuntu 20.04 VM-s on Proxmox with local storage.
After machines have booted up and you can reach them via SSH:
```bash
# Enable required kernel modules
cat > /etc/modules << EOF
overlay
br_netfilter
EOF
cat /etc/modules | xargs -L 1 -t modprobe
# Finetune sysctl:
cat > /etc/sysctl.d/99-k8s.conf << EOF
net.ipv4.conf.all.accept_redirects = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system
# Disable Ubuntu caching DNS resolver
systemctl disable systemd-resolved.service
systemctl stop systemd-resolved
rm -fv /etc/resolv.conf
cat > /etc/resolv.conf << EOF
nameserver 1.1.1.1
nameserver 8.8.8.8
EOF
# Disable multipathd as Longhorn handles that itself
systemctl mask multipathd
systemctl disable multipathd
systemctl stop multipathd
# Disable Snapcraft
systemctl mask snapd
systemctl disable snapd
systemctl stop snapd
# Permit root login
sed -i -e 's/PermitRootLogin no/PermitRootLogin without-password/' /etc/ssh/sshd_config
systemctl reload ssh
cat << EOF > /root/.ssh/authorized_keys
sk-ecdsa-sha2-nistp256@openssh.com AAAAInNrLWVjZHNhLXNoYTItbmlzdHAyNTZAb3BlbnNzaC5jb20AAAAIbmlzdHAyNTYAAABBBD4/e9SWYWYoNZMkkF+NirhbmHuUgjoCap42kAq0pLIXFwIqgVTCre03VPoChIwBClc8RspLKqr5W3j0fG8QwnQAAAAEc3NoOg== lauri@lauri-x13
EOF
userdel -f ubuntu
apt-get remove -yq cloud-init
```
Install packages, for Raspbian set `OS=Debian_11`
```bash
OS=xUbuntu_20.04
VERSION=1.23
cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /
EOF
cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /
EOF
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers-cri-o.gpg add -
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -yqq apt-transport-https curl cri-o cri-o-runc kubelet=1.23.5-00 kubectl=1.23.5-00 kubeadm=1.23.5-00
sudo systemctl daemon-reload
sudo systemctl enable crio --now
apt-mark hold kubelet kubeadm kubectl
sed -i -e 's/unqualified-search-registries = .*/unqualified-search-registries = ["docker.io"]/' /etc/containers/registries.conf
```
On master:
```
kubeadm init --token-ttl=120m --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint "master.kube.k-space.ee:6443" --upload-certs --apiserver-cert-extra-sans master.kube.k-space.ee --node-name master1.kube.k-space.ee
```
For the `kubeadm join` command specify FQDN via `--node-name $(hostname -f)`.
After forming the cluster add taints:
```bash
for j in $(seq 1 9); do
kubectl label nodes worker${j}.kube.k-space.ee node-role.kubernetes.io/worker=''
done
for j in $(seq 1 3); do
kubectl taint nodes mon${j}.kube.k-space.ee dedicated=monitoring:NoSchedule
kubectl label nodes mon${j}.kube.k-space.ee dedicated=monitoring
done
for j in $(seq 1 4); do
kubectl taint nodes storage${j}.kube.k-space.ee dedicated=storage:NoSchedule
kubectl label nodes storage${j}.kube.k-space.ee dedicated=storage
done
```
On Raspberry Pi you need to take additional steps:
* Manually enable cgroups by appending
`cgroup_memory=1 cgroup_enable=memory` to `/boot/cmdline.txt`,
* Disable swap with `swapoff -a; apt-get purge -y dphys-swapfile`
* For mounting Longhorn volumes on Raspbian install `open-iscsi`
For `arm64` nodes add suitable taint to prevent scheduling non-multiarch images on them:
```bash
kubectl taint nodes worker9.kube.k-space.ee arch=arm64:NoSchedule
```
***
_This page is referenced by wiki [front page](https://wiki.k-space.ee) as **the** technical documentation for infra._

28
SLACK.md Normal file

@@ -0,0 +1,28 @@
## Slack bots
### Doorboy3
https://api.slack.com/apps/A05NDB6FVJQ
Slack app author: rasmus
Managed by inventory-app:
- Incoming (open-commands) to `/api/slack/doorboy`, inventory-app authorizes based on command originating from #members or #work-shop && oidc access group (floor, workshop).
- Posts logs to a private channel. Restricted to 193.40.103.0/24.
Secrets as `SLACK_DOORLOG_CALLBACK` and `SLACK_VERIFICATION_TOKEN`.
### oidc-gateway
https://api.slack.com/apps/A05DART9PP1
Slack app author: eaas
Managed by passmower:
- Links e-mail to slackId.
- Login via Slack (not enabled).
Secrets as `slackId` and `slack-client`.
### podi-podi uuenduste spämmikoobas
https://api.slack.com/apps/A033RE9TUFK
Slack app author: rasmus
Posts Prometheus alerts to a private channel.
Secret as `slack-secrets`.

5
ansible/README.md Normal file

@@ -0,0 +1,5 @@
# TODO:
- inventory
- running playbooks NB! about PWD
- ssh_config; updating
Include ssh_config (with known_hosts) to access all machines listed.

15
ansible/ansible.cfg Normal file

@@ -0,0 +1,15 @@
[defaults]
inventory = inventory.yml
nocows = 1
pattern =
deprecation_warnings = False
fact_caching = jsonfile
fact_caching_connection = ~/.ansible/k-space-fact-cache
fact_caching_timeout = 7200
remote_user = root
[ssh_connection]
control_path = ~/.ssh/cm-%%r@%%h:%%p
ssh_args = -o ControlMaster=auto -o ControlPersist=8h -F ssh_config
pipelining = True
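Both `inventory = inventory.yml` and the `-F ssh_config` argument are relative paths, so commands are run from inside `ansible/` (presumably the "NB! about PWD" in the README above); a usage sketch:
```bash
cd ansible/
ansible-playbook doors.yml           # run a playbook against hosts in inventory.yml
ansible doors -m shell -a "uptime"   # ad-hoc command against the doors group
```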

76
ansible/bind-primary.yml Normal file

@@ -0,0 +1,76 @@
- name: Setup primary nameserver
  hosts: ns1.k-space.ee
  tasks:
    - name: Make sure bind9 is installed
      ansible.builtin.apt:
        name: bind9
        state: present
    - name: Configure Bind
      register: bind
      copy:
        dest: /etc/bind/named.conf
        content: |
          # This file is managed by Ansible
          # https://git.k-space.ee/k-space/kube/src/branch/master/ansible-bind-primary.yml
          # Do NOT modify manually
          include "/etc/bind/named.conf.local";
          include "/etc/bind/readwrite.key";
          include "/etc/bind/readonly.key";
          options {
            directory "/var/cache/bind";
            version "";
            listen-on { any; };
            listen-on-v6 { any; };
            pid-file "/var/run/named/named.pid";
            notify explicit; also-notify { 172.20.53.1; 172.20.53.2; 172.20.53.3; };
            allow-recursion { none; };
            recursion no;
            check-names master ignore;
            dnssec-validation no;
            auth-nxdomain no;
          };
          # https://kb.isc.org/docs/aa-00723
          acl allowed {
            172.20.3.0/24;
            172.20.4.0/24;
          };
          acl rejected { !allowed; any; };
          zone "." {
            type hint;
            file "/var/lib/bind/db.root";
          };
          zone "k-space.ee" {
            type master;
            file "/var/lib/bind/db.k-space.ee";
            allow-update { !rejected; key readwrite; };
            allow-transfer { !rejected; key readonly; key readwrite; };
          };
          zone "k6.ee" {
            type master;
            file "/var/lib/bind/db.k6.ee";
            allow-update { !rejected; key readwrite; };
            allow-transfer { !rejected; key readonly; key readwrite; };
          };
          zone "kspace.ee" {
            type master;
            file "/var/lib/bind/db.kspace.ee";
            allow-update { !rejected; key readwrite; };
            allow-transfer { !rejected; key readonly; key readwrite; };
          };
    - name: Check Bind config
      ansible.builtin.shell: "named-checkconf"
    - name: Reload Bind config
      service:
        name: bind9
        state: reloaded
      when: bind.changed
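The `readwrite` key above gates dynamic updates (`allow-update`) and the `readonly` key zone transfers. A usage sketch for a dynamic update; the HMAC algorithm, secret, and record values are placeholders to be read from `/etc/bind/readwrite.key`:
```bash
# Update a record in k-space.ee, authenticated with the readwrite TSIG key
nsupdate -y hmac-sha256:readwrite:<base64-secret> << EOF
server ns1.k-space.ee
zone k-space.ee
update add example.k-space.ee. 300 A 172.20.3.100
send
EOF
```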

65
ansible/doors.yml Normal file

@@ -0,0 +1,65 @@
# ansible doors -m shell -a "ctr image pull harbor.k-space.ee/k-space/mjpg-streamer:latest"
# journalctl -u mjpg_streamer@video0.service -f
# Referenced/linked and documented by https://wiki.k-space.ee/en/hosting/doors
- name: Setup doors
  hosts: doors
  tasks:
    - name: Make sure containerd is installed
      ansible.builtin.apt:
        name: containerd
        state: present
    - name: Copy systemd service for Doorboy controller # https://git.k-space.ee/k-space/godoor
      copy:
        dest: /etc/systemd/system/godoor.service
        content: |
          [Unit]
          Description=Doorboy service
          Documentation=https://git.k-space.ee/k-space/godoor
          After=network.target
          [Service]
          Environment=IMAGE=harbor.k-space.ee/k-space/godoor:latest
          ExecStartPre=-ctr task kill --signal=9 %N
          ExecStartPre=-ctr task rm %N
          ExecStartPre=-ctr c rm %N
          ExecStartPre=-ctr image pull $IMAGE
          ExecStart=ctr run --rm --pid-file=/run/%N.pid --privileged --read-only --env-file=/etc/godoor --env=KDOORPI_API_ALLOWED=https://doorboy-proxy.k-space.ee/allowed --env=KDOORPI_API_LONGPOLL=https://doorboy-proxy.k-space.ee/longpoll --env=KDOORPI_API_SWIPE=https://doorboy-proxy.k-space.ee/swipe --env=KDOORPI_DOOR=%H --net-host --cwd /app $IMAGE %N /godoor
          ExecStopPost=ctr task rm %N
          ExecStopPost=ctr c rm %N
          Restart=always
          [Install]
          WantedBy=multi-user.target
    - name: Enable Doorboy controller
      ansible.builtin.systemd:
        state: restarted
        daemon_reload: yes
        name: godoor.service
    - name: Copy systemd service for mjpg-streamer # https://git.k-space.ee/k-space/mjpg-steramer
      copy:
        dest: /etc/systemd/system/mjpg_streamer@.service
        content: |
          [Unit]
          Description=A server for streaming Motion-JPEG from a video capture device
          After=network.target
          ConditionPathExists=/dev/%I
          [Service]
          Environment=IMAGE=harbor.k-space.ee/k-space/mjpg-streamer:latest
          StandardOutput=tty
          Type=forking
          ExecStartPre=-ctr task kill --signal=9 %p_%i
          ExecStartPre=-ctr task rm %p_%i
          ExecStartPre=-ctr c rm %p_%i
          ExecStartPre=-ctr image pull $IMAGE
          ExecStart=ctr run --tty -d --rm --pid-file=/run/%i.pid --privileged --read-only --net-host $IMAGE %p_%i /usr/local/bin/mjpg_streamer -i 'input_uvc.so -d /dev/%I -r 1280x720 -f 10' -o 'output_http.so -w /usr/share/mjpg_streamer/www'
          ExecStopPost=ctr task rm %p_%i
          ExecStopPost=ctr c rm %p_%i
          PIDFile=/run/%i.pid
          [Install]
          WantedBy=multi-user.target
    - name: Enable mjpg-streamer
      ansible.builtin.systemd:
        state: restarted
        daemon_reload: yes
        name: mjpg_streamer@video0.service

ansible/inventory.yml Normal file

@@ -0,0 +1,83 @@
# This file is linked from /README.md as 'all infra'.
##### Not otherwise linked:
# Homepage: https://git.k-space.ee/k-space/homepage (on GitLab)
# Slack: https://k-space-ee.slack.com
# Routers/Switches: https://git.k-space.ee/k-space/rosdump
all:
vars:
admins:
- lauri
- eaas
extra_admins: []
children:
# https://wiki.k-space.ee/en/hosting/storage
nasgroup:
hosts:
nas.k-space.ee: { ansible_host: 172.23.0.7 }
offsite:
ansible_host: 78.28.64.17
ansible_port: 10648
vars:
offsite_dataset: offsite/backup_zrepl
misc:
children:
nasgroup:
hosts:
# https://git.k-space.ee/k-space/kube: bind/README.md (primary DNS, PVE VM)
ns1.k-space.ee: { ansible_host: 172.20.0.2 }
# https://wiki.k-space.ee/hosting/proxmox (depends on nas.k-space.ee)
proxmox: # aka PVE, Proxmox Virtualization Environment
vars:
extra_admins:
- rasmus
hosts:
pve1: { ansible_host: 172.21.20.1 }
pve2: { ansible_host: 172.21.20.2 }
pve8: { ansible_host: 172.21.20.8 }
pve9: { ansible_host: 172.21.20.9 }
# https://git.k-space.ee/k-space/kube: README.md
# CLUSTER.md (PVE VMs + external nas.k-space.ee)
kubernetes:
children:
masters:
hosts:
master1.kube.k-space.ee: { ansible_host: 172.21.3.51 }
master2.kube.k-space.ee: { ansible_host: 172.21.3.52 }
master3.kube.k-space.ee: { ansible_host: 172.21.3.53 }
kubelets:
children:
mon: # they sit in a privileged VLAN
hosts:
mon1.kube.k-space.ee: { ansible_host: 172.21.3.61 }
mon2.kube.k-space.ee: { ansible_host: 172.21.3.62 }
mon3.kube.k-space.ee: { ansible_host: 172.21.3.63 }
storage: # longhorn, to be replaced with a more direct CSI
hosts:
storage1.kube.k-space.ee: { ansible_host: 172.21.3.71 }
storage2.kube.k-space.ee: { ansible_host: 172.21.3.72 }
storage3.kube.k-space.ee: { ansible_host: 172.21.3.73 }
storage4.kube.k-space.ee: { ansible_host: 172.21.3.74 }
workers:
hosts:
worker1.kube.k-space.ee: { ansible_host: 172.20.3.81 }
worker2.kube.k-space.ee: { ansible_host: 172.20.3.82 }
worker3.kube.k-space.ee: { ansible_host: 172.20.3.83 }
worker4.kube.k-space.ee: { ansible_host: 172.20.3.84 }
worker9.kube.k-space.ee: { ansible_host: 172.21.3.89 } # Nvidia Tegra Jetson-AGX
# https://wiki.k-space.ee/en/hosting/doors
# See also: https://git.k-space.ee/k-space/kube: camtiler/README.md
doors:
vars:
extra_admins:
- arti
hosts:
grounddoor: { ansible_host: 100.102.3.1 }
frontdoor: { ansible_host: 100.102.3.2 }
backdoor: { ansible_host: 100.102.3.3 }
workshopdoor: { ansible_host: 100.102.3.4 }
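
For reference, two common ways this inventory is consumed; the playbook file names match this changeset, while running from the `ansible/` checkout directory is an assumption:
```
# Ad-hoc ping of all door controllers:
ansible -i inventory.yml doors -m ping
# Run the Kubernetes playbook against the worker group only:
ansible-playbook -i inventory.yml kubernetes.yml --limit workers
```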

ansible/known_hosts Normal file

@@ -0,0 +1,27 @@
# Use `ansible-playbook update-ssh-config.yml` to update this file
100.102.3.3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN4SifLddYAz8CasmFwX5TQbiM8atAYMFuDQRchclHM0sq9Pi8wRxSZK8SHON4Y7YFsIY+cXnQ2Wx4FpzKmfJYE= # backdoor
100.102.3.2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE8/E7PDqTrTdU+MFurHkIPzTBTGcSJqXuv5n0Ugd/IlvOr2v+eYi3ma91pSBmF5Hjy9foWypCLZfH+vWMkV0gs= # frontdoor
100.102.3.1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFcH8D2AhnESw3uu2f4EHBhT9rORQQJJ3TlbwN+kro5tRZsZk4p3MKabBiuCSZw2KWjfu0MY4yHSCrUUQrggJDM= # grounddoor
172.21.3.51 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMYy07yLlOiFvXzmVDIULS9VDCMz7T+qOq4M+x8Lo3KEKamI6ZD737mvimPTW6K1FRBzzq67Mq495UnoFKVnQWE= # master1.kube.k-space.ee
172.21.3.52 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKRFfYDaTH58FUw+9stBVsyCviaPCGEbe9Y1a9WKvj98S7m+qU03YvtfPkRfEH/3iXHDvngEDVpJrTWW4y6e6MI= # master2.kube.k-space.ee
172.21.3.53 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIqIepuMkMo/KO3bb4X6lgb6YViAifPmgHXVrbtHwbOZLll5Qqr4pXdLDxkuZsmiE7iZBw2gSzZLcNMGdDEnWrY= # master3.kube.k-space.ee
172.21.3.61 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCJ9XgDz2NEzvjw/nDmRIKUJAmNqzsaXMJn4WFiWfTz1x2HrRcXgY3UXKWUxUvJO1jJ7hIvyE+V/8UtwYRDP1uY= # mon1.kube.k-space.ee
172.21.3.62 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLveng7H/2Gek+HYDYRWFD0Dy+4l/zjrbF2mnnkBI5CFOtqK0zwBh41IlizkpmmI5fqEIXwhLFHZEWXbUvev5oo= # mon2.kube.k-space.ee
172.21.3.63 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMMgOIL43dgCYlwAI2O269iHxo7ymweG7NoXjnk2F529G5mP+mp5We4lDZEJVyLYtemvhQ2hEHI/WVPWy3SNiuM= # mon3.kube.k-space.ee
172.23.0.7 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC15tWIbuBqd4UZLaRbpb6oTlwniS4cg2IYZYe5ys352azj2kzOnvtCGiPo0fynFadwfDHtge9JjK6Efwl87Wgc= # nas.k-space.ee
172.20.0.2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO77ffkJi903aA6cM7HnFfSyYbPP4jkydI/+/tIGeMv+c9BYOE27n+ylNERaEhYkyddIx93MB4M6GYRyQOjLWSc= # ns1.k-space.ee
[78.28.64.17]:10648 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE7J61p3YzsbRAYtXIrhQUeqc47LuVw1I38egHzi/kLG+CFPsyB9krd29yJMyLRjyM+m5qUjoxNiWK/x0g3jKOI= # offsite
172.21.20.1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHLHc3T/J5G1CIf33XeniJk5+D0cpaXe0OkHmpCQ3DoZC3KkFBpA+/U1mlo+qb8xf/GrMj6BMMMLXKSUxbEVGaU= # pve1
172.21.20.2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFGSRetFdHExRT69pHJAcuhqzAu+Xx4K2AEmWJhUZ2JYF7aa0JbltiYQs58Bpx9s9NA793tiHLZXABy56dI+D9Q= # pve2
172.21.20.8 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMzNvX3ga56EELcI9gV7moyFdKllSwb81V2tCWIjhFVSFTo3QKH/gX/MBnjcs+RxeVV3GF7zIIv8492bCvgiO9s= # pve8
172.21.20.9 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNU4YzKSzzUSnAgh4L1DF3dlC1VEaKVaIeTgsL5VJ0UMqjPr+8QMjIvo28cSLfIQYtfoQbt7ASVsm0uDQvKOldM= # pve9
172.21.3.71 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI2jy8EsMo7Voor4URCMdgiEzc0nmYDowV4gB2rZ6hnH7bcKGdaODsCyBH6nvbitgnESCC8136RmdxCnO9/TuJ0= # storage1.kube.k-space.ee
172.21.3.72 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKxa2PbOj7bV0AUkBZuPkQZ/3ZMeh1mUCD+rwB4+sXbvTc+ca+xgcPGdAozbY/cUA4GdaKelhjI9DEC46MeFymY= # storage2.kube.k-space.ee
172.21.3.73 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGYqNHAxwwoZqne/uv5syRb+tEwpbaGeK8oct4IjIHcmPdU32JlMiSqLX7d58t/b8tqE1z2rM4gCc4bpzvNrHMQ= # storage3.kube.k-space.ee
172.21.3.74 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI+FRuwbrUpMDg9gKf6AqcfovEkt8r5SgB4JXEuMD+I6pp+2PfbxMwrXQ8Xg3oHW+poG413KWw4FZOWv2gH4CEQ= # storage4.kube.k-space.ee
172.20.3.81 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPnmGiEWtWnNNcF872fhYKCD07QwOb75BDEwN3fC4QYmBAbiN0iX/UH96r02V5f7uga3a07/xxt5P0cfEOdtQwQ= # worker1.kube.k-space.ee
172.20.3.82 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBkSNAYeugxGvNmV3biY1s0BWPCEw3g3H0VWLomu/vPbg+GN10/A1pfgt62DHFCYDB6QZwkZM6HIFy8y0xhRl9g= # worker2.kube.k-space.ee
172.20.3.83 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBe+A9Bg54UwUvlPguKDyNAsX7mYbnfMOxhK2UP2YofPlzJ0KDUuH5mbmw76XWz0L6jhT6I7hyc0QsFBdO3ug68= # worker3.kube.k-space.ee
172.20.3.84 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKoNIL+kEYphi/yCdhIytxqRaucm2aTzFrmNN4gEjCrn4TK8A46fyqAuwmgyLQFm7RD5qcEKPWP57Cl0DhTU1T4= # worker4.kube.k-space.ee
172.21.3.89 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCoepYYnNMXkZ9dn4RSSMhFFsppPVkzmjkG3z9vK84454XkI4wizmhUlZ0p+Ovx2YbrjbKibfrrtk8RgWUMi0rY= # worker9.kube.k-space.ee
100.102.3.4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMpkSqEOyYrKXChxl6PAV+q0KypOPnKsXoXWO1JSZSIOwAs5YTzt8Q1Ryb+nQnAOlGj1AY1H7sRllTzdv0cA/EM= # workshopdoor

ansible/kubernetes.yml Normal file

@@ -0,0 +1,239 @@
---
# ansible-galaxy install -r requirements.yaml
- name: Install cri-o
hosts:
- worker9.kube.k-space.ee
vars:
CRIO_VERSION: "v1.30"
tasks:
- name: ensure curl is installed
ansible.builtin.apt:
name: curl
state: present
- name: Ensure /etc/apt/keyrings exists
ansible.builtin.file:
path: /etc/apt/keyrings
state: directory
# TODO: fix
# - name: add k8s repo apt key
# ansible.builtin.shell: "curl -fsSL https://pkgs.k8s.io/addons:/cri-o:/stable:/{{ CRIO_VERSION }}/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/cri-o-apt-keyring.gpg"
- name: add k8s repo
ansible.builtin.apt_repository:
repo: "deb [signed-by=/etc/apt/keyrings/cri-o-apt-keyring.gpg] https://pkgs.k8s.io/addons:/cri-o:/stable:/{{ CRIO_VERSION }}/deb/ /"
state: present
filename: cri-o
- name: check current crictl version
command: "/usr/bin/crictl --version"
failed_when: false
changed_when: false
register: crictl_version_check
- name: download crictl
unarchive:
src: "https://github.com/kubernetes-sigs/cri-tools/releases/download/{{ CRIO_VERSION }}/crictl-{{ CRIO_VERSION }}-linux-{{ 'arm64' if ansible_architecture == 'aarch64' else 'amd64' }}.tar.gz"
dest: /tmp
remote_src: true
when: >
crictl_version_check.stdout is not defined or CRIO_VERSION not in crictl_version_check.stdout
register: crictl_download_check
- name: move crictl binary into place
copy:
src: /tmp/crictl
dest: "/usr/bin/crictl"
when: >
crictl_download_check is changed
- name: ensure crio is installed
ansible.builtin.apt:
name: cri-o
state: present
- name: Reconfigure Kubernetes worker nodes
hosts:
- storage
- workers
tasks:
- name: Configure grub defaults
copy:
dest: "/etc/default/grub"
content: |
GRUB_DEFAULT=0
GRUB_TIMEOUT_STYLE=countdown
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash memhp_default_state=online"
GRUB_CMDLINE_LINUX="memhp_default_state=online rootflags=pquota"
register: grub_defaults
when: ansible_architecture == 'x86_64'
- name: Load grub defaults
ansible.builtin.shell: update-grub
when: grub_defaults.changed
- name: Ensure nfs-common is installed
ansible.builtin.apt:
name: nfs-common
state: present
- name: Reconfigure Kubernetes nodes
hosts: kubernetes
vars:
KUBERNETES_VERSION: v1.30.3
IP: "{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}"
tasks:
- name: Remove APT packages
ansible.builtin.apt:
name: "{{ item }}"
state: absent
loop:
- kubelet
- kubeadm
- kubectl
- name: Download kubectl, kubeadm, kubelet
ansible.builtin.get_url:
url: "https://cdn.dl.k8s.io/release/{{ KUBERNETES_VERSION }}/bin/linux/{{ 'arm64' if ansible_architecture == 'aarch64' else 'amd64' }}/{{ item }}"
dest: "/usr/bin/{{ item }}-{{ KUBERNETES_VERSION }}"
mode: '0755'
loop:
- kubelet
- kubectl
- kubeadm
- name: Create /etc/systemd/system/kubelet.service
ansible.builtin.copy:
content: |
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/home/
Wants=network-online.target
After=network-online.target
[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
dest: /etc/systemd/system/kubelet.service
register: kubelet_service
- name: Create symlinks for kubectl, kubeadm, kubelet
ansible.builtin.file:
src: "/usr/bin/{{ item }}-{{ KUBERNETES_VERSION }}"
dest: "/usr/bin/{{ item }}"
state: link
loop:
- kubelet
- kubectl
- kubeadm
register: kubelet
- name: Restart Kubelet
service:
name: kubelet
enabled: true
state: restarted
daemon_reload: true
when: kubelet.changed or kubelet_service.changed
- name: Ensure /var/lib/kubelet exists
ansible.builtin.file:
path: /var/lib/kubelet
state: directory
- name: Configure kubelet
ansible.builtin.template:
src: kubelet.j2
dest: /var/lib/kubelet/config.yaml
mode: '0644'
- name: Ensure /etc/systemd/system/kubelet.service.d/ exists
ansible.builtin.file:
path: /etc/systemd/system/kubelet.service.d
state: directory
- name: Configure kubelet service
ansible.builtin.template:
src: 10-kubeadm.j2
dest: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
mode: '0644'
# TODO: register new node if needed
- name: Disable unnecessary services
ignore_errors: true
loop:
- gdm3
- snapd
- bluetooth
- multipathd
- zram
service:
name: "{{item}}"
state: stopped
enabled: no
- name: Ensure /etc/containers exists
ansible.builtin.file:
path: /etc/containers
state: directory
- name: Reset /etc/containers/registries.conf
ansible.builtin.copy:
content: "unqualified-search-registries = [\"docker.io\"]\n"
dest: /etc/containers/registries.conf
register: registries
- name: Restart CRI-O
service:
name: cri-o
state: restarted
when: registries.changed
- name: Reset /etc/modules
ansible.builtin.copy:
content: |
overlay
br_netfilter
dest: /etc/modules
register: kernel_modules
- name: Load kernel modules
ansible.builtin.shell: "cat /etc/modules | xargs -L 1 -t modprobe"
when: kernel_modules.changed
- name: Reset /etc/sysctl.d/99-k8s.conf
ansible.builtin.copy:
content: |
net.ipv4.conf.all.accept_redirects = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
vm.max_map_count = 524288
fs.inotify.max_user_instances = 1280
fs.inotify.max_user_watches = 655360
dest: /etc/sysctl.d/99-k8s.conf
register: sysctl
- name: Reload sysctl config
ansible.builtin.shell: "sysctl --system"
when: sysctl.changed
- name: Reconfigure kube-apiserver to use Passmower OIDC endpoint
ansible.builtin.template:
src: kube-apiserver.j2
dest: /etc/kubernetes/manifests/kube-apiserver.yaml
mode: '0600'
register: apiserver
when:
- inventory_hostname in groups["masters"]
- name: Restart kube-apiserver
ansible.builtin.shell: "killall kube-apiserver"
when: apiserver.changed
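
The final tasks template the static pod manifest with Passmower OIDC flags (see the kube-apiserver template later in this changeset). A hedged sketch of the matching client-side login via the kubelogin plugin; only the issuer URL and client ID come from the manifest, the rest (plugin installed e.g. via krew, default redirect handling) is assumed:
```
# Requires the kubelogin plugin, e.g. `kubectl krew install oidc-login`.
kubectl config set-credentials oidc \
  --exec-api-version=client.authentication.k8s.io/v1beta1 \
  --exec-command=kubectl \
  --exec-arg=oidc-login \
  --exec-arg=get-token \
  --exec-arg=--oidc-issuer-url=https://auth.k-space.ee/ \
  --exec-arg=--oidc-client-id=passmower.kubelogin
kubectl --user=oidc get nodes
```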

ansible/ssh_config Normal file

@@ -0,0 +1,211 @@
# Use `ansible-playbook update-ssh-config.yml` to update this file
# Use `ssh -F ssh_config ...` to connect to target machine or
# Add `Include ~/path/to/this/kube/ssh_config` in your ~/.ssh/config
Host backdoor 100.102.3.3
User root
Hostname 100.102.3.3
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host frontdoor 100.102.3.2
User root
Hostname 100.102.3.2
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host grounddoor 100.102.3.1
User root
Hostname 100.102.3.1
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host master1.kube.k-space.ee 172.21.3.51
User root
Hostname 172.21.3.51
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host master2.kube.k-space.ee 172.21.3.52
User root
Hostname 172.21.3.52
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host master3.kube.k-space.ee 172.21.3.53
User root
Hostname 172.21.3.53
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host mon1.kube.k-space.ee 172.21.3.61
User root
Hostname 172.21.3.61
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host mon2.kube.k-space.ee 172.21.3.62
User root
Hostname 172.21.3.62
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host mon3.kube.k-space.ee 172.21.3.63
User root
Hostname 172.21.3.63
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host nas.k-space.ee 172.23.0.7
User root
Hostname 172.23.0.7
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host ns1.k-space.ee 172.20.0.2
User root
Hostname 172.20.0.2
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host offsite 78.28.64.17
User root
Hostname 78.28.64.17
Port 10648
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host pve1 172.21.20.1
User root
Hostname 172.21.20.1
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host pve2 172.21.20.2
User root
Hostname 172.21.20.2
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host pve8 172.21.20.8
User root
Hostname 172.21.20.8
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host pve9 172.21.20.9
User root
Hostname 172.21.20.9
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host storage1.kube.k-space.ee 172.21.3.71
User root
Hostname 172.21.3.71
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host storage2.kube.k-space.ee 172.21.3.72
User root
Hostname 172.21.3.72
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host storage3.kube.k-space.ee 172.21.3.73
User root
Hostname 172.21.3.73
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host storage4.kube.k-space.ee 172.21.3.74
User root
Hostname 172.21.3.74
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host worker1.kube.k-space.ee 172.20.3.81
User root
Hostname 172.20.3.81
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host worker2.kube.k-space.ee 172.20.3.82
User root
Hostname 172.20.3.82
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host worker3.kube.k-space.ee 172.20.3.83
User root
Hostname 172.20.3.83
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host worker4.kube.k-space.ee 172.20.3.84
User root
Hostname 172.20.3.84
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host worker9.kube.k-space.ee 172.21.3.89
User root
Hostname 172.21.3.89
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host workshopdoor 100.102.3.4
User root
Hostname 100.102.3.4
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h

ansible/templates/10-kubeadm.j2 Normal file

@@ -0,0 +1,12 @@
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
StandardOutput=null
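
The empty `ExecStart=` line is deliberate: a drop-in must clear the inherited `ExecStart` before redefining it. The merged unit can be inspected on any node with:
```
systemctl cat kubelet.service
```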

ansible/templates/kube-apiserver.j2 Normal file

@@ -0,0 +1,132 @@
apiVersion: v1
kind: Pod
metadata:
annotations:
kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: {{ IP }}:6443
creationTimestamp: null
labels:
component: kube-apiserver
tier: control-plane
name: kube-apiserver
namespace: kube-system
spec:
containers:
- command:
- kube-apiserver
- --advertise-address={{ IP }}
- --allow-privileged=true
- --authorization-mode=Node,RBAC
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --enable-admission-plugins=NodeRestriction
- --enable-bootstrap-token-auth=true
- --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
- --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
- --etcd-servers=https://127.0.0.1:2379
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
- --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --oidc-client-id=passmower.kubelogin
- --oidc-groups-claim=groups
- --oidc-issuer-url=https://auth.k-space.ee/
- --oidc-username-claim=sub
- --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
- --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
- --requestheader-allowed-names=front-proxy-client
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --requestheader-group-headers=X-Remote-Group
- --requestheader-username-headers=X-Remote-User
- --secure-port=6443
- --service-account-issuer=https://kubernetes.default.svc.cluster.local
- --service-account-key-file=/etc/kubernetes/pki/sa.pub
- --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
- --service-cluster-ip-range=10.96.0.0/12
- --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
image: registry.k8s.io/kube-apiserver:{{ KUBERNETES_VERSION }}
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
httpGet:
host: {{ IP }}
path: /livez
port: 6443
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
name: kube-apiserver
readinessProbe:
failureThreshold: 3
httpGet:
host: {{ IP }}
path: /readyz
port: 6443
scheme: HTTPS
periodSeconds: 1
timeoutSeconds: 15
resources:
requests:
cpu: 250m
startupProbe:
failureThreshold: 24
httpGet:
host: {{ IP }}
path: /livez
port: 6443
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
volumeMounts:
- mountPath: /etc/ssl/certs
name: ca-certs
readOnly: true
- mountPath: /etc/ca-certificates
name: etc-ca-certificates
readOnly: true
- mountPath: /etc/pki
name: etc-pki
readOnly: true
- mountPath: /etc/kubernetes/pki
name: k8s-certs
readOnly: true
- mountPath: /usr/local/share/ca-certificates
name: usr-local-share-ca-certificates
readOnly: true
- mountPath: /usr/share/ca-certificates
name: usr-share-ca-certificates
readOnly: true
hostNetwork: true
priority: 2000001000
priorityClassName: system-node-critical
securityContext:
seccompProfile:
type: RuntimeDefault
volumes:
- hostPath:
path: /etc/ssl/certs
type: DirectoryOrCreate
name: ca-certs
- hostPath:
path: /etc/ca-certificates
type: DirectoryOrCreate
name: etc-ca-certificates
- hostPath:
path: /etc/pki
type: DirectoryOrCreate
name: etc-pki
- hostPath:
path: /etc/kubernetes/pki
type: DirectoryOrCreate
name: k8s-certs
- hostPath:
path: /usr/local/share/ca-certificates
type: DirectoryOrCreate
name: usr-local-share-ca-certificates
- hostPath:
path: /usr/share/ca-certificates
type: DirectoryOrCreate
name: usr-share-ca-certificates
status: {}

ansible/templates/kubelet.j2 Normal file

@@ -0,0 +1,43 @@
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
anonymous:
enabled: false
webhook:
cacheTTL: 0s
enabled: true
x509:
clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
mode: Webhook
webhook:
cacheAuthorizedTTL: 0s
cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
flushFrequency: 0
options:
json:
infoBufferSize: "0"
verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 5m
shutdownGracePeriodCriticalPods: 5m
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s

ansible/update-ssh-config.yml Normal file

@@ -0,0 +1,72 @@
---
- name: Collect servers SSH public keys to known_hosts
hosts: localhost
connection: local
vars:
targets: "{{ hostvars[groups['all']] }}"
tasks:
- name: Generate ssh_config
ansible.builtin.copy:
dest: ssh_config
content: |
# Use `ansible-playbook update-ssh-config.yml` to update this file
# Use `ssh -F ssh_config ...` to connect to target machine or
# Add `Include ~/path/to/this/kube/ssh_config` in your ~/.ssh/config
{% for host in groups['all'] | sort %}
Host {{ [host, hostvars[host].get('ansible_host', host)] | unique | join(' ') }}
User root
Hostname {{ hostvars[host].get('ansible_host', host) }}
Port {{ hostvars[host].get('ansible_port', 22) }}
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
{% endfor %}
- name: Generate known_hosts
ansible.builtin.copy:
dest: known_hosts
content: |
# Use `ansible-playbook update-ssh-config.yml` to update this file
{% for host in groups['all'] | sort %}
{{ lookup('ansible.builtin.pipe', 'ssh-keyscan -p %d -t ecdsa %s' % (
hostvars[host].get('ansible_port', 22),
hostvars[host].get('ansible_host', host))) }} # {{ host }}
{% endfor %}
- name: Pull authorized keys from Gitea
hosts: localhost
connection: local
vars:
targets: "{{ hostvars[groups['all']] }}"
tasks:
- name: Download https://git.k-space.ee/{{ item }}.keys
loop:
- arti
- eaas
- lauri
- rasmus
ansible.builtin.get_url:
url: https://git.k-space.ee/{{ item }}.keys
dest: "./{{ item }}.keys"
- name: Push authorized keys to targets
hosts:
- misc
- kubernetes
- doors
tasks:
- name: Generate /root/.ssh/authorized_keys
ansible.builtin.copy:
dest: "/root/.ssh/authorized_keys"
owner: root
group: root
mode: '0644'
content: |
# Use `ansible-playbook update-ssh-config.yml` from https://git.k-space.ee/k-space/kube/ to update this file
{% for user in admins + extra_admins | unique | sort %}
{% for line in lookup("ansible.builtin.file", user + ".keys").split("\n") %}
{% if line.startswith("sk-") %}
{{ line }} # {{ user }}
{% endif %}
{% endfor %}
{% endfor %}
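
Note the `startswith("sk-")` filter: only FIDO2 hardware-backed keys (`sk-ssh-ed25519@openssh.com`, `sk-ecdsa-sha2-nistp256@openssh.com`) make it into `authorized_keys`; plain keys uploaded to Gitea are silently dropped. Generating a compatible key, assuming OpenSSH 8.2+ and a FIDO2 token plugged in:
```
ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk
# Upload ~/.ssh/id_ed25519_sk.pub to Gitea so it appears at
# https://git.k-space.ee/<user>.keys
```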

ansible/zrepl.yml Normal file

@@ -0,0 +1,49 @@
# Referenced/linked and documented by https://wiki.k-space.ee/en/hosting/storage#zrepl
- name: zrepl
hosts: nasgroup
tasks:
- name: 'apt: zrepl gpg'
ansible.builtin.get_url:
url: 'https://zrepl.cschwarz.com/apt/apt-key.asc'
dest: /usr/share/keyrings/zrepl.asc
- name: 'apt: zrepl repo'
apt_repository:
repo: 'deb [arch=amd64 signed-by=/usr/share/keyrings/zrepl.asc] https://zrepl.cschwarz.com/apt/debian bookworm main'
- name: 'apt: ensure packages'
apt:
state: latest
pkg: zrepl
- name: 'zrepl: ensure config'
ansible.builtin.template:
src: "zrepl_{{ansible_hostname}}.yml.j2"
dest: /etc/zrepl/zrepl.yml
mode: '0600'
register: zreplconf
- name: 'zrepl: restart service after config change'
when: zreplconf.changed
service:
state: restarted
enabled: true
name: zrepl
- name: 'zrepl: ensure service'
when: not zreplconf.changed
service:
state: started
enabled: true
name: zrepl
# avoid accidental conflicts of changes on recv (would err 'will not overwrite without force')
- name: 'zfs: ensure recv mountpoint=off'
hosts: offsite
tasks:
- name: 'zfs: get mountpoint'
shell: zfs get mountpoint -H -o value {{offsite_dataset}}
register: result
changed_when: false
- when: result.stdout != "none"
name: 'zfs: ensure mountpoint=off'
changed_when: true
shell: zfs set mountpoint=none {{offsite_dataset}}
register: result
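
After a config push, the stock zrepl CLI is handy for sanity checks on either host; these are standard zrepl subcommands, nothing specific to this setup:
```
zrepl configcheck   # parse /etc/zrepl/zrepl.yml without restarting
zrepl status        # live view of job and replication state
```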

ansible/zrepl/prom.yaml Normal file

@@ -0,0 +1,23 @@
---
apiVersion: monitoring.coreos.com/v1
kind: Probe
metadata:
name: zrepl
spec:
scrapeTimeout: 30s
targets:
staticConfig:
static:
- nas.mgmt.k-space.ee:9811
# - offsite.k-space.ee:9811 # TODO: unreachable
relabelingConfigs:
- sourceLabels: [__param_target]
targetLabel: instance
- sourceLabels: [__param_target]
targetLabel: __address__
prober:
url: localhost
path: /metrics
metricRelabelings:
- sourceLabels: [__address__]
targetLabel: target

ansible/zrepl_nas.yml.j2 Normal file

@@ -0,0 +1,47 @@
global:
logging:
- type: syslog
format: logfmt
level: warn
monitoring:
- type: prometheus
listen: ':9811'
jobs:
- name: k6zrepl
type: snap
# "<" aka recursive, https://zrepl.github.io/configuration/filter_syntax.html
filesystems:
'nas/k6<': true
snapshotting:
type: periodic
prefix: zrepl_
interval: 1h
pruning:
keep:
# Keep non-zrepl snapshots
- type: regex
negate: true
regex: '^zrepl_'
- type: last_n
regex: "^zrepl_.*"
count: 4
- type: grid
regex: "^zrepl_.*"
grid: 4x1h | 6x4h | 3x1d | 2x7d
- name: k6zrepl_offsite_src
type: source
send:
encrypted: true # send only datasets that are already ZFS-native-encrypted; unencrypted filesystems are logged at error level
serve:
type: tcp
listen: "{{ansible_host}}:35566" # NAT-ed to 193.40.103.250
clients: {
"78.28.64.17": "offsite.k-space.ee",
}
filesystems:
'nas/k6': true
snapshotting: # handled by the job above; separated for security (isolation of domains)
type: manual

ansible/zrepl_offsite.yml.j2 Normal file

@@ -0,0 +1,41 @@
global:
logging:
- type: syslog
format: logfmt
level: warn
monitoring:
- type: prometheus
listen: ':9811'
jobs:
- name: k6zrepl_offsite_dest
type: pull
recv:
placeholder:
encryption: off # https://zrepl.github.io/configuration/sendrecvoptions.html#placeholders
# bandwidth_limit:
# max: 9 MiB # 75.5 Mbps
connect:
type: tcp
address: '193.40.103.250:35566' # firewall whitelisted to offsite
root_fs: {{offsite_dataset}}
interval: 10m # start interval, does nothing when no snapshots to recv
replication:
concurrency:
steps: 2
pruning:
keep_sender: # offsite does not dictate nas snapshot policy
- type: regex
regex: '.*'
keep_receiver:
# Keep non-zrepl snapshots
- negate: true
type: regex
regex: "^zrepl_"
- type: last_n
regex: "^zrepl_"
count: 4
- type: grid
regex: "^zrepl_"
grid: 4x1h | 6x4h | 3x1d | 2x7d

argocd/README.md

@@ -1,7 +1,9 @@
# Workflow
Most applications in our Kubernetes cluster are managed by ArgoCD.
Most notably, operators are NOT managed by ArgoCD.
To add an application under `applications/`: `kubectl apply -f newapp.yaml`
# Deployment
@@ -11,16 +13,15 @@ To deploy ArgoCD:
helm repo add argo-cd https://argoproj.github.io/argo-helm
kubectl create secret -n argocd generic argocd-secret # Initialize empty secret for sessions
helm template -n argocd --release-name k6 argo-cd/argo-cd --include-crds -f values.yaml > argocd.yml
kubectl apply -f argocd.yml -n argocd
kubectl apply -f argocd.yml -f application-extras.yml -n argocd
kubectl -n argocd rollout restart deployment/k6-argocd-redis
kubectl -n argocd rollout restart deployment/k6-argocd-repo-server
kubectl -n argocd rollout restart deployment/k6-argocd-server
kubectl -n argocd rollout restart deployment/k6-argocd-notifications-controller
kubectl -n argocd rollout restart statefulset/k6-argocd-application-controller
kubectl label -n argocd secret oidc-client-argocd-owner-secrets app.kubernetes.io/part-of=argocd
```
Note: Refer to Authelia README for OIDC secret setup
# Setting up Git secrets
@@ -36,11 +37,49 @@ kubectl -n argocd create secret generic gitea-kube-staging \
--from-literal=type=git \
--from-literal=url=git@git.k-space.ee:k-space/kube-staging \
--from-file=sshPrivateKey=id_ecdsa
kubectl -n argocd create secret generic gitea-kube-members \
--from-literal=type=git \
--from-literal=url=git@git.k-space.ee:k-space/kube-members \
--from-file=sshPrivateKey=id_ecdsa
kubectl label -n argocd secret gitea-kube argocd.argoproj.io/secret-type=repository
kubectl label -n argocd secret gitea-kube-staging argocd.argoproj.io/secret-type=repository
kubectl label -n argocd secret gitea-kube-members argocd.argoproj.io/secret-type=repository
rm -fv id_ecdsa
```
Have Gitea admin reset password for user `argocd` and log in with that account.
Add the SSH key for user `argocd` from file `id_ecdsa.pub`.
Delete any other SSH keys associated with Gitea user `argocd`.
# Managing applications
To update apps:
```
for j in asterisk bind camtiler etherpad freescout gitea grafana hackerspace nextcloud nyancat rosdump traefik wiki wildduck woodpecker; do
cat << EOF >> applications/$j.yaml
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: $j
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: $j
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: $j
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true
EOF
done
find applications -name "*.yaml" -exec kubectl apply -n argocd -f {} \;
```

argocd/application-extras.yml Normal file

@@ -0,0 +1,37 @@
---
apiVersion: codemowers.cloud/v1beta1
kind: OIDCClient
metadata:
name: argocd
namespace: argocd
spec:
displayName: Argo CD
uri: https://argocd.k-space.ee
redirectUris:
- https://argocd.k-space.ee/auth/callback
allowedGroups:
- k-space:kubernetes:admins
grantTypes:
- authorization_code
- refresh_token
responseTypes:
- code
availableScopes:
- openid
- profile
pkce: false
---
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
namespace: argocd
name: k-space.ee
spec:
clusterResourceWhitelist:
- group: '*'
kind: '*'
destinations:
- namespace: '*'
server: '*'
sourceRepos:
- '*'
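
Passmower's operator materializes the `OIDCClient` above into a Secret holding the issued credentials; `values.yaml` later in this changeset consumes it as `$oidc-client-argocd-owner-secrets:OIDC_CLIENT_ID`. A quick check that the secret materialized (secret name and key are taken from that values file):
```
kubectl -n argocd get secret oidc-client-argocd-owner-secrets \
  -o jsonpath='{.data.OIDC_CLIENT_ID}' | base64 -d
```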


@@ -1,17 +1,18 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: drone
name: argocd-applications
namespace: argocd
spec:
project: default
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: drone
path: argocd/applications
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: drone
namespace: argocd
syncPolicy:
syncOptions:
- CreateNamespace=true
automated:
prune: false

argocd/applications/asterisk.yaml Normal file

@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: asterisk
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: asterisk
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: asterisk
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

argocd/applications/bind.yaml

@@ -1,17 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: camtiler
name: bind
namespace: argocd
spec:
project: default
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: camtiler
path: bind
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: camtiler
namespace: bind
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true
- CreateNamespace=true

argocd/applications/camtiler.yaml Normal file

@@ -0,0 +1,15 @@
# ---
# apiVersion: argoproj.io/v1alpha1
# kind: Application
# metadata:
# name: camtiler
# namespace: argocd
# spec:
# project: k-space.ee
# source:
# repoURL: 'git@git.k-space.ee:k-space/kube.git'
# path: camtiler
# targetRevision: HEAD
# destination:
# server: 'https://kubernetes.default.svc'
# namespace: camtiler

argocd/applications/elastic-system.yaml

@@ -1,23 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: elastic-system
namespace: argocd
spec:
project: default
destination:
server: 'https://kubernetes.default.svc'
namespace: elastic-system
syncPolicy:
automated: {}
syncOptions:
- CreateNamespace=true
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: elastic-system
targetRevision: HEAD
ignoreDifferences:
- group: admissionregistration.k8s.io
kind: ValidatingWebhookConfiguration
jqPathExpressions:
- '.webhooks[]?.clientConfig.caBundle'

argocd/applications/etherpad.yaml

@@ -1,10 +1,11 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: etherpad
namespace: argocd
spec:
project: default
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: etherpad
@@ -13,5 +14,7 @@ spec:
server: 'https://kubernetes.default.svc'
namespace: etherpad
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true
- CreateNamespace=true

argocd/applications/external-dns.yaml

@@ -1,17 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: external-dns
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: external-dns
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: external-dns
syncPolicy:
syncOptions:
- CreateNamespace=true

argocd/applications/freescout.yaml Normal file

@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: freescout
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: freescout
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: freescout
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

argocd/applications/gitea.yaml Normal file

@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: gitea
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: gitea
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: gitea
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

argocd/applications/grafana.yaml Normal file

@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: grafana
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: grafana
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: grafana
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

argocd/applications/hackerspace.yaml Normal file

@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: hackerspace
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: hackerspace
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: hackerspace
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

argocd/applications/harbor.yaml

@@ -1,17 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: harbor
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: harbor
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: harbor
syncPolicy:
syncOptions:
- CreateNamespace=true

argocd/applications/keel.yaml

@@ -1,17 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: keel
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: keel
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: keel
syncPolicy:
syncOptions:
- CreateNamespace=true

argocd/applications/kubernetes-dashboard.yaml

@@ -1,3 +1,4 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
@@ -13,5 +14,7 @@ spec:
server: 'https://kubernetes.default.svc'
namespace: kubernetes-dashboard
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

argocd/applications/logging.yaml

@@ -1,17 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: logging
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: logging
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: logging
syncPolicy:
syncOptions:
- CreateNamespace=true

argocd/applications/logmower.yaml Normal file

@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: logmower
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: logmower
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: logmower
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

argocd/applications/metallb-system.yaml

@@ -1,22 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: metallb-system
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: metallb-system
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: metallb-system
syncPolicy:
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- group: apiextensions.k8s.io
kind: CustomResourceDefinition
jqPathExpressions:
- '.spec.conversion.webhook.clientConfig.caBundle'

argocd/applications/minio-clusters.yaml Normal file

@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: minio-clusters
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: minio-clusters
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: minio-clusters
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

argocd/applications/monitoring.yaml Normal file

@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: monitoring
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: monitoring
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: monitoring
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

argocd/applications/mysql-clusters.yaml Normal file

@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: mysql-clusters
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: mysql-clusters
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: mysql-clusters
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

argocd/applications/mysql-operator.yaml

@@ -1,17 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: mysql-operator
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: mysql-operator
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: mysql-operator
syncPolicy:
syncOptions:
- CreateNamespace=true

argocd/applications/nextcloud.yaml Normal file

@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: nextcloud
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: nextcloud
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: nextcloud
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

argocd/applications/nyancat.yaml Normal file

@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: nyancat
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: nyancat
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: nyancat
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

argocd/applications/phpmyadmin.yaml

@@ -1,17 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: phpmyadmin
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: phpmyadmin
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: phpmyadmin
syncPolicy:
syncOptions:
- CreateNamespace=true

argocd/applications/postgres-clusters.yaml Normal file

@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: postgres-clusters
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: postgres-clusters
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: postgres-clusters
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

argocd/applications/prometheus-operator.yaml

@@ -1,14 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: prometheus-operator
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: prometheus-operator
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: prometheus-operator

argocd/applications/redis-clusters.yaml Normal file

@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: redis-clusters
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: redis-clusters
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: redis-clusters
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

argocd/applications/reloader.yaml

@@ -1,10 +1,11 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: reloader
namespace: argocd
spec:
project: default
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: reloader
@@ -13,5 +14,7 @@ spec:
server: 'https://kubernetes.default.svc'
namespace: reloader
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true
- CreateNamespace=true

argocd/applications/rosdump.yaml

@@ -1,10 +1,11 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: rosdump
namespace: argocd
spec:
project: default
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: rosdump
@@ -13,5 +14,7 @@ spec:
server: 'https://kubernetes.default.svc'
namespace: rosdump
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true
- CreateNamespace=true

argocd/applications/signs.yaml Normal file

@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: signs
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: signs
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: signs
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

argocd/applications/traefik.yaml Normal file

@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: traefik
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: traefik
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: traefik
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

argocd/applications/whoami.yaml

@@ -1,17 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: drone-execution
name: whoami
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: drone-execution
path: whoami
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: drone-execution
namespace: whoami
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

argocd/applications/wiki.yaml

@@ -1,17 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: foobar
name: wiki
namespace: argocd
spec:
project: default
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: foobar
path: wiki
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: foobar
namespace: wiki
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

argocd/applications/wildduck.yaml

@@ -1,10 +1,11 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: wildduck
namespace: argocd
spec:
project: default
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: wildduck
@@ -13,5 +14,7 @@ spec:
server: 'https://kubernetes.default.svc'
namespace: wildduck
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

argocd/applications/woodpecker.yaml

@@ -1,17 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: authelia
name: woodpecker
namespace: argocd
spec:
project: default
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: authelia
path: woodpecker
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: authelia
namespace: woodpecker
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

argocd/values.yaml

@@ -1,7 +1,7 @@
global:
logLevel: warn
domain: argocd.k-space.ee
# We use Authelia OIDC instead of Dex
dex:
enabled: false
@@ -11,12 +11,9 @@ redis-ha:
server:
# HTTPS is implemented by Traefik
extraArgs:
- --insecure
ingress:
enabled: true
annotations:
cert-manager.io/cluster-issuer: default
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
@@ -24,27 +21,9 @@ server:
- argocd.k-space.ee
tls:
- hosts:
- argocd.k-space.ee
secretName: argocd-server-tls
configEnabled: true
config:
admin.enabled: "false"
url: https://argocd.k-space.ee
application.instanceLabelKey: argocd.argoproj.io/instance
oidc.config: |
name: Authelia
issuer: https://auth.k-space.ee
clientID: argocd
cliClientID: argocd
clientSecret: $oidc.config.clientSecret
requestedIDTokenClaims:
groups:
essential: true
requestedScopes:
- openid
- profile
- email
- groups
- "*.k-space.ee"
configfucked:
resource.customizations: |
# https://github.com/argoproj/argo-cd/issues/1704
networking.k8s.io/Ingress:
@@ -52,28 +31,11 @@ server:
hs = {}
hs.status = "Healthy"
return hs
apiextensions.k8s.io/CustomResourceDefinition:
ignoreDifferences: |
jsonPointers:
- "x-kubernetes-validations"
# Members of ArgoCD Admins group in AD/Samba are allowed to administer Argo
rbacConfig:
policy.default: role:readonly
policy.csv: |
# Map AD groups to ArgoCD roles
g, Developers, role:developers
g, ArgoCD Admins, role:admin
# Allow developers to read objects
p, role:developers, applications, get, */*, allow
p, role:developers, certificates, get, *, allow
p, role:developers, clusters, get, *, allow
p, role:developers, repositories, get, *, allow
p, role:developers, projects, get, *, allow
p, role:developers, accounts, get, *, allow
p, role:developers, gpgkeys, get, *, allow
p, role:developers, logs, get, */*, allow
p, role:developers, applications, restart, default/camtiler, allow
p, role:developers, applications, override, default/camtiler, allow
p, role:developers, applications, action/apps/Deployment/restart, default/camtiler, allow
p, role:developers, applications, sync, default/camtiler, allow
p, role:developers, applications, update, default/camtiler, allow
metrics:
enabled: true
@@ -95,11 +57,49 @@ controller:
enabled: true
configs:
params:
server.insecure: true
rbac:
policy.default: role:admin
policy.csv: |
# Map AD groups to ArgoCD roles
g, Developers, role:developers
g, ArgoCD Admins, role:admin
# Allow developers to read objects
p, role:developers, applications, get, */*, allow
p, role:developers, certificates, get, *, allow
p, role:developers, clusters, get, *, allow
p, role:developers, repositories, get, *, allow
p, role:developers, projects, get, *, allow
p, role:developers, accounts, get, *, allow
p, role:developers, gpgkeys, get, *, allow
p, role:developers, logs, get, */*, allow
p, role:developers, applications, restart, default/camtiler, allow
p, role:developers, applications, override, default/camtiler, allow
p, role:developers, applications, action/apps/Deployment/restart, default/camtiler, allow
p, role:developers, applications, sync, default/camtiler, allow
p, role:developers, applications, update, default/camtiler, allow
cm:
admin.enabled: "false"
oidc.config: |
name: OpenID Connect
issuer: https://auth.k-space.ee/
clientID: $oidc-client-argocd-owner-secrets:OIDC_CLIENT_ID
cliClientID: $oidc-client-argocd-owner-secrets:OIDC_CLIENT_ID
clientSecret: $oidc-client-argocd-owner-secrets:OIDC_CLIENT_SECRET
requestedIDTokenClaims:
groups:
essential: true
requestedScopes:
- openid
- profile
- email
- groups
secret:
createSecret: false
knownHosts:
data:
ssh_known_hosts: |
ssh:
knownHosts: |
# Copy-pasted from `ssh-keyscan git.k-space.ee`
git.k-space.ee ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCF1+/TDRXuGwsu4SZQQwQuJusb7W1OciGAQp/ZbTTvKD+0p7fV6dXyUlWjdFmITrFNYDreDnMiOS+FvE62d2Z0=
git.k-space.ee ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDsLyRuubdIUnTKEqOipu+9x+FforrC8+oxulVrl0ECgdIRBQnLQXIspTNwuC3MKJ4z+DPbndSt8zdN33xWys8UNEs3V5/W6zsaW20tKiaX75WK5eOL4lIDJi/+E97+c0aZBXamhxTrgkRVJ5fcAkY6C5cKEmVM5tlke3v3ihLq78/LpJYv+P947NdnthYE2oc+XGp/elZ0LNfWRPnd///+ykbwWirvQm+iiDz7PMVKkb+Q7l3vw4+zneKJWAyFNrm+aewyJV9lFZZJuHliwlHGTriSf6zhMAWyJzvYqDAN6iT5yi9KGKw60J6vj2GLuK4ULVblTyP9k9+3iELKSWW5

asterisk/.gitignore Normal file

@@ -0,0 +1 @@
conf

asterisk/README.md Normal file

@@ -0,0 +1,11 @@
# Asterisk
Asterisk is used as the SIP/VoIP server behind voip.k-space.ee.
This application is managed by [ArgoCD](https://argocd.k-space.ee/applications/argocd/asterisk)
Should ArgoCD be down, manifests here can be applied with:
```
kubectl apply -n asterisk -f application.yml
```

asterisk/application.yml Normal file

@@ -0,0 +1,124 @@
---
apiVersion: v1
kind: Service
metadata:
name: asterisk
annotations:
external-dns.alpha.kubernetes.io/hostname: voip.k-space.ee
spec:
type: LoadBalancer
externalTrafficPolicy: Local
selector:
app: asterisk
ports:
- name: asterisk
protocol: UDP
port: 5060
- name: sip-data-10000
protocol: UDP
port: 10000
- name: sip-data-10001
protocol: UDP
port: 10001
- name: sip-data-10002
protocol: UDP
port: 10002
- name: sip-data-10003
protocol: UDP
port: 10003
- name: sip-data-10004
protocol: UDP
port: 10004
- name: sip-data-10005
protocol: UDP
port: 10005
- name: sip-data-10006
protocol: UDP
port: 10006
- name: sip-data-10007
protocol: UDP
port: 10007
- name: sip-data-10008
protocol: UDP
port: 10008
- name: sip-data-10009
protocol: UDP
port: 10009
- name: sip-data-10010
protocol: UDP
port: 10010
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: asterisk
labels:
app: asterisk
spec:
selector:
matchLabels:
app: asterisk
replicas: 1
template:
metadata:
labels:
app: asterisk
spec:
containers:
- name: asterisk
image: harbor.k-space.ee/k-space/asterisk
command:
- /usr/sbin/asterisk
args:
- -TWBpvvvdddf
volumeMounts:
- name: config
mountPath: /etc/asterisk
ports:
- containerPort: 8088
name: metrics
volumes:
- name: config
secret:
secretName: asterisk-secrets
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: asterisk
spec:
selector:
matchLabels:
app: asterisk
podMetricsEndpoints:
- port: metrics
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: asterisk
spec:
groups:
- name: asterisk
rules:
- alert: AsteriskPhoneNotRegistered
expr: asterisk_endpoints_state{resource=~"1.*"} < 2
for: 5m
labels:
severity: critical
annotations:
summary: "{{ $labels.resource }} is not registered."
- alert: AsteriskOutboundNumberNotRegistered
expr: asterisk_pjsip_outbound_registration_status == 0
for: 5m
labels:
severity: critical
annotations:
summary: "{{ $labels.username }} is not registered with provider."
- alert: AsteriskCallsPerMinuteLimitExceed
expr: asterisk_channels_duration_seconds > 10*60
for: 20m
labels:
severity: warning
annotations:
summary: "Call at channel {{ $labels.name }} is taking longer than 10m."


@@ -0,0 +1,45 @@
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: asterisk
spec:
podSelector:
matchLabels:
app: asterisk
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
- from:
- ipBlock:
cidr: 100.101.0.0/16
- from:
- ipBlock:
cidr: 100.102.0.0/16
- from:
- ipBlock:
cidr: 81.90.125.224/32 # Lauri home
- from:
- ipBlock:
cidr: 172.20.8.241/32 # Erki A
- from:
- ipBlock:
cidr: 195.222.16.36/32 # Elisa SIP
- from:
- ipBlock:
cidr: 195.222.16.38/32 # Elisa SIP
egress:
- to:
- ipBlock:
cidr: 195.222.16.36/32 # Elisa SIP
- to:
- ipBlock:
cidr: 195.222.16.38/32 # Elisa SIP

authelia/.gitignore

@@ -1,2 +0,0 @@
application-secrets.y*ml
oidc-secrets.y*ml

authelia/README.md

@@ -1,171 +0,0 @@
# Authelia
## Background
Authelia works in conjunction with Traefik to provide SSO with
credentials stored in Samba (Active Directory compatible) directory tree.
Samba resides outside the Kubernetes cluster as it's difficult to containerize
while keeping it usable from outside the cluster due to Samba's networking.
The MariaDB instance is used to store MFA tokens.
KeyDB is used to store session info.
## Deployment
Inspect changes with `git diff` and proceed to deploy:
```
kubectl apply -n authelia -f application.yml
kubectl create secret generic -n authelia mysql-secrets \
--from-literal=rootPassword=$(cat /dev/urandom | base64 | head -c 30)
kubectl create secret generic -n authelia mariadb-secrets \
--from-literal=MYSQL_ROOT_PASSWORD=$(cat /dev/urandom | base64 | head -c 30) \
--from-literal=MYSQL_PASSWORD=$(cat /dev/urandom | base64 | head -c 30)
kubectl -n authelia rollout restart deployment/authelia
```
To change secrets create `secret.yml`:
```
---
apiVersion: v1
kind: Secret
type: Opaque
metadata:
name: application-secrets
data:
JWT_TOKEN: ...
SESSION_ENCRYPTION_KEY: ...
STORAGE_PASSWORD: ...
STORAGE_ENCRYPTION_KEY: ...
LDAP_PASSWORD: ...
STORAGE_PASSWORD: ...
SMTP_PASSWORD: ...
```
Apply with:
```
kubectl apply -n authelia -f application-secrets.yml
kubectl annotate -n authelia secret application-secrets reloader.stakater.com/match=true
```
## OIDC secrets
OIDC secrets are separated from the main configuration until
Authelia will add CRD-s for these.
Generally speaking, for untrusted applications, that is, anything running
outside the Kubernetes cluster, e.g. web-browser-based (JS) apps and
local command-line clients, one
should use `public: true` and omit `secret: ...`.
Populate `oidc-secrets.yml` with approximately following:
```
identity_providers:
oidc:
clients:
- id: kubelogin
description: Kubernetes cluster
secret: ...
authorization_policy: two_factor
redirect_uris:
- http://localhost:27890
scopes:
- openid
- groups
- email
- profile
- id: proxmox
description: Proxmox Virtual Environment
secret: ...
authorization_policy: two_factor
redirect_uris:
- https://pve.k-space.ee
scopes:
- openid
- groups
- email
- profile
- id: argocd
description: ArgoCD
secret: ...
authorization_policy: two_factor
redirect_uris:
- https://argocd.k-space.ee/auth/callback
scopes:
- openid
- groups
- email
- profile
- id: harbor
description: Harbor
secret: ...
authorization_policy: two_factor
redirect_uris:
- https://harbor.k-space.ee/c/oidc/callback
scopes:
- openid
- groups
- email
- profile
- id: gitea
description: Gitea
secret: ...
authorization_policy: one_factor
redirect_uris:
- https://git.k-space.ee/user/oauth2/authelia/callback
scopes:
- openid
- profile
- email
- groups
grant_types:
- refresh_token
- authorization_code
response_types:
- code
userinfo_signing_algorithm: none
- id: grafana
description: Grafana
secret: ...
authorization_policy: one_factor
redirect_uris:
- https://grafana.k-space.ee/login/generic_oauth
scopes:
- openid
- groups
- email
- profile
```
To upload the file to Kubernetes secrets:
```
kubectl -n authelia delete secret oidc-secrets
kubectl -n authelia create secret generic oidc-secrets \
--from-file=oidc-secrets.yml=oidc-secrets.yml
kubectl annotate -n authelia secret oidc-secrets reloader.stakater.com/match=true
kubectl -n authelia rollout restart deployment/authelia
```
Synchronize OIDC secrets:
```
kubectl -n argocd delete secret argocd-secret
kubectl -n argocd create secret generic argocd-secret \
--from-literal=server.secretkey=$(cat /dev/urandom | base64 | head -c 30) \
--from-literal=oidc.config.clientSecret=$( \
kubectl get secret -n authelia oidc-secrets -o json \
| jq '.data."oidc-secrets.yml"' -r | base64 -d | yq -o json \
| jq '.identity_providers.oidc.clients[] | select(.id == "argocd") | .secret' -r)
kubectl -n monitoring delete secret oidc-secret
kubectl -n monitoring create secret generic oidc-secret \
--from-literal=GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET=$( \
kubectl get secret -n authelia oidc-secrets -o json \
| jq '.data."oidc-secrets.yml"' -r | base64 -d | yq -o json \
| jq '.identity_providers.oidc.clients[] | select(.id == "grafana") | .secret' -r)
```


@@ -1,416 +0,0 @@
---
apiVersion: v1
kind: Secret
type: Opaque
metadata:
name: authelia-certificates
labels:
app.kubernetes.io/name: authelia
data:
ldaps.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZJekNDQXd1Z0F3SUJBZ0lVRzNaYnI0MGVVMlRHak1Lek5XaDhOTDJkRDRZd0RRWUpLb1pJaHZjTkFRRUwKQlFBd0lURWZNQjBHQTFVRUF3d1dVMkZ0WW1FZ1lYUWdZV1F1YXkxemNHRmpaUzVsWlRBZUZ3MHlNVEV5TVRRdwpOekk0TlRGYUZ3MHlOakV5TVRNd056STROVEZhTUNFeEh6QWRCZ05WQkFNTUZsTmhiV0poSUdGMElHRmtMbXN0CmMzQmhZMlV1WldVd2dnSWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUNEd0F3Z2dJS0FvSUNBUURub3hEZFlicjAKVFJHOEErdk0xT0I1Rzg4Z05YL1pXeFNLb0VaM2p0ekF0NEc3blV0aUhoVzI1cUhEeXZGVGEzTzJiMUFVSEhzbwpVQXpVTWNVV1FRb3J2RjF4L1VsYitpcnk0QkxFTklaVTdYMVpxb2ZKYXgwZTcrbit1YVM3R015dnB4VXliNGlYCkd3djdZZEh5SmM4WjZROHd2MTdNV2F2ejNaOE5CWFdoeG1xc3ljTlphVkl2S1lNRVpGazNUTnA3T20vSTFpdkYKWDJuNVNtb2d2NmdBVmpVODhSeWc2NlRFVStiaGY5QWdiU0VxWjhMaVd6c20xdHc0WnJXMDVVK25JVjRzTHdlaQp2SXppblFMYmFMTkc2ZUl0cUtQZGVsWWhRNHlCeHM3QXpTOCtieVVBZk9jRktzUTI5alFVdUxNbE1pUmt6MjV5Cnc5UUZxSGVuRjNIYXJqU1JTL3ZZV3J3K0RNbmo2Tit3QVdtd21SR3NzVmxPMjFSLzAzNThBK0h5VzhyLzlsTm8KV1FoMmt3VGRPdjdxMzFwRmZQQUhHUkFITnZUN0dRKzVCeFFjdG83cG1GQ2t2OTdpbmhiZG50d2ViSmM1VWI3NQpBeHNWVC9uNk9aTjJSU09NV0RKY1pjVkpXYjQxdTNTL2lBVHlvbDBuOEZMRlRRZm9xdXdvVkQ1UnpwU0NsVm50Cjd1eENyaGNsYXhTYnhUUDhqa29ERXQzc1NycWoySm5PNlhtQ3R2VlZkMmQvWVZQQ21qQm54TWc1bld1WEwwZTgKNkh3MTd5TGtYeFgzVERkdjF2VThvYTdpTmZyNmc3Vlcrd2ZsUkJoVW5WRUluNXZEdm80STVSdWRXaEJxcHN6VQo3bGQrUDVjZE5GWEdjUlRQdFFlbXkxUllKMG5ZejkybGtRSURBUUFCbzFNd1VUQWRCZ05WSFE0RUZnUVVjZ1JrCnZ4U3V1QnNFaktzbXQvN3dpRHIxbHVRd0h3WURWUjBqQkJnd0ZvQVVjZ1JrdnhTdXVCc0VqS3NtdC83d2lEcjEKbHVRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBTkJna3Foa2lHOXcwQkFRc0ZBQU9DQWdFQVNlNXM1aU04QjQ2agp6bXZMOUQ4dUJrQ0VIOW9mMnc1VFluL1NPZkFRVnhBOGxBYndORitlWmgyakdGSUN6citNYmlTMlhZdkxJNnVrClZ5cFJrN28vdExmdmY0alpqZnRpeEliWEM1MjYrUk1xOEcvV2xGbzJnWFZ0eW5BcXp5bXJVYjV1MVZJcG53QWYKNTBzNHlDOURFUXF1aGErYzJCWTBRQ3ZySnUvYy9KTUs3QTdYOFdRSzVDUy8wZkNPdzBPY2xkZzA0c3VWVlU2eQp0MEZmV0kvTlhURFFrU2JWVXN5OElmaXd4a0o5NmNsTjFNWVArQ015Mkh1eWF0aTZySnhVZFBEbS9tYzdRWXNPClNTSzQyNXJQOFFZMmduNlNXUXJXdUJic2dLSEpoVzRBYjdTTldkb0Q0QytwVDA2V1MzVXphMnhZd09TV1IvTWMKR1V5YXRwLzlxR05tOWM1d2RFQ3FtdkVQc2twQkp5ZWR6MUk2V2lxdjRuK0UvRk9qRGl0VVpFd3BFZXRUQktXZgoyRnZRa1pGRmpRU3VIdG5KT040cVRvWmlaNW4vbis4Z1k2Z1Y5Wnd0NHM5OGlpdnUwTFc4UlZGSTNkS0tiYm5lCkY1KzltNE9vMjF0SlU2QThXVGpqVXpLUnFKdEZSa1JpWGtOTGRoY2MrdTdMOFFlZTFOUjIyalg5N2NaVDNGUGoKYmpOUlpId3k5K1dhMG1zcC9EYUR5RnlaOStPUUhReUJJazdCSS9LdU0rT2dta3dlSHBNSE5CMUs1NHZQenZKawpHaFN1QUNIeTRybmdvQTBvMzNhZzJ6a3lEY3NocVRtK2Q3UXFWOWUzU2pONFpUUXlTeWNpa0I1bFJKVHAydVFkCk5jVjBtcG5nREl1aFVlSFRKWkJ0SVZCZnp4bHdHd2c9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
---
apiVersion: v1
kind: ConfigMap
metadata:
name: authelia-config
labels:
app.kubernetes.io/name: authelia
annotations:
reloader.stakater.com/match: "true"
data:
authelia-config.yml: |
---
log:
level: warn
certificates_directory: /certificates
theme: light
default_redirection_url: https://members.k-space.ee
totp:
issuer: K-SPACE
authentication_backend:
ldap:
implementation: activedirectory
url: ldaps://ad.k-space.ee
base_dn: dc=ad,dc=k-space,dc=ee
username_attribute: sAMAccountName
additional_users_dn: ou=Membership
users_filter: (&({username_attribute}={input})(objectCategory=person)(objectClass=user))
additional_groups_dn: cn=Users
groups_filter: (&(member={dn})(objectclass=group))
group_name_attribute: cn
mail_attribute: mail
display_name_attribute: displayName
user: cn=authelia,cn=Users,dc=ad,dc=k-space,dc=ee
session:
domain: k-space.ee
same_site: lax
expiration: 1M
inactivity: 120h
remember_me_duration: "0"
redis:
host: redis
port: 6379
regulation:
ban_time: 5m
find_time: 2m
max_retries: 3
storage:
mysql:
host: mariadb
database: authelia
username: authelia
notifier:
disable_startup_check: true
smtp:
host: mail.k-space.ee
port: 465
username: authelia
sender: authelia@k-space.ee
subject: "[Authelia] {title}"
startup_check_address: lauri@k-space.ee
access_control:
default_policy: deny
rules:
# Longhorn dashboard
- domain: longhorn.k-space.ee
policy: two_factor
subject: group:Longhorn Admins
- domain: longhorn.k-space.ee
policy: deny
# Members site
- domain: members.k-space.ee
policy: bypass
resources:
- ^/?$
- domain: members.k-space.ee
policy: two_factor
resources:
- ^/login/authelia/?$
- domain: members.k-space.ee
policy: bypass
# Webmail
- domain: webmail.k-space.ee
policy: two_factor
# Etherpad
- domain: pad.k-space.ee
policy: two_factor
resources:
- ^/p/board-
subject: group:Board Members
- domain: pad.k-space.ee
policy: deny
resources:
- ^/p/board-
- domain: pad.k-space.ee
policy: two_factor
resources:
- ^/p/members-
- domain: pad.k-space.ee
policy: deny
resources:
- ^/p/members-
- domain: pad.k-space.ee
policy: bypass
# phpMyAdmin
- domain: phpmyadmin.k-space.ee
policy: two_factor
# Require login for everything else protected by traefik-sso middleware
- domain: '*.k-space.ee'
policy: one_factor
...
---
apiVersion: v1
kind: Service
metadata:
name: authelia
labels:
app.kubernetes.io/name: authelia
spec:
type: ClusterIP
sessionAffinity: None
selector:
app.kubernetes.io/name: authelia
ports:
- name: http
protocol: TCP
port: 80
targetPort: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: authelia
labels:
app.kubernetes.io/name: authelia
annotations:
reloader.stakater.com/search: "true"
spec:
selector:
matchLabels:
app.kubernetes.io/name: authelia
replicas: 2
revisionHistoryLimit: 0
template:
metadata:
labels:
app.kubernetes.io/name: authelia
spec:
enableServiceLinks: false
containers:
- name: authelia
image: authelia/authelia:4
command:
- authelia
- --config=/config/authelia-config.yml
- --config=/config/oidc-secrets.yml
resources:
limits:
cpu: "4.00"
memory: 125Mi
requests:
cpu: "0.25"
memory: 50Mi
env:
- name: AUTHELIA_SERVER_DISABLE_HEALTHCHECK
value: "true"
- name: AUTHELIA_JWT_SECRET_FILE
value: /secrets/JWT_TOKEN
- name: AUTHELIA_SESSION_SECRET_FILE
value: /secrets/SESSION_ENCRYPTION_KEY
- name: AUTHELIA_AUTHENTICATION_BACKEND_LDAP_PASSWORD_FILE
value: /secrets/LDAP_PASSWORD
- name: AUTHELIA_SESSION_REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: redis-secrets
key: REDIS_PASSWORD
- name: AUTHELIA_STORAGE_ENCRYPTION_KEY_FILE
value: /secrets/STORAGE_ENCRYPTION_KEY
- name: AUTHELIA_STORAGE_MYSQL_PASSWORD_FILE
value: /mariadb-secrets/MYSQL_PASSWORD
- name: AUTHELIA_IDENTITY_PROVIDERS_OIDC_HMAC_SECRET_FILE
value: /secrets/OIDC_HMAC_SECRET
- name: AUTHELIA_IDENTITY_PROVIDERS_OIDC_ISSUER_PRIVATE_KEY_FILE
value: /secrets/OIDC_PRIVATE_KEY
- name: AUTHELIA_NOTIFIER_SMTP_PASSWORD_FILE
value: /secrets/SMTP_PASSWORD
- name: TZ
value: Europe/Tallinn
startupProbe:
failureThreshold: 6
httpGet:
path: /api/health
port: http
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 5
livenessProbe:
failureThreshold: 5
httpGet:
path: /api/health
port: http
scheme: HTTP
initialDelaySeconds: 0
periodSeconds: 30
successThreshold: 1
timeoutSeconds: 5
readinessProbe:
failureThreshold: 5
httpGet:
path: /api/health
port: http
scheme: HTTP
initialDelaySeconds: 0
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 5
ports:
- name: http
containerPort: 9091
protocol: TCP
volumeMounts:
- mountPath: /config/authelia-config.yml
name: authelia-config
readOnly: true
subPath: authelia-config.yml
- mountPath: /config/oidc-secrets.yml
name: oidc-secrets
readOnly: true
subPath: oidc-secrets.yml
- mountPath: /secrets
name: secrets
readOnly: true
- mountPath: /certificates
name: certificates
readOnly: true
- mountPath: /mariadb-secrets
name: mariadb-secrets
readOnly: true
volumes:
- name: authelia-config
configMap:
name: authelia-config
- name: secrets
secret:
secretName: application-secrets
items:
- key: JWT_TOKEN
path: JWT_TOKEN
- key: SESSION_ENCRYPTION_KEY
path: SESSION_ENCRYPTION_KEY
- key: STORAGE_ENCRYPTION_KEY
path: STORAGE_ENCRYPTION_KEY
- key: STORAGE_PASSWORD
path: STORAGE_PASSWORD
- key: LDAP_PASSWORD
path: LDAP_PASSWORD
- key: OIDC_PRIVATE_KEY
path: OIDC_PRIVATE_KEY
- key: OIDC_HMAC_SECRET
path: OIDC_HMAC_SECRET
- key: SMTP_PASSWORD
path: SMTP_PASSWORD
- name: certificates
secret:
secretName: authelia-certificates
- name: mariadb-secrets
secret:
secretName: mariadb-secrets
- name: redis-secrets
secret:
secretName: redis-secrets
- name: oidc-secrets
secret:
secretName: oidc-secrets
items:
- key: oidc-secrets.yml
path: oidc-secrets.yml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: authelia
labels:
app.kubernetes.io/name: authelia
annotations:
cert-manager.io/cluster-issuer: default
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
kubernetes.io/tls-acme: "true"
traefik.ingress.kubernetes.io/router.entryPoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: authelia-chain-k6-authelia@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
spec:
rules:
- host: auth.k-space.ee
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: authelia
port:
number: 80
tls:
- hosts:
- auth.k-space.ee
secretName: authelia-tls
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: forwardauth-k6-authelia
labels:
app.kubernetes.io/name: authelia
spec:
forwardAuth:
address: http://authelia.authelia.svc.cluster.local/api/verify?rd=https://auth.k-space.ee/
trustForwardHeader: true
authResponseHeaders:
- Remote-User
- Remote-Name
- Remote-Email
- Remote-Groups
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: headers-k6-authelia
labels:
app.kubernetes.io/name: authelia
spec:
headers:
browserXssFilter: true
customFrameOptionsValue: "SAMEORIGIN"
customResponseHeaders:
Cache-Control: "no-store"
Pragma: "no-cache"
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: chain-k6-authelia-auth
labels:
app.kubernetes.io/name: authelia
spec:
chain:
middlewares:
- name: forwardauth-k6-authelia
namespace: authelia
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: chain-k6-authelia
labels:
app.kubernetes.io/name: authelia
spec:
chain:
middlewares:
- name: headers-k6-authelia
namespace: authelia
---
apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
name: mysql-cluster
spec:
secretName: mysql-secrets
instances: 3
router:
instances: 2
tlsUseSelfSigned: true
datadirVolumeClaimTemplate:
storageClassName: local-path
accessModes:
- ReadWriteOnce
resources:
requests:
storage: "1Gi"
podSpec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/managed-by
operator: In
values:
- mysql-operator
topologyKey: kubernetes.io/hostname
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
---
apiVersion: codemowers.io/v1alpha1
kind: KeyDBCluster
metadata:
name: redis
spec:
replicas: 3


@@ -1 +0,0 @@
../shared/mariadb.yml

bind/.gitignore

@@ -0,0 +1 @@
*.key

bind/README.md

@@ -0,0 +1,118 @@
# TODO:
- cert-manager talks to the primary to add domain names, and DNS-01 TLS through ns1.k-space.ee
  ^ both-side link to cert-manager
- bind-services (zone transfer to HA replicas from ns1.k-space.ee)
### ns1.k-space.ee
Primary authoritative nameserver. Other replicas live on Kube nodes.
Idea to move it to Zone.
`dns.yaml` files add DNS records.
# Bind setup
The Bind primary resides outside Kubernetes at `193.40.103.2` and
it's internally reachable via `172.20.0.2`.
Bind secondaries are hosted inside Kubernetes, load balanced behind `62.65.250.2` and
under normal circumstances managed by [ArgoCD](https://argocd.k-space.ee/applications/argocd/bind).
Ingresses and DNSEndpoints referring to `k-space.ee`, `kspace.ee`, `k6.ee`
are picked up automatically by `external-dns` and updated on the primary.
The primary triggers notification events to `172.20.53.{1..3}`,
which are the internally exposed IPs of the secondaries.
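To verify that zone transfers from the primary work (a sketch, extracting the
TSIG secret from `readonly.key`):
```
# AXFR the zone from the primary, authenticated with the readonly TSIG key
dig @172.20.0.2 k-space.ee AXFR \
  -y hmac-sha512:readonly:$(grep secret readonly.key | cut -d '"' -f 2)
```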
# Secrets
To configure TSIG secrets:
```
kubectl create secret generic -n bind bind-readonly-secret \
--from-file=readonly.key
kubectl create secret generic -n bind bind-readwrite-secret \
--from-file=readwrite.key
kubectl create secret generic -n bind external-dns
kubectl -n bind delete secret tsig-secret
kubectl -n bind create secret generic tsig-secret \
--from-literal=TSIG_SECRET=$(cat readwrite.key | grep secret | cut -d '"' -f 2)
kubectl -n cert-manager delete secret tsig-secret
kubectl -n cert-manager create secret generic tsig-secret \
--from-literal=TSIG_SECRET=$(cat readwrite.key | grep secret | cut -d '"' -f 2)
```
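The key files themselves can be generated with BIND's `tsig-keygen`; the key
names must match the ones referenced in `named.conf.local`:
```
tsig-keygen -a hmac-sha512 readonly > readonly.key
tsig-keygen -a hmac-sha512 readwrite > readwrite.key
```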
# Serving additional zones
## Bind primary configuration
To serve additional domains from this Bind setup, add the following
section to `named.conf.local` on the primary `ns1.k-space.ee`:
```
key "foobar" {
algorithm hmac-sha512;
secret "...";
};
zone "foobar.com" {
type master;
file "/var/lib/bind/db.foobar.com";
allow-update { !rejected; key foobar; };
allow-transfer { !rejected; key readonly; key foobar; };
notify explicit; also-notify { 172.20.53.1; 172.20.53.2; 172.20.53.3; };
};
```
Initialize an empty zone file in `/var/lib/bind/db.foobar.com` on the primary `ns1.k-space.ee`:
```
foobar.com IN SOA ns1.foobar.com. hostmaster.foobar.com. (1 300 300 2592000 300)
NS ns1.foobar.com.
NS ns2.foobar.com.
ns1.foobar.com. A 193.40.103.2
ns2.foobar.com. A 62.65.250.2
```
Reload Bind config:
```
named-checkconf
systemctl reload bind9
```
## Bind secondary configuration
Add a section to the `bind-secondary-config-local` ConfigMap under the `named.conf.local` key:
```
zone "foobar.com" { type slave; masters { 172.20.0.2 key readonly; }; };
```
And restart secondaries:
```
kubectl rollout restart -n bind statefulset/bind-secondary
```
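Verify that a secondary now serves the zone (checked against the public
secondary IP):
```
dig @62.65.250.2 foobar.com SOA +short
```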
## Registrar config
At your DNS registrar, point the glue records to:
```
foobar.com. NS ns1.foobar.com.
foobar.com. NS ns2.foobar.com.
ns1.foobar.com. A 193.40.103.2
ns2.foobar.com. A 62.65.250.2
```
## Updating DNS records
With the configured TSIG key `foobar` you can now:
* Obtain Let's Encrypt certificates with the DNS-01 challenge.
Inside Kubernetes, use `cert-manager` with the RFC2136 provider.
* Update DNS records, e.g. with `nsupdate` as sketched below.
Inside Kubernetes, use `external-dns` with the RFC2136 provider.
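Outside the cluster, records can be updated directly with `nsupdate`. A
minimal sketch using the `foobar` key (assumes the key is stored in
`foobar.key` in `tsig-keygen` format; record name and address are examples):
```
nsupdate -y hmac-sha512:foobar:$(grep secret foobar.key | cut -d '"' -f 2) <<EOF
server 193.40.103.2
zone foobar.com
update add test.foobar.com. 300 A 192.0.2.1
send
EOF
```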

bind/bind-secondary.yaml

@@ -0,0 +1,178 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: bind-secondary-config-local
data:
named.conf.local: |
zone "codemowers.ee" { type slave; masters { 172.20.0.2 key readonly; }; };
zone "codemowers.eu" { type slave; masters { 172.20.0.2 key readonly; }; };
zone "codemowers.cloud" { type slave; masters { 172.20.0.2 key readonly; }; };
---
apiVersion: v1
kind: ConfigMap
metadata:
name: bind-secondary-config
data:
named.conf: |
include "/etc/bind/named.conf.local";
include "/etc/bind/readonly.key";
options {
recursion no;
pid-file "/var/bind/named.pid";
allow-query { 0.0.0.0/0; };
allow-notify { 172.20.0.2; };
allow-transfer { none; };
check-names slave ignore;
notify no;
};
zone "k-space.ee" { type slave; masters { 172.20.0.2 key readonly; }; };
zone "k6.ee" { type slave; masters { 172.20.0.2 key readonly; }; };
zone "kspace.ee" { type slave; masters { 172.20.0.2 key readonly; }; };
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: bind-secondary
namespace: bind
spec:
replicas: 3
selector:
matchLabels:
app: bind-secondary
template:
metadata:
labels:
app: bind-secondary
spec:
volumes:
- name: run
  emptyDir: {}
- name: bind-secondary-config
  projected:
    sources:
    - configMap:
        name: bind-secondary-config
    - configMap:
        name: bind-secondary-config-local
      optional: true
    - secret:
        name: bind-readonly-secret
- name: bind-data
  emptyDir: {}
containers:
- name: bind-secondary
  image: internetsystemsconsortium/bind9:9.20
  workingDir: /var/bind
  command:
  - named
  - -g
  - -c
  - /etc/bind/named.conf
  volumeMounts:
  - name: run
    mountPath: /run/named
  - name: bind-secondary-config
    mountPath: /etc/bind
    readOnly: true
  - name: bind-data
    mountPath: /var/bind
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- bind-secondary
topologyKey: "kubernetes.io/hostname"
---
apiVersion: v1
kind: Service
metadata:
name: bind-secondary
namespace: bind
spec:
type: LoadBalancer
externalTrafficPolicy: Local
loadBalancerIP: 62.65.250.2
selector:
app: bind-secondary
ports:
- protocol: TCP
port: 53
name: dns-tcp
targetPort: 53
- protocol: UDP
port: 53
name: dns-udp
targetPort: 53
---
apiVersion: v1
kind: Service
metadata:
name: bind-secondary-0
namespace: bind
spec:
type: LoadBalancer
externalTrafficPolicy: Local
loadBalancerIP: 172.20.53.1
selector:
app: bind-secondary
statefulset.kubernetes.io/pod-name: bind-secondary-0
ports:
- protocol: TCP
port: 53
name: dns-tcp
targetPort: 53
- protocol: UDP
port: 53
name: dns-udp
targetPort: 53
---
apiVersion: v1
kind: Service
metadata:
name: bind-secondary-1
namespace: bind
spec:
type: LoadBalancer
externalTrafficPolicy: Local
loadBalancerIP: 172.20.53.2
selector:
app: bind-secondary
statefulset.kubernetes.io/pod-name: bind-secondary-1
ports:
- protocol: TCP
port: 53
name: dns-tcp
targetPort: 53
- protocol: UDP
port: 53
name: dns-udp
targetPort: 53
---
apiVersion: v1
kind: Service
metadata:
name: bind-secondary-2
namespace: bind
spec:
type: LoadBalancer
externalTrafficPolicy: Local
loadBalancerIP: 172.20.53.3
selector:
app: bind-secondary
statefulset.kubernetes.io/pod-name: bind-secondary-2
ports:
- protocol: TCP
port: 53
name: dns-tcp
targetPort: 53
- protocol: UDP
port: 53
name: dns-udp
targetPort: 53


@@ -0,0 +1,40 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: external-dns-k-space
spec:
revisionHistoryLimit: 0
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: external-dns
domain: k-space.ee
template:
metadata:
labels: *selectorLabels
spec:
serviceAccountName: external-dns
containers:
- name: external-dns
image: registry.k8s.io/external-dns/external-dns:v0.14.2
envFrom:
- secretRef:
name: tsig-secret
args:
- --events
- --registry=txt
- --txt-prefix=external-dns-
- --txt-owner-id=k8s
- --provider=rfc2136
- --source=ingress
- --source=service
- --source=crd
- --domain-filter=k-space.ee
- --rfc2136-tsig-axfr
- --rfc2136-host=172.20.0.2
- --rfc2136-port=53
- --rfc2136-zone=k-space.ee
- --rfc2136-tsig-keyname=readwrite
- --rfc2136-tsig-secret-alg=hmac-sha512
- --rfc2136-tsig-secret=$(TSIG_SECRET)
# https://github.com/kubernetes-sigs/external-dns/issues/2446

bind/external-dns-k6.yaml

@@ -0,0 +1,71 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: external-dns-k6
spec:
revisionHistoryLimit: 0
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: external-dns
domain: k6.ee
template:
metadata:
labels: *selectorLabels
spec:
serviceAccountName: external-dns
containers:
- name: external-dns
image: registry.k8s.io/external-dns/external-dns:v0.14.2
envFrom:
- secretRef:
name: tsig-secret
args:
- --log-level=debug
- --events
- --registry=noop
- --provider=rfc2136
- --source=service
- --source=crd
- --domain-filter=k6.ee
- --rfc2136-tsig-axfr
- --rfc2136-host=172.20.0.2
- --rfc2136-port=53
- --rfc2136-zone=k6.ee
- --rfc2136-tsig-keyname=readwrite
- --rfc2136-tsig-secret-alg=hmac-sha512
- --rfc2136-tsig-secret=$(TSIG_SECRET)
# https://github.com/kubernetes-sigs/external-dns/issues/2446
---
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
name: k6
spec:
endpoints:
- dnsName: k6.ee
recordTTL: 300
recordType: SOA
targets:
- "ns1.k-space.ee. hostmaster.k-space.ee. (1 300 300 300 300)"
- dnsName: k6.ee
recordTTL: 300
recordType: NS
targets:
- ns1.k-space.ee
- ns2.k-space.ee
- dnsName: ns1.k-space.ee
recordTTL: 300
recordType: A
targets:
- 193.40.103.2
- dnsName: ns2.k-space.ee
recordTTL: 300
recordType: A
targets:
- 62.65.250.2
- dnsName: k-space.ee
recordTTL: 300
recordType: MX
targets:
- 10 mail.k-space.ee


@@ -0,0 +1,66 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: external-dns-kspace
spec:
revisionHistoryLimit: 0
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: external-dns
domain: kspace.ee
template:
metadata:
labels: *selectorLabels
spec:
serviceAccountName: external-dns
containers:
- name: external-dns
image: registry.k8s.io/external-dns/external-dns:v0.14.2
envFrom:
- secretRef:
name: tsig-secret
args:
- --events
- --registry=noop
- --provider=rfc2136
- --source=ingress
- --source=service
- --source=crd
- --domain-filter=kspace.ee
- --rfc2136-tsig-axfr
- --rfc2136-host=172.20.0.2
- --rfc2136-port=53
- --rfc2136-zone=kspace.ee
- --rfc2136-tsig-keyname=readwrite
- --rfc2136-tsig-secret-alg=hmac-sha512
- --rfc2136-tsig-secret=$(TSIG_SECRET)
# https://github.com/kubernetes-sigs/external-dns/issues/2446
---
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
name: kspace
spec:
endpoints:
- dnsName: kspace.ee
recordTTL: 300
recordType: SOA
targets:
- "ns1.k-space.ee. hostmaster.k-space.ee. (1 300 300 300 300)"
- dnsName: kspace.ee
recordTTL: 300
recordType: NS
targets:
- ns1.k-space.ee
- ns2.k-space.ee
- dnsName: ns1.k-space.ee
recordTTL: 300
recordType: A
targets:
- 193.40.103.2
- dnsName: ns2.k-space.ee
recordTTL: 300
recordType: A
targets:
- 62.65.250.2

bind/external-dns.yaml

@@ -0,0 +1,58 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: external-dns
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- pods
- nodes
verbs:
- get
- watch
- list
- apiGroups:
- extensions
- networking.k8s.io
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- externaldns.k8s.io
resources:
- dnsendpoints
verbs:
- get
- watch
- list
- apiGroups:
- externaldns.k8s.io
resources:
- dnsendpoints/status
verbs:
- update
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: external-dns-viewer
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: external-dns
subjects:
- kind: ServiceAccount
name: external-dns
namespace: bind


@@ -1,29 +1,87 @@
To apply changes:
# Cameras
Camtiler is the umbrella name for our homegrown camera surveillance system.
Everything besides [Camera](#camera)s is deployed with Kubernetes.
## Components
![cameras.graphviz.svg](cameras.graphviz.svg)
<!-- Manually rendered with https://dreampuf.github.io/GraphvizOnline
digraph G {
"camera-operator" -> "camera-motion-detect" [label="deploys"]
"camera-tiler" -> "cam.k-space.ee/tiled"
camera -> "camera-tiler"
camera -> "camera-motion-detect" -> mongo
"camera-motion-detect" -> "Minio S3"
"cam.k-space.ee" -> mongo [label="queries events", decorate=true]
mongo -> "camtiler-event-broker" [label="transforms object to add (signed) URL to S3", ]
"camtiler-event-broker" -> "cam.k-space.ee"
"Minio S3" -> "cam.k-space.ee" [label="using signed URL from camtiler-event-broker", decorate=true]
camera [label="📸 camera"]
}
-->
### 📸 Camera
Cameras are listed in [application.yml](application.yml) as `kind: Camera`.
Two types of camera hosts, both exposing an mjpg-streamer style HTTP interface (see the check below):
- GL-AR150 with [openwrt-camera-images](https://git.k-space.ee/k-space/openwrt-camera-image).
- [Doors](https://wiki.k-space.ee/e/en/hosting/doors) (Raspberry Pi) with mjpg-streamer.
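To check that a camera host is serving frames, fetch a single frame via
mjpg-streamer's `snapshot` action (hostname and credentials are placeholders;
the real ones live in `camera-secrets`):
```
curl -u user:password -o snapshot.jpg 'http://workshop.cam.k-space.ee:8080/?action=snapshot'
```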
### camera-tiler (cam.k-space.ee/tiled)
Out-of-band; connects to all cameras and streams a tiled view to the web browser.
#### camera-operator
Operator/deployer for camera-motion-detect: functionally what a Kubernetes
Deployment is for camera-tiler, but with one instance per camera.
### camera-motion-detect
Connects to cameras; on motion, writes events to Mongo and frames to S3.
### cam.k-space.ee (logmower)
Fetches motion-detect events from mongo. Fetches referenced images from S3 (minio).
#### camtiler-event-broker
MitM between mongo and cam.k-space.ee; appends (signed) S3 URLs to event objects in the response.
## Kubernetes commands
Apply changes:
```
kubectl apply -n camtiler -f application.yml -f persistence.yml -f mongoexpress.yml -f mongodb-support.yml -f networkpolicy-base.yml -f minio-support.yml
kubectl apply -n camtiler \
-f application.yml \
-f minio.yml \
-f mongoexpress.yml \
-f mongodb-support.yml \
-f camera-tiler.yml \
-f logmower.yml \
-f ingress.yml \
-f network-policies.yml \
-f networkpolicy-base.yml
```
To deploy changes:
Deploy changes:
```
kubectl -n camtiler rollout restart deployment.apps/camtiler
```
To initialize secrets:
Initialize secrets:
```
kubectl create secret generic -n camtiler mongodb-application-readwrite-password --from-literal="password=$(cat /dev/urandom | base64 | head -c 30)"
kubectl create secret generic -n camtiler mongodb-application-readonly-password --from-literal="password=$(cat /dev/urandom | base64 | head -c 30)"
kubectl create secret generic -n camtiler minio-secret \
--from-literal=accesskey=application \
--from-literal=secretkey=$(cat /dev/urandom | base64 | head -c 30)
kubectl create secret generic -n camtiler minio-env-configuration \
--from-literal="MINIO_BROWSER=off" \
kubectl create secret generic -n camtiler minio-secrets \
--from-literal="MINIO_ROOT_USER=root" \
--from-literal="MINIO_ROOT_PASSWORD=$(cat /dev/urandom | base64 | head -c 30)" \
--from-literal="MINIO_STORAGE_CLASS_STANDARD=EC:4"
--from-literal="MINIO_ROOT_PASSWORD=$(cat /dev/urandom | base64 | head -c 30)"
kubectl -n camtiler create secret generic camera-secrets \
--from-literal=username=... \
--from-literal=password=...
```
Restart all deployments:
```
for j in $(kubectl get deployments -n camtiler -o name); do kubectl rollout restart -n camtiler $j; done
```


@@ -1,396 +1,11 @@
---
apiVersion: apps/v1
kind: Deployment
apiVersion: codemowers.cloud/v1beta1
kind: MinioBucketClaim
metadata:
name: camtiler
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
spec:
revisionHistoryLimit: 0
replicas: 2
selector:
matchLabels:
app.kubernetes.io/name: camtiler
template:
metadata:
labels:
app.kubernetes.io/name: camtiler
component: camtiler
spec:
serviceAccountName: camtiler
containers:
- name: camtiler
image: harbor.k-space.ee/k-space/camera-tiler:latest
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
ports:
- containerPort: 5001
name: "http"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: log-viewer-frontend
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
spec:
revisionHistoryLimit: 0
replicas: 2
selector:
matchLabels:
app.kubernetes.io/name: log-viewer-frontend
template:
metadata:
labels:
app.kubernetes.io/name: log-viewer-frontend
spec:
containers:
- name: log-viewer-frontend
image: harbor.k-space.ee/k-space/log-viewer-frontend:latest
# securityContext:
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: log-viewer-backend
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
spec:
revisionHistoryLimit: 0
replicas: 3
selector:
matchLabels:
app.kubernetes.io/name: log-viewer-backend
template:
metadata:
labels:
app.kubernetes.io/name: log-viewer-backend
spec:
containers:
- name: log-backend-backend
image: harbor.k-space.ee/k-space/log-viewer:latest
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
env:
- name: MONGO_URI
valueFrom:
secretKeyRef:
name: mongodb-application-readwrite
key: connectionString.standard
- name: MINIO_BUCKET
value: application
- name: MINIO_HOSTNAME
value: cams-s3.k-space.ee
- name: MINIO_PORT
value: "443"
- name: MINIO_SCHEME
value: "https"
- name: MINIO_SECRET_KEY
valueFrom:
secretKeyRef:
name: minio-secret
key: secretkey
- name: MINIO_ACCESS_KEY
valueFrom:
secretKeyRef:
name: minio-secret
key: accesskey
---
apiVersion: v1
kind: Service
metadata:
name: log-viewer-frontend
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: log-viewer-frontend
ports:
- protocol: TCP
port: 3003
---
apiVersion: v1
kind: Service
metadata:
name: log-viewer-backend
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: log-viewer-backend
ports:
- protocol: TCP
port: 3002
---
apiVersion: v1
kind: Service
metadata:
name: camtiler
labels:
component: camtiler
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: camtiler
component: camtiler
ports:
- protocol: TCP
port: 5001
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: camtiler
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: camtiler
rules:
- apiGroups:
- ""
resources:
- services
verbs:
- list
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: camtiler
subjects:
- kind: ServiceAccount
name: camtiler
apiGroup: ""
roleRef:
kind: Role
name: camtiler
apiGroup: ""
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: camtiler
annotations:
kubernetes.io/ingress.class: traefik
# Following specifies the certificate issuer defined in
# ../cert-manager/issuer.yml
# This is where the HTTPS certificates for the
# `tls:` section below are obtained from
cert-manager.io/cluster-issuer: default
# This tells Traefik this Ingress object is associated with the
# https:// entrypoint
# Global http:// to https:// redirect is enabled in
# ../traefik/values.yml using `globalArguments`
traefik.ingress.kubernetes.io/router.entrypoints: websecure
# Following enables Authelia intercepting middleware
# which makes sure user is authenticated and then
# proceeds to inject Remote-User header for the application
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
# Following tells external-dns to add CNAME entry which makes
# cams.k-space.ee point to same IP address as traefik.k-space.ee
# The A record for traefik.k-space.ee is created via annotation
# added in ../traefik/ingress.yml
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
rules:
- host: cams.k-space.ee
http:
paths:
- pathType: Prefix
path: "/tiled"
backend:
service:
name: camtiler
port:
number: 5001
- pathType: Prefix
path: "/events"
backend:
service:
name: log-viewer-backend
port:
number: 3002
- pathType: Prefix
path: "/"
backend:
service:
name: log-viewer-frontend
port:
number: 3003
tls:
- hosts:
- cams.k-space.ee
secretName: camtiler-tls
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: camera-motion-detect
spec:
podSelector:
matchLabels:
component: camdetect
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
component: camtiler
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: prometheus-operator
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
egress:
- to:
- ipBlock:
# Permit access to cameras outside the cluster
cidr: 100.102.0.0/16
- to:
- podSelector:
matchLabels:
app: mongodb-svc
ports:
- port: 27017
- to:
- podSelector:
matchLabels:
v1.min.io/tenant: minio
ports:
- port: 9000
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: camera-tiler
spec:
podSelector:
matchLabels:
component: camtiler
policyTypes:
- Ingress
- Egress
egress:
- to:
- podSelector:
matchLabels:
component: camdetect
ports:
- port: 5000
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: prometheus-operator
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: log-viewer-backend
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: log-viewer-backend
policyTypes:
- Ingress
- Egress
egress:
- to:
- podSelector:
matchLabels:
app: mongodb-svc
- to:
# Minio access via Traefik's public endpoint
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: log-viewer-frontend
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: log-viewer-frontend
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: minio
annotations:
kubernetes.io/ingress.class: traefik
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
rules:
- host: cams-s3.k-space.ee
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: minio
port:
number: 80
tls:
- hosts:
- cams-s3.k-space.ee
secretName: cams-s3-tls
capacity: 150Gi
class: dedicated
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
@@ -482,12 +97,13 @@ spec:
metadata:
name: foobar
labels:
component: camdetect
app.kubernetes.io/name: foobar
component: camera-motion-detect
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: foobar
component: camdetect
component: camera-motion-detect
ports:
- protocol: TCP
port: 80
@@ -497,19 +113,16 @@ spec:
kind: Deployment
metadata:
name: camera-foobar
# Make sure keel.sh pulls updates for this deployment
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
spec:
revisionHistoryLimit: 0
replicas: 1
# Make sure we do not congest the network during rollout
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 0
maxUnavailable: 1
# Swap following two with replicas: 2
maxSurge: 1
maxUnavailable: 0
selector:
matchLabels:
app.kubernetes.io/name: foobar
@@ -517,18 +130,25 @@ spec:
metadata:
labels:
app.kubernetes.io/name: foobar
component: camdetect
component: camera-motion-detect
spec:
containers:
- name: camdetect
- name: camera-motion-detect
image: harbor.k-space.ee/k-space/camera-motion-detect:latest
startupProbe:
httpGet:
path: /healthz
port: 5000
initialDelaySeconds: 2
periodSeconds: 180
timeoutSeconds: 60
readinessProbe:
httpGet:
path: /readyz
port: 5000
initialDelaySeconds: 10
periodSeconds: 180
timeoutSeconds: 60
initialDelaySeconds: 60
periodSeconds: 60
timeoutSeconds: 5
ports:
- containerPort: 5000
name: "http"
@@ -538,7 +158,7 @@ spec:
cpu: "200m"
limits:
memory: "256Mi"
cpu: "1"
cpu: "4000m"
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
@@ -550,9 +170,25 @@ spec:
- name: SOURCE_NAME
value: foobar
- name: S3_BUCKET_NAME
value: application
valueFrom:
secretKeyRef:
name: miniobucket-camtiler-owner-secrets
key: BUCKET_NAME
- name: S3_ENDPOINT_URL
value: http://minio
valueFrom:
secretKeyRef:
name: miniobucket-camtiler-owner-secrets
key: AWS_S3_ENDPOINT_URL
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: miniobucket-camtiler-owner-secrets
key: AWS_SECRET_ACCESS_KEY
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: miniobucket-camtiler-owner-secrets
key: AWS_ACCESS_KEY_ID
- name: BASIC_AUTH_PASSWORD
valueFrom:
secretKeyRef:
@@ -563,16 +199,6 @@ spec:
secretKeyRef:
name: mongodb-application-readwrite
key: connectionString.standard
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: minio-secret
key: secretkey
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: minio-secret
key: accesskey
# Make sure 2+ pods of same camera are scheduled on different hosts
affinity:
@@ -580,32 +206,21 @@ spec:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
- key: app.kubernetes.io/name
operator: In
values:
- foobar
topologyKey: kubernetes.io/hostname
topologyKey: topology.kubernetes.io/zone
# Make sure camera deployments are spread over workers
topologySpreadConstraints:
- maxSkew: 1
topologyKey: kubernetes.io/hostname
topologyKey: topology.kubernetes.io/zone
whenUnsatisfiable: DoNotSchedule
labelSelector:
matchLabels:
app.kubernetes.io/name: foobar
component: camdetect
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: camtiler
spec:
selector: {}
podMetricsEndpoints:
- port: http
podTargetLabels:
- app.kubernetes.io/name
component: camera-motion-detect
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
@@ -616,21 +231,21 @@ spec:
- name: cameras
rules:
- alert: CameraLost
expr: rate(camdetect_rx_frames_total[2m]) < 1
expr: rate(camtiler_frames_total{stage="downloaded"}[1m]) < 1
for: 2m
labels:
severity: warning
annotations:
summary: Camera feed stopped
- alert: CameraServerRoomMotion
expr: camdetect_event_active {app="camdetect-server-room"} > 0
expr: rate(camtiler_events_total{app_kubernetes_io_name="server-room"}[30m]) > 0
for: 1m
labels:
severity: warning
annotations:
summary: Motion was detected in server room
- alert: CameraSlowUploads
expr: rate(camdetect_upload_dropped_frames_total[2m]) > 1
expr: camtiler_queue_frames{stage="upload"} > 10
for: 5m
labels:
severity: warning
@@ -638,14 +253,22 @@ spec:
summary: Motion detect snapshots are piling up and
not getting uploaded to S3
- alert: CameraSlowProcessing
expr: rate(camdetect_download_dropped_frames_total[2m]) > 1
expr: camtiler_queue_frames{stage="download"} > 10
for: 5m
labels:
severity: warning
annotations:
summary: Motion detection processing pipeline is not keeping up
with incoming frames
- alert: CameraResourcesThrottled
expr: sum by (pod) (rate(container_cpu_cfs_throttled_periods_total{namespace="camtiler"}[1m])) > 0
for: 5m
labels:
severity: warning
annotations:
summary: CPU limits are bottleneck
---
# Referenced/linked by README.md
apiVersion: k-space.ee/v1alpha1
kind: Camera
metadata:
@@ -653,6 +276,7 @@ metadata:
spec:
target: http://user@workshop.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets
replicas: 1
---
apiVersion: k-space.ee/v1alpha1
kind: Camera
@@ -661,6 +285,7 @@ metadata:
spec:
target: http://user@server-room.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets
replicas: 2
---
apiVersion: k-space.ee/v1alpha1
kind: Camera
@@ -669,6 +294,7 @@ metadata:
spec:
target: http://user@printer.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets
replicas: 1
---
apiVersion: k-space.ee/v1alpha1
kind: Camera
@@ -677,6 +303,7 @@ metadata:
spec:
target: http://user@chaos.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets
replicas: 1
---
apiVersion: k-space.ee/v1alpha1
kind: Camera
@@ -685,6 +312,7 @@ metadata:
spec:
target: http://user@cyber.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets
replicas: 1
---
apiVersion: k-space.ee/v1alpha1
kind: Camera
@@ -693,19 +321,36 @@ metadata:
spec:
target: http://user@kitchen.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets
replicas: 1
---
apiVersion: k-space.ee/v1alpha1
kind: Camera
metadata:
name: back-door
spec:
target: http://user@back-door.cam.k-space.ee:8080/?action=stream
target: http://user@100.102.3.3:8080/?action=stream
secretRef: camera-secrets
replicas: 1
---
apiVersion: k-space.ee/v1alpha1
kind: Camera
metadata:
name: ground-door
spec:
target: http://user@ground-door.cam.k-space.ee:8080/?action=stream
target: http://user@100.102.3.1:8080/?action=stream
secretRef: camera-secrets
replicas: 1
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: camera-motion-detect
spec:
selector:
matchLabels:
component: camera-motion-detect
podMetricsEndpoints:
- port: http
podTargetLabels:
- app.kubernetes.io/name
- component

camtiler/camera-tiler.yml

@@ -0,0 +1,98 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: camera-tiler
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
spec:
revisionHistoryLimit: 0
replicas: 2
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: camtiler
component: camera-tiler
template:
metadata:
labels: *selectorLabels
spec:
serviceAccountName: camera-tiler
containers:
- name: camera-tiler
image: harbor.k-space.ee/k-space/camera-tiler:latest
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
ports:
- containerPort: 5001
name: "http"
resources:
requests:
memory: "200Mi"
cpu: "100m"
limits:
memory: "500Mi"
cpu: "4000m"
---
apiVersion: v1
kind: Service
metadata:
name: camera-tiler
labels:
app.kubernetes.io/name: camtiler
component: camera-tiler
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: camtiler
component: camera-tiler
ports:
- protocol: TCP
port: 5001
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: camera-tiler
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: camera-tiler
rules:
- apiGroups:
- ""
resources:
- services
verbs:
- list
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: camera-tiler
subjects:
- kind: ServiceAccount
name: camera-tiler
apiGroup: ""
roleRef:
kind: Role
name: camera-tiler
apiGroup: ""
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: camtiler
spec:
selector:
matchLabels:
app.kubernetes.io/name: camtiler
component: camera-tiler
podMetricsEndpoints:
- port: http
podTargetLabels:
- app.kubernetes.io/name
- component


@@ -0,0 +1,131 @@
[cameras.graphviz.svg — rendered Graphviz diagram of the camera pipeline (658×387 pt); the Graphviz source is embedded as a comment in README.md above.]


camtiler/ingress.yml

@@ -0,0 +1,85 @@
---
apiVersion: codemowers.cloud/v1beta1
kind: OIDCMiddlewareClient
metadata:
name: sso
spec:
displayName: Cameras
uri: 'https://cam.k-space.ee/tiled'
allowedGroups:
- k-space:floor
- k-space:friends
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: camtiler
annotations:
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: camtiler-sso@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
external-dns.alpha.kubernetes.io/hostname: cams.k-space.ee,cam.k-space.ee
spec:
rules:
- host: cam.k-space.ee
http:
paths:
- pathType: Prefix
path: "/tiled"
backend:
service:
name: camera-tiler
port:
number: 5001
- pathType: Prefix
path: "/m"
backend:
service:
name: camera-tiler
port:
number: 5001
- pathType: Prefix
path: "/events"
backend:
service:
name: logmower-eventsource
port:
number: 3002
- pathType: Prefix
path: "/"
backend:
service:
name: logmower-frontend
port:
number: 8080
tls:
- hosts:
- "*.k-space.ee"
---
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: cams-redirect
spec:
redirectRegex:
regex: ^https://cams.k-space.ee/(.*)$
replacement: https://cam.k-space.ee/$1
permanent: true
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: cams
spec:
entryPoints:
- websecure
routes:
- match: Host(`cams.k-space.ee`)
kind: Rule
middlewares:
- name: cams-redirect
services:
- kind: TraefikService
name: api@internal

camtiler/logmower.yml

@@ -0,0 +1,182 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: logmower-eventsource
spec:
revisionHistoryLimit: 0
replicas: 2
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: camtiler
component: logmower-eventsource
template:
metadata:
labels: *selectorLabels
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values:
- camtiler
- key: component
operator: In
values:
- logmower-eventsource
topologyKey: topology.kubernetes.io/zone
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
containers:
- name: logmower-eventsource
image: harbor.k-space.ee/k-space/logmower-eventsource
ports:
- containerPort: 3002
name: nodejs
env:
- name: MONGO_COLLECTION
value: eventlog
- name: MONGODB_HOST
valueFrom:
secretKeyRef:
name: mongodb-application-readonly
key: connectionString.standard
- name: BACKEND
value: 'camtiler'
- name: BACKEND_BROKER_URL
value: 'http://logmower-event-broker'
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: logmower-event-broker
spec:
revisionHistoryLimit: 0
replicas: 2
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: camtiler
component: logmower-event-broker
template:
metadata:
labels: *selectorLabels
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values:
- camtiler
- key: component
operator: In
values:
- logmower-event-broker
topologyKey: topology.kubernetes.io/zone
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
containers:
- name: logmower-event-broker
image: harbor.k-space.ee/k-space/camera-event-broker
ports:
- containerPort: 3000
env:
- name: MINIO_BUCKET
valueFrom:
secretKeyRef:
name: miniobucket-camtiler-owner-secrets
key: BUCKET_NAME
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: miniobucket-camtiler-owner-secrets
key: AWS_SECRET_ACCESS_KEY
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: miniobucket-camtiler-owner-secrets
key: AWS_ACCESS_KEY_ID
- name: MINIO_HOSTNAME
value: 'dedicated-5ee6428f-4cb5-4c2e-90b5-364668f515c2.minio-clusters.k-space.ee'
- name: MINIO_PORT
value: '443'
- name: MINIO_SCHEMA
value: 'https'
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: logmower-frontend
spec:
revisionHistoryLimit: 0
replicas: 2
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: camtiler
component: logmower-frontend
template:
metadata:
labels: *selectorLabels
spec:
containers:
- name: logmower-frontend
image: harbor.k-space.ee/k-space/logmower-frontend
ports:
- containerPort: 8080
name: http
---
apiVersion: v1
kind: Service
metadata:
name: logmower-frontend
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: camtiler
component: logmower-frontend
ports:
- protocol: TCP
port: 8080
---
apiVersion: v1
kind: Service
metadata:
name: logmower-eventsource
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: camtiler
component: logmower-eventsource
ports:
- protocol: TCP
port: 3002
---
apiVersion: v1
kind: Service
metadata:
name: logmower-event-broker
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: camtiler
component: logmower-event-broker
ports:
- protocol: TCP
port: 80
targetPort: 3000


@@ -1 +0,0 @@
../shared/minio-support.yml


@@ -4,12 +4,16 @@ kind: MongoDBCommunity
metadata:
name: mongodb
spec:
agent:
logLevel: ERROR
maxLogFileDurationHours: 1
additionalMongodConfig:
systemLog:
quiet: true
members: 3
members: 2
arbiters: 1
type: ReplicaSet
version: "5.0.9"
version: "6.0.3"
security:
authentication:
modes: ["SCRAM"]
@@ -27,7 +31,7 @@ spec:
passwordSecretRef:
name: mongodb-application-readonly-password
roles:
- name: readOnly
- name: read
db: application
scramCredentialsSecretName: mongodb-application-readonly
statefulSet:
@@ -35,6 +39,24 @@ spec:
logLevel: WARN
template:
spec:
containers:
- name: mongod
resources:
requests:
cpu: 100m
memory: 512Mi
limits:
cpu: 500m
memory: 1Gi
volumeMounts:
- name: journal-volume
mountPath: /data/journal
- name: mongodb-agent
resources:
requests:
cpu: 1m
memory: 100Mi
limits: {}
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
@@ -44,7 +66,7 @@ spec:
operator: In
values:
- mongodb-svc
topologyKey: kubernetes.io/hostname
topologyKey: topology.kubernetes.io/zone
nodeSelector:
dedicated: storage
tolerations:
@@ -55,76 +77,34 @@ spec:
volumeClaimTemplates:
- metadata:
name: logs-volume
labels:
usecase: logs
spec:
storageClassName: local-path
storageClassName: mongo
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
- metadata:
name: journal-volume
labels:
usecase: journal
spec:
storageClassName: mongo
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 512Mi
storage: 1Gi
- metadata:
name: data-volume
labels:
usecase: data
spec:
storageClassName: local-path
storageClassName: mongo
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
---
apiVersion: minio.min.io/v2
kind: Tenant
metadata:
name: minio
annotations:
prometheus.io/path: /minio/prometheus/metrics
prometheus.io/port: "9000"
prometheus.io/scrape: "true"
spec:
credsSecret:
name: minio-secret
buckets:
- name: application
requestAutoCert: false
users:
- name: minio-user-0
pools:
- name: pool-0
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: v1.min.io/tenant
operator: In
values:
- minio
- key: v1.min.io/pool
operator: In
values:
- pool-0
topologyKey: kubernetes.io/hostname
resources:
requests:
cpu: '1'
memory: 512Mi
servers: 4
volumesPerServer: 1
volumeClaimTemplate:
metadata:
name: data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: '30Gi'
storageClassName: local-path
status: {}
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule


@@ -0,0 +1,195 @@
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: camera-motion-detect
spec:
podSelector:
matchLabels:
component: camera-motion-detect
policyTypes:
- Ingress
# - Egress # Something is wrong with using minio-clusters as a namespaceSelector.
ingress:
- from:
- podSelector:
matchLabels:
app.kubernetes.io/name: camtiler
component: camera-tiler
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
egress:
- to:
- ipBlock:
# Permit access to cameras outside the cluster
cidr: 100.102.0.0/16
- to:
- podSelector:
matchLabels:
app: mongodb-svc
ports:
- port: 27017
- to:
- podSelector:
matchLabels:
app.kubernetes.io/name: minio
ports:
- port: 9000
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: camera-tiler
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: camtiler
component: camera-tiler
policyTypes:
- Ingress
- Egress
egress:
- to:
- podSelector:
matchLabels:
component: camera-motion-detect
ports:
- port: 5000
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-eventsource
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: camtiler
component: logmower-eventsource
policyTypes:
- Ingress
# - Egress # Something is wrong with using mongodb-svc as a podSelector.
egress:
- to:
- podSelector:
matchLabels:
app: mongodb-svc
- podSelector:
matchLabels:
component: logmower-event-broker
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-event-broker
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: camtiler
component: logmower-event-broker
policyTypes:
- Ingress
- Egress
egress:
- to:
# Minio access via Traefik's public endpoint
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
ingress:
- from:
- podSelector:
matchLabels:
component: logmower-eventsource
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-frontend
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: camtiler
component: logmower-frontend
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
# Config drift: Added by ArgoCD
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: minio
spec:
egress:
- ports:
- port: http
protocol: TCP
to:
- podSelector:
matchLabels:
app.kubernetes.io/name: minio
ingress:
- from:
- podSelector: {}
ports:
- port: http
protocol: TCP
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
podSelector:
matchLabels:
app.kubernetes.io/name: minio
policyTypes:
- Ingress
- Egress

cert-manager/.gitignore (new file)

@@ -0,0 +1 @@
cert-manager.yaml


@@ -5,13 +5,13 @@
Added manifest with:
```
curl -L https://github.com/jetstack/cert-manager/releases/download/v1.6.1/cert-manager.yaml -O
curl -L https://github.com/jetstack/cert-manager/releases/download/v1.15.1/cert-manager.yaml -O
```
To update the certificate issuer:
```
kubectl apply -f namespace.yml -f cert-manager.yaml
kubectl apply -f cert-manager.yaml
kubectl apply -f issuer.yml
kubectl -n cert-manager create secret generic tsig-secret \
--from-literal=TSIG_SECRET=<secret>
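```
For reference, an `issuer.yml` using that TSIG secret with cert-manager's RFC2136 DNS01 solver might look roughly like this sketch; the ACME server, nameserver, key name and algorithm below are assumptions, not values taken from this repo:
```
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: default
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: cluster-issuer-account-key  # hypothetical secret name
    solvers:
      - dns01:
          rfc2136:
            nameserver: 192.0.2.53      # placeholder, use the zone's master
            tsigKeyName: acme           # assumption
            tsigAlgorithm: HMACSHA512   # assumption
            tsigSecretSecretRef:
              name: tsig-secret
              key: TSIG_SECRET
```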

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,12 +1,11 @@
---
# AD/Samba group "Kubernetes Admins" members have full access
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-admins
subjects:
- kind: Group
name: "Kubernetes Admins"
name: "k-space:kubernetes:admins"
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole

cnpg-system/README.md (new file)

@@ -0,0 +1,8 @@
# CloudNativePG
To deploy:
```
kubectl apply --server-side -f \
https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.23/releases/cnpg-1.23.2.yaml
```
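The operator manages databases declared as `Cluster` resources. A minimal sketch for orientation; the name, instance count and storage size are illustrative, not taken from this repo:
```
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: example-pg        # hypothetical name
spec:
  instances: 3            # illustrative replica count
  storage:
    size: 1Gi             # illustrative size
```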


@@ -0,0 +1,5 @@
# Dragonfly Operator
```
kubectl apply -f https://raw.githubusercontent.com/dragonflydb/dragonfly-operator/v1.1.6/manifests/dragonfly-operator.yaml
```
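With the operator running, an in-memory store is declared as a `Dragonfly` resource. A minimal sketch; the name and replica count are illustrative:
```
---
apiVersion: dragonflydb.io/v1alpha1
kind: Dragonfly
metadata:
  name: example-dragonfly  # hypothetical name
spec:
  replicas: 2              # pod count; the operator elects one master
```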


@@ -1,13 +0,0 @@
To deploy:
```
kubectl apply -n drone-execution -f application.yml
```
To bootstrap secrets:
```
kubectl create secret generic -n drone-execution application-secrets \
--from-literal=DRONE_RPC_SECRET=$(kubectl get secret -n drone application-secrets -o jsonpath="{.data.DRONE_RPC_SECRET}" | base64 -d) \
--from-literal=DRONE_SECRET_PLUGIN_TOKEN=$(cat /dev/urandom | base64 | head -c 30)
```


@@ -1,177 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: drone-runner-kube
---
apiVersion: v1
kind: ConfigMap
metadata:
name: application-config
data:
DRONE_DEBUG: "false"
DRONE_TRACE: "false"
DRONE_NAMESPACE_DEFAULT: "drone-execution"
DRONE_RPC_HOST: "drone.k-space.ee"
DRONE_RPC_PROTO: "https"
PLUGIN_MTU: "1300"
DRONE_SECRET_PLUGIN_ENDPOINT: "http://secrets:3000"
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: drone-runner-kube
namespace: "drone-execution"
labels:
app: drone-runner-kube
rules:
- apiGroups:
- ""
resources:
- secrets
verbs:
- create
- delete
- apiGroups:
- ""
resources:
- pods
- pods/log
verbs:
- get
- create
- delete
- list
- watch
- update
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: drone-runner-kube
namespace: drone-execution
labels:
app: drone-runner-kube
subjects:
- kind: ServiceAccount
name: drone-runner-kube
namespace: drone-execution
roleRef:
kind: Role
name: drone-runner-kube
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Service
metadata:
name: drone-runner-kube
labels:
app: drone-runner-kube
spec:
type: ClusterIP
ports:
- port: 3000
targetPort: http
protocol: TCP
name: http
selector:
app: drone-runner-kube
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: drone-runner-kube
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
keel.sh/pollSchedule: "@midnight"
spec:
replicas: 1
selector:
matchLabels:
app: drone-runner-kube
template:
metadata:
labels:
app: drone-runner-kube
spec:
serviceAccountName: drone-runner-kube
terminationGracePeriodSeconds: 3600
containers:
- name: server
securityContext:
{}
image: drone/drone-runner-kube
imagePullPolicy: Always
ports:
- name: http
containerPort: 3000
protocol: TCP
envFrom:
- configMapRef:
name: application-config
- secretRef:
name: application-secrets
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: drone-kubernetes-secrets
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
keel.sh/pollSchedule: "@midnight"
spec:
replicas: 1
selector:
matchLabels:
app: drone-kubernetes-secrets
template:
metadata:
labels:
app: drone-kubernetes-secrets
spec:
containers:
- name: secrets
image: drone/kubernetes-secrets
imagePullPolicy: Always
ports:
- containerPort: 3000
env:
- name: SECRET_KEY
valueFrom:
secretKeyRef:
name: application-secrets
key: DRONE_SECRET_PLUGIN_TOKEN
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: drone-kubernetes-secrets
spec:
podSelector:
matchLabels:
app: drone-kubernetes-secrets
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
app: drone-runner-kube
ports:
- port: 3000
---
# The following should block access to pods in other namespaces, but should
# permit Git checkout, pip install, talking to Traefik via the public IP, etc.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: drone-runner-kube
spec:
podSelector: {}
policyTypes:
- Egress
egress:
- to:
- ipBlock:
cidr: 0.0.0.0/0


@@ -1,25 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
# Chart dirs/files
docs/
ci/


@@ -1,161 +0,0 @@
# Deployment
To deploy:
```
kubectl apply -n drone -f application.yml
```
To bootstrap secrets:
```
kubectl create secret generic -n drone application-secrets \
--from-literal=DRONE_GITEA_CLIENT_ID=... \
--from-literal=DRONE_GITEA_CLIENT_SECRET=... \
--from-literal=DRONE_RPC_SECRET=$(cat /dev/urandom | base64 | head -c 30)
```
# Integrating with Docker registry
We use harbor.k-space.ee to host our own images.
Set up robot account `robot$k-space+drone` in Harbor first.
In Drone, associate the `docker_username` and `docker_password` secrets with the
`k-space` organization.
Instead of a click marathon, you can also pull the CLI configuration for Drone
from https://drone.k-space.ee/account
```
drone orgsecret add k-space docker_username 'robot$k-space+drone'
drone orgsecret add k-space docker_password '...'
```
# Integrating with e-mail
To (re)set e-mail credentials:
```
drone orgsecret add k-space email_password '...'
```
To trigger a build, hit the button in the Drone web interface, or alternatively:
```
drone build create k-space/...
```
# Using templates
Templates unfortunately aren't pulled in from this Git repo.
The current `docker.yaml` template includes the following:
```
kind: pipeline
type: kubernetes
name: build-arm64
platform:
arch: arm64
os: linux
node_selector:
kubernetes.io/arch: arm64
tolerations:
- key: arch
operator: Equal
value: arm64
effect: NoSchedule
steps:
- name: submodules
image: alpine/git
commands:
- touch .gitmodules
- sed -i -e 's/git@git.k-space.ee:/https:\\/\\/git.k-space.ee\\//g' .gitmodules
- git submodule update --init --recursive
- echo "ENV GIT_COMMIT=$(git rev-parse HEAD)" >> Dockerfile
- echo "ENV GIT_COMMIT_TIMESTAMP=$(git log -1 --format=%cd --date=iso-strict)" >> Dockerfile
- cat Dockerfile
- name: docker
image: plugins/docker
settings:
repo: harbor.k-space.ee/${DRONE_REPO}
tags: latest-arm64
registry: harbor.k-space.ee
squash: true
experimental: true
mtu: 1300
username:
from_secret: docker_username
password:
from_secret: docker_password
---
kind: pipeline
type: kubernetes
name: build-amd64
platform:
arch: amd64
os: linux
node_selector:
kubernetes.io/arch: amd64
steps:
- name: submodules
image: alpine/git
commands:
- touch .gitmodules
- sed -i -e 's/git@git.k-space.ee:/https:\\/\\/git.k-space.ee\\//g' .gitmodules
- git submodule update --init --recursive
- echo "ENV GIT_COMMIT=$(git rev-parse HEAD)" >> Dockerfile
- echo "ENV GIT_COMMIT_TIMESTAMP=$(git log -1 --format=%cd --date=iso-strict)" >> Dockerfile
- cat Dockerfile
- name: docker
image: plugins/docker
settings:
repo: harbor.k-space.ee/${DRONE_REPO}
tags: latest-amd64
registry: harbor.k-space.ee
squash: true
experimental: true
mtu: 1300
storage_driver: vfs
username:
from_secret: docker_username
password:
from_secret: docker_password
---
kind: pipeline
type: kubernetes
name: manifest
steps:
- name: manifest
image: plugins/manifest
settings:
target: harbor.k-space.ee/${DRONE_REPO}:latest
template: harbor.k-space.ee/${DRONE_REPO}:latest-ARCH
platforms:
- linux/amd64
- linux/arm64
username:
from_secret: docker_username
password:
from_secret: docker_password
depends_on:
- build-amd64
- build-arm64
---
kind: pipeline
type: kubernetes
name: gitlint
steps:
- name: gitlint
image: harbor.k-space.ee/k-space/gitlint-bundle
# https://git.k-space.ee/k-space/gitlint-bundle
---
kind: pipeline
type: kubernetes
name: flake8
steps:
- name: flake8
image: harbor.k-space.ee/k-space/flake8-bundle
# https://git.k-space.ee/k-space/flake8-bundle
```


@@ -1,106 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: application-config
data:
DRONE_GITEA_SERVER: "https://git.k-space.ee"
DRONE_GIT_ALWAYS_AUTH: "false"
DRONE_PROMETHEUS_ANONYMOUS_ACCESS: "true"
DRONE_SERVER_HOST: "drone.k-space.ee"
DRONE_SERVER_PROTO: "https"
DRONE_USER_CREATE: "username:lauri,admin:true"
---
apiVersion: v1
kind: Service
metadata:
name: drone
spec:
type: ClusterIP
ports:
- port: 80
targetPort: http
protocol: TCP
name: http
selector:
app: drone
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: drone
annotations:
keel.sh/policy: minor
keel.sh/trigger: poll
keel.sh/pollSchedule: "@midnight"
spec:
serviceName: drone
replicas: 1
selector:
matchLabels:
app: drone
template:
metadata:
labels:
app: drone
spec:
automountServiceAccountToken: false
securityContext:
{}
containers:
- name: server
securityContext:
{}
image: drone/drone:2
ports:
- name: http
containerPort: 80
protocol: TCP
livenessProbe:
httpGet:
path: /
port: http
envFrom:
- secretRef:
name: application-secrets
- configMapRef:
name: application-config
volumeMounts:
- name: drone-data
mountPath: /data
volumeClaimTemplates:
- metadata:
name: drone-data
spec:
storageClassName: longhorn
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: drone
annotations:
cert-manager.io/cluster-issuer: default
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
spec:
tls:
- hosts:
- "drone.k-space.ee"
secretName: drone-tls
rules:
- host: "drone.k-space.ee"
http:
paths:
- pathType: Prefix
path: /
backend:
service:
name: drone
port:
number: 80

elastic-system/.gitignore (new file)

@@ -0,0 +1,2 @@
crds.yaml
operator.yaml


@@ -1,7 +1,7 @@
# elastic-operator
```
wget https://download.elastic.co/downloads/eck/2.2.0/crds.yaml
wget https://download.elastic.co/downloads/eck/2.2.0/operator.yaml
wget https://download.elastic.co/downloads/eck/2.13.0/crds.yaml
wget https://download.elastic.co/downloads/eck/2.13.0/operator.yaml
kubectl apply -n elastic-system -f application.yml -f crds.yaml -f operator.yaml
```
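A quick sanity check after applying, assuming the ECK CRDs registered cleanly (resource kinds as documented by ECK):
```
kubectl -n elastic-system get elasticsearch,kibana,beat
```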


@@ -1,15 +1,16 @@
---
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
name: filebeat
spec:
type: filebeat
version: 8.4.1
version: 8.14.3
elasticsearchRef:
name: elasticsearch
kibanaRef:
name: kibana
config:
logging:
level: warning
http:
enabled: true
port: 5066
@@ -24,50 +25,15 @@ spec:
type: container
paths:
- /var/log/containers/*${data.kubernetes.container.id}.log
processors:
- drop_fields:
fields:
- stream
- target
- host
ignore_missing: true
- rename:
fields:
- from: "kubernetes.node.name"
to: "host"
- from: "kubernetes.pod.name"
to: "pod"
- from: "kubernetes.labels.app"
to: "app"
- from: "kubernetes.namespace"
to: "namespace"
ignore_missing: true
- drop_fields:
fields:
- input
- agent
- container
- ecs
- host
- kubernetes
- log
- "@metadata"
ignore_missing: true
- decode_json_fields:
fields:
- message
max_depth: 2
expand_keys: true
target: ""
add_error_key: true
daemonSet:
podTemplate:
metadata:
annotations:
co.elastic.logs/enabled: 'false'
spec:
serviceAccountName: filebeat
automountServiceAccountToken: true
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirstWithHostNet
hostNetwork: true # Allows to provide richer host metadata
containers:
- name: filebeat
securityContext:
@@ -84,6 +50,12 @@ spec:
valueFrom:
fieldRef:
fieldPath: spec.nodeName
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
- name: exporter
image: sepa/beats-exporter
args:
@@ -108,6 +80,104 @@ spec:
- operator: "Exists"
effect: "NoSchedule"
---
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
name: filebeat-syslog
spec:
type: filebeat
version: 8.4.3
elasticsearchRef:
name: elasticsearch
config:
logging:
level: warning
http:
enabled: true
port: 5066
filebeat:
inputs:
- type: syslog
format: rfc5424
protocol.udp:
host: "0.0.0.0:1514"
- type: syslog
format: rfc5424
protocol.tcp:
host: "0.0.0.0:1514"
deployment:
replicas: 2
podTemplate:
metadata:
annotations:
co.elastic.logs/enabled: 'false'
spec:
terminationGracePeriodSeconds: 30
containers:
- name: filebeat
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
ports:
- containerPort: 1514
name: syslog
protocol: UDP
volumeMounts:
- name: filebeat-registry
mountPath: /usr/share/filebeat/data
- name: exporter
image: sepa/beats-exporter
args:
- -p=5066
ports:
- containerPort: 8080
name: exporter
protocol: TCP
volumes:
- name: filebeat-registry
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: filebeat-syslog-udp
annotations:
external-dns.alpha.kubernetes.io/hostname: syslog.k-space.ee
metallb.universe.tf/allow-shared-ip: syslog.k-space.ee
spec:
type: LoadBalancer
externalTrafficPolicy: Local
loadBalancerIP: 172.20.51.4
ports:
- name: filebeat-syslog
port: 514
protocol: UDP
targetPort: 1514
selector:
beat.k8s.elastic.co/name: filebeat-syslog
---
apiVersion: v1
kind: Service
metadata:
name: filebeat-syslog-tcp
annotations:
external-dns.alpha.kubernetes.io/hostname: syslog.k-space.ee
metallb.universe.tf/allow-shared-ip: syslog.k-space.ee
spec:
type: LoadBalancer
externalTrafficPolicy: Local
loadBalancerIP: 172.20.51.4
ports:
- name: filebeat-syslog
port: 514
protocol: TCP
targetPort: 1514
selector:
beat.k8s.elastic.co/name: filebeat-syslog
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
@@ -148,12 +218,12 @@ kind: Elasticsearch
metadata:
name: elasticsearch
spec:
version: 8.4.1
version: 8.14.3
nodeSets:
- name: default
count: 3
count: 2
config:
node.store.allow_mmap: false
node.roles: [ "data_content", "data_hot", "ingest", "master", "remote_cluster_client", "data_cold" ]
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
@@ -163,19 +233,15 @@ spec:
resources:
requests:
storage: 5Gi
storageClassName: local-path
http:
tls:
selfSignedCertificate:
disabled: true
storageClassName: longhorn
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
name: kibana
spec:
version: 8.4.1
count: 2
version: 8.14.3
count: 1
elasticsearchRef:
name: elasticsearch
http:
@@ -186,16 +252,23 @@ spec:
server.publicBaseUrl: https://kibana.k-space.ee
xpack.reporting.enabled: false
xpack.apm.ui.enabled: false
xpack.security.authc.providers:
anonymous.anonymous1:
order: 0
credentials:
username: "elastic"
secureSettings:
- secretName: elasticsearch-es-elastic-user
entries:
- key: elastic
path: xpack.security.authc.providers.anonymous.anonymous1.credentials.password
podTemplate:
metadata:
annotations:
co.elastic.logs/enabled: 'false'
spec:
containers:
- name: kibana
readinessProbe:
httpGet:
path: /app/home
port: 5601
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 5
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
---
apiVersion: networking.k8s.io/v1
kind: Ingress
@@ -203,7 +276,6 @@ metadata:
name: kibana
annotations:
kubernetes.io/ingress.class: traefik
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
@@ -222,5 +294,51 @@ spec:
number: 5601
tls:
- hosts:
- kibana.k-space.ee
secretName: kibana-tls
- "*.k-space.ee"
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: filebeat
spec:
selector:
matchLabels:
beat.k8s.elastic.co/name: filebeat
podMetricsEndpoints:
- port: exporter
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: elasticsearch
spec:
selector:
matchLabels:
app.kubernetes.io/name: elasticsearch-exporter
podMetricsEndpoints:
- port: exporter
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: kibana
annotations:
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: 'true'
spec:
tls:
- hosts:
- '*.k-space.ee'
rules:
- host: kibana.k-space.ee
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: kibana-kb-http
port:
number: 5601

Some files were not shown because too many files have changed in this diff