338 Commits

Author SHA1 Message Date
9ef252c8ec hackerspace kustomize
+ move static env to dockerfile
+ doorboy-direct refactor
2025-08-14 01:19:43 +03:00
c29de936af move parts of CLUSTER doc to ansible, clarify doc 2025-08-14 01:17:34 +03:00
511f6f4ca1 disable mongodb-operator 2025-08-08 03:06:29 +03:00
9be8fc3a95 mongodb is all external 2025-08-08 03:03:49 +03:00
Erki Aas
18d181f36a Disable csi-proxmox 2025-08-08 00:45:29 +03:00
Erki Aas
88eae1c35c Allow rook on control plane nodes 2025-08-07 22:11:50 +03:00
Erki Aas
79ebad6730 Disable managing rook-ceph secrets 2025-08-07 22:05:27 +03:00
Erki Aas
24229639b4 Allow rook on control plane nodes 2025-08-07 21:42:03 +03:00
Erki Aas
71d0667009 Fix ceph storage classes 2025-08-07 21:17:03 +03:00
Erki Aas
ad35fc4828 Migrate storage classes to ceph 2025-08-07 21:06:44 +03:00
3e3814efbe mongodb-operator v0.13.0 2025-08-07 19:30:28 +03:00
b6ea5d3393 mongodb-operator v0.12.0 2025-08-07 19:26:34 +03:00
c5fd94c41b mongodb-operator v0.11.0 2025-08-07 19:26:00 +03:00
f5560f812b argocd mongodb-operator 2025-08-07 19:24:27 +03:00
bbf454f33d checksync dirs <-> argocd 2025-08-07 19:20:45 +03:00
7af3a2f751 kube-system extras to kustomize 2025-08-07 19:12:24 +03:00
ad865ad8b3 link passmower-members to passmower 2025-08-07 19:04:11 +03:00
835ed59970 up grafana doc 2025-08-07 19:02:11 +03:00
872469f1c6 add checksync.sh 2025-08-07 18:59:04 +03:00
6bbe84ecbb argocd kube-system (extras) autosync+prune 2025-08-07 18:54:09 +03:00
Erki Aas
86668b80a3 Fix external-snapshotter 2025-08-07 18:50:45 +03:00
b74a9682d6 docs for comparing ns <> dir <> argo 2025-08-07 18:49:50 +03:00
fb4eb6e285 sync namespace names with directory names 2025-08-07 18:49:50 +03:00
Erki Aas
d74e4fd76f Fix external-snapshotter 2025-08-07 18:46:53 +03:00
Erki Aas
605ad868bb Fix external-snapshotter 2025-08-07 18:44:39 +03:00
73ecae479b argocd hackerspace: fixup indenting 2025-08-07 18:36:29 +03:00
82311c86ff sync namespace names with directory names 2025-08-07 18:32:12 +03:00
42aef1e928 argocd: kube-system extras 2025-08-07 18:27:59 +03:00
f3ef2facdf disable unused cnpg 2025-08-07 18:03:59 +03:00
796e9394ca mysql-clusters not used 2025-08-07 18:03:28 +03:00
5f90a41009 revert DNS move for etherpad 2025-08-07 16:34:47 +03:00
c32c84f6ed mariadb: move hardcoded IP to DNS 2025-08-07 16:26:21 +03:00
Erki Aas
20704e3a24 Add kubernetes-csi/external-snapshotter 2025-08-04 19:40:03 +03:00
882ffdd92a passmower helm kustomize 2025-08-04 16:51:08 +03:00
f88d4bb8e2 move 0ac4364157 2025-08-04 10:11:56 +03:00
c2bb1cc5ac passmower: sync config drift 2025-08-04 09:08:59 +03:00
Erki Aas
4dc45594f1 Fix rook config 2025-08-03 20:41:28 +03:00
Erki Aas
103c4deff4 Fix rook config 2025-08-03 20:35:47 +03:00
Erki Aas
6b753d4bf1 Fix rook config 2025-08-03 20:32:15 +03:00
Erki Aas
f54b5469f8 Enable rook ceph toolbox 2025-08-03 20:19:11 +03:00
Erki Aas
6543c61f81 Enable rook ceph toolbox 2025-08-03 20:17:26 +03:00
Erki Aas
5232edc303 Fix rook chart 2025-08-03 19:52:42 +03:00
Erki Aas
995360f105 Dont create rook default pools etc 2025-08-03 19:48:48 +03:00
Erki Aas
159a41d782 Fix rook chart 2025-08-03 19:47:12 +03:00
Erki Aas
1acaa04123 Fix rook chart 2025-08-03 18:35:15 +03:00
Erki Aas
2526bb5516 Install rook-ceph (operator) and rook-ceph-cluster (external cluster) 2025-08-03 18:30:09 +03:00
Erki Aas
7e2acf3e94 Install rook-ceph (operator) and rook-ceph-cluster (external cluster) 2025-08-03 18:29:19 +03:00
Erki Aas
ee72ba4db2 Install rook-ceph (operator) and rook-ceph-cluster (external cluster) 2025-08-03 18:29:19 +03:00
6c6e396db1 pve: enable pve92, remove older nodes 2025-08-02 15:45:25 +03:00
Erki Aas
a675ad127b full ipv4/6 bgp mesh with router and pve 2025-08-01 22:45:47 +03:00
0029a7e709 README: Update networking wiki ref 2025-08-01 19:25:17 +03:00
e5c914b302 pve.k-space.ee: add pve9x 2025-07-31 10:37:03 +03:00
316fbde6e6 maybe name 2025-07-30 16:49:25 +03:00
2ae8c5b99e rosdump cleanup live debug :) 2025-07-30 16:27:04 +03:00
4f3a9058f9 rosdump: git commit logic + image expectations 2025-07-30 16:20:36 +03:00
a86f5bb250 Revert "add ttlSecondsAfterFinished to rosdump"
This reverts commit c65a75ee0e.

Should be handled by CronJob anyway.
2025-07-30 15:55:51 +03:00
Erki Aas
2754b4e2f7 Add startingDeadlineSeconds to rosdump cronjob 2025-07-30 12:39:19 +03:00
28d50548bf Add startingDeadlineSeconds to rosdump cronjob 2025-07-30 09:31:50 +00:00
c65a75ee0e add ttlSecondsAfterFinished to rosdump
failed jobs give up and don't get rescheduled
2025-07-30 12:06:54 +03:00
2a79309842 whouses nas.k-space.ee 2025-07-29 19:15:17 +03:00
0fc47cab2a minio-clusters tidy 2025-07-29 15:58:20 +03:00
02b7cde355 proxmox-csi: fixup node tolerations 2025-07-28 13:15:48 +03:00
ea358b3883 pve/pvs csi: disable unused nas 2025-07-27 04:46:57 +03:00
b4ae5d3f1f frigate: rm unused/undeployed PVE storage classes 2025-07-27 04:45:26 +03:00
4dddb9622c frigate: id user 2025-07-26 14:29:54 +03:00
c51f7368e2 argocd: use traefik's wildcard tls
It was getting its own argocd.k-space.ee via CR,
and probably it failed to update it since in reality
ingress.tls.enable was false. Maybe also different
ArgoCD versions.
2025-07-26 11:57:06 +03:00
7adbf2476d gitea v1.24.3 2025-07-26 11:43:11 +03:00
c71be24984 grafana: add plugin Infinity 2025-07-24 11:08:09 +03:00
67c97adc96 grafana forbids having secrets in secrets
three layers of "for god's sake, you wouldn't hand Grafana a secret"
probably the key in the secret reference is getting flagged
no error message, it is just dropped, but still
overrides env.. This seems to be a problem again
since Jan/Feb, with the accepted workaround being to move it into env.

Do as the docs don't say, and again, four times over?
2025-07-24 11:08:03 +03:00
ca4de329f7 kustomize grafana 2025-07-24 09:36:54 +03:00
b6098f92b0 hotfix: grafana upgrade
maintainer bot where are you
2025-07-24 05:08:22 +03:00
02bfe1dfa2 rm mktxp 2025-07-24 01:52:27 +03:00
541a060b6f Add mktxp 2025-07-24 01:29:31 +03:00
Arti Zirk
af3bd7bb41 Re-enable sw_core mikrotik-exporter
Hopefully this time, after RouterOS upgrades, they will not leak memory
anymore. (:
2025-07-22 18:11:50 +03:00
31800f8ffb proxmox-csi to argocd
The images were not pinned to a version before.
proxmox-csi was at :edge, and
its dependencies pinned to outdated/incompatible versions.
2025-07-22 02:02:44 +03:00
24b57de126 trim spaces in codebase 2025-07-22 01:44:26 +03:00
Erki Aas
6317daefa1 wildduck: disable zonemta dns caching 2025-07-20 13:58:09 +03:00
Erki Aas
31558db1d4 wildduck: improve config and add healthchecks 2025-07-20 13:30:13 +03:00
efb467e425 dragonfly: scale to 1 instance (hotfix)
eaas: dragonfly has 1 rw instance and applications
don't notice when the leader changes, so they don't
switch over from ro to rw.
2025-07-17 10:52:21 +03:00
130839ff7f rm reloader: very outdated and no consumers 2025-07-17 00:51:21 +03:00
6e6b3743a0 disable freeswitch
reported not working and not tracked with anything
2025-07-16 23:29:44 +03:00
6f2220445d disable _asterisk and destroy namespace
decision with eaas: currently broken, nobody has shown interest,
and maintaining kube takes first priority
2025-07-16 23:03:54 +03:00
bc731d98ec rm camtiler ecosystem in favour of frigate 2025-07-13 13:35:49 +03:00
3a0747d9b8 frigate: add remaining old cameras 2025-07-13 01:16:27 +03:00
792a0864a4 frigate: Fix TPU configuration 2025-07-12 22:04:34 +03:00
17f95e14cc frigate: WIP stuff until CEPH arrives 2025-07-12 21:39:39 +03:00
d3b85e4f24 frigate: Schedule frigate to dedicated nvr node 2025-07-12 20:48:02 +03:00
8525cef4fc frigate: argo autosync 2025-07-12 20:39:37 +03:00
c519fd3d6c frigate: rename rstp to rtsp 2025-07-12 20:38:44 +03:00
4408c22c5b argocd: Add Frigate 2025-07-12 20:28:58 +03:00
2041f5f80a frigate: Migrate to Kustomize 2025-07-12 20:27:50 +03:00
84b259ace4 argocd: metallb: Enable prune 2025-07-12 19:01:39 +03:00
f9fe0379da inventory: add refresh tokens 2025-07-12 18:55:06 +03:00
0359eedcb5 argocd: Add metallb 2025-07-12 18:39:44 +03:00
a03ea7d208 metallb: Migrate to kustomize + helm 2025-07-12 18:38:50 +03:00
c7cb495451 argo: Add Harbor 2025-07-12 16:39:15 +03:00
a6439a3bd1 harbor: Migrate to kustomize + ArgoCD
Still some stuff missing like proper DragonFly resources
2025-07-12 16:27:17 +03:00
754b2180fd goredirect: target k6.ee directly at traefik.k-space.ee
will be interesting how the cname works out
for ingress, it must be in the same IP space as traefik is on, otherwise dns points to an ip with nothing behind it
2025-07-09 20:34:09 +03:00
4f35c87a6c tidy f0b78f7b17 migration leftovers 2025-07-09 13:39:14 +03:00
266b8ee6aa redeploy (and update) frigate
longhorn is stuck in a loop attaching/detaching its storage
2025-06-30 03:08:42 +03:00
f726f8886a longhorn v1.8.2 2025-06-29 23:50:11 +03:00
fe128cf65e longhorn v1.7.3 2025-06-29 23:44:07 +03:00
7232957a04 traefik: move to kustomize
Closes #102
2025-06-29 18:51:56 +03:00
43ad7586ce traefik: fix dashboard root redirect
Closes #70
2025-06-29 16:48:39 +03:00
1b34a48e81 rosdump: move (secretful) result to secretspace 2025-06-29 16:26:05 +03:00
0d18bfd7cc rosdump: ignore ssh host keys 2025-06-29 16:26:05 +03:00
94751c44f9 rosdump: update device targets
- devices moved to sec, update DNS
- fix kube net policy CIDR
2025-06-29 16:25:32 +03:00
de36d70e68 rosdump: fixup 7d71f1b29c 2025-06-29 16:24:44 +03:00
efc2598160 k6: add-slug changed to directly add-by-slug
See also: inventory-app: 9c6902c5a2a90a6bd6a8fa93554f4dc353d9f777^
2025-06-24 18:45:42 +03:00
db935de1a5 gitea does not go through traefik 2025-06-18 19:48:53 +03:00
885f4b505e revert 28daa56bad 2025-06-18 18:46:06 +03:00
aab40b012d cert-manager to argo kustomize helm 2025-06-18 18:21:35 +03:00
28daa56bad cert-manager: rename to default-cluster-cert-issuer
much easier, vs ctrl-f for 'default'
2025-06-18 17:53:28 +03:00
a1e1dcf827 traefik: drop harbor already default issuer 2025-06-18 16:55:50 +03:00
bb1c313a37 inventory: add MACADDRESS_OUTLINK_BASEURL env 2025-05-25 17:25:19 +03:00
d7d83b37f4 freescout: not quite OIDC 2025-05-21 21:29:58 +03:00
0ac4364157 passmower: disable NORMALIZE_EMAIL_ADDRESSES
see comment in file
2025-05-21 20:48:53 +03:00
b8e525c3e0 passmower: texts: K-SPACE in all capital 2025-05-21 19:53:11 +03:00
92db22fd09 docs: there is no keydb 2025-05-03 16:26:32 +03:00
4466878b54 docs: drone is replaced 2025-05-03 15:11:11 +03:00
9b93075543 move members repo to secretspace 2025-05-03 15:05:59 +03:00
ce2e6568b1 wildduck: add mailservice group
#65
2025-04-22 12:33:45 +03:00
f82caf1751 rm unused kdoorpi
- doors are outside of this cluster
- kdoorpi is superseded by godoor
- 0 pods running
2025-04-21 03:16:51 +03:00
d9877a9fc5 tigera-operator: v3.29.3 2025-04-20 22:03:54 +03:00
13cfeeff2b tigera-operator: v3.28.4 2025-04-20 22:03:54 +03:00
21e70685f3 tigera-operator: sync configuration drift 2025-04-20 22:03:50 +03:00
6d7cdbd9c6 tigera-operator to argo (v3.28.1) 2025-04-20 21:32:02 +03:00
10585c7aff dragonfly: v1.1.11 2025-04-20 19:27:28 +03:00
bc301104fe dragonfly: to argo (v1.1.6) 2025-04-20 19:27:24 +03:00
853c9717a9 rm unused opensearch
formerly about to be used by graylog,
which itself has been replaced twice over
2025-04-20 19:18:59 +03:00
ec81c34086 ripe87 to argo 2025-04-20 19:18:59 +03:00
0b713ab321 shared/minio is already dead 2025-04-20 19:18:59 +03:00
541607a7bd cnpg: v1.25.1 2025-04-20 19:18:59 +03:00
d9dce6cadf cnpg to argo (v1.24.1) 2025-04-20 19:18:59 +03:00
0447abecdc rm postgres-operator (4th competing postgres?) 2025-04-20 19:18:59 +03:00
61f7d724b5 argo: secret-claim-operator to git 2025-04-20 19:18:59 +03:00
f899283fdb argo: tidy 2025-04-20 19:18:59 +03:00
fb3123966e keydb (and redis) is dead 2025-04-20 19:18:54 +03:00
5b29fbe7cd prometheus-operator: v0.82.0 2025-04-20 19:06:37 +03:00
9fb356b5a6 prometheus-operator: v0.81.0 2025-04-20 19:06:37 +03:00
908f482396 prometheus-operator: v0.80.1 2025-04-20 19:06:37 +03:00
715cb5ce4b prometheus-operator: v0.79.2 2025-04-20 19:06:37 +03:00
48915ec26c prometheus-operator: v0.78.2 2025-04-20 19:06:37 +03:00
06324bb583 prometheus-operator: v0.77.2 2025-04-20 19:06:37 +03:00
877662445a prometheus-operator: v0.76.2 2025-04-20 19:06:37 +03:00
22b67fa4fc prometheus-operator: migrate to argo+kustomize
v0.75.1 - same as in cluster currently
2025-04-20 19:06:37 +03:00
006240ee1a sync cluster deviation: pve-csi storageclass provisioners
minio-clusters: kustomization; disable unused and outdated shared and dedicated
2025-04-20 19:06:37 +03:00
2a26b4e94c traefik: drop already-enforced router.tls=true annotation 2025-04-20 19:06:37 +03:00
4e59984fe4 woodpecker: fixup assumptions 2025-04-20 19:06:32 +03:00
7eadbee7a2 argo: enable helm in kustomize + update 2025-04-20 19:01:39 +03:00
a94fddff1e woodpecker: recreate to v3 on kustomize 2025-04-20 19:01:39 +03:00
bf44e4fa9b partial revert 3243ed1066786288956ecd7afbedf05104018721 2025-04-20 19:01:39 +03:00
f7f7d52e70 Revert "convert reloader to helm"
Failed sync attempt to 2.1.0: one or more objects failed to apply,
reason: Deployment.apps "reloader-reloader" is invalid:
spec.template.metadata.labels: Invalid value:
map[string]string{"app.kubernetes.io/instance":"reloader",
"app.kubernetes.io/managed-by":"Helm",
"app.kubernetes.io/name":"reloader",
"app.kubernetes.io/version":"v1.4.0", "group":"com.stakater.platform",
"helm.sh/chart":"reloader-2.1.0", "provider":"stakater",
"version":"v1.4.0"}: `selector` does not match template `labels`
(retried 5 times).

This reverts commit db1f33df6d28da34a973678ff576032a445dd39f.
2025-04-20 19:01:39 +03:00
cf9d686882 mirror.gcr.io
and explicit latest tag
2025-04-20 19:01:39 +03:00
5bd0a57417 explicitly use docker library 2025-04-20 19:01:39 +03:00
e22713b282 pin and update 2025-04-20 19:01:39 +03:00
37a8031bc4 minor version updates 2025-04-20 19:01:39 +03:00
095e00b516 nextcloud: 31.0.2 2025-04-20 19:01:39 +03:00
4d84a0a5ca nextcloud: 30.0.8 2025-04-20 19:01:39 +03:00
73f03dbb2a nextcloud: 29.0.14 2025-04-20 19:01:39 +03:00
0c5d2bc792 nextcloud: 28.0.14 2025-04-20 19:01:38 +03:00
6cf53505ad nextcloud: 27.1.13 2025-04-20 19:01:38 +03:00
a694463fad nextcloud 26.0.13 2025-04-20 19:01:38 +03:00
d1eeba377d nextcloud: current version 2025-04-20 19:01:38 +03:00
0628cb94e4 convert reloader to helm 2025-04-20 19:01:38 +03:00
376e74a985 harbor update 2025-04-20 19:01:38 +03:00
6eb0c20175 disable discourse
- posts and user list manually exported
- not in argo
- outdated version
- e-mail is broken
- nobody has accessed in 6mo
- no posts, apart from the initial admin
2025-04-20 19:01:38 +03:00
4bf08fdc7f disable camtiler 2025-04-20 19:01:30 +03:00
f05b1f1324 openebs already disabled 2025-04-18 23:10:38 +03:00
5fa3144e23 logging namespace already disabled 2025-04-18 23:10:38 +03:00
48054078e2 local-path-storage already unused, for 2y 2025-04-18 23:10:38 +03:00
4cf4aecea9 playground is already disabled 2025-04-18 23:10:38 +03:00
8d1c24b80f disable whoami-oidc (broken) 2025-04-18 23:10:38 +03:00
0dcd26fe4f traefik: combined tls 2025-04-18 19:21:24 +03:00
e33053bf79 goredirect: bind workaround 2025-04-18 19:18:56 +03:00
e632b90d2b bind: enable k6.ee 2025-04-18 18:47:22 +03:00
3b5df4cd43 bind: cleanup mail.k-space.ee present in wildduck/dns.yaml 2025-04-18 18:41:18 +03:00
a280a19772 inventory: k6 tls 2025-04-18 18:41:18 +03:00
19e6f53d96 inventory: rm namespace
provided by argo / kubectl command anyway
except for role-bindings, they don't get it
2025-04-18 18:41:16 +03:00
e9efee4853 inventory: fix orphaned selectors 2025-04-18 16:56:19 +03:00
a33d0d12b0 gitea: also disable passkeys to enforce OIDC 2025-04-18 14:46:58 +03:00
dc42a9612a gitea: update and disable passwd login
Closes #11
2025-04-18 14:38:49 +03:00
6f48e3a53a Inventory Minio Quota 1 → 10 Gi
Closes k-space/inventory-app#27
2025-04-11 16:28:58 +03:00
09423ace42 rm unneeded deprecated flag 2025-03-27 09:06:07 +02:00
bb802882ae add Aktiva to non-SSO listing 2025-02-25 23:10:51 +02:00
4a7dfd6435 fix passmower email login link 2025-01-09 13:02:54 +02:00
Erki Aas
fb7504cfee force traefik to all worker nodes 2025-01-02 20:35:22 +02:00
Erki Aas
a4b9bdf89d frigate: make config storage larger 2025-01-02 20:24:17 +02:00
602b4a03f6 frigate: use coral for detect, nvidia gpu for transcode and longhorn for config storage 2025-01-02 20:19:48 +02:00
f9ad582136 allow scheduling longhorn on nvr 2025-01-02 20:19:48 +02:00
305b8ec038 add nvidia-device-plugin to use nvr gpu 2025-01-02 20:19:48 +02:00
7d71f1b29c fix rosdump 2025-01-02 20:19:48 +02:00
0e79aa8f4e passmower: 4/4 replicas (for pve localhost) 2025-01-02 01:25:04 +02:00
a784f00c71 argo: autosync passmower 2025-01-02 01:19:22 +02:00
b71a872c09 argo: passmower helm + extras didn't work out
Kustomize should be able to auto-generate Helm as well.
2025-01-02 01:02:23 +02:00
21beb2332c argo: add passmower 2025-01-02 00:53:04 +02:00
8eed4f66c1 pve: add pve2 2025-01-02 00:24:56 +02:00
75b9948997 pve: fmt port.number on same line 2025-01-02 00:24:47 +02:00
e4dfde9562 argo docs 2 2024-12-15 06:34:47 +02:00
a82193f059 add argocd-image-updater 2024-12-15 06:28:42 +02:00
68a75b8389 migrate OIDC codemowers.io/v1alpha1 to v1beta1 2024-12-15 05:39:41 +02:00
5368fe90eb argo: add localhost callback for CLI login 2024-12-15 05:39:41 +02:00
cded6fde3f fixup argo docs 2024-12-15 05:39:41 +02:00
402ff86fde grafana: disable non-oauth login 2024-12-15 01:46:22 +02:00
272f60ab73 monitoring: mikrotik-exporter fix 2024-11-22 08:16:12 +02:00
9bcad2481b monitoring: Update node-exporter 2024-11-22 05:59:34 +02:00
c04a7b7f67 monitoring: Update mikrotik-exporter 2024-11-22 05:59:08 +02:00
c23fa07c5e monitoring: Update mikrotik-exporter 2024-11-19 15:48:31 +02:00
Erki Aas
c1822888ec dont compile discourse assets 2024-10-25 14:44:27 +03:00
Erki Aas
e26cac6d86 add discourse 2024-10-25 14:35:20 +03:00
Erki Aas
d7ba4bc90e upgrade cnpg 2024-10-25 14:03:50 +03:00
Erki Aas
da4df6c21d frigate: move storage to dedicated nfs share and offload transcoding to separate go2rtc deployment 2024-10-19 13:51:13 +03:00
2964034cd3 fix rosdump scheduling 2024-10-18 18:45:42 +03:00
ae525380b1 fix gitea oidc reg 2024-10-18 18:44:27 +03:00
4b9c3ad394 monitoring: Temporarily disable monitoring of core switches 2024-10-15 10:07:28 +03:00
dbebb39749 gitea: Bump version 2024-10-02 08:15:20 +03:00
Erki Aas
6f15e45402 freeswitch: fix network policy 2024-10-01 22:32:16 +03:00
Erki Aas
36bf431259 freeswitch: fix network policy 2024-10-01 20:27:08 +03:00
Erki Aas
c14a313c57 frigate: enable recording and use openvino 2024-09-29 23:06:41 +03:00
Erki Aas
15a2fd9375 add frigate 2024-09-29 21:34:31 +03:00
Erki Aas
5bd6cf2317 freeswitch: add gitignore 2024-09-29 19:05:42 +03:00
Erki Aas
407f691152 add freeswitch 2024-09-29 19:05:42 +03:00
Erki Aas
e931f490c2 asterisk: update network policy 2024-09-29 19:05:42 +03:00
Erki Aas
b96e8d16a6 expose harbor via traefik 2024-09-29 19:05:42 +03:00
Erki Aas
15d4d44be7 expose traefik via ingress 2024-09-29 19:05:42 +03:00
Erki Aas
52ce6eab0a expose harbor via traefik 2024-09-29 19:04:51 +03:00
e89d045f38 goredirect: add nopath env var 2024-09-13 21:54:49 +03:00
7e70315514 monitoring: Fix snmp-exporter 2024-09-12 22:15:10 +03:00
af5a048bcd replace ups 2024-09-12 21:54:46 +03:00
0005219f81 monitoring: Fix mikrotik-exporter formatting 2024-09-12 21:48:43 +03:00
813bb32e48 monitoring: Update UPS-es 2024-09-12 21:47:20 +03:00
0efae7baf9 unschedule harbor from storage nodes 2024-09-12 19:48:51 +03:00
be90b4e266 monitoring: Update mikrotik-exporter 2024-09-09 22:19:46 +03:00
999d17c384 rosdump: Use codemowers/git image 2024-09-09 08:45:21 +03:00
Erki Aas
bacef8d438 remove logmower 2024-09-08 23:54:32 +03:00
60d1ba9b18 monitoring: Bump mikrotik-exporter again 2024-09-06 12:10:45 +03:00
dcb80e6638 monitoring: Bump mikrotik-exporter 2024-09-06 11:55:49 +03:00
95e0f97db2 grafana: Specify OIDC scopes on Grafana side 2024-09-05 09:32:34 +03:00
f5a7b44ae6 grafana: Add groups OIDC scope 2024-09-05 09:29:16 +03:00
be7e1d9459 grafana: Assign editor role for hackerspace members 2024-09-05 09:23:41 +03:00
cd807ebcde grafana: Allow OIDC assignment to admin role 2024-09-05 09:04:02 +03:00
eaac7f61a7 monitoring: Pin specific mikrotik-exporter image 2024-09-04 23:29:37 +03:00
Erki Aas
a0d5a585e4 add and configure calico ippool 2024-09-04 23:12:35 +03:00
1f8f288f95 monitoring: Update Mikrotik exporter 2024-09-04 22:33:15 +03:00
9de1881647 monitoring: Enable Prometheus admin API 2024-09-04 22:28:01 +03:00
Erki Aas
28904cdd63 make calico use ipip encapsulation 2024-09-04 22:27:36 +03:00
0df188db36 monitoring: README syntax fix 2024-09-04 07:12:56 +03:00
a42b79b5ac monitoring: Add doc.crds.dev ref 2024-09-04 07:12:21 +03:00
Erki Aas
89875a66f8 update passmower config 2024-08-29 14:38:44 +03:00
927366a3d5 inventory: add groups scope 2024-08-28 16:15:07 +03:00
Erki Aas
29212d7f14 passmower: get charts from ghcr 2024-08-27 15:58:05 +03:00
1d8528b312 argocd: Move to DragonflyDB and add resource customizations 2024-08-27 12:41:24 +03:00
566beecb6a Create dummy/stub entries in auth.k-space.ee 2024-08-26 23:51:04 +03:00
Erki Aas
4c52ca88ef add proxmox-nas storage class 2024-08-25 11:34:31 +03:00
b5fceb0f35 Update storage classes 2024-08-25 09:26:57 +03:00
c609b1df04 wildduck: Restore MongoDB 2024-08-25 09:26:27 +03:00
22d65664b2 whoami: Set higher port 2024-08-25 00:25:49 +03:00
59db08e891 whoami: Fix memory limit 2024-08-25 00:22:54 +03:00
d8402bdec5 whoami: Drop privileges 2024-08-25 00:21:24 +03:00
a71bd5de37 whoami: Add resource limits 2024-08-25 00:17:34 +03:00
ce9891046f wildduck: Add resource limits 2024-08-25 00:12:25 +03:00
fea3e8ce66 nextcloud: Fix Dragonfly topology spread constraints 2024-08-25 00:02:51 +03:00
bfeba4017b monitoring: Add revisionHistoryLimit: 0 2024-08-24 23:58:07 +03:00
4b00d876ad nextcloud: Set resource limits 2024-08-24 23:31:22 +03:00
d1e8d8e356 bind: Fix resource limits 2024-08-24 23:28:28 +03:00
22c6fe1979 bind: Add resource limits 2024-08-24 23:25:40 +03:00
f53b31e030 bind: Use topology spread constraint instead of anti affinity rules 2024-08-24 23:22:34 +03:00
cb41b739cc passmower: Fix Dragonfly topology spread constraints 2024-08-24 23:05:13 +03:00
91af1911c4 rm users, now at k-space/members 2024-08-24 22:53:35 +03:00
Erki Aas
4532eccd6d proxy image artefacts through harbor 2024-08-24 19:36:10 +03:00
Erki Aas
d4913aacbf add netshoot container to debug network issues 2024-08-24 19:23:35 +03:00
Erki Aas
abe022eecc update argo readme 2024-08-24 19:23:17 +03:00
Erki Aas
4bcb0a8856 fix members argo app 2024-08-24 19:19:27 +03:00
Erki Aas
b849ac340e fix members argo app 2024-08-24 19:12:31 +03:00
Erki Aas
b922412417 fix members argo app 2024-08-24 19:10:14 +03:00
Erki Aas
2661fe211e manage members (oidcusers) with argocd 2024-08-24 19:05:53 +03:00
Erki Aas
a9406748c5 manage members (oidcusers) with argocd 2024-08-24 19:01:40 +03:00
Erki Aas
cc92ea67f4 upgrade wildduck components 2024-08-24 17:44:19 +03:00
Erki Aas
222d902ec2 cleanup old oidc-gateway 2024-08-24 16:29:24 +03:00
Erki Aas
65e30d5dec migrate most storage classes to proxmox-csi, allow it on masters 2024-08-24 16:29:24 +03:00
4210855827 freescout: Elaborate about mail sync 2024-08-24 15:49:05 +03:00
d7287018ac monitoring: Specify resource limits 2024-08-24 12:36:37 +03:00
3fbecab179 Move to PVE CSI provider 2024-08-24 12:15:30 +03:00
Erki Aas
024edc1c9b expose harbor via dedicated lb on storage nodes 2024-08-23 21:35:04 +03:00
Erki Aas
a94a3f829c expose harbor via dedicated lb on storage nodes 2024-08-23 21:35:04 +03:00
Erki Aas
36055cc869 migrate nextcloud to dragonfly 2024-08-23 21:35:04 +03:00
Erki Aas
aa91322ec6 remove grafana pv as it's using db now 2024-08-23 21:35:04 +03:00
c6c94b1901 test proxmox csi 2024-08-23 17:10:55 +03:00
67fb6c3727 Consolidate monitoring stack to Kube master nodes 2024-08-23 08:00:23 +03:00
Erki Aas
18483197c9 fix passmower image pulling 2024-08-22 14:12:54 +03:00
Erki Aas
a37d268574 temporarily disable middleware from pve 2024-08-22 14:12:54 +03:00
4b5e30f51f monitoring: Revert snmp-exporter because config file needs to be updated 2024-08-21 07:20:32 +03:00
78b0f1534a monitoring: Use gcr mirror for node exporter 2024-08-21 07:19:15 +03:00
0b03a720b3 monitoring: Bump versions, use gcr mirror 2024-08-21 07:17:05 +03:00
f1a2051838 monitoring: Move to topologySpreadConstraints 2024-08-21 07:11:06 +03:00
3280b25a83 Add more revisionHistoryLimit: 1 defs 2024-08-20 12:25:15 +03:00
0eec1fde8b gitea: Add revisionHistoryLimit 2024-08-20 12:21:36 +03:00
ede08c205b grafana: Use declarative data sources 2024-08-20 12:14:42 +03:00
666d900128 Restore minio storage class 2024-08-16 18:50:33 +03:00
bc31357d5b Integrate dos4dev PR #29: postgres-cluster docs 2024-08-16 18:07:45 +03:00
f3244afb20 woodpecker: Use RWX 2024-08-15 22:23:45 +03:00
Erki Aas
384a60244d update readme about network 2024-08-15 13:40:22 +03:00
Erki Aas
ed25720003 run traefik with 4 replicas 2024-08-15 12:43:08 +03:00
Erki Aas
5c1a894a43 add goredirect service manifest 2024-08-15 11:11:20 +03:00
0a9237fae9 wildduck: Limit CPU for Dragonfly 2024-08-15 10:58:34 +03:00
69dca7e1f2 wildduck: Add topologySpreadConstraints for Dragonfly 2024-08-15 09:52:38 +03:00
4d5c47e21b wildduck: Refined Dragonfly cleanup 2024-08-15 09:49:48 +03:00
b3f1eb069f wildduck: Cleanups 2024-08-15 09:37:24 +03:00
bbf421df63 wildduck: Use recreate strategy to avoid Kube scheduling deadlock 2024-08-15 09:24:16 +03:00
Erki Aas
9bf5e2408a migrate workers to infra vlan, use bgp for calico, use calico for lb service announcements 2024-08-14 18:16:21 +03:00
351f0ae746 Remove more Mongoose 2024-08-14 11:02:45 +03:00
84bb476812 Mongo migrated to external Mongo, removing in-cluster Mongo definitions temporarily 2024-08-14 11:00:26 +03:00
07a132748b Restore mongo storage class 2024-08-14 10:49:46 +03:00
656f28a34c Move yamllint config to separate file 2024-08-14 10:30:08 +03:00
12466b19b1 bind, cert-manager: More updates 2024-08-14 10:07:26 +03:00
1d39827375 bind, cert-manager: Cleanups 2024-08-14 10:04:41 +03:00
3f4d89b4b1 dragonfly-operator-system: Add grep example 2024-08-14 09:33:45 +03:00
474ae64156 tigera-operator: Update README 2024-08-14 09:19:00 +03:00
1fa0577ce4 passmower: Cleanup 2024-08-14 08:12:37 +03:00
f8cd93aa9c passmower: Fix Dragonfly topology spread constraints 2024-08-14 07:55:24 +03:00
e22bf78b2e dragonfly-operator-system: Add Redis license notice 2024-08-14 07:53:55 +03:00
be5b036ab8 longhorn-system: Reddit link 2024-08-14 07:42:24 +03:00
a75f703eaa longhorn-system: Update README 2024-08-14 07:41:25 +03:00
2708e48850 longhorn-system: README fix 2024-08-14 07:37:23 +03:00
cfc5a739a1 longhorn-system: Updates 2024-08-14 07:36:31 +03:00
e5e4a07d01 dragonfly-operator-system: Update README 2024-08-14 07:08:26 +03:00
f902bbfe02 dragonfly-operator-system: Update README 2024-08-14 07:00:16 +03:00
70e589ef45 etherpad: Cleanup 2024-08-14 06:58:28 +03:00
b0befbcd69 freescout: Cleanup 2024-08-14 06:57:36 +03:00
Erki Aas
a09f7d4f7e remove rawfile-csi 2024-08-13 20:27:16 +03:00
Erki Aas
2f2fa1a99f migrate inventory to external s3 2024-08-13 20:18:58 +03:00
Erki Aas
66fbf32088 migrate wildduck to external mongo 2024-08-13 20:18:47 +03:00
9b698ea197 freescout: Remove unused reset-oidc-config.yaml 2024-08-13 14:51:33 +03:00
7aa26ea236 passmower: Add topologySpreadConstraints 2024-08-13 14:50:25 +03:00
7c16f84200 monitoring: Elaborate more about operator 2024-08-12 22:15:32 +03:00
c2d08d8a80 monitoring: Update README.md 2024-08-12 22:06:28 +03:00
7c2b862ca8 Move Ansible directory to separate repo 2024-08-12 21:41:36 +03:00
Erki Aas
68e936463b chore: make tegra jetson a misc node 2024-08-12 11:45:35 +03:00
275 changed files with 4442 additions and 22809 deletions

4
.gitignore vendored
View File

@@ -5,6 +5,10 @@
*.save
*.1
# Kustomize with Helm and secrets:
charts/
*.env
### IntelliJ IDEA ###
.idea
*.iml

4
.yamllint Normal file
View File

@@ -0,0 +1,4 @@
extends: default
ignore-from-file: .gitignore
rules:
line-length: disable

View File

@@ -35,7 +35,6 @@ users:
- get-token
- --oidc-issuer-url=https://auth.k-space.ee/
- --oidc-client-id=passmower.kubelogin
- --oidc-use-pkce
- --oidc-extra-scope=profile,email,groups
- --listen-address=127.0.0.1:27890
command: kubectl
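For orientation, a hedged sketch of the full kubelogin `users` stanza this hunk sits in, following the upstream kubectl oidc-login exec-plugin layout; the entry name and the leading `oidc-login` argument are assumptions, only the flags shown in the hunk are taken from the file:
```
users:
  - name: oidc
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: kubectl
        args:
          - oidc-login
          - get-token
          - --oidc-issuer-url=https://auth.k-space.ee/
          - --oidc-client-id=passmower.kubelogin
          - --oidc-extra-scope=profile,email,groups
          - --listen-address=127.0.0.1:27890
```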
@@ -62,44 +61,24 @@ Network → VPN → `IPv6` → Search domains (Otsingudomeenid): `kube.k-space.e
Created Ubuntu 22.04 VMs on Proxmox with local storage.
Added some ARM64 workers by using Ubuntu 22.04 server on Raspberry Pi.
After the machines have booted up and you can reach them via SSH:
```
# Disable Ubuntu caching DNS resolver
systemctl disable systemd-resolved.service
systemctl stop systemd-resolved
rm -fv /etc/resolv.conf
cat > /etc/resolv.conf << EOF
nameserver 1.1.1.1
nameserver 8.8.8.8
EOF
# Disable multipathd as Longhorn handles that itself
systemctl mask multipathd snapd
systemctl disable --now multipathd snapd bluetooth ModemManager hciuart wpa_supplicant packagekit
# Permit root login
sed -i -e 's/PermitRootLogin no/PermitRootLogin without-password/' /etc/ssh/sshd_config
systemctl reload ssh
cat ~ubuntu/.ssh/authorized_keys > /root/.ssh/authorized_keys
userdel -f ubuntu
apt-get install -yqq linux-image-generic
apt-get remove -yq cloud-init linux-image-*-kvm
```
On master:
First master:
```
kubeadm init --token-ttl=120m --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint "master.kube.k-space.ee:6443" --upload-certs --apiserver-cert-extra-sans master.kube.k-space.ee --node-name master1.kube.k-space.ee
```
For the `kubeadm join` command specify FQDN via `--node-name $(hostname -f)`.
Joining nodes:
```
# On a master:
kubeadm token create --print-join-command
# Joining node:
<printed join command> --node-name "$(hostname -f)"
```
Set AZ labels:
```
for j in $(seq 1 9); do
for t in master mon worker storage; do
for t in master mon worker; do
kubectl label nodes ${t}${j}.kube.k-space.ee topology.kubernetes.io/zone=node${j}
done
done
@@ -116,11 +95,6 @@ for j in $(seq 1 4); do
kubectl taint nodes mon${j}.kube.k-space.ee dedicated=monitoring:NoSchedule
kubectl label nodes mon${j}.kube.k-space.ee dedicated=monitoring
done
for j in $(seq 1 4); do
kubectl taint nodes storage${j}.kube.k-space.ee dedicated=storage:NoSchedule
kubectl label nodes storage${j}.kube.k-space.ee dedicated=storage
done
```
For `arm64` nodes add suitable taint to prevent scheduling non-multiarch images on them:
@@ -138,13 +112,6 @@ for j in ground front back; do
done
```
To reduce wear on storage:
```
echo StandardOutput=null >> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
systemctl restart kubelet
```
## Technology mapping
Our self-hosted Kubernetes stack compared to AWS based deployments:

View File

@@ -6,15 +6,17 @@ Kubernetes manifests, Ansible [playbooks](ansible/README.md), and documentation
- Debugging Kubernetes [on Wiki](https://wiki.k-space.ee/en/hosting/debugging-kubernetes)
- Need help? → [`#kube`](https://k-space-ee.slack.com/archives/C02EYV1NTM2)
Jump to docs: [inventory-app](hackerspace/README.md) / [cameras](camtiler/README.md) / [doors](https://wiki.k-space.ee/en/hosting/doors) / [list of apps](https://auth.k-space.ee) // [all infra](ansible/inventory.yml) / [network](https://wiki.k-space.ee/en/hosting/network/sensitive) / [retro](https://wiki.k-space.ee/en/hosting/retro) / [non-infra](https://wiki.k-space.ee)
Jump to docs: [inventory-app](hackerspace/README.md) / [cameras](_disabled/camtiler/README.md) / [doors](https://wiki.k-space.ee/en/hosting/doors) / [list of apps](https://auth.k-space.ee) // [all infra](ansible/inventory.yml) / [network](https://wiki.k-space.ee/en/hosting/network) / [retro](https://wiki.k-space.ee/en/hosting/retro) / [non-infra](https://wiki.k-space.ee)
Tip: Search the repo for `kind: xyz` for examples.
## Supporting services
- Build [Git](https://git.k-space.ee) repositories with [Woodpecker](https://woodpecker.k-space.ee).
- Build [Git](https://git.k-space.ee) repositories with [Woodpecker](https://woodpecker.k-space.ee)[^nodrone].
- Passmower: Authz with `kind: OIDCClient` (or `kind: OIDCMiddlewareClient`[^authz]).
- Traefik[^nonginx]: Expose services with `kind: Service` + `kind: Ingress` (TLS and DNS **included**).
[^nodrone]: Replaces Drone CI.
### Additional
- bind: Manage _additional_ DNS records with `kind: DNSEndpoint`.
- [Prometheus](https://wiki.k-space.ee/en/hosting/monitoring): Collect metrics with `kind: PodMonitor` (alerts with `kind: PrometheusRule`).
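As an illustration for the bind bullet above, a minimal sketch of an external-dns `DNSEndpoint` (the API group and field names follow the upstream external-dns CRD; hostname and target are placeholders):
```
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: example-extra-records
spec:
  endpoints:
    - dnsName: example.k-space.ee   # placeholder hostname
      recordType: CNAME
      targets:
        - traefik.k-space.ee        # placeholder target
      recordTTL: 300
```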
@@ -24,23 +26,47 @@ Tip: Search the repo for `kind: xyz` for examples.
[^nonginx]: No nginx annotations! Use `kind: Ingress` instead. `IngressRoute` is not used as it doesn't support [`external-dns`](bind/README.md) out of the box.
[^authz]: Applications should use OpenID Connect (`kind: OIDCClient`) for authentication, wherever possible. If not possible, use a `kind: OIDCMiddlewareClient`, which provides authentication via a Traefik middleware (`traefik.ingress.kubernetes.io/router.middlewares: passmower-proxmox@kubernetescrd`). Sometimes you might use both for extra security.
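To illustrate the footnote above, a hedged sketch of the middleware-based pattern: an `OIDCMiddlewareClient` plus an Ingress referencing the generated Traefik middleware. The spec fields and the `<namespace>-<name>@kubernetescrd` middleware naming are assumptions derived from the footnote, not verified against the Passmower operator:
```
---
apiVersion: codemowers.cloud/v1beta1
kind: OIDCMiddlewareClient
metadata:
  name: example
  namespace: example
spec:
  displayName: Example app            # assumed field
  uri: https://example.k-space.ee     # assumed field
  allowedGroups:                      # assumed field
    - k-space:floor
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  namespace: example
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    # middleware name assumed to follow <namespace>-<name>@kubernetescrd
    traefik.ingress.kubernetes.io/router.middlewares: example-example@kubernetescrd
    external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
  rules:
    - host: example.k-space.ee
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example
                port:
                  name: http
  tls:
    - hosts:
        - "*.k-space.ee"
```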
### Network
All nodes are in Infra VLAN 21. Routing is implemented with BGP; all nodes and the router form a full mesh. Both Service LB IPs and Pod IPs are advertised to the router. The router does NAT for outbound pod traffic.
See the [Calico installation](tigera-operator/application.yml) for the Kube side, and Routing / BGP on the router.
Static routes for 193.40.103.36/30 have been added on the pve nodes to make their communication with Passmower via Traefik more stable; otherwise packets coming back to the PVE are routed directly via VLAN 21 internal IPs by the worker nodes, breaking TCP.
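For orientation only, a generic sketch of the Calico BGP objects this setup implies; the cluster's real configuration lives in tigera-operator/application.yml and on the router, and the ASN and peer address below are placeholders:
```
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  nodeToNodeMeshEnabled: true
  asNumber: 64512                  # placeholder private ASN
  serviceLoadBalancerIPs:
    - cidr: 193.40.103.36/30       # example: the Service LB range mentioned above
---
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: router
spec:
  peerIP: 172.21.0.1               # placeholder router address in VLAN 21
  asNumber: 64512
```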
<!-- Linked to by https://wiki.k-space.ee/e/en/hosting/storage -->
### Databases / -stores:
- KeyDB: `kind: KeydbClaim` (replaces Redis[^redisdead])
- Dragonfly: `kind: Dragonfly` (replaces Redis[^redisdead])
- Longhorn: `storageClassName: longhorn` (filesystem storage)
- Mongo[^mongoproblems]: `kind: MongoDBCommunity` (NAS* `inventory-mongodb`)
- Minio S3: `kind: MinioBucketClaim` with `class: dedicated` (NAS*: `class: external`); see the sketch after the footnotes below
- MariaDB*: search for `mysql`, `mariadb`[^mariadb] (replaces MySQL)
- Postgres*: hardcoded to [harbor/application.yml](harbor/application.yml)
- Seeded secrets: `kind: SecretClaim` (generates random secret in templated format)
- Secrets in git: https://git.k-space.ee/secretspace (members personal info, API credentials, see argocd/deploy_key.pub comment)
\* External, hosted directly on [nas.k-space.ee](https://wiki.k-space.ee/en/hosting/storage)
[^mariadb]: As of 2024-07-30 used by auth, authelia, bitwarden, etherpad, freescout, git, grafana, nextcloud, wiki, woodpecker
[^redisdead]: Redis has been replaced as redis-operator couldn't handle itself: didn't reconcile after reboots, master URI was empty, and clients complained about missing masters. ArgoCD still hosts its own Redis.
[^redisdead]: Redis has been replaced as redis-operator couldn't handle itself: didn't reconcile after reboots, master URI was empty, and clients complained about missing masters. Dragonfly replaces KeyDB.
[^mongoproblems]: Mongo problems: Incompatible with rawfile csi (wiredtiger.wt corrupts), complicated resizing (PVCs from statefulset PVC template).
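A hedged sketch of the `MinioBucketClaim` pattern from the list above; the API group mirrors the other codemowers.cloud claims in this repo, and the `capacity`/`class` field names are assumptions:
```
apiVersion: codemowers.cloud/v1beta1
kind: MinioBucketClaim
metadata:
  name: example
spec:
  capacity: 1Gi      # assumed field name for the bucket quota
  class: dedicated   # per the list above; `external` for the NAS-hosted cluster
```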
***
_This page is referenced by wiki [front page](https://wiki.k-space.ee) as **the** technical documentation for infra._
## nas.k-space.ee pre-migration whouses listing
- S3: [minio-clusters](minio-clusters/README.md)
- postgres: only harbor, 172.20.43.1
### mongodb
- inventory
- wildduck
### mariadb.infra.k-space.ee (DNS from ns1 to 172.20.36.1)
- freescout
- gitea NB! MYSQL_ROOT_PASSWORD seems to be invalid, might be ok to reset it upstream
- wiki
- nextcloud
- etherpad NB! probably NOT using kspace_etherpad_kube; NB! does not resolve DNS, likely due to netpol, hardcoded to 172.20.36.1
- grafana
- woodpecker

View File

@@ -0,0 +1,23 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: cnpg # aka in-cluster postgres
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: https://github.com/cloudnative-pg/cloudnative-pg
targetRevision: v1.25.1
path: releases
directory:
include: 'cnpg-1.25.1.yaml'
destination:
server: 'https://kubernetes.default.svc'
namespace: cnpg-system
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true
- ServerSideApply=true # Resource is too big to fit in 262144 bytes allowed annotation size.

View File

@@ -0,0 +1,21 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: mongodb-operator
namespace: argocd
spec:
project: k-space.ee
source:
# also depends on git@git.k-space.ee:secretspace/kube.git
repoURL: git@git.k-space.ee:k-space/kube.git
targetRevision: HEAD
path: mongodb-operator
destination:
server: 'https://kubernetes.default.svc'
namespace: mongodb-operator
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

View File

@@ -9,3 +9,5 @@ Should ArgoCD be down manifests here can be applied with:
```
kubectl apply -n asterisk -f application.yaml
```
asterisk-secrets was dumped to git.k-space.ee/secretspace/kube:_disabled/asterisk

View File

@@ -0,0 +1,39 @@
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: asterisk
spec:
podSelector:
matchLabels:
app: asterisk
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
- from:
- ipBlock:
cidr: 100.101.0.0/16
- from:
- ipBlock:
cidr: 100.102.0.0/16
- from:
- ipBlock:
cidr: 81.90.125.224/32 # Lauri home
- from:
- ipBlock:
cidr: 172.20.8.241/32 # Erki A
- from:
- ipBlock:
cidr: 212.47.211.10/32 # Elisa SIP
egress:
- to:
- ipBlock:
cidr: 212.47.211.10/32 # Elisa SIP

View File

@@ -0,0 +1,24 @@
# proxmox-csi
1. create role in pve if it doesn't exist
2. create user and assign permissions, preferably at resource pool level
```
pveum user add ks-kubernetes-csi@pve
pveum aclmod /pool/kspace_pool -user ks-kubernetes-csi@pve -role CSI
pveum user token add ks-kubernetes-csi@pve cs -privsep 0
```
save the token!
3. apply `proxmox-csi-plugin.yml` and `storage-class.yaml`, delete proxmox-csi default storage classes from kube.
4. add the token from pve to `config.yaml` (see the sketch after step 5) and create the secret: `kubectl -n csi-proxmox create secret generic proxmox-csi-plugin --from-file=config.yaml`
5. label the nodes according to allocation:
```
kubectl --kubeconfig ~/.kube/k-space label nodes worker1.kube.k-space.ee topology.kubernetes.io/region=pve-cluster topology.kubernetes.io/zone=pve1 --overwrite
kubectl --kubeconfig ~/.kube/k-space label nodes worker2.kube.k-space.ee topology.kubernetes.io/region=pve-cluster topology.kubernetes.io/zone=pve2 --overwrite
kubectl --kubeconfig ~/.kube/k-space label nodes worker3.kube.k-space.ee topology.kubernetes.io/region=pve-cluster topology.kubernetes.io/zone=pve8 --overwrite
kubectl --kubeconfig ~/.kube/k-space label nodes worker4.kube.k-space.ee topology.kubernetes.io/region=pve-cluster topology.kubernetes.io/zone=pve9 --overwrite
kubectl --kubeconfig ~/.kube/k-space label nodes master1.kube.k-space.ee topology.kubernetes.io/region=pve-cluster topology.kubernetes.io/zone=pve1 --overwrite
kubectl --kubeconfig ~/.kube/k-space label nodes master2.kube.k-space.ee topology.kubernetes.io/region=pve-cluster topology.kubernetes.io/zone=pve2 --overwrite
kubectl --kubeconfig ~/.kube/k-space label nodes master3.kube.k-space.ee topology.kubernetes.io/region=pve-cluster topology.kubernetes.io/zone=pve8 --overwrite
```
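For step 4, a hedged sketch of what `config.yaml` typically contains for proxmox-csi-plugin; field names follow the upstream plugin's example config, the URL and token secret are placeholders, and the region should match the `topology.kubernetes.io/region` label used in step 5:
```
clusters:
  - url: https://pve.k-space.ee:8006/api2/json    # placeholder PVE API endpoint
    insecure: false
    token_id: "ks-kubernetes-csi@pve!cs"          # user and token from step 2
    token_secret: "<token printed in step 2>"
    region: pve-cluster                           # matches the region label in step 5
```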

View File

@@ -0,0 +1,31 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: csi-proxmox
helmCharts:
- includeCRDs: true
name: &name proxmox-csi-plugin
releaseName: *name
repo: oci://ghcr.io/sergelogvinov/charts
valuesInline:
node:
tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
storageClass:
- name: proxmox
fstype: xfs
storage: ks-pvs
cache: none
ssd: "true"
# Not in use, migrating off of NAS…
# - name: proxmox-nas
# fstype: xfs
# storage: ks-pvs-nas
# cache: none
# # ssd is false, https://github.com/sergelogvinov/proxmox-csi-plugin/issues/404
version: 0.3.12 # https://github.com/sergelogvinov/proxmox-csi-plugin/pkgs/container/charts%2Fproxmox-csi-plugin
resources:
- ssh://git@git.k-space.ee/secretspace/kube/proxmox-csi # secrets: proxmox-csi-plugin:config.yaml (cluster info)

View File

@@ -0,0 +1,382 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: discourse
annotations:
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
spec:
tls:
- hosts:
- "*.k-space.ee"
secretName:
rules:
- host: "discourse.k-space.ee"
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: discourse
port:
name: http
---
apiVersion: v1
kind: Service
metadata:
name: discourse
spec:
type: ClusterIP
ipFamilyPolicy: SingleStack
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
selector:
app.kubernetes.io/instance: discourse
app.kubernetes.io/name: discourse
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: discourse
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: discourse
annotations:
reloader.stakater.com/auto: "true"
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/instance: discourse
app.kubernetes.io/name: discourse
strategy:
type: Recreate
template:
metadata:
labels:
app.kubernetes.io/instance: discourse
app.kubernetes.io/name: discourse
spec:
serviceAccountName: discourse
securityContext:
fsGroup: 0
fsGroupChangePolicy: Always
initContainers:
containers:
- name: discourse
image: docker.io/bitnami/discourse:3.3.2-debian-12-r0
imagePullPolicy: "IfNotPresent"
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- CHOWN
- SYS_CHROOT
- FOWNER
- SETGID
- SETUID
- DAC_OVERRIDE
drop:
- ALL
privileged: false
readOnlyRootFilesystem: false
runAsGroup: 0
runAsNonRoot: false
runAsUser: 0
seLinuxOptions: {}
seccompProfile:
type: RuntimeDefault
env:
- name: BITNAMI_DEBUG
value: "true"
- name: DISCOURSE_USERNAME
valueFrom:
secretKeyRef:
name: discourse-password
key: username
- name: DISCOURSE_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-password
key: password
- name: DISCOURSE_PORT_NUMBER
value: "8080"
- name: DISCOURSE_EXTERNAL_HTTP_PORT_NUMBER
value: "80"
- name: DISCOURSE_DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-postgresql
key: password
- name: POSTGRESQL_CLIENT_CREATE_DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-postgres-superuser
key: password
- name: POSTGRESQL_CLIENT_POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-postgres-superuser
key: password
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-redis
key: redis-password
envFrom:
- configMapRef:
name: discourse
- secretRef:
name: discourse-email
ports:
- name: http
containerPort: 8080
protocol: TCP
livenessProbe:
tcpSocket:
port: http
initialDelaySeconds: 500
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
readinessProbe:
httpGet:
path: /srv/status
port: http
initialDelaySeconds: 100
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
resources:
limits:
cpu: "6.0"
ephemeral-storage: 2Gi
memory: 12288Mi
requests:
cpu: "1.0"
ephemeral-storage: 50Mi
memory: 3072Mi
volumeMounts:
- name: discourse-data
mountPath: /bitnami/discourse
subPath: discourse
- name: sidekiq
image: docker.io/bitnami/discourse:3.3.2-debian-12-r0
imagePullPolicy: "IfNotPresent"
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- CHOWN
- SYS_CHROOT
- FOWNER
- SETGID
- SETUID
- DAC_OVERRIDE
drop:
- ALL
privileged: false
readOnlyRootFilesystem: false
runAsGroup: 0
runAsNonRoot: false
runAsUser: 0
seLinuxOptions: {}
seccompProfile:
type: RuntimeDefault
command:
- /opt/bitnami/scripts/discourse/entrypoint.sh
args:
- /opt/bitnami/scripts/discourse-sidekiq/run.sh
env:
- name: BITNAMI_DEBUG
value: "true"
- name: DISCOURSE_USERNAME
valueFrom:
secretKeyRef:
name: discourse-password
key: username
- name: DISCOURSE_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-password
key: password
- name: DISCOURSE_DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-postgresql
key: password
- name: DISCOURSE_POSTGRESQL_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-postgres-superuser
key: password
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-redis
key: redis-password
envFrom:
- configMapRef:
name: discourse
- secretRef:
name: discourse-email
livenessProbe:
exec:
command: ["/bin/sh", "-c", "pgrep -f ^sidekiq"]
initialDelaySeconds: 500
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
readinessProbe:
exec:
command: ["/bin/sh", "-c", "pgrep -f ^sidekiq"]
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
resources:
limits:
cpu: 750m
ephemeral-storage: 2Gi
memory: 768Mi
requests:
cpu: 500m
ephemeral-storage: 50Mi
memory: 512Mi
volumeMounts:
- name: discourse-data
mountPath: /bitnami/discourse
subPath: discourse
volumes:
- name: discourse-data
persistentVolumeClaim:
claimName: discourse-data
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: discourse-data
namespace: discourse
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "3Gi"
storageClassName: "proxmox-nas"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: discourse
namespace: discourse
data:
DISCOURSE_HOST: "discourse.k-space.ee"
DISCOURSE_SKIP_INSTALL: "yes"
DISCOURSE_PRECOMPILE_ASSETS: "no"
DISCOURSE_SITE_NAME: "K-Space Discourse"
DISCOURSE_USERNAME: "k-space"
DISCOURSE_EMAIL: "dos4dev@k-space.ee"
DISCOURSE_REDIS_HOST: "discourse-redis"
DISCOURSE_REDIS_PORT_NUMBER: "6379"
DISCOURSE_DATABASE_HOST: "discourse-postgres-rw"
DISCOURSE_DATABASE_PORT_NUMBER: "5432"
DISCOURSE_DATABASE_NAME: "discourse"
DISCOURSE_DATABASE_USER: "discourse"
POSTGRESQL_CLIENT_DATABASE_HOST: "discourse-postgres-rw"
POSTGRESQL_CLIENT_DATABASE_PORT_NUMBER: "5432"
POSTGRESQL_CLIENT_POSTGRES_USER: "postgres"
POSTGRESQL_CLIENT_CREATE_DATABASE_NAME: "discourse"
POSTGRESQL_CLIENT_CREATE_DATABASE_EXTENSIONS: "hstore,pg_trgm"
---
apiVersion: codemowers.cloud/v1beta1
kind: OIDCClient
metadata:
name: discourse
namespace: discourse
spec:
displayName: Discourse
uri: https://discourse.k-space.ee
redirectUris:
- https://discourse.k-space.ee/auth/oidc/callback
allowedGroups:
- k-space:floor
- k-space:friends
grantTypes:
- authorization_code
- refresh_token
responseTypes:
- code
availableScopes:
- openid
- profile
pkce: false
---
apiVersion: codemowers.cloud/v1beta1
kind: SecretClaim
metadata:
name: discourse-redis
namespace: discourse
spec:
size: 32
mapping:
- key: redis-password
value: "%(plaintext)s"
- key: REDIS_URI
value: "redis://:%(plaintext)s@discourse-redis"
---
apiVersion: dragonflydb.io/v1alpha1
kind: Dragonfly
metadata:
name: discourse-redis
namespace: discourse
spec:
authentication:
passwordFromSecret:
key: redis-password
name: discourse-redis
replicas: 3
resources:
limits:
cpu: 1000m
memory: 1Gi
topologySpreadConstraints:
- maxSkew: 1
topologyKey: topology.kubernetes.io/zone
whenUnsatisfiable: DoNotSchedule
labelSelector:
matchLabels:
app: discourse-redis
app.kubernetes.io/part-of: dragonfly
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
name: discourse-postgres
namespace: discourse
spec:
instances: 1
enableSuperuserAccess: true
bootstrap:
initdb:
database: discourse
owner: discourse
secret:
name: discourse-postgresql
dataChecksums: true
encoding: 'UTF8'
storage:
size: 10Gi
storageClass: postgres

1
_disabled/freeswitch/.gitignore vendored Normal file
View File

@@ -0,0 +1 @@
PASSWORDS.xml

View File

@@ -0,0 +1,14 @@
<include>
<X-PRE-PROCESS cmd="set" data="default_password=">
<X-PRE-PROCESS cmd="set" data="ipcall_password="/>
<X-PRE-PROCESS cmd="set" data="1000_password="/>
<X-PRE-PROCESS cmd="set" data="1001_password="/>
<X-PRE-PROCESS cmd="set" data="1002_password="/>
<X-PRE-PROCESS cmd="set" data="1003_password="/>
<X-PRE-PROCESS cmd="set" data="1004_password="/>
<X-PRE-PROCESS cmd="set" data="1005_password="/>
<X-PRE-PROCESS cmd="set" data="1006_password="/>
<X-PRE-PROCESS cmd="set" data="1007_password="/>
<X-PRE-PROCESS cmd="set" data="1008_password="/>
<X-PRE-PROCESS cmd="set" data="1009_password="/>
</include>

View File

@@ -0,0 +1,7 @@
```
kubectl -n freeswitch create secret generic freeswitch-passwords --from-file freeswitch/PASSWORDS.xml
```
PASSWORDS.xml is in git.k-space.ee/secretspace/kube:_disabled/freeswitch
freeswitch-sounds was extracted from http://files.freeswitch.org/releases/sounds/freeswitch-sounds-en-us-callie-32000-1.0.53.tar.gz (with /us/ at the root of the volume)

View File

@@ -0,0 +1,567 @@
apiVersion: v1
kind: Service
metadata:
name: freeswitch
namespace: freeswitch
annotations:
external-dns.alpha.kubernetes.io/hostname: freeswitch.k-space.ee
metallb.universe.tf/address-pool: eenet
metallb.universe.tf/ip-allocated-from-pool: eenet
spec:
ports:
- name: sip-internal-udp
protocol: UDP
port: 5060
targetPort: 5060
nodePort: 31787
- name: sip-nat-udp
protocol: UDP
port: 5070
targetPort: 5070
nodePort: 32241
- name: sip-external-udp
protocol: UDP
port: 5080
targetPort: 5080
nodePort: 31354
- name: sip-data-10000
protocol: UDP
port: 10000
targetPort: 10000
nodePort: 30786
- name: sip-data-10001
protocol: UDP
port: 10001
targetPort: 10001
nodePort: 31788
- name: sip-data-10002
protocol: UDP
port: 10002
targetPort: 10002
nodePort: 30247
- name: sip-data-10003
protocol: UDP
port: 10003
targetPort: 10003
nodePort: 32389
- name: sip-data-10004
protocol: UDP
port: 10004
targetPort: 10004
nodePort: 30723
- name: sip-data-10005
protocol: UDP
port: 10005
targetPort: 10005
nodePort: 30295
- name: sip-data-10006
protocol: UDP
port: 10006
targetPort: 10006
nodePort: 30782
- name: sip-data-10007
protocol: UDP
port: 10007
targetPort: 10007
nodePort: 32165
- name: sip-data-10008
protocol: UDP
port: 10008
targetPort: 10008
nodePort: 30282
- name: sip-data-10009
protocol: UDP
port: 10009
targetPort: 10009
nodePort: 31325
- name: sip-data-10010
protocol: UDP
port: 10010
targetPort: 10010
nodePort: 31234
selector:
app: freeswitch
type: LoadBalancer
externalTrafficPolicy: Local
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
internalTrafficPolicy: Cluster
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: freeswitch-sounds
namespace: freeswitch
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 2Gi
storageClassName: longhorn
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: freeswitch
namespace: freeswitch
labels:
app: freeswitch
annotations:
reloader.stakater.com/auto: "true" # reloader is disabled in cluster, (re)deploy it to use
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: freeswitch
template:
metadata:
labels:
app: freeswitch
spec:
volumes:
- name: config
configMap:
name: freeswitch-config
defaultMode: 420
- name: directory
configMap:
name: freeswitch-directory
defaultMode: 420
- name: sounds
persistentVolumeClaim:
claimName: freeswitch-sounds
- name: passwords
secret:
secretName: freeswitch-passwords
containers:
- name: freeswitch
image: mirror.gcr.io/dheaps/freeswitch:latest
env:
- name: SOUND_TYPES
value: en-us-callie
- name: SOUND_RATES
value: "32000"
resources: {}
volumeMounts:
- name: config
mountPath: /etc/freeswitch/sip_profiles/external/ipcall.xml
subPath: ipcall.xml
- name: config
mountPath: /etc/freeswitch/dialplan/default/00_outbound_ipcall.xml
subPath: 00_outbound_ipcall.xml
- name: config
mountPath: /etc/freeswitch/dialplan/public.xml
subPath: dialplan.xml
- name: config
mountPath: /etc/freeswitch/autoload_configs/switch.conf.xml
subPath: switch.xml
- name: config
mountPath: /etc/freeswitch/vars.xml
subPath: vars.xml
- name: passwords
mountPath: /etc/freeswitch/PASSWORDS.xml
subPath: PASSWORDS.xml
- name: directory
mountPath: /etc/freeswitch/directory/default
- name: sounds
mountPath: /usr/share/freeswitch/sounds
---
apiVersion: v1
kind: ConfigMap
metadata:
name: freeswitch-config
namespace: freeswitch
data:
dialplan.xml: |
<!--
NOTICE:
This context is usually accessed via the external sip profile listening on port 5080.
It is recommended to have separate inbound and outbound contexts. Not only for security
but clearing up why you would need to do such a thing. You don't want outside un-authenticated
callers hitting your default context which allows dialing calls thru your providers and results
in Toll Fraud.
-->
<!-- http://wiki.freeswitch.org/wiki/Dialplan_XML -->
<include>
<context name="public">
<extension name="unloop">
<condition field="${unroll_loops}" expression="^true$"/>
<condition field="${sip_looped_call}" expression="^true$">
<action application="deflect" data="${destination_number}"/>
</condition>
</extension>
<!--
Tag anything pass thru here as an outside_call so you can make sure not
to create any routing loops based on the conditions that it came from
the outside of the switch.
-->
<extension name="outside_call" continue="true">
<condition>
<action application="set" data="outside_call=true"/>
<action application="export" data="RFC2822_DATE=${strftime(%a, %d %b %Y %T %z)}"/>
</condition>
</extension>
<extension name="call_debug" continue="true">
<condition field="${call_debug}" expression="^true$" break="never">
<action application="info"/>
</condition>
</extension>
<extension name="public_extensions">
<condition field="destination_number" expression="^(10[01][0-9])$">
<action application="transfer" data="$1 XML default"/>
</condition>
</extension>
<extension name="public_conference_extensions">
<condition field="destination_number" expression="^(3[5-8][01][0-9])$">
<action application="transfer" data="$1 XML default"/>
</condition>
</extension>
<!--
You can place files in the public directory to get included.
-->
<X-PRE-PROCESS cmd="include" data="public/*.xml"/>
<!--
If you have made it this far lets challenge the caller and if they authenticate
lets try what they dialed in the default context. (commented out by default)
-->
<!-- TODO:
<extension name="check_auth" continue="true">
<condition field="${sip_authorized}" expression="^true$" break="never">
<anti-action application="respond" data="407"/>
</condition>
</extension>
-->
<extension name="transfer_to_default">
<condition>
<!-- TODO: proper ring grouping -->
<action application="bridge" data="user/1004@freeswitch.k-space.ee,user/1003@freeswitch.k-space.ee,sofia/gateway/ipcall/53543824"/>
</condition>
</extension>
</context>
</include>
ipcall.xml: |
<include>
<gateway name="ipcall">
<param name="proxy" value="sip.ipcall.ee"/>
<param name="register" value="true"/>
<param name="realm" value="sip.ipcall.ee"/>
<param name="username" value="6659652"/>
<param name="password" value="$${ipcall_password}"/>
<param name="from-user" value="6659652"/>
<param name="from-domain" value="sip.ipcall.ee"/>
<param name="extension" value="ring_group/default"/>
</gateway>
</include>
00_outbound_ipcall.xml: |
<extension name="outbound">
<!-- TODO: check toll_allow ? -->
<condition field="destination_number" expression="^(\d+)$">
<action application="set" data="sip_invite_domain=sip.ipcall.ee"/>
<action application="bridge" data="sofia/gateway/ipcall/${destination_number}"/>
</condition>
</extension>
switch.xml: |
<configuration name="switch.conf" description="Core Configuration">
<cli-keybindings>
<key name="1" value="help"/>
<key name="2" value="status"/>
<key name="3" value="show channels"/>
<key name="4" value="show calls"/>
<key name="5" value="sofia status"/>
<key name="6" value="reloadxml"/>
<key name="7" value="console loglevel 0"/>
<key name="8" value="console loglevel 7"/>
<key name="9" value="sofia status profile internal"/>
<key name="10" value="sofia profile internal siptrace on"/>
<key name="11" value="sofia profile internal siptrace off"/>
<key name="12" value="version"/>
</cli-keybindings>
<default-ptimes>
</default-ptimes>
<settings>
<param name="colorize-console" value="true"/>
<param name="dialplan-timestamps" value="false"/>
<param name="max-db-handles" value="50"/>
<param name="db-handle-timeout" value="10"/>
<param name="max-sessions" value="1000"/>
<param name="sessions-per-second" value="30"/>
<param name="loglevel" value="debug"/>
<param name="mailer-app" value="sendmail"/>
<param name="mailer-app-args" value="-t"/>
<param name="dump-cores" value="yes"/>
<param name="rtp-start-port" value="10000"/>
<param name="rtp-end-port" value="10010"/>
</settings>
</configuration>
vars.xml: |
<include>
<X-PRE-PROCESS cmd="set" data="disable_system_api_commands=true"/>
<X-PRE-PROCESS cmd="set" data="sound_prefix=$${sounds_dir}/en/us/callie"/>
<X-PRE-PROCESS cmd="set" data="domain=freeswitch.k-space.ee"/>
<X-PRE-PROCESS cmd="set" data="domain_name=$${domain}"/>
<X-PRE-PROCESS cmd="set" data="hold_music=local_stream://moh"/>
<X-PRE-PROCESS cmd="set" data="use_profile=external"/>
<X-PRE-PROCESS cmd="set" data="rtp_sdes_suites=AEAD_AES_256_GCM_8|AEAD_AES_128_GCM_8|AES_CM_256_HMAC_SHA1_80|AES_CM_192_HMAC_SHA1_80|AES_CM_128_HMAC_SHA1_80|AES_CM_256_HMAC_SHA1_32|AES_CM_192_HMAC_SHA1_32|AES_CM_128_HMAC_SHA1_32|AES_CM_128_NULL_AUTH"/>
<X-PRE-PROCESS cmd="set" data="global_codec_prefs=OPUS,G722,PCMU,PCMA,H264,VP8"/>
<X-PRE-PROCESS cmd="set" data="outbound_codec_prefs=OPUS,G722,PCMU,PCMA,H264,VP8"/>
<X-PRE-PROCESS cmd="set" data="xmpp_client_profile=xmppc"/>
<X-PRE-PROCESS cmd="set" data="xmpp_server_profile=xmpps"/>
<X-PRE-PROCESS cmd="set" data="bind_server_ip=auto"/>
<X-PRE-PROCESS cmd="stun-set" data="external_rtp_ip=host:freeswitch.k-space.ee"/>
<X-PRE-PROCESS cmd="stun-set" data="external_sip_ip=host:freeswitch.k-space.ee"/>
<X-PRE-PROCESS cmd="set" data="unroll_loops=true"/>
<X-PRE-PROCESS cmd="set" data="outbound_caller_name=FreeSWITCH"/>
<X-PRE-PROCESS cmd="set" data="outbound_caller_id=0000000000"/>
<X-PRE-PROCESS cmd="set" data="call_debug=false"/>
<X-PRE-PROCESS cmd="set" data="console_loglevel=info"/>
<X-PRE-PROCESS cmd="set" data="default_areacode=372"/>
<X-PRE-PROCESS cmd="set" data="default_country=EE"/>
<X-PRE-PROCESS cmd="set" data="presence_privacy=false"/>
<X-PRE-PROCESS cmd="set" data="au-ring=%(400,200,383,417);%(400,2000,383,417)"/>
<X-PRE-PROCESS cmd="set" data="be-ring=%(1000,3000,425)"/>
<X-PRE-PROCESS cmd="set" data="ca-ring=%(2000,4000,440,480)"/>
<X-PRE-PROCESS cmd="set" data="cn-ring=%(1000,4000,450)"/>
<X-PRE-PROCESS cmd="set" data="cy-ring=%(1500,3000,425)"/>
<X-PRE-PROCESS cmd="set" data="cz-ring=%(1000,4000,425)"/>
<X-PRE-PROCESS cmd="set" data="de-ring=%(1000,4000,425)"/>
<X-PRE-PROCESS cmd="set" data="dk-ring=%(1000,4000,425)"/>
<X-PRE-PROCESS cmd="set" data="dz-ring=%(1500,3500,425)"/>
<X-PRE-PROCESS cmd="set" data="eg-ring=%(2000,1000,475,375)"/>
<X-PRE-PROCESS cmd="set" data="es-ring=%(1500,3000,425)"/>
<X-PRE-PROCESS cmd="set" data="fi-ring=%(1000,4000,425)"/>
<X-PRE-PROCESS cmd="set" data="fr-ring=%(1500,3500,440)"/>
<X-PRE-PROCESS cmd="set" data="hk-ring=%(400,200,440,480);%(400,3000,440,480)"/>
<X-PRE-PROCESS cmd="set" data="hu-ring=%(1250,3750,425)"/>
<X-PRE-PROCESS cmd="set" data="il-ring=%(1000,3000,400)"/>
<X-PRE-PROCESS cmd="set" data="in-ring=%(400,200,425,375);%(400,2000,425,375)"/>
<X-PRE-PROCESS cmd="set" data="jp-ring=%(1000,2000,420,380)"/>
<X-PRE-PROCESS cmd="set" data="ko-ring=%(1000,2000,440,480)"/>
<X-PRE-PROCESS cmd="set" data="pk-ring=%(1000,2000,400)"/>
<X-PRE-PROCESS cmd="set" data="pl-ring=%(1000,4000,425)"/>
<X-PRE-PROCESS cmd="set" data="ro-ring=%(1850,4150,475,425)"/>
<X-PRE-PROCESS cmd="set" data="rs-ring=%(1000,4000,425)"/>
<X-PRE-PROCESS cmd="set" data="ru-ring=%(800,3200,425)"/>
<X-PRE-PROCESS cmd="set" data="sa-ring=%(1200,4600,425)"/>
<X-PRE-PROCESS cmd="set" data="tr-ring=%(2000,4000,450)"/>
<X-PRE-PROCESS cmd="set" data="uk-ring=%(400,200,400,450);%(400,2000,400,450)"/>
<X-PRE-PROCESS cmd="set" data="us-ring=%(2000,4000,440,480)"/>
<X-PRE-PROCESS cmd="set" data="bong-ring=v=-7;%(100,0,941.0,1477.0);v=-7;>=2;+=.1;%(1400,0,350,440)"/>
<X-PRE-PROCESS cmd="set" data="beep=%(1000,0,640)"/>
<X-PRE-PROCESS cmd="set" data="sit=%(274,0,913.8);%(274,0,1370.6);%(380,0,1776.7)"/>
<X-PRE-PROCESS cmd="set" data="df_us_ssn=(?!219099999|078051120)(?!666|000|9\d{2})\d{3}(?!00)\d{2}(?!0{4})\d{4}"/>
<X-PRE-PROCESS cmd="set" data="df_luhn=?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|6(?:011|5[0-9]{2})[0-9]{12}|(?:2131|1800|35\d{3})\d{11}"/>
<XX-PRE-PROCESS cmd="set" data="digits_dialed_filter=(($${df_luhn})|($${df_us_ssn}))"/>
<X-PRE-PROCESS cmd="set" data="default_provider=sip.ipcall.ee"/>
<X-PRE-PROCESS cmd="set" data="default_provider_username="/>
<X-PRE-PROCESS cmd="set" data="default_provider_password="/>
<X-PRE-PROCESS cmd="set" data="default_provider_from_domain=sip.ipcall.ee"/>
<X-PRE-PROCESS cmd="set" data="default_provider_register=true"/>
<X-PRE-PROCESS cmd="set" data="default_provider_contact=1004"/>
<X-PRE-PROCESS cmd="set" data="sip_tls_version=tlsv1,tlsv1.1,tlsv1.2"/>
<X-PRE-PROCESS cmd="set" data="sip_tls_ciphers=ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH"/>
<X-PRE-PROCESS cmd="set" data="internal_auth_calls=true"/>
<X-PRE-PROCESS cmd="set" data="internal_sip_port=5060"/>
<X-PRE-PROCESS cmd="set" data="internal_tls_port=5061"/>
<X-PRE-PROCESS cmd="set" data="internal_ssl_enable=false"/>
<X-PRE-PROCESS cmd="set" data="external_auth_calls=false"/>
<X-PRE-PROCESS cmd="set" data="external_sip_port=5080"/>
<X-PRE-PROCESS cmd="set" data="external_tls_port=5081"/>
<X-PRE-PROCESS cmd="set" data="external_ssl_enable=false"/>
<X-PRE-PROCESS cmd="set" data="rtp_video_max_bandwidth_in=3mb"/>
<X-PRE-PROCESS cmd="set" data="rtp_video_max_bandwidth_out=3mb"/>
<X-PRE-PROCESS cmd="set" data="suppress_cng=true"/>
<X-PRE-PROCESS cmd="set" data="rtp_liberal_dtmf=true"/>
<X-PRE-PROCESS cmd="set" data="video_mute_png=$${images_dir}/default-mute.png"/>
<X-PRE-PROCESS cmd="set" data="video_no_avatar_png=$${images_dir}/default-avatar.png"/>
<X-PRE-PROCESS cmd="include" data="PASSWORDS.xml"/>
</include>
---
apiVersion: v1
kind: ConfigMap
metadata:
name: freeswitch-directory
namespace: freeswitch
data:
1000.xml: |
<include>
<user id="1000">
<params>
<param name="password" value="$${1000_password}"/>
</params>
<variables>
<variable name="toll_allow" value="domestic,local"/>
<variable name="accountcode" value="1000"/>
<variable name="user_context" value="default"/>
<variable name="effective_caller_id_name" value=""/>
<variable name="effective_caller_id_number" value="1000"/>
<variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
<variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
</variables>
</user>
</include>
1001.xml: |
<include>
<user id="1001">
<params>
<param name="password" value="$${1001_password}"/>
</params>
<variables>
<variable name="toll_allow" value="domestic,local"/>
<variable name="accountcode" value="1001"/>
<variable name="user_context" value="default"/>
<variable name="effective_caller_id_name" value=""/>
<variable name="effective_caller_id_number" value="1001"/>
<variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
<variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
</variables>
</user>
</include>
1002.xml: |
<include>
<user id="1002">
<params>
<param name="password" value="$${1002_password}"/>
</params>
<variables>
<variable name="toll_allow" value="domestic,local"/>
<variable name="accountcode" value="1002"/>
<variable name="user_context" value="default"/>
<variable name="effective_caller_id_name" value=""/>
<variable name="effective_caller_id_number" value="1002"/>
<variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
<variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
</variables>
</user>
</include>
1003.xml: |
<include>
<user id="1003">
<params>
<param name="password" value="$${1003_password}"/>
</params>
<variables>
<variable name="toll_allow" value="domestic,local"/>
<variable name="accountcode" value="1003"/>
<variable name="user_context" value="default"/>
<variable name="effective_caller_id_name" value="Erki A"/>
<variable name="effective_caller_id_number" value="1003"/>
<variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
<variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
</variables>
</user>
</include>
1004.xml: |
<include>
<user id="1004">
<params>
<param name="password" value="$${1004_password}"/>
</params>
<variables>
<variable name="toll_allow" value="domestic,local"/>
<variable name="accountcode" value="1004"/>
<variable name="user_context" value="default"/>
<variable name="effective_caller_id_name" value="Erki A"/>
<variable name="effective_caller_id_number" value="1004"/>
<variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
<variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
</variables>
</user>
</include>
1005.xml: |
<include>
<user id="1005">
<params>
<param name="password" value="$${1005_password}"/>
</params>
<variables>
<variable name="toll_allow" value="domestic,local"/>
<variable name="accountcode" value="1005"/>
<variable name="user_context" value="default"/>
<variable name="effective_caller_id_name" value=""/>
<variable name="effective_caller_id_number" value="1005"/>
<variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
<variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
</variables>
</user>
</include>
1006.xml: |
<include>
<user id="1006">
<params>
<param name="password" value="$${1006_password}"/>
</params>
<variables>
<variable name="toll_allow" value="domestic,local"/>
<variable name="accountcode" value="1006"/>
<variable name="user_context" value="default"/>
<variable name="effective_caller_id_name" value=""/>
<variable name="effective_caller_id_number" value="1006"/>
<variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
<variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
</variables>
</user>
</include>
1007.xml: |
<include>
<user id="1007">
<params>
<param name="password" value="$${1007_password}"/>
</params>
<variables>
<variable name="toll_allow" value="domestic,local"/>
<variable name="accountcode" value="1007"/>
<variable name="user_context" value="default"/>
<variable name="effective_caller_id_name" value=""/>
<variable name="effective_caller_id_number" value="1007"/>
<variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
<variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
</variables>
</user>
</include>
1008.xml: |
<include>
<user id="1008">
<params>
<param name="password" value="$${1008_password}"/>
</params>
<variables>
<variable name="toll_allow" value="domestic,local"/>
<variable name="accountcode" value="1008"/>
<variable name="user_context" value="default"/>
<variable name="effective_caller_id_name" value=""/>
<variable name="effective_caller_id_number" value="1008"/>
<variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
<variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
</variables>
</user>
</include>
1009.xml: |
<include>
<user id="1009">
<params>
<param name="password" value="$${1009_password}"/>
</params>
<variables>
<variable name="toll_allow" value="domestic,local"/>
<variable name="accountcode" value="1009"/>
<variable name="user_context" value="default"/>
<variable name="effective_caller_id_name" value=""/>
<variable name="effective_caller_id_number" value="1009"/>
<variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
<variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
</variables>
</user>
</include>

View File

@@ -2,11 +2,11 @@
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: asterisk
name: freeswitch
spec:
podSelector:
matchLabels:
app: asterisk
app: freeswitch
policyTypes:
- Ingress
- Egress
@@ -32,14 +32,18 @@ spec:
cidr: 172.20.8.241/32 # Erki A
- from:
- ipBlock:
cidr: 195.222.16.36/32 # Elisa SIP
cidr: 212.47.211.10/32 # Elisa SIP
- from:
- ipBlock:
cidr: 195.222.16.38/32 # Elisa SIP
cidr: 212.47.211.10/32 # Elisa SIP
egress:
- to:
- ipBlock:
cidr: 195.222.16.36/32 # Elisa SIP
cidr: 212.47.211.10/32 # Elisa SIP
- to:
- ipBlock:
cidr: 195.222.16.38/32 # Elisa SIP
- to:
ports:
- port: 53
protocol: UDP

View File

@@ -62,7 +62,7 @@ spec:
serviceAccountName: local-path-provisioner-service-account
containers:
- name: local-path-provisioner
image: rancher/local-path-provisioner:v0.0.22
image: mirror.gcr.io/rancher/local-path-provisioner:v0.0.22
imagePullPolicy: IfNotPresent
command:
- local-path-provisioner
@@ -151,7 +151,7 @@ data:
spec:
containers:
- name: helper-pod
image: busybox
image: mirror.gcr.io/library/busybox
imagePullPolicy: IfNotPresent

View File

@@ -0,0 +1,21 @@
# MongoDB Community Kubernetes Operator
## Deployment
Deployed with ArgoCD. To render the manifests locally:
```sh
kustomize build . --enable-helm
```
## Instantiating databases
For each application, include `mongodb-netpol.yaml` and the operator RBAC kustomization in `resources`:
```yaml
resources:
- https://git.k-space.ee/k-space/kube//mongodb-operator/mongodb-netpol.yaml
- https://github.com/mongodb/mongodb-kubernetes-operator//config/rbac/?ref=v0.13.0
```
Create the application user's password secret:
```sh
kubectl create secret generic -n <application> mongodb-application-user-password --from-literal="password=$(cat /dev/urandom | base64 | head -c 30)"
```
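A minimal sketch of the database resource itself, assuming the operator's `MongoDBCommunity` CRD and the secret created above. The resource name `mongodb`, user name `application`, database name and MongoDB version are illustrative, not prescribed by this repo; naming the resource `mongodb` should also line up with the `app: mongodb-svc` selector used in `mongodb-netpol.yaml`:
```yaml
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: mongodb                 # hypothetical name; operator labels pods app: mongodb-svc
  namespace: <application>
spec:
  type: ReplicaSet
  members: 3
  version: "6.0.15"             # illustrative MongoDB version
  security:
    authentication:
      modes: ["SCRAM"]
  users:
    - name: application         # hypothetical user
      db: admin
      passwordSecretRef:
        name: mongodb-application-user-password   # secret created above
      scramCredentialsSecretName: application
      roles:
        - name: readWrite
          db: application       # hypothetical application database
```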

View File

@@ -0,0 +1,13 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: mongodb-operator
# spec: https://kubectl.docs.kubernetes.io/references/kustomize/builtins/#_helmchartinflationgenerator_
helmCharts:
- includeCRDs: true
name: &name community-operator
releaseName: *name
repo: https://mongodb.github.io/helm-charts
valuesFile: values.yaml
version: 0.13.0 # helm search repo mongodb/community-operator --versions

View File

@@ -0,0 +1,25 @@
# Allow any pod in this namespace to connect to MongoDB and
# allow cluster members to talk to each other
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: mongodb-operator
spec:
podSelector:
matchLabels:
app: mongodb-svc
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector: {}
ports:
- port: 27017
egress:
- to:
- podSelector:
matchLabels:
app: mongodb-svc
ports:
- port: 27017

View File

@@ -1,5 +1,2 @@
operator:
watchNamespace: '*'
mongodb:
repo: mirror.gcr.io

View File

@@ -13,7 +13,7 @@ spec:
podSpec:
containers:
- name: mariadb
image: mariadb:10.9.7@sha256:198c7a5fea3d7285762042a628fe8f83f0a7ccef559605b4cc9502e65210880b
image: mirror.gcr.io/library/mariadb:10.9.7@sha256:198c7a5fea3d7285762042a628fe8f83f0a7ccef559605b4cc9502e65210880b
imagePullPolicy: IfNotPresent
nodeSelector:
dedicated: storage

View File

@@ -29,7 +29,7 @@ spec:
spec:
containers:
- name: phpmyadmin
image: phpmyadmin/phpmyadmin
image: mirror.gcr.io/phpmyadmin/phpmyadmin
ports:
- name: web
containerPort: 80
@@ -77,7 +77,6 @@ metadata:
annotations:
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
traefik.ingress.kubernetes.io/router.middlewares: mysql-clusters-phpmyadmin@kubernetescrd
spec:

View File

@@ -14,7 +14,7 @@ spec:
podSpec:
containers:
- name: mariadb
image: mariadb:10.9.7@sha256:198c7a5fea3d7285762042a628fe8f83f0a7ccef559605b4cc9502e65210880b
image: mirror.gcr.io/library/mariadb:10.9.7@sha256:198c7a5fea3d7285762042a628fe8f83f0a7ccef559605b4cc9502e65210880b
imagePullPolicy: IfNotPresent
nodeSelector:
dedicated: storage

View File

@@ -0,0 +1,20 @@
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: mysql
annotations:
kubernetes.io/description: |
Storage class for MySQL, MariaDB and similar applications that
implement high availability in application layer.
This storage class uses XFS, has no block level redundancy and
has block device level caching disabled.
provisioner: csi.proxmox.sinextra.dev
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
csi.storage.k8s.io/fstype: xfs
storage: ks-pvs
cache: none
ssd: "true"

View File

@@ -1,10 +1,11 @@
---
apiVersion: codemowers.io/v1alpha1
kind: OIDCGWClient
apiVersion: codemowers.cloud/v1beta1
kind: OIDCClient
metadata:
name: whoami-oidc
namespace: whoami-oidc
spec:
displayName: Whoami (oidc-tester-app)
displayName: Whoami OIDC
uri: https://whoami-oidc.k-space.ee
redirectUris:
- https://whoami-oidc.k-space.ee/auth/callback
@@ -16,7 +17,6 @@ spec:
availableScopes:
- openid
- profile
tokenEndpointAuthMethod: client_secret_post
pkce: false
---
apiVersion: apps/v1

View File

@@ -1,5 +0,0 @@
#TODO:
- inventory
- running playbooks NB! about PWD
- ssh_config; updating
Include ssh_config (with known_hosts) to access all machines listed.

View File

@@ -1,15 +0,0 @@
[defaults]
inventory = inventory.yml
nocows = 1
pattern =
deprecation_warnings = False
fact_caching = jsonfile
fact_caching_connection = ~/.ansible/k-space-fact-cache
fact_caching_timeout = 7200
remote_user = root
[ssh_connection]
control_path = ~/.ssh/cm-%%r@%%h:%%p
ssh_args = -o ControlMaster=auto -o ControlPersist=8h -F ssh_config
pipelining = True

View File

@@ -1,76 +0,0 @@
- name: Setup primary nameserver
hosts: ns1.k-space.ee
tasks:
- name: Make sure bind9 is installed
ansible.builtin.apt:
name: bind9
state: present
- name: Configure Bind
register: bind
copy:
dest: /etc/bind/named.conf
content: |
# This file is managed by Ansible
# https://git.k-space.ee/k-space/kube/src/branch/master/ansible-bind-primary.yml
# Do NOT modify manually
include "/etc/bind/named.conf.local";
include "/etc/bind/readwrite.key";
include "/etc/bind/readonly.key";
options {
directory "/var/cache/bind";
version "";
listen-on { any; };
listen-on-v6 { any; };
pid-file "/var/run/named/named.pid";
notify explicit; also-notify { 172.20.53.1; 172.20.53.2; 172.20.53.3; };
allow-recursion { none; };
recursion no;
check-names master ignore;
dnssec-validation no;
auth-nxdomain no;
};
# https://kb.isc.org/docs/aa-00723
acl allowed {
172.20.3.0/24;
172.20.4.0/24;
};
acl rejected { !allowed; any; };
zone "." {
type hint;
file "/var/lib/bind/db.root";
};
zone "k-space.ee" {
type master;
file "/var/lib/bind/db.k-space.ee";
allow-update { !rejected; key readwrite; };
allow-transfer { !rejected; key readonly; key readwrite; };
};
zone "k6.ee" {
type master;
file "/var/lib/bind/db.k6.ee";
allow-update { !rejected; key readwrite; };
allow-transfer { !rejected; key readonly; key readwrite; };
};
zone "kspace.ee" {
type master;
file "/var/lib/bind/db.kspace.ee";
allow-update { !rejected; key readwrite; };
allow-transfer { !rejected; key readonly; key readwrite; };
};
- name: Check Bind config
ansible.builtin.shell: "named-checkconf"
- name: Reload Bind config
service:
name: bind9
state: reloaded
when: bind.changed

View File

@@ -1,65 +0,0 @@
# ansible doors -m shell -a "ctr image pull harbor.k-space.ee/k-space/mjpg-streamer:latest"
# journalctl -u mjpg_streamer@video0.service -f
# Referenced/linked and documented by https://wiki.k-space.ee/en/hosting/doors
- name: Setup doors
hosts: doors
tasks:
- name: Make sure containerd is installed
ansible.builtin.apt:
name: containerd
state: present
- name: Copy systemd service for Doorboy controller # https://git.k-space.ee/k-space/godoor
copy:
dest: /etc/systemd/system/godoor.service
content: |
[Unit]
Description=Doorboy service
Documentation=https://git.k-space.ee/k-space/godoor
After=network.target
[Service]
Environment=IMAGE=harbor.k-space.ee/k-space/godoor:latest
ExecStartPre=-ctr task kill --signal=9 %N
ExecStartPre=-ctr task rm %N
ExecStartPre=-ctr c rm %N
ExecStartPre=-ctr image pull $IMAGE
ExecStart=ctr run --rm --pid-file=/run/%N.pid --privileged --read-only --env-file=/etc/godoor --env=KDOORPI_API_ALLOWED=https://doorboy-proxy.k-space.ee/allowed --env=KDOORPI_API_LONGPOLL=https://doorboy-proxy.k-space.ee/longpoll --env=KDOORPI_API_SWIPE=https://doorboy-proxy.k-space.ee/swipe --env=KDOORPI_DOOR=%H --net-host --net-host --cwd /app $IMAGE %N /godoor
ExecStopPost=ctr task rm %N
ExecStopPost=ctr c rm %N
Restart=always
[Install]
WantedBy=multi-user.target
- name: Enable Doorboy controller
ansible.builtin.systemd:
state: restarted
daemon_reload: yes
name: godoor.service
- name: Copy systemd service for mjpg-streamer # https://git.k-space.ee/k-space/mjpg-steramer
copy:
dest: /etc/systemd/system/mjpg_streamer@.service
content: |
[Unit]
Description=A server for streaming Motion-JPEG from a video capture device
After=network.target
ConditionPathExists=/dev/%I
[Service]
Environment=IMAGE=harbor.k-space.ee/k-space/mjpg-streamer:latest
StandardOutput=tty
Type=forking
ExecStartPre=-ctr task kill --signal=9 %p_%i
ExecStartPre=-ctr task rm %p_%i
ExecStartPre=-ctr c rm %p_%i
ExecStartPre=-ctr image pull $IMAGE
ExecStart=ctr run --tty -d --rm --pid-file=/run/%i.pid --privileged --read-only --net-host $IMAGE %p_%i /usr/local/bin/mjpg_streamer -i 'input_uvc.so -d /dev/%I -r 1280x720 -f 10' -o 'output_http.so -w /usr/share/mjpg_streamer/www'
ExecStopPost=ctr task rm %p_%i
ExecStopPost=ctr c rm %p_%i
PIDFile=/run/%i.pid
[Install]
WantedBy=multi-user.target
- name: Enable mjpg-streamer
ansible.builtin.systemd:
state: restarted
daemon_reload: yes
name: mjpg_streamer@video0.service

View File

@@ -1,83 +0,0 @@
# This file is linked from /README.md as 'all infra'.
##### Not otherwise linked:
# Homepage: https://git.k-space.ee/k-space/homepage (on GitLab)
# Slack: https://k-space-ee.slack.com
# Routers/Switches: https://git.k-space.ee/k-space/rosdump
all:
vars:
admins:
- lauri
- eaas
extra_admins: []
children:
# https://wiki.k-space.ee/en/hosting/storage
nasgroup:
hosts:
nas.k-space.ee: { ansible_host: 172.23.0.7 }
offsite:
ansible_host: 78.28.64.17
ansible_port: 10648
vars:
offsite_dataset: offsite/backup_zrepl
misc:
children:
nasgroup:
hosts:
# https://git.k-space.ee/k-space/kube: bind/README.md (primary DNS, PVE VM)
ns1.k-space.ee: { ansible_host: 172.20.0.2 }
# https://wiki.k-space.ee/hosting/proxmox (depends on nas.k-space.ee)
proxmox: # aka PVE, Proxmox Virtualization Environment
vars:
extra_admins:
- rasmus
hosts:
pve1: { ansible_host: 172.21.20.1 }
pve2: { ansible_host: 172.21.20.2 }
pve8: { ansible_host: 172.21.20.8 }
pve9: { ansible_host: 172.21.20.9 }
# https://git.k-space.ee/k-space/kube: README.md
# CLUSTER.md (PVE VMs + external nas.k-space.ee)
kubernetes:
children:
masters:
hosts:
master1.kube.k-space.ee: { ansible_host: 172.21.3.51 }
master2.kube.k-space.ee: { ansible_host: 172.21.3.52 }
master3.kube.k-space.ee: { ansible_host: 172.21.3.53 }
kubelets:
children:
mon: # they sit in a privileged VLAN
hosts:
mon1.kube.k-space.ee: { ansible_host: 172.21.3.61 }
mon2.kube.k-space.ee: { ansible_host: 172.21.3.62 }
mon3.kube.k-space.ee: { ansible_host: 172.21.3.63 }
storage: # longhorn, to be replaced with a more direct CSI
hosts:
storage1.kube.k-space.ee: { ansible_host: 172.21.3.71 }
storage2.kube.k-space.ee: { ansible_host: 172.21.3.72 }
storage3.kube.k-space.ee: { ansible_host: 172.21.3.73 }
storage4.kube.k-space.ee: { ansible_host: 172.21.3.74 }
workers:
hosts:
worker1.kube.k-space.ee: { ansible_host: 172.20.3.81 }
worker2.kube.k-space.ee: { ansible_host: 172.20.3.82 }
worker3.kube.k-space.ee: { ansible_host: 172.20.3.83 }
worker4.kube.k-space.ee: { ansible_host: 172.20.3.84 }
worker9.kube.k-space.ee: { ansible_host: 172.21.3.89 } # Nvidia Tegra Jetson-AGX
# https://wiki.k-space.ee/en/hosting/doors
# See also: https://git.k-space.ee/k-space/kube: camtiler/README.md
doors:
vars:
extra_admins:
- arti
hosts:
grounddoor: { ansible_host: 100.102.3.1 }
frontdoor: { ansible_host: 100.102.3.2 }
backdoor: { ansible_host: 100.102.3.3 }
workshopdoor: { ansible_host: 100.102.3.4 }

View File

@@ -1,27 +0,0 @@
# Use `ansible-playbook update-ssh-config.yml` to update this file
100.102.3.3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN4SifLddYAz8CasmFwX5TQbiM8atAYMFuDQRchclHM0sq9Pi8wRxSZK8SHON4Y7YFsIY+cXnQ2Wx4FpzKmfJYE= # backdoor
100.102.3.2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE8/E7PDqTrTdU+MFurHkIPzTBTGcSJqXuv5n0Ugd/IlvOr2v+eYi3ma91pSBmF5Hjy9foWypCLZfH+vWMkV0gs= # frontdoor
100.102.3.1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFcH8D2AhnESw3uu2f4EHBhT9rORQQJJ3TlbwN+kro5tRZsZk4p3MKabBiuCSZw2KWjfu0MY4yHSCrUUQrggJDM= # grounddoor
172.21.3.51 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMYy07yLlOiFvXzmVDIULS9VDCMz7T+qOq4M+x8Lo3KEKamI6ZD737mvimPTW6K1FRBzzq67Mq495UnoFKVnQWE= # master1.kube.k-space.ee
172.21.3.52 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKRFfYDaTH58FUw+9stBVsyCviaPCGEbe9Y1a9WKvj98S7m+qU03YvtfPkRfEH/3iXHDvngEDVpJrTWW4y6e6MI= # master2.kube.k-space.ee
172.21.3.53 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIqIepuMkMo/KO3bb4X6lgb6YViAifPmgHXVrbtHwbOZLll5Qqr4pXdLDxkuZsmiE7iZBw2gSzZLcNMGdDEnWrY= # master3.kube.k-space.ee
172.21.3.61 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCJ9XgDz2NEzvjw/nDmRIKUJAmNqzsaXMJn4WFiWfTz1x2HrRcXgY3UXKWUxUvJO1jJ7hIvyE+V/8UtwYRDP1uY= # mon1.kube.k-space.ee
172.21.3.62 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLveng7H/2Gek+HYDYRWFD0Dy+4l/zjrbF2mnnkBI5CFOtqK0zwBh41IlizkpmmI5fqEIXwhLFHZEWXbUvev5oo= # mon2.kube.k-space.ee
172.21.3.63 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMMgOIL43dgCYlwAI2O269iHxo7ymweG7NoXjnk2F529G5mP+mp5We4lDZEJVyLYtemvhQ2hEHI/WVPWy3SNiuM= # mon3.kube.k-space.ee
172.23.0.7 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC15tWIbuBqd4UZLaRbpb6oTlwniS4cg2IYZYe5ys352azj2kzOnvtCGiPo0fynFadwfDHtge9JjK6Efwl87Wgc= # nas.k-space.ee
172.20.0.2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO77ffkJi903aA6cM7HnFfSyYbPP4jkydI/+/tIGeMv+c9BYOE27n+ylNERaEhYkyddIx93MB4M6GYRyQOjLWSc= # ns1.k-space.ee
[78.28.64.17]:10648 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE7J61p3YzsbRAYtXIrhQUeqc47LuVw1I38egHzi/kLG+CFPsyB9krd29yJMyLRjyM+m5qUjoxNiWK/x0g3jKOI= # offsite
172.21.20.1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHLHc3T/J5G1CIf33XeniJk5+D0cpaXe0OkHmpCQ3DoZC3KkFBpA+/U1mlo+qb8xf/GrMj6BMMMLXKSUxbEVGaU= # pve1
172.21.20.2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFGSRetFdHExRT69pHJAcuhqzAu+Xx4K2AEmWJhUZ2JYF7aa0JbltiYQs58Bpx9s9NA793tiHLZXABy56dI+D9Q= # pve2
172.21.20.8 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMzNvX3ga56EELcI9gV7moyFdKllSwb81V2tCWIjhFVSFTo3QKH/gX/MBnjcs+RxeVV3GF7zIIv8492bCvgiO9s= # pve8
172.21.20.9 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNU4YzKSzzUSnAgh4L1DF3dlC1VEaKVaIeTgsL5VJ0UMqjPr+8QMjIvo28cSLfIQYtfoQbt7ASVsm0uDQvKOldM= # pve9
172.21.3.71 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI2jy8EsMo7Voor4URCMdgiEzc0nmYDowV4gB2rZ6hnH7bcKGdaODsCyBH6nvbitgnESCC8136RmdxCnO9/TuJ0= # storage1.kube.k-space.ee
172.21.3.72 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKxa2PbOj7bV0AUkBZuPkQZ/3ZMeh1mUCD+rwB4+sXbvTc+ca+xgcPGdAozbY/cUA4GdaKelhjI9DEC46MeFymY= # storage2.kube.k-space.ee
172.21.3.73 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGYqNHAxwwoZqne/uv5syRb+tEwpbaGeK8oct4IjIHcmPdU32JlMiSqLX7d58t/b8tqE1z2rM4gCc4bpzvNrHMQ= # storage3.kube.k-space.ee
172.21.3.74 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI+FRuwbrUpMDg9gKf6AqcfovEkt8r5SgB4JXEuMD+I6pp+2PfbxMwrXQ8Xg3oHW+poG413KWw4FZOWv2gH4CEQ= # storage4.kube.k-space.ee
172.20.3.81 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPnmGiEWtWnNNcF872fhYKCD07QwOb75BDEwN3fC4QYmBAbiN0iX/UH96r02V5f7uga3a07/xxt5P0cfEOdtQwQ= # worker1.kube.k-space.ee
172.20.3.82 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBkSNAYeugxGvNmV3biY1s0BWPCEw3g3H0VWLomu/vPbg+GN10/A1pfgt62DHFCYDB6QZwkZM6HIFy8y0xhRl9g= # worker2.kube.k-space.ee
172.20.3.83 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBe+A9Bg54UwUvlPguKDyNAsX7mYbnfMOxhK2UP2YofPlzJ0KDUuH5mbmw76XWz0L6jhT6I7hyc0QsFBdO3ug68= # worker3.kube.k-space.ee
172.20.3.84 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKoNIL+kEYphi/yCdhIytxqRaucm2aTzFrmNN4gEjCrn4TK8A46fyqAuwmgyLQFm7RD5qcEKPWP57Cl0DhTU1T4= # worker4.kube.k-space.ee
172.21.3.89 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCoepYYnNMXkZ9dn4RSSMhFFsppPVkzmjkG3z9vK84454XkI4wizmhUlZ0p+Ovx2YbrjbKibfrrtk8RgWUMi0rY= # worker9.kube.k-space.ee
100.102.3.4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMpkSqEOyYrKXChxl6PAV+q0KypOPnKsXoXWO1JSZSIOwAs5YTzt8Q1Ryb+nQnAOlGj1AY1H7sRllTzdv0cA/EM= # workshopdoor

View File

@@ -1,171 +0,0 @@
---
- name: Reconfigure Kubernetes worker nodes
hosts:
- storage
- workers
tasks:
- name: Configure grub defaults
copy:
dest: "/etc/default/grub"
content: |
GRUB_DEFAULT=0
GRUB_TIMEOUT_STYLE=countdown
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash memhp_default_state=online"
GRUB_CMDLINE_LINUX="memhp_default_state=online rootflags=pquota"
register: grub_defaults
when: ansible_architecture == 'x86_64'
- name: Load grub defaults
ansible.builtin.shell: update-grub
when: grub_defaults.changed
- name: Ensure nfs-common is installed
ansible.builtin.apt:
name: nfs-common
state: present
- name: Reconfigure Kubernetes nodes
hosts: kubernetes
vars:
KUBERNETES_VERSION: v1.30.3
IP: "{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}"
tasks:
- name: Remove APT packages
ansible.builtin.apt:
name: "{{ item }}"
state: absent
loop:
- kubelet
- kubeadm
- kubectl
- name: Download kubectl, kubeadm, kubelet
ansible.builtin.get_url:
url: "https://cdn.dl.k8s.io/release/{{ KUBERNETES_VERSION }}/bin/linux/{{ 'arm64' if ansible_architecture == 'aarch64' else 'amd64' }}/{{ item }}"
dest: "/usr/bin/{{ item }}-{{ KUBERNETES_VERSION }}"
mode: '0755'
loop:
- kubelet
- kubectl
- kubeadm
- name: Create symlinks for kubectl, kubeadm, kubelet
ansible.builtin.file:
src: "/usr/bin/{{ item }}-{{ KUBERNETES_VERSION }}"
dest: "/usr/bin/{{ item }}"
state: link
loop:
- kubelet
- kubectl
- kubeadm
register: kubelet
- name: Restart Kubelet
service:
name: kubelet
enabled: true
state: restarted
when: kubelet.changed
- name: Create /etc/systemd/system/kubelet.service
ansible.builtin.copy:
content: |
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/home/
Wants=network-online.target
After=network-online.target
[Service]
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
dest: /etc/systemd/system/kubelet.service
- name: Reconfigure shutdownGracePeriod
ansible.builtin.lineinfile:
path: /var/lib/kubelet/config.yaml
regexp: '^shutdownGracePeriod:'
line: 'shutdownGracePeriod: 5m'
- name: Reconfigure shutdownGracePeriodCriticalPods
ansible.builtin.lineinfile:
path: /var/lib/kubelet/config.yaml
regexp: '^shutdownGracePeriodCriticalPods:'
line: 'shutdownGracePeriodCriticalPods: 5m'
- name: Work around unattended-upgrades
ansible.builtin.lineinfile:
path: /lib/systemd/logind.conf.d/unattended-upgrades-logind-maxdelay.conf
regexp: '^InhibitDelayMaxSec='
line: 'InhibitDelayMaxSec=5m0s'
- name: Disable unnecessary services
ignore_errors: true
loop:
- gdm3
- snapd
- bluetooth
- multipathd
service:
name: "{{item}}"
state: stopped
enabled: no
- name: Reset /etc/containers/registries.conf
ansible.builtin.copy:
content: "unqualified-search-registries = [\"docker.io\"]\n"
dest: /etc/containers/registries.conf
register: registries
- name: Restart CRI-O
service:
name: cri-o
state: restarted
when: registries.changed
- name: Reset /etc/modules
ansible.builtin.copy:
content: |
overlay
br_netfilter
dest: /etc/modules
register: kernel_modules
- name: Load kernel modules
ansible.builtin.shell: "cat /etc/modules | xargs -L 1 -t modprobe"
when: kernel_modules.changed
- name: Reset /etc/sysctl.d/99-k8s.conf
ansible.builtin.copy:
content: |
net.ipv4.conf.all.accept_redirects = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
vm.max_map_count = 524288
fs.inotify.max_user_instances = 1280
fs.inotify.max_user_watches = 655360
dest: /etc/sysctl.d/99-k8s.conf
register: sysctl
- name: Reload sysctl config
ansible.builtin.shell: "sysctl --system"
when: sysctl.changed
- name: Reconfigure kube-apiserver to use Passmower OIDC endpoint
ansible.builtin.template:
src: kube-apiserver.j2
dest: /etc/kubernetes/manifests/kube-apiserver.yaml
mode: 600
register: apiserver
when:
- inventory_hostname in groups["masters"]
- name: Restart kube-apiserver
ansible.builtin.shell: "killall kube-apiserver"
when: apiserver.changed

View File

@@ -1,211 +0,0 @@
# Use `ansible-playbook update-ssh-config.yml` to update this file
# Use `ssh -F ssh_config ...` to connect to target machine or
# Add `Include ~/path/to/this/kube/ssh_config` in your ~/.ssh/config
Host backdoor 100.102.3.3
User root
Hostname 100.102.3.3
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host frontdoor 100.102.3.2
User root
Hostname 100.102.3.2
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host grounddoor 100.102.3.1
User root
Hostname 100.102.3.1
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host master1.kube.k-space.ee 172.21.3.51
User root
Hostname 172.21.3.51
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host master2.kube.k-space.ee 172.21.3.52
User root
Hostname 172.21.3.52
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host master3.kube.k-space.ee 172.21.3.53
User root
Hostname 172.21.3.53
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host mon1.kube.k-space.ee 172.21.3.61
User root
Hostname 172.21.3.61
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host mon2.kube.k-space.ee 172.21.3.62
User root
Hostname 172.21.3.62
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host mon3.kube.k-space.ee 172.21.3.63
User root
Hostname 172.21.3.63
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host nas.k-space.ee 172.23.0.7
User root
Hostname 172.23.0.7
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host ns1.k-space.ee 172.20.0.2
User root
Hostname 172.20.0.2
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host offsite 78.28.64.17
User root
Hostname 78.28.64.17
Port 10648
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host pve1 172.21.20.1
User root
Hostname 172.21.20.1
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host pve2 172.21.20.2
User root
Hostname 172.21.20.2
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host pve8 172.21.20.8
User root
Hostname 172.21.20.8
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host pve9 172.21.20.9
User root
Hostname 172.21.20.9
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host storage1.kube.k-space.ee 172.21.3.71
User root
Hostname 172.21.3.71
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host storage2.kube.k-space.ee 172.21.3.72
User root
Hostname 172.21.3.72
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host storage3.kube.k-space.ee 172.21.3.73
User root
Hostname 172.21.3.73
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host storage4.kube.k-space.ee 172.21.3.74
User root
Hostname 172.21.3.74
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host worker1.kube.k-space.ee 172.20.3.81
User root
Hostname 172.20.3.81
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host worker2.kube.k-space.ee 172.20.3.82
User root
Hostname 172.20.3.82
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host worker3.kube.k-space.ee 172.20.3.83
User root
Hostname 172.20.3.83
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host worker4.kube.k-space.ee 172.20.3.84
User root
Hostname 172.20.3.84
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host worker9.kube.k-space.ee 172.21.3.89
User root
Hostname 172.21.3.89
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
Host workshopdoor 100.102.3.4
User root
Hostname 100.102.3.4
Port 22
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h

View File

@@ -1,132 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
annotations:
kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: {{ IP }}:6443
creationTimestamp: null
labels:
component: kube-apiserver
tier: control-plane
name: kube-apiserver
namespace: kube-system
spec:
containers:
- command:
- kube-apiserver
- --advertise-address={{ IP }}
- --allow-privileged=true
- --authorization-mode=Node,RBAC
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --enable-admission-plugins=NodeRestriction
- --enable-bootstrap-token-auth=true
- --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
- --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
- --etcd-servers=https://127.0.0.1:2379
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
- --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --oidc-client-id=passmower.kubelogin
- --oidc-groups-claim=groups
- --oidc-issuer-url=https://auth.k-space.ee/
- --oidc-username-claim=sub
- --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
- --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
- --requestheader-allowed-names=front-proxy-client
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --requestheader-group-headers=X-Remote-Group
- --requestheader-username-headers=X-Remote-User
- --secure-port=6443
- --service-account-issuer=https://kubernetes.default.svc.cluster.local
- --service-account-key-file=/etc/kubernetes/pki/sa.pub
- --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
- --service-cluster-ip-range=10.96.0.0/12
- --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
image: registry.k8s.io/kube-apiserver:{{ KUBERNETES_VERSION }}
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
httpGet:
host: {{ IP }}
path: /livez
port: 6443
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
name: kube-apiserver
readinessProbe:
failureThreshold: 3
httpGet:
host: {{ IP }}
path: /readyz
port: 6443
scheme: HTTPS
periodSeconds: 1
timeoutSeconds: 15
resources:
requests:
cpu: 250m
startupProbe:
failureThreshold: 24
httpGet:
host: {{ IP }}
path: /livez
port: 6443
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
volumeMounts:
- mountPath: /etc/ssl/certs
name: ca-certs
readOnly: true
- mountPath: /etc/ca-certificates
name: etc-ca-certificates
readOnly: true
- mountPath: /etc/pki
name: etc-pki
readOnly: true
- mountPath: /etc/kubernetes/pki
name: k8s-certs
readOnly: true
- mountPath: /usr/local/share/ca-certificates
name: usr-local-share-ca-certificates
readOnly: true
- mountPath: /usr/share/ca-certificates
name: usr-share-ca-certificates
readOnly: true
hostNetwork: true
priority: 2000001000
priorityClassName: system-node-critical
securityContext:
seccompProfile:
type: RuntimeDefault
volumes:
- hostPath:
path: /etc/ssl/certs
type: DirectoryOrCreate
name: ca-certs
- hostPath:
path: /etc/ca-certificates
type: DirectoryOrCreate
name: etc-ca-certificates
- hostPath:
path: /etc/pki
type: DirectoryOrCreate
name: etc-pki
- hostPath:
path: /etc/kubernetes/pki
type: DirectoryOrCreate
name: k8s-certs
- hostPath:
path: /usr/local/share/ca-certificates
type: DirectoryOrCreate
name: usr-local-share-ca-certificates
- hostPath:
path: /usr/share/ca-certificates
type: DirectoryOrCreate
name: usr-share-ca-certificates
status: {}

View File

@@ -1,72 +0,0 @@
---
- name: Collect servers SSH public keys to known_hosts
hosts: localhost
connection: local
vars:
targets: "{{ hostvars[groups['all']] }}"
tasks:
- name: Generate ssh_config
ansible.builtin.copy:
dest: ssh_config
content: |
# Use `ansible-playbook update-ssh-config.yml` to update this file
# Use `ssh -F ssh_config ...` to connect to target machine or
# Add `Include ~/path/to/this/kube/ssh_config` in your ~/.ssh/config
{% for host in groups['all'] | sort %}
Host {{ [host, hostvars[host].get('ansible_host', host)] | unique | join(' ') }}
User root
Hostname {{ hostvars[host].get('ansible_host', host) }}
Port {{ hostvars[host].get('ansible_port', 22) }}
GlobalKnownHostsFile known_hosts
UserKnownHostsFile /dev/null
ControlMaster auto
ControlPersist 8h
{% endfor %}
- name: Generate known_hosts
ansible.builtin.copy:
dest: known_hosts
content: |
# Use `ansible-playbook update-ssh-config.yml` to update this file
{% for host in groups['all'] | sort %}
{{ lookup('ansible.builtin.pipe', 'ssh-keyscan -p %d -t ecdsa %s' % (
hostvars[host].get('ansible_port', 22),
hostvars[host].get('ansible_host', host))) }} # {{ host }}
{% endfor %}
- name: Pull authorized keys from Gitea
hosts: localhost
connection: local
vars:
targets: "{{ hostvars[groups['all']] }}"
tasks:
- name: Download https://git.k-space.ee/user.keys
loop:
- arti
- eaas
- lauri
- rasmus
ansible.builtin.get_url:
url: https://git.k-space.ee/{{ item }}.keys
dest: "./{{ item }}.keys"
- name: Push authorized keys to targets
hosts:
- misc
- kubernetes
- doors
tasks:
- name: Generate /root/.ssh/authorized_keys
ansible.builtin.copy:
dest: "/root/.ssh/authorized_keys"
owner: root
group: root
mode: '0644'
content: |
# Use `ansible-playbook update-ssh-config.yml` from https://git.k-space.ee/k-space/kube/ to update this file
{% for user in admins + extra_admins | unique | sort %}
{% for line in lookup("ansible.builtin.file", user + ".keys").split("\n") %}
{% if line.startswith("sk-") %}
{{ line }} # {{ user }}
{% endif %}
{% endfor %}
{% endfor %}

View File

@@ -1,49 +0,0 @@
# Referenced/linked and documented by https://wiki.k-space.ee/en/hosting/storage#zrepl
- name: zrepl
hosts: nasgroup
tasks:
- name: 'apt: zrepl gpg'
ansible.builtin.get_url:
url: 'https://zrepl.cschwarz.com/apt/apt-key.asc'
dest: /usr/share/keyrings/zrepl.asc
- name: 'apt: zrepl repo'
apt_repository:
repo: 'deb [arch=amd64 signed-by=/usr/share/keyrings/zrepl.asc] https://zrepl.cschwarz.com/apt/debian bookworm main'
- name: 'apt: ensure packages'
apt:
state: latest
pkg: zrepl
- name: 'zrepl: ensure config'
ansible.builtin.template:
src: "zrepl_{{ansible_hostname}}.yml.j2"
dest: /etc/zrepl/zrepl.yml
mode: 600
register: zreplconf
- name: 'zrepl: restart service after config change'
when: zreplconf.changed
service:
state: restarted
enabled: true
name: zrepl
- name: 'zrepl: ensure service'
when: not zreplconf.changed
service:
state: started
enabled: true
name: zrepl
# avoid accidental conflicts of changes on recv (would err 'will not overwrite without force')
- name: 'zfs: ensure recv mountpoint=off'
hosts: offsite
tasks:
- name: 'zfs: get mountpoint'
shell: zfs get mountpoint -H -o value {{offsite_dataset}}
register: result
changed_when: false
- when: result.stdout != "none"
name: 'zfs: ensure mountpoint=off'
changed_when: true
shell: zfs set mountpoint=none {{offsite_dataset}}
register: result

View File

@@ -1,23 +0,0 @@
---
apiVersion: monitoring.coreos.com/v1
kind: Probe
metadata:
name: zrepl
spec:
scrapeTimeout: 30s
targets:
staticConfig:
static:
- nas.mgmt.k-space.ee:9811
# - offsite.k-space.ee:9811 # TODO: unreachable
relabelingConfigs:
- sourceLabels: [__param_target]
targetLabel: instance
- sourceLabels: [__param_target]
targetLabel: __address__
prober:
url: localhost
path: /metrics
metricRelabelings:
- sourceLabels: [__address__]
targetLabel: target

View File

@@ -1,47 +0,0 @@
global:
logging:
- type: syslog
format: logfmt
level: warn
monitoring:
- type: prometheus
listen: ':9811'
jobs:
- name: k6zrepl
type: snap
# "<" aka recursive, https://zrepl.github.io/configuration/filter_syntax.html
filesystems:
'nas/k6<': true
snapshotting:
type: periodic
prefix: zrepl_
interval: 1h
pruning:
keep:
# Keep non-zrepl snapshots
- type: regex
negate: true
regex: '^zrepl_'
- type: last_n
regex: "^zrepl_.*"
count: 4
- type: grid
regex: "^zrepl_.*"
grid: 4x1h | 6x4h | 3x1d | 2x7d
- name: k6zrepl_offsite_src
type: source
send:
encrypted: true # zfs native already-encrypted, filesystems not encrypted will log to error-level
serve:
type: tcp
listen: "{{ansible_host}}:35566" # NAT-ed to 193.40.103.250
clients: {
"78.28.64.17": "offsite.k-space.ee",
}
filesystems:
'nas/k6': true
snapshotting: # handled by the above job, separated for security (isolation of domains)
type: manual

View File

@@ -1,41 +0,0 @@
global:
logging:
- type: syslog
format: logfmt
level: warn
monitoring:
- type: prometheus
listen: ':9811'
jobs:
- name: k6zrepl_offsite_dest
type: pull
recv:
placeholder:
encryption: off # https://zrepl.github.io/configuration/sendrecvoptions.html#placeholders
# bandwidth_limit:
# max: 9 MiB # 75.5 Mbps
connect:
type: tcp
address: '193.40.103.250:35566' # firewall whitelisted to offsite
root_fs: {{offsite_dataset}}
interval: 10m # start interval, does nothing when no snapshots to recv
replication:
concurrency:
steps: 2
pruning:
keep_sender: # offsite does not dictate nas snapshot policy
- type: regex
regex: '.*'
keep_receiver:
# Keep non-zrepl snapshots
- negate: true
type: regex
regex: "^zrepl_"
- type: last_n
regex: "^zrepl_"
count: 4
- type: grid
regex: "^zrepl_"
grid: 4x1h | 6x4h | 3x1d | 2x7d

View File

@@ -0,0 +1 @@
argocd/appications/argocd-image-updater.yaml

View File

@@ -1,63 +1,11 @@
# Workflow
Most applications in our Kubernetes cluster are managed by ArgoCD.
Most notably, operators are NOT managed by ArgoCD.
Adding to `applications/`: `kubectl apply -f newapp.yaml`
# Deployment
To deploy ArgoCD:
```bash
helm repo add argo-cd https://argoproj.github.io/argo-helm
kubectl create secret -n argocd generic argocd-secret # Initialize empty secret for sessions
helm template -n argocd --release-name k6 argo-cd/argo-cd --include-crds -f values.yaml > argocd.yml
kubectl apply -f argocd.yml -f application-extras.yml -n argocd
kubectl -n argocd rollout restart deployment/k6-argocd-redis
kubectl -n argocd rollout restart deployment/k6-argocd-repo-server
kubectl -n argocd rollout restart deployment/k6-argocd-server
kubectl -n argocd rollout restart deployment/k6-argocd-notifications-controller
kubectl -n argocd rollout restart statefulset/k6-argocd-application-controller
kubectl label -n argocd secret oidc-client-argocd-owner-secrets app.kubernetes.io/part-of=argocd
```
# Setting up Git secrets
Generate SSH key to access Gitea:
## Managing applications
Update apps (see TODO below):
```
ssh-keygen -t ecdsa -f id_ecdsa -C argocd.k-space.ee -P ''
kubectl -n argocd create secret generic gitea-kube \
--from-literal=type=git \
--from-literal=url=git@git.k-space.ee:k-space/kube \
--from-file=sshPrivateKey=id_ecdsa
kubectl -n argocd create secret generic gitea-kube-staging \
--from-literal=type=git \
--from-literal=url=git@git.k-space.ee:k-space/kube-staging \
--from-file=sshPrivateKey=id_ecdsa
kubectl -n argocd create secret generic gitea-kube-members \
--from-literal=type=git \
--from-literal=url=git@git.k-space.ee:k-space/kube-members \
--from-file=sshPrivateKey=id_ecdsa
kubectl label -n argocd secret gitea-kube argocd.argoproj.io/secret-type=repository
kubectl label -n argocd secret gitea-kube-staging argocd.argoproj.io/secret-type=repository
kubectl label -n argocd secret gitea-kube-members argocd.argoproj.io/secret-type=repository
rm -fv id_ecdsa
```
Have a Gitea admin reset the password for user `argocd` and log in with that account.
Add the SSH key for user `argocd` from file `id_ecdsa.pub`.
Delete any other SSH keys associated with Gitea user `argocd`.
# Managing applications
To update apps:
```
for j in asterisk bind camtiler etherpad freescout gitea grafana hackerspace nextcloud nyancat rosdump traefik wiki wildduck woodpecker; do
for j in asterisk bind camtiler etherpad freescout gitea grafana hackerspace nextcloud nyancat rosdump traefik wiki wildduck; do
cat << EOF >> applications/$j.yaml
---
apiVersion: argoproj.io/v1alpha1
@@ -65,6 +13,10 @@ kind: Application
metadata:
name: $j
namespace: argocd
annotations:
# Works with only Kustomize and Helm. Kustomize is easy, see https://github.com/argoproj-labs/argocd-image-updater/tree/master/manifests/base for an example.
argocd-image-updater.argoproj.io/image-list: TODO:^2 # semver 2.*.*
argocd-image-updater.argoproj.io/write-back-method: git
spec:
project: k-space.ee
source:
@@ -83,3 +35,24 @@ EOF
done
find applications -name "*.yaml" -exec kubectl apply -n argocd -f {} \;
```
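The `argocd-image-updater` annotations in the template above are left as a TODO. Purely for illustration, a Kustomize-tracked app might carry something like the following; the `app` alias and Harbor image path are hypothetical placeholders, not the values the TODO should necessarily take:
```yaml
metadata:
  annotations:
    # hypothetical alias and image path, tracking semver 2.*.*
    argocd-image-updater.argoproj.io/image-list: app=harbor.k-space.ee/k-space/hackerspace:^2
    argocd-image-updater.argoproj.io/write-back-method: git
```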
### Repository secrets
1. Generate keys locally with `ssh-keygen -f argo`
2. Add `argo.pub` in `git.k-space.ee/<your>/<repo>` → Settings → Deploy keys
3. Add `argo` (private key) at https://argocd.k-space.ee/settings/repos along with referenced repo.
## Argo Deployment
To deploy ArgoCD itself:
```bash
helm repo add argo-cd https://argoproj.github.io/argo-helm
kubectl create secret -n argocd generic argocd-secret # Empty secret for sessions
kubectl label -n argocd secret oidc-client-argocd-owner-secrets app.kubernetes.io/part-of=argocd
helm template -n argocd --release-name k6 argo-cd/argo-cd --include-crds -f values.yaml > argocd.yml
kubectl apply -f argocd.yml -f application-extras.yml -f redis.yaml -f monitoring.yml -n argocd
kubectl -n argocd rollout restart deployment/k6-argocd-redis deployment/k6-argocd-repo-server deployment/k6-argocd-server deployment/k6-argocd-notifications-controller statefulset/k6-argocd-application-controller
```
WARN: ArgoCD doesn't host its own Redis; Dragonfly must be able to cold-start independently.

View File

@@ -9,6 +9,7 @@ spec:
uri: https://argocd.k-space.ee
redirectUris:
- https://argocd.k-space.ee/auth/callback
- http://localhost:8085/auth/callback
allowedGroups:
- k-space:kubernetes:admins
grantTypes:

View File

@@ -2,17 +2,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: postgres-clusters
name: argocd-image-updater
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: postgres-clusters
targetRevision: HEAD
repoURL: 'https://github.com/argoproj-labs/argocd-image-updater.git'
path: manifests/base
targetRevision: stable
destination:
server: 'https://kubernetes.default.svc'
namespace: postgres-clusters
namespace: argocd
syncPolicy:
automated:
prune: true

View File

@@ -1,15 +0,0 @@
# ---
# apiVersion: argoproj.io/v1alpha1
# kind: Application
# metadata:
# name: camtiler
# namespace: argocd
# spec:
# project: k-space.ee
# source:
# repoURL: 'git@git.k-space.ee:k-space/kube.git'
# path: camtiler
# targetRevision: HEAD
# destination:
# server: 'https://kubernetes.default.svc'
# namespace: camtiler

View File

@@ -0,0 +1,21 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: cert-manager
namespace: argocd
spec:
project: k-space.ee
source:
# also depends on git@git.k-space.ee:secretspace/kube.git
repoURL: git@git.k-space.ee:k-space/kube.git
targetRevision: HEAD
path: cert-manager
destination:
server: 'https://kubernetes.default.svc'
namespace: cert-manager
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

View File

@@ -0,0 +1,23 @@
# See [/dragonfly/README.md](/dragonfly-operator-system/README.md)
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: dragonfly # replaces redis and keydb
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: https://github.com/dragonflydb/dragonfly-operator
targetRevision: v1.1.11 # https://github.com/dragonflydb/dragonfly-operator/releases
path: manifests
directory:
include: 'dragonfly-operator.yaml'
destination:
server: 'https://kubernetes.default.svc'
namespace: dragonfly-operator-system
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

View File

@@ -0,0 +1,21 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: external-snapshotter
namespace: argocd
spec:
project: k-space.ee
source:
# also depends on git@git.k-space.ee:secretspace/kube.git
repoURL: git@git.k-space.ee:k-space/kube.git
targetRevision: HEAD
path: external-snapshotter
destination:
server: 'https://kubernetes.default.svc'
namespace: kube-system
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

View File

@@ -0,0 +1,21 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: frigate
namespace: argocd
spec:
project: k-space.ee
source:
# also depends on git@git.k-space.ee:secretspace/kube.git
repoURL: git@git.k-space.ee:k-space/kube.git
targetRevision: HEAD
path: frigate
destination:
server: 'https://kubernetes.default.svc'
namespace: frigate
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

View File

@@ -7,9 +7,10 @@ metadata:
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: grafana
# also depends on git@git.k-space.ee:secretspace/kube.git
repoURL: git@git.k-space.ee:k-space/kube.git
targetRevision: HEAD
path: grafana
destination:
server: 'https://kubernetes.default.svc'
namespace: grafana
@@ -17,4 +18,4 @@ spec:
automated:
prune: true
syncOptions:
- CreateNamespace=true
- CreateNamespace=true

View File

@@ -17,4 +17,4 @@ spec:
automated:
prune: true
syncOptions:
- CreateNamespace=true
- CreateNamespace=true

View File

@@ -0,0 +1,21 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: harbor-operator
namespace: argocd
spec:
project: k-space.ee
source:
# also depends on git@git.k-space.ee:secretspace/kube.git
repoURL: git@git.k-space.ee:k-space/kube.git
targetRevision: HEAD
path: harbor-operator
destination:
server: 'https://kubernetes.default.svc'
namespace: harbor-operator
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

View File

@@ -2,17 +2,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: redis-clusters
name: kube-system
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: redis-clusters
path: kube-system
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: redis-clusters
namespace: kube-system
syncPolicy:
automated:
prune: true

View File

@@ -5,7 +5,7 @@ metadata:
name: kubernetes-dashboard
namespace: argocd
spec:
project: default
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: kubernetes-dashboard

View File

@@ -0,0 +1,21 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: metallb-system
namespace: argocd
spec:
project: k-space.ee
source:
# also depends on git@git.k-space.ee:secretspace/kube.git
repoURL: git@git.k-space.ee:k-space/kube.git
targetRevision: HEAD
path: metallb-system
destination:
server: 'https://kubernetes.default.svc'
namespace: metallb-system
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

View File

@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: members
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:secretspace/members.git'
path: members
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: passmower
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

View File

@@ -0,0 +1,18 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: passmower
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: passmower
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: passmower
syncPolicy:
automated:
prune: true

View File

@@ -2,17 +2,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: asterisk
name: pgweb
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: asterisk
path: pgweb
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: asterisk
namespace: pgweb
syncPolicy:
automated:
prune: true

View File

@@ -0,0 +1,24 @@
# Note: Do not put any Prometheus instances or exporters in this namespace; have them in the `monitoring` namespace instead
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: prometheus-operator
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: https://github.com/prometheus-operator/prometheus-operator.git
targetRevision: v0.82.0
path: .
kustomize:
namespace: prometheus-operator
destination:
server: 'https://kubernetes.default.svc'
namespace: prometheus-operator
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true
- ServerSideApply=true # Resource is too big to fit in 262144 bytes allowed annotation size.

View File

@@ -2,17 +2,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: logmower
name: ripe87
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: logmower
path: ripe87
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: logmower
namespace: ripe87
syncPolicy:
automated:
prune: true

View File

@@ -2,17 +2,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: reloader
name: rook-ceph
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: reloader
path: rook-ceph
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: reloader
namespace: rook-ceph
syncPolicy:
automated:
prune: true

View File

@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: secret-claim-operator
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: https://github.com/codemowers/operatorlib
path: samples/secret-claim-operator
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: secret-claim-operator
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

View File

@@ -0,0 +1,24 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: tigera-operator
namespace: argocd
spec:
project: k-space.ee
source:
# also depends on git@git.k-space.ee:secretspace/kube.git
repoURL: git@git.k-space.ee:k-space/kube.git
targetRevision: HEAD
path: tigera-operator
destination:
server: 'https://kubernetes.default.svc'
namespace: tigera-operator
# also houses calico-system and calico-apiserver
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true
- ServerSideApply=true # Resource is too big to fit in 262144 bytes allowed annotation size.
- Force=true # `--force-conflicts`, according to https://docs.tigera.io/calico/latest/operations/upgrading/kubernetes-upgrade

View File

@@ -5,7 +5,7 @@ metadata:
name: whoami
namespace: argocd
spec:
project: default
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: whoami

View File

@@ -7,9 +7,10 @@ metadata:
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: woodpecker
# also depends on git@git.k-space.ee:secretspace/kube.git
repoURL: git@git.k-space.ee:k-space/kube.git
targetRevision: HEAD
path: woodpecker
destination:
server: 'https://kubernetes.default.svc'
namespace: woodpecker

argocd/deploy_key.pub Normal file
View File

@@ -0,0 +1,2 @@
# used for git.k-space: k-space/kube, secretspace/kube, secretspace/members
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOxYpFf85Vnxw7WNb/V5dtZT0PJ4VbBhdBNscDd8TVv/ argocd.k-space.ee

argocd/redis.yaml Normal file
View File

@@ -0,0 +1,50 @@
---
apiVersion: codemowers.cloud/v1beta1
kind: SecretClaim
metadata:
name: argocd-redis
namespace: argocd
spec:
size: 32
mapping:
- key: redis-password
value: "%(plaintext)s"
- key: REDIS_URI
value: "redis://:%(plaintext)s@argocd-redis"
---
apiVersion: dragonflydb.io/v1alpha1
kind: Dragonfly
metadata:
name: argocd-redis
namespace: argocd
spec:
authentication:
passwordFromSecret:
key: redis-password
name: argocd-redis
replicas: 3
resources:
limits:
cpu: 1000m
memory: 1Gi
topologySpreadConstraints:
- maxSkew: 1
topologyKey: topology.kubernetes.io/zone
whenUnsatisfiable: DoNotSchedule
labelSelector:
matchLabels:
app: argocd-redis
app.kubernetes.io/part-of: dragonfly
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: argocd-redis
namespace: argocd
spec:
selector:
matchLabels:
app: argocd-redis
app.kubernetes.io/part-of: dragonfly
podMetricsEndpoints:
- port: admin

View File

@@ -5,38 +5,26 @@ global:
dex:
enabled: false
# Maybe one day switch to Redis HA?
redis:
enabled: false
redis-ha:
enabled: false
externalRedis:
host: argocd-redis
existingSecret: argocd-redis
server:
# HTTPS is implemented by Traefik
ingress:
enabled: true
annotations:
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
hosts:
- argocd.k-space.ee
tls:
extraTls:
- hosts:
- "*.k-space.ee"
configfucked:
resource.customizations: |
# https://github.com/argoproj/argo-cd/issues/1704
networking.k8s.io/Ingress:
health.lua: |
hs = {}
hs.status = "Healthy"
return hs
apiextensions.k8s.io/CustomResourceDefinition:
ignoreDifferences: |
jsonPointers:
- "x-kubernetes-validations"
metrics:
enabled: true
@@ -79,9 +67,24 @@ configs:
p, role:developers, applications, action/apps/Deployment/restart, default/camtiler, allow
p, role:developers, applications, sync, default/camtiler, allow
p, role:developers, applications, update, default/camtiler, allow
# argocd-image-updater
p, role:image-updater, applications, get, */*, allow
p, role:image-updater, applications, update, */*, allow
g, image-updater, role:image-updater
cm:
kustomize.buildOptions: --enable-helm
admin.enabled: "false"
resource.customizations: |
# https://github.com/argoproj/argo-cd/issues/1704
networking.k8s.io/Ingress:
health.lua: |
hs = {}
hs.status = "Healthy"
return hs
apiextensions.k8s.io/CustomResourceDefinition:
ignoreDifferences: |
jsonPointers:
- "x-kubernetes-validations"
oidc.config: |
name: OpenID Connect
issuer: https://auth.k-space.ee/

View File

@@ -1,36 +1,42 @@
#TODO:
# Bind namespace
- cert-manager talks to master to add domain names, and DNS-01 TLS through ns1.k-space.ee
^ both-side link to cert-manager
The Bind secondary servers and `external-dns` service pods are running in this namespace.
The `external-dns` pods are used to declaratively update DNS records on the
[Bind primary](https://git.k-space.ee/k-space/ansible/src/branch/main/authoritative-nameserver.yaml).
bind-services (zone transfer to HA replicas from ns1.k-space.ee)
### ns1.k-space.ee
Primary authoritative nameserver replica. Other replicas live on Kube nodes.
Idea to move it to Zone.
dns.yaml files add DNS records
# Bind setup
The Bind primary resides outside Kubernetes at `193.40.103.2` and
The Bind primary `ns1.k-space.ee` resides outside Kubernetes at `193.40.103.2` and
it's internally reachable via `172.20.0.2`.
Bind secondaries perform AXFR (zone transfer) from `ns1.k-space.ee` using
shared secret authentication.
The primary triggers notification events to `172.20.53.{1..3}`
which are internally exposed IP-s of the secondaries.
Bind secondaries are hosted inside Kubernetes, load balanced behind `62.65.250.2` and
under normal circumstances managed by [ArgoCD](https://argocd.k-space.ee/applications/argocd/bind).
Note that [cert-manager](https://git.k-space.ee/k-space/kube/src/branch/master/cert-manager/) also performs DNS updates on the Bind primary.
# For user
`Ingresses` and `DNSEndpoint` resources under `k-space.ee`, `kspace.ee`, `k6.ee`
domains are picked up automatically by `external-dns` and updated on the Bind primary.
To find usage examples in this repository use
`grep -r -A25 "^kind: Ingress" .` and
`grep -R -r -A100 "^kind: DNSEndpoint" .`
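A minimal `DNSEndpoint` sketch (the name, namespace, hostname and target below are placeholders for illustration, not an actual record in this repository):
```yaml
---
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: example        # hypothetical name
  namespace: example   # hypothetical namespace watched by external-dns
spec:
  endpoints:
    - dnsName: example.k-space.ee   # placeholder hostname under a managed zone
      recordTTL: 300
      recordType: A
      targets:
        - 193.40.103.2              # placeholder target address
```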
# For administrator
Ingresses and DNSEndpoints referring to `k-space.ee`, `kspace.ee`, `k6.ee`
are picked up automatically by `external-dns` and updated on primary.
The primary triggers notification events to `172.20.53.{1..3}`
The primary triggers notification events to `172.21.53.{1..3}`
which are internally exposed IP-s of the secondaries.
# Secrets
To configure TSIG secrets:
```
```sh
kubectl create secret generic -n bind bind-readonly-secret \
--from-file=readonly.key
kubectl create secret generic -n bind bind-readwrite-secret \
@@ -39,9 +45,8 @@ kubectl create secret generic -n bind external-dns
kubectl -n bind delete secret tsig-secret
kubectl -n bind create secret generic tsig-secret \
--from-literal=TSIG_SECRET=$(cat readwrite.key | grep secret | cut -d '"' -f 2)
kubectl -n cert-manager delete secret tsig-secret
kubectl -n cert-manager create secret generic tsig-secret \
--from-literal=TSIG_SECRET=$(cat readwrite.key | grep secret | cut -d '"' -f 2)
# ^ same tsig-secret is in git.k-space.ee/secretspace/kube cert-manager
```
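If the `readonly.key` and `readwrite.key` files are missing, they can be generated with BIND's `tsig-keygen` (a sketch, assuming BIND 9 utilities on the primary; the secrets actually in use are managed as described above):
```sh
# Each command emits a key clause with an algorithm and a base64 secret
tsig-keygen -a hmac-sha256 readonly > readonly.key
tsig-keygen -a hmac-sha256 readwrite > readwrite.key
```
The `secret "..."` field of the generated files is what the `grep secret | cut -d '"' -f 2` pipeline above extracts.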
# Serving additional zones
@@ -62,7 +67,7 @@ zone "foobar.com" {
file "/var/lib/bind/db.foobar.com";
allow-update { !rejected; key foobar; };
allow-transfer { !rejected; key readonly; key foobar; };
notify explicit; also-notify { 172.20.53.1; 172.20.53.2; 172.20.53.3; };
notify explicit; also-notify { 172.21.53.1; 172.21.53.2; 172.21.53.3; };
};
```

View File

@@ -3,6 +3,7 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: bind-secondary-config-local
namespace: bind
data:
named.conf.local: |
zone "codemowers.ee" { type slave; masters { 172.20.0.2 key readonly; }; };
@@ -13,6 +14,7 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: bind-secondary-config
namespace: bind
data:
named.conf: |
include "/etc/bind/named.conf.local";
@@ -36,6 +38,7 @@ metadata:
name: bind-secondary
namespace: bind
spec:
revisionHistoryLimit: 0
replicas: 3
selector:
matchLabels:
@@ -45,15 +48,16 @@ spec:
labels:
app: bind-secondary
spec:
volumes:
- name: run
emptyDir: {}
containers:
- name: bind-secondary
image: internetsystemsconsortium/bind9:9.20
volumeMounts:
- mountPath: /run/named
name: run
image: mirror.gcr.io/internetsystemsconsortium/bind9:9.20
resources:
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 1m
memory: 35Mi
workingDir: /var/bind
command:
- named
@@ -79,16 +83,13 @@ spec:
name: bind-readonly-secret
- name: bind-data
emptyDir: {}
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- bind-secondary
topologyKey: "kubernetes.io/hostname"
topologySpreadConstraints:
- maxSkew: 1
topologyKey: topology.kubernetes.io/zone
whenUnsatisfiable: DoNotSchedule
labelSelector:
matchLabels:
app: bind-secondary
---
apiVersion: v1
kind: Service
@@ -119,7 +120,7 @@ metadata:
spec:
type: LoadBalancer
externalTrafficPolicy: Local
loadBalancerIP: 172.20.53.1
loadBalancerIP: 172.21.53.1
selector:
app: bind-secondary
statefulset.kubernetes.io/pod-name: bind-secondary-0
@@ -141,7 +142,7 @@ metadata:
spec:
type: LoadBalancer
externalTrafficPolicy: Local
loadBalancerIP: 172.20.53.2
loadBalancerIP: 172.21.53.2
selector:
app: bind-secondary
statefulset.kubernetes.io/pod-name: bind-secondary-1
@@ -163,7 +164,7 @@ metadata:
spec:
type: LoadBalancer
externalTrafficPolicy: Local
loadBalancerIP: 172.20.53.3
loadBalancerIP: 172.21.53.3
selector:
app: bind-secondary
statefulset.kubernetes.io/pod-name: bind-secondary-2

View File

@@ -3,6 +3,7 @@ apiVersion: apps/v1
kind: Deployment
metadata:
name: external-dns-k-space
namespace: bind
spec:
revisionHistoryLimit: 0
selector:
@@ -16,7 +17,14 @@ spec:
serviceAccountName: external-dns
containers:
- name: external-dns
image: registry.k8s.io/external-dns/external-dns:v0.14.2
image: registry.k8s.io/external-dns/external-dns:v0.16.1
resources:
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 2m
memory: 35Mi
envFrom:
- secretRef:
name: tsig-secret

View File

@@ -3,6 +3,7 @@ apiVersion: apps/v1
kind: Deployment
metadata:
name: external-dns-k6
namespace: bind
spec:
revisionHistoryLimit: 0
selector:
@@ -16,15 +17,22 @@ spec:
serviceAccountName: external-dns
containers:
- name: external-dns
image: registry.k8s.io/external-dns/external-dns:v0.14.2
image: registry.k8s.io/external-dns/external-dns:v0.16.1
resources:
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 2m
memory: 35Mi
envFrom:
- secretRef:
name: tsig-secret
args:
- --log-level=debug
- --events
- --registry=noop
- --provider=rfc2136
- --source=ingress
- --source=service
- --source=crd
- --domain-filter=k6.ee
@@ -41,31 +49,27 @@ apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
name: k6
namespace: bind
spec:
endpoints:
- dnsName: k6.ee
recordTTL: 300
recordType: SOA
targets:
- "ns1.k-space.ee. hostmaster.k-space.ee. (1 300 300 300 300)"
- dnsName: k6.ee
recordTTL: 300
recordType: NS
targets:
- ns1.k-space.ee
- ns2.k-space.ee
- dnsName: ns1.k-space.ee
recordTTL: 300
recordType: A
targets:
- 193.40.103.2
- dnsName: ns2.k-space.ee
recordTTL: 300
recordType: A
targets:
- 62.65.250.2
- dnsName: k-space.ee
recordTTL: 300
recordType: MX
targets:
- 10 mail.k-space.ee
- dnsName: k6.ee
recordTTL: 300
recordType: SOA
targets:
- "ns1.k-space.ee. hostmaster.k-space.ee. (1 300 300 300 300)"
- dnsName: k6.ee
recordTTL: 300
recordType: NS
targets:
- ns1.k-space.ee
- ns2.k-space.ee
- dnsName: ns1.k-space.ee
recordTTL: 300
recordType: A
targets:
- 193.40.103.2
- dnsName: ns2.k-space.ee
recordTTL: 300
recordType: A
targets:
- 62.65.250.2

View File

@@ -3,6 +3,7 @@ apiVersion: apps/v1
kind: Deployment
metadata:
name: external-dns-kspace
namespace: bind
spec:
revisionHistoryLimit: 0
selector:
@@ -16,10 +17,17 @@ spec:
serviceAccountName: external-dns
containers:
- name: external-dns
image: registry.k8s.io/external-dns/external-dns:v0.14.2
image: registry.k8s.io/external-dns/external-dns:v0.16.1
resources:
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 2m
memory: 35Mi
envFrom:
- secretRef:
name: tsig-secret
- secretRef:
name: tsig-secret
args:
- --events
- --registry=noop
@@ -41,26 +49,27 @@ apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
name: kspace
namespace: bind
spec:
endpoints:
- dnsName: kspace.ee
recordTTL: 300
recordType: SOA
targets:
- "ns1.k-space.ee. hostmaster.k-space.ee. (1 300 300 300 300)"
- dnsName: kspace.ee
recordTTL: 300
recordType: NS
targets:
- ns1.k-space.ee
- ns2.k-space.ee
- dnsName: ns1.k-space.ee
recordTTL: 300
recordType: A
targets:
- 193.40.103.2
- dnsName: ns2.k-space.ee
recordTTL: 300
recordType: A
targets:
- 62.65.250.2
- dnsName: kspace.ee
recordTTL: 300
recordType: SOA
targets:
- "ns1.k-space.ee. hostmaster.k-space.ee. (1 300 300 300 300)"
- dnsName: kspace.ee
recordTTL: 300
recordType: NS
targets:
- ns1.k-space.ee
- ns2.k-space.ee
- dnsName: ns1.k-space.ee
recordTTL: 300
recordType: A
targets:
- 193.40.103.2
- dnsName: ns2.k-space.ee
recordTTL: 300
recordType: A
targets:
- 62.65.250.2

View File

@@ -4,55 +4,57 @@ kind: ClusterRole
metadata:
name: external-dns
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- pods
- nodes
verbs:
- get
- watch
- list
- apiGroups:
- extensions
- networking.k8s.io
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- externaldns.k8s.io
resources:
- dnsendpoints
verbs:
- get
- watch
- list
- apiGroups:
- externaldns.k8s.io
resources:
- dnsendpoints/status
verbs:
- update
- apiGroups:
- ""
resources:
- services
- endpoints
- pods
- nodes
verbs:
- get
- watch
- list
- apiGroups:
- extensions
- networking.k8s.io
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- externaldns.k8s.io
resources:
- dnsendpoints
verbs:
- get
- watch
- list
- apiGroups:
- externaldns.k8s.io
resources:
- dnsendpoints/status
verbs:
- update
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: external-dns
namespace: bind
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: external-dns-viewer
namespace: bind
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: external-dns
subjects:
- kind: ServiceAccount
name: external-dns
namespace: bind
- kind: ServiceAccount
name: external-dns
namespace: bind

camtiler/.gitignore vendored
View File

@@ -1 +0,0 @@
deployments/

View File

@@ -1,87 +0,0 @@
# Cameras
Camtiler is the umbrella name for our homegrown camera surveillance system.
Everything besides [Camera](#camera)s is deployed with Kubernetes.
## Components
![cameras.graphviz.svg](cameras.graphviz.svg)
<!-- Manually rendered with https://dreampuf.github.io/GraphvizOnline
digraph G {
"camera-operator" -> "camera-motion-detect" [label="deploys"]
"camera-tiler" -> "cam.k-space.ee/tiled"
camera -> "camera-tiler"
camera -> "camera-motion-detect" -> mongo
"camera-motion-detect" -> "Minio S3"
"cam.k-space.ee" -> mongo [label="queries events", decorate=true]
mongo -> "camtiler-event-broker" [label="transforms object to add (signed) URL to S3", ]
"camtiler-event-broker" -> "cam.k-space.ee"
"Minio S3" -> "cam.k-space.ee" [label="using signed URL from camtiler-event-broker", decorate=true]
camera [label="📸 camera"]
}
-->
### 📸 Camera
Cameras are listed in [application.yml](application.yml) as `kind: Camera`.
Two types of camera hosts:
- GL-AR150 with [openwrt-camera-images](https://git.k-space.ee/k-space/openwrt-camera-image).
- [Doors](https://wiki.k-space.ee/e/en/hosting/doors) (Raspberry Pi) with mjpg-streamer.
### camera-tiler (cam.k-space.ee/tiled)
Out-of-bound, connects to cameras and streams to web browser.
One instance per every camera
#### camera-operator
Functionally the same as a kubernetes deployment for camera-tiler.
Operator/deployer for camera-tiler.
### camera-motion-detect
Connects to cameras, on motion writes events to Mongo and frames to S3.
### cam.k-space.ee (logmower)
Fetches motion-detect events from mongo. Fetches referenced images from S3 (minio).
#### camtiler-event-broker
MitM between motion-detect -> mongo. Appends S3 URLs to the response.
## Kubernetes commands
Apply changes:
```
kubectl apply -n camtiler \
-f application.yml \
-f minio.yml \
-f mongoexpress.yml \
-f mongodb-support.yml \
-f camera-tiler.yml \
-f logmower.yml \
-f ingress.yml \
-f network-policies.yml \
-f networkpolicy-base.yml
```
Deploy changes:
```
kubectl -n camtiler rollout restart deployment.apps/camtiler
```
Initialize secrets:
```
kubectl create secret generic -n camtiler mongodb-application-readwrite-password --from-literal="password=$(cat /dev/urandom | base64 | head -c 30)"
kubectl create secret generic -n camtiler mongodb-application-readonly-password --from-literal="password=$(cat /dev/urandom | base64 | head -c 30)"
kubectl create secret generic -n camtiler minio-secrets \
--from-literal="MINIO_ROOT_USER=root" \
--from-literal="MINIO_ROOT_PASSWORD=$(cat /dev/urandom | base64 | head -c 30)"
kubectl -n camtiler create secret generic camera-secrets \
--from-literal=username=... \
--from-literal=password=...
```
Restart all deployments:
```
for j in $(kubectl get deployments -n camtiler -o name); do kubectl rollout restart -n camtiler $j; done
```

View File

@@ -1,356 +0,0 @@
---
apiVersion: codemowers.cloud/v1beta1
kind: MinioBucketClaim
metadata:
name: camtiler
spec:
capacity: 150Gi
class: dedicated
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: cams.k-space.ee
spec:
group: k-space.ee
names:
plural: cams
singular: cam
kind: Camera
shortNames:
- cam
scope: Namespaced
versions:
- name: v1alpha1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
properties:
spec:
type: object
properties:
roi:
type: object
description: Region of interest for this camera
properties:
threshold:
type: integer
description: Percentage of pixels changed within ROI to
consider whole frame to have motion detected.
Defaults to 5.
enabled:
type: boolean
description: Whether motion detection is enabled for this
camera. Defaults to false.
left:
type: integer
description: Left boundary of ROI as
percentage of the width of a frame.
By default 0.
right:
type: integer
description: Right boundary of ROI as
percentage of the width of a frame.
By default 100.
top:
type: integer
description: Top boundary of ROI as
percentage of the height of a frame.
By default 0.
bottom:
type: integer
description: Bottom boundary of ROI as
percentage of the height of a frame.
By default 100.
secretRef:
type: string
description: Secret that contains authentication credentials
target:
type: string
description: URL of the video feed stream
replicas:
type: integer
minimum: 1
maximum: 2
description: For highly available deployment set this to 2 or
higher. Make sure you also run Mongo and Minio in HA
configurations
required: ["target"]
required: ["spec"]
---
apiVersion: codemowers.io/v1alpha1
kind: ClusterOperator
metadata:
name: camera
spec:
resource:
group: k-space.ee
version: v1alpha1
plural: cams
secret:
enabled: false
services:
- apiVersion: v1
kind: Service
metadata:
name: foobar
labels:
app.kubernetes.io/name: foobar
component: camera-motion-detect
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: foobar
component: camera-motion-detect
ports:
- protocol: TCP
port: 80
targetPort: 5000
deployments:
- apiVersion: apps/v1
kind: Deployment
metadata:
name: camera-foobar
spec:
revisionHistoryLimit: 0
replicas: 1
strategy:
type: RollingUpdate
rollingUpdate:
# Swap following two with replicas: 2
maxSurge: 1
maxUnavailable: 0
selector:
matchLabels:
app.kubernetes.io/name: foobar
template:
metadata:
labels:
app.kubernetes.io/name: foobar
component: camera-motion-detect
spec:
containers:
- name: camera-motion-detect
image: harbor.k-space.ee/k-space/camera-motion-detect:latest
startupProbe:
httpGet:
path: /healthz
port: 5000
initialDelaySeconds: 2
periodSeconds: 180
timeoutSeconds: 60
readinessProbe:
httpGet:
path: /readyz
port: 5000
initialDelaySeconds: 60
periodSeconds: 60
timeoutSeconds: 5
ports:
- containerPort: 5000
name: "http"
resources:
requests:
memory: "64Mi"
cpu: "200m"
limits:
memory: "256Mi"
cpu: "4000m"
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
command:
- /app/camdetect.py
- http://user@foobar.cam.k-space.ee:8080/?action=stream
env:
- name: SOURCE_NAME
value: foobar
- name: S3_BUCKET_NAME
valueFrom:
secretKeyRef:
name: miniobucket-camtiler-owner-secrets
key: BUCKET_NAME
- name: S3_ENDPOINT_URL
valueFrom:
secretKeyRef:
name: miniobucket-camtiler-owner-secrets
key: AWS_S3_ENDPOINT_URL
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: miniobucket-camtiler-owner-secrets
key: AWS_SECRET_ACCESS_KEY
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: miniobucket-camtiler-owner-secrets
key: AWS_ACCESS_KEY_ID
- name: BASIC_AUTH_PASSWORD
valueFrom:
secretKeyRef:
name: camera-secrets
key: password
- name: MONGO_URI
valueFrom:
secretKeyRef:
name: mongodb-application-readwrite
key: connectionString.standard
# Make sure 2+ pods of same camera are scheduled on different hosts
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values:
- foobar
topologyKey: topology.kubernetes.io/zone
# Make sure camera deployments are spread over workers
topologySpreadConstraints:
- maxSkew: 1
topologyKey: topology.kubernetes.io/zone
whenUnsatisfiable: DoNotSchedule
labelSelector:
matchLabels:
app.kubernetes.io/name: foobar
component: camera-motion-detect
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: cameras
spec:
groups:
- name: cameras
rules:
- alert: CameraLost
expr: rate(camtiler_frames_total{stage="downloaded"}[1m]) < 1
for: 2m
labels:
severity: warning
annotations:
summary: Camera feed stopped
- alert: CameraServerRoomMotion
expr: rate(camtiler_events_total{app_kubernetes_io_name="server-room"}[30m]) > 0
for: 1m
labels:
severity: warning
annotations:
summary: Motion was detected in server room
- alert: CameraSlowUploads
expr: camtiler_queue_frames{stage="upload"} > 10
for: 5m
labels:
severity: warning
annotations:
summary: Motion detect snapshots are piling up and
not getting uploaded to S3
- alert: CameraSlowProcessing
expr: camtiler_queue_frames{stage="download"} > 10
for: 5m
labels:
severity: warning
annotations:
summary: Motion detection processing pipeline is not keeping up
with incoming frames
- alert: CameraResourcesThrottled
expr: sum by (pod) (rate(container_cpu_cfs_throttled_periods_total{namespace="camtiler"}[1m])) > 0
for: 5m
labels:
severity: warning
annotations:
summary: CPU limits are bottleneck
---
# Referenced/linked by README.md
apiVersion: k-space.ee/v1alpha1
kind: Camera
metadata:
name: workshop
spec:
target: http://user@workshop.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets
replicas: 1
---
apiVersion: k-space.ee/v1alpha1
kind: Camera
metadata:
name: server-room
spec:
target: http://user@server-room.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets
replicas: 2
---
apiVersion: k-space.ee/v1alpha1
kind: Camera
metadata:
name: printer
spec:
target: http://user@printer.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets
replicas: 1
---
apiVersion: k-space.ee/v1alpha1
kind: Camera
metadata:
name: chaos
spec:
target: http://user@chaos.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets
replicas: 1
---
apiVersion: k-space.ee/v1alpha1
kind: Camera
metadata:
name: cyber
spec:
target: http://user@cyber.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets
replicas: 1
---
apiVersion: k-space.ee/v1alpha1
kind: Camera
metadata:
name: kitchen
spec:
target: http://user@kitchen.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets
replicas: 1
---
apiVersion: k-space.ee/v1alpha1
kind: Camera
metadata:
name: back-door
spec:
target: http://user@100.102.3.3:8080/?action=stream
secretRef: camera-secrets
replicas: 1
---
apiVersion: k-space.ee/v1alpha1
kind: Camera
metadata:
name: ground-door
spec:
target: http://user@100.102.3.1:8080/?action=stream
secretRef: camera-secrets
replicas: 1
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: camera-motion-detect
spec:
selector:
matchLabels:
component: camera-motion-detect
podMetricsEndpoints:
- port: http
podTargetLabels:
- app.kubernetes.io/name
- component

View File

@@ -1,98 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: camera-tiler
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
spec:
revisionHistoryLimit: 0
replicas: 2
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: camtiler
component: camera-tiler
template:
metadata:
labels: *selectorLabels
spec:
serviceAccountName: camera-tiler
containers:
- name: camera-tiler
image: harbor.k-space.ee/k-space/camera-tiler:latest
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
ports:
- containerPort: 5001
name: "http"
resources:
requests:
memory: "200Mi"
cpu: "100m"
limits:
memory: "500Mi"
cpu: "4000m"
---
apiVersion: v1
kind: Service
metadata:
name: camera-tiler
labels:
app.kubernetes.io/name: camtiler
component: camera-tiler
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: camtiler
component: camera-tiler
ports:
- protocol: TCP
port: 5001
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: camera-tiler
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: camera-tiler
rules:
- apiGroups:
- ""
resources:
- services
verbs:
- list
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: camera-tiler
subjects:
- kind: ServiceAccount
name: camera-tiler
apiGroup: ""
roleRef:
kind: Role
name: camera-tiler
apiGroup: ""
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: camtiler
spec:
selector:
matchLabels:
app.kubernetes.io/name: camtiler
component: camera-tiler
podMetricsEndpoints:
- port: http
podTargetLabels:
- app.kubernetes.io/name
- component

View File

@@ -1,131 +0,0 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<!-- Generated by graphviz version 2.40.1 (20161225.0304)
-->
<!-- Title: G Pages: 1 -->
<svg width="658pt" height="387pt" viewBox="0.00 0.00 658.36 386.80" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 382.8)">
<title>G</title>
<polygon fill="#ffffff" stroke="transparent" points="-4,4 -4,-382.8 654.3562,-382.8 654.3562,4 -4,4"/>
<!-- camera&#45;operator -->
<g id="node1" class="node">
<title>camera-operator</title>
<ellipse fill="none" stroke="#000000" cx="356.22" cy="-360.8" rx="74.095" ry="18"/>
<text text-anchor="middle" x="356.22" y="-356.6" font-family="Times,serif" font-size="14.00" fill="#000000">camera-operator</text>
</g>
<!-- camera&#45;motion&#45;detect -->
<g id="node2" class="node">
<title>camera-motion-detect</title>
<ellipse fill="none" stroke="#000000" cx="356.22" cy="-272" rx="95.5221" ry="18"/>
<text text-anchor="middle" x="356.22" y="-267.8" font-family="Times,serif" font-size="14.00" fill="#000000">camera-motion-detect</text>
</g>
<!-- camera&#45;operator&#45;&gt;camera&#45;motion&#45;detect -->
<g id="edge1" class="edge">
<title>camera-operator-&gt;camera-motion-detect</title>
<path fill="none" stroke="#000000" d="M356.22,-342.4006C356.22,-330.2949 356.22,-314.2076 356.22,-300.4674"/>
<polygon fill="#000000" stroke="#000000" points="359.7201,-300.072 356.22,-290.072 352.7201,-300.0721 359.7201,-300.072"/>
<text text-anchor="middle" x="377.9949" y="-312.2" font-family="Times,serif" font-size="14.00" fill="#000000">deploys</text>
</g>
<!-- mongo -->
<g id="node6" class="node">
<title>mongo</title>
<ellipse fill="none" stroke="#000000" cx="292.22" cy="-199" rx="37.7256" ry="18"/>
<text text-anchor="middle" x="292.22" y="-194.8" font-family="Times,serif" font-size="14.00" fill="#000000">mongo</text>
</g>
<!-- camera&#45;motion&#45;detect&#45;&gt;mongo -->
<g id="edge5" class="edge">
<title>camera-motion-detect-&gt;mongo</title>
<path fill="none" stroke="#000000" d="M340.3997,-253.9551C332.3383,-244.76 322.4178,-233.4445 313.6783,-223.476"/>
<polygon fill="#000000" stroke="#000000" points="316.2049,-221.0485 306.9807,-215.8365 310.9413,-225.6632 316.2049,-221.0485"/>
</g>
<!-- Minio S3 -->
<g id="node7" class="node">
<title>Minio S3</title>
<ellipse fill="none" stroke="#000000" cx="396.22" cy="-145" rx="47.0129" ry="18"/>
<text text-anchor="middle" x="396.22" y="-140.8" font-family="Times,serif" font-size="14.00" fill="#000000">Minio S3</text>
</g>
<!-- camera&#45;motion&#45;detect&#45;&gt;Minio S3 -->
<g id="edge6" class="edge">
<title>camera-motion-detect-&gt;Minio S3</title>
<path fill="none" stroke="#000000" d="M361.951,-253.804C368.6045,-232.6791 379.6542,-197.5964 387.4031,-172.9935"/>
<polygon fill="#000000" stroke="#000000" points="390.8337,-173.7518 390.4996,-163.1622 384.157,-171.6489 390.8337,-173.7518"/>
</g>
<!-- camera&#45;tiler -->
<g id="node3" class="node">
<title>camera-tiler</title>
<ellipse fill="none" stroke="#000000" cx="527.22" cy="-272" rx="57.8558" ry="18"/>
<text text-anchor="middle" x="527.22" y="-267.8" font-family="Times,serif" font-size="14.00" fill="#000000">camera-tiler</text>
</g>
<!-- cam.k&#45;space.ee/tiled -->
<g id="node4" class="node">
<title>cam.k-space.ee/tiled</title>
<ellipse fill="none" stroke="#000000" cx="527.22" cy="-199" rx="89.7229" ry="18"/>
<text text-anchor="middle" x="527.22" y="-194.8" font-family="Times,serif" font-size="14.00" fill="#000000">cam.k-space.ee/tiled</text>
</g>
<!-- camera&#45;tiler&#45;&gt;cam.k&#45;space.ee/tiled -->
<g id="edge2" class="edge">
<title>camera-tiler-&gt;cam.k-space.ee/tiled</title>
<path fill="none" stroke="#000000" d="M527.22,-253.9551C527.22,-245.8828 527.22,-236.1764 527.22,-227.1817"/>
<polygon fill="#000000" stroke="#000000" points="530.7201,-227.0903 527.22,-217.0904 523.7201,-227.0904 530.7201,-227.0903"/>
</g>
<!-- camera -->
<g id="node5" class="node">
<title>camera</title>
<ellipse fill="none" stroke="#000000" cx="513.22" cy="-360.8" rx="51.565" ry="18"/>
<text text-anchor="middle" x="513.22" y="-356.6" font-family="Times,serif" font-size="14.00" fill="#000000">📸 camera</text>
</g>
<!-- camera&#45;&gt;camera&#45;motion&#45;detect -->
<g id="edge4" class="edge">
<title>camera-&gt;camera-motion-detect</title>
<path fill="none" stroke="#000000" d="M485.8726,-345.3322C460.8217,-331.1633 423.4609,-310.0318 395.271,-294.0875"/>
<polygon fill="#000000" stroke="#000000" points="396.8952,-290.9851 386.4679,-289.1084 393.449,-297.078 396.8952,-290.9851"/>
</g>
<!-- camera&#45;&gt;camera&#45;tiler -->
<g id="edge3" class="edge">
<title>camera-&gt;camera-tiler</title>
<path fill="none" stroke="#000000" d="M516.1208,-342.4006C518.0482,-330.175 520.6159,-313.8887 522.7961,-300.0599"/>
<polygon fill="#000000" stroke="#000000" points="526.2706,-300.4951 524.3708,-290.072 519.356,-299.4049 526.2706,-300.4951"/>
</g>
<!-- camtiler&#45;event&#45;broker -->
<g id="node9" class="node">
<title>camtiler-event-broker</title>
<ellipse fill="none" stroke="#000000" cx="95.22" cy="-91" rx="95.4404" ry="18"/>
<text text-anchor="middle" x="95.22" y="-86.8" font-family="Times,serif" font-size="14.00" fill="#000000">camtiler-event-broker</text>
</g>
<!-- mongo&#45;&gt;camtiler&#45;event&#45;broker -->
<g id="edge8" class="edge">
<title>mongo-&gt;camtiler-event-broker</title>
<path fill="none" stroke="#000000" d="M254.6316,-196.5601C185.4398,-191.6839 43.6101,-179.7471 28.9976,-163 18.4783,-150.9441 20.8204,-140.7526 28.9976,-127 32.2892,-121.4639 36.7631,-116.7259 41.8428,-112.6837"/>
<polygon fill="#000000" stroke="#000000" points="43.9975,-115.4493 50.2411,-106.8896 40.0224,-109.6875 43.9975,-115.4493"/>
<text text-anchor="middle" x="153.8312" y="-140.8" font-family="Times,serif" font-size="14.00" fill="#000000">transforms object to add (signed) URL to S3</text>
</g>
<!-- cam.k&#45;space.ee -->
<g id="node8" class="node">
<title>cam.k-space.ee</title>
<ellipse fill="none" stroke="#000000" cx="292.22" cy="-18" rx="70.0229" ry="18"/>
<text text-anchor="middle" x="292.22" y="-13.8" font-family="Times,serif" font-size="14.00" fill="#000000">cam.k-space.ee</text>
</g>
<!-- Minio S3&#45;&gt;cam.k&#45;space.ee -->
<g id="edge10" class="edge">
<title>Minio S3-&gt;cam.k-space.ee</title>
<path fill="none" stroke="#000000" d="M394.7596,-126.8896C392.7231,-111.3195 387.8537,-88.922 376.22,-73 366.0004,-59.0134 351.0573,-47.5978 336.5978,-38.8647"/>
<polygon fill="#000000" stroke="#000000" points="338.1215,-35.7041 327.7038,-33.7748 334.6446,-41.7796 338.1215,-35.7041"/>
<text text-anchor="middle" x="521.2881" y="-86.8" font-family="Times,serif" font-size="14.00" fill="#000000">using signed URL from camtiler-event-broker</text>
<polyline fill="none" stroke="#000000" points="650.3562,-82.6 392.22,-82.6 392.9753,-115.8309 "/>
</g>
<!-- cam.k&#45;space.ee&#45;&gt;mongo -->
<g id="edge7" class="edge">
<title>cam.k-space.ee-&gt;mongo</title>
<path fill="none" stroke="#000000" d="M292.22,-36.2125C292.22,-67.8476 292.22,-133.1569 292.22,-170.7273"/>
<polygon fill="#000000" stroke="#000000" points="288.7201,-170.9833 292.22,-180.9833 295.7201,-170.9833 288.7201,-170.9833"/>
<text text-anchor="middle" x="332.0647" y="-86.8" font-family="Times,serif" font-size="14.00" fill="#000000">queries events</text>
<polyline fill="none" stroke="#000000" points="371.9094,-82.6 292.22,-82.6 292.22,-91.3492 "/>
</g>
<!-- camtiler&#45;event&#45;broker&#45;&gt;cam.k&#45;space.ee -->
<g id="edge9" class="edge">
<title>camtiler-event-broker-&gt;cam.k-space.ee</title>
<path fill="none" stroke="#000000" d="M138.9406,-74.7989C169.6563,-63.417 210.7924,-48.1737 242.716,-36.3441"/>
<polygon fill="#000000" stroke="#000000" points="244.1451,-39.5472 252.3059,-32.7905 241.7128,-32.9833 244.1451,-39.5472"/>
</g>
</g>
</svg>


View File

@@ -1,85 +0,0 @@
---
apiVersion: codemowers.cloud/v1beta1
kind: OIDCMiddlewareClient
metadata:
name: sso
spec:
displayName: Cameras
uri: 'https://cam.k-space.ee/tiled'
allowedGroups:
- k-space:floor
- k-space:friends
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: camtiler
annotations:
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: camtiler-sso@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
external-dns.alpha.kubernetes.io/hostname: cams.k-space.ee,cam.k-space.ee
spec:
rules:
- host: cam.k-space.ee
http:
paths:
- pathType: Prefix
path: "/tiled"
backend:
service:
name: camera-tiler
port:
number: 5001
- pathType: Prefix
path: "/m"
backend:
service:
name: camera-tiler
port:
number: 5001
- pathType: Prefix
path: "/events"
backend:
service:
name: logmower-eventsource
port:
number: 3002
- pathType: Prefix
path: "/"
backend:
service:
name: logmower-frontend
port:
number: 8080
tls:
- hosts:
- "*.k-space.ee"
---
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: cams-redirect
spec:
redirectRegex:
regex: ^https://cams.k-space.ee/(.*)$
replacement: https://cam.k-space.ee/$1
permanent: true
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: cams
spec:
entryPoints:
- websecure
routes:
- match: Host(`cams.k-space.ee`)
kind: Rule
middlewares:
- name: cams-redirect
services:
- kind: TraefikService
name: api@internal

Some files were not shown because too many files have changed in this diff.