721 Commits

Author SHA1 Message Date
9ef252c8ec hackerspace kustomize
+ move static env to dockerfile
+ doorboy-direct refactor
2025-08-14 01:19:43 +03:00
c29de936af move parts of CLUSTER doc to ansible, clarify doc 2025-08-14 01:17:34 +03:00
511f6f4ca1 disable mongodb-operator 2025-08-08 03:06:29 +03:00
9be8fc3a95 mongodb is all external 2025-08-08 03:03:49 +03:00
Erki Aas
18d181f36a Disable csi-proxmox 2025-08-08 00:45:29 +03:00
Erki Aas
88eae1c35c Allow rook on control plane nodes 2025-08-07 22:11:50 +03:00
Erki Aas
79ebad6730 Disable managing rook-ceph secrets 2025-08-07 22:05:27 +03:00
Erki Aas
24229639b4 Allow rook on control plane nodes 2025-08-07 21:42:03 +03:00
Erki Aas
71d0667009 Fix ceph storage classes 2025-08-07 21:17:03 +03:00
Erki Aas
ad35fc4828 Migrate storage classes to ceph 2025-08-07 21:06:44 +03:00
3e3814efbe mongodb-operator v0.13.0 2025-08-07 19:30:28 +03:00
b6ea5d3393 mongodb-operator v0.12.0 2025-08-07 19:26:34 +03:00
c5fd94c41b mongodb-operator v0.11.0 2025-08-07 19:26:00 +03:00
f5560f812b argocd mongodb-operator 2025-08-07 19:24:27 +03:00
bbf454f33d checksync dirs <-> argocd 2025-08-07 19:20:45 +03:00
7af3a2f751 kube-system extras to kustomize 2025-08-07 19:12:24 +03:00
ad865ad8b3 link passmower-members to passmower 2025-08-07 19:04:11 +03:00
835ed59970 up grafana doc 2025-08-07 19:02:11 +03:00
872469f1c6 add checksync.sh 2025-08-07 18:59:04 +03:00
6bbe84ecbb argocd kube-system (extras) autosync+prune 2025-08-07 18:54:09 +03:00
Erki Aas
86668b80a3 Fix external-snapshotter 2025-08-07 18:50:45 +03:00
b74a9682d6 docs for comparing ns <> dir <> argo 2025-08-07 18:49:50 +03:00
fb4eb6e285 sync namespace names with directory names 2025-08-07 18:49:50 +03:00
Erki Aas
d74e4fd76f Fix external-snapshotter 2025-08-07 18:46:53 +03:00
Erki Aas
605ad868bb Fix external-snapshotter 2025-08-07 18:44:39 +03:00
73ecae479b argocd hackerspace: fixup indenting 2025-08-07 18:36:29 +03:00
82311c86ff sync namespace names with directory names 2025-08-07 18:32:12 +03:00
42aef1e928 argocd: kube-system extras 2025-08-07 18:27:59 +03:00
f3ef2facdf disable unused cnpg 2025-08-07 18:03:59 +03:00
796e9394ca mysql-clusters not used 2025-08-07 18:03:28 +03:00
5f90a41009 revert DNS move for etherpad 2025-08-07 16:34:47 +03:00
c32c84f6ed mariadb: move hardcoded IP to DNS 2025-08-07 16:26:21 +03:00
Erki Aas
20704e3a24 Add kubernetes-csi/external-snapshotter 2025-08-04 19:40:03 +03:00
882ffdd92a passmower helm kustomize 2025-08-04 16:51:08 +03:00
f88d4bb8e2 move 0ac4364157 2025-08-04 10:11:56 +03:00
c2bb1cc5ac passmower: sync config drift 2025-08-04 09:08:59 +03:00
Erki Aas
4dc45594f1 Fix rook config 2025-08-03 20:41:28 +03:00
Erki Aas
103c4deff4 Fix rook config 2025-08-03 20:35:47 +03:00
Erki Aas
6b753d4bf1 Fix rook config 2025-08-03 20:32:15 +03:00
Erki Aas
f54b5469f8 Enable rook ceph toolbox 2025-08-03 20:19:11 +03:00
Erki Aas
6543c61f81 Enable rook ceph toolbox 2025-08-03 20:17:26 +03:00
Erki Aas
5232edc303 Fix rook chart 2025-08-03 19:52:42 +03:00
Erki Aas
995360f105 Dont create rook default pools etc 2025-08-03 19:48:48 +03:00
Erki Aas
159a41d782 Fix rook chart 2025-08-03 19:47:12 +03:00
Erki Aas
1acaa04123 Fix rook chart 2025-08-03 18:35:15 +03:00
Erki Aas
2526bb5516 Install rook-ceph (operator) and rook-ceph-cluster (external cluster) 2025-08-03 18:30:09 +03:00
Erki Aas
7e2acf3e94 Install rook-ceph (operator) and rook-ceph-cluster (external cluster) 2025-08-03 18:29:19 +03:00
Erki Aas
ee72ba4db2 Install rook-ceph (operator) and rook-ceph-cluster (external cluster) 2025-08-03 18:29:19 +03:00
6c6e396db1 pve: enable pve92, remove older nodes 2025-08-02 15:45:25 +03:00
Erki Aas
a675ad127b full ipv4/6 bgp mesh with router and pve 2025-08-01 22:45:47 +03:00
0029a7e709 README: Update networking wiki ref 2025-08-01 19:25:17 +03:00
e5c914b302 pve.k-space.ee: add pve9x 2025-07-31 10:37:03 +03:00
316fbde6e6 maybe name 2025-07-30 16:49:25 +03:00
2ae8c5b99e rosdump cleanup live debug :) 2025-07-30 16:27:04 +03:00
4f3a9058f9 rosdump: git commit logic + image expectations 2025-07-30 16:20:36 +03:00
a86f5bb250 Revert "add ttlSecondsAfterFinished to rosdump"
This reverts commit c65a75ee0e.

Should be handled by CronJob anyway.
2025-07-30 15:55:51 +03:00
Erki Aas
2754b4e2f7 Add startingDeadlineSeconds to rosdump cronjob 2025-07-30 12:39:19 +03:00
28d50548bf Add startingDeadlineSeconds to rosdump cronjob 2025-07-30 09:31:50 +00:00
c65a75ee0e add ttlSecondsAfterFinished to rosdump
failed jobs give up and don't get rescheduled
2025-07-30 12:06:54 +03:00
2a79309842 whouses nas.k-space.ee 2025-07-29 19:15:17 +03:00
0fc47cab2a minio-clusters tidy 2025-07-29 15:58:20 +03:00
02b7cde355 proxmox-csi: fixup node tolerations 2025-07-28 13:15:48 +03:00
ea358b3883 pve/pvs csi: disable unused nas 2025-07-27 04:46:57 +03:00
b4ae5d3f1f frigate: rm unused/undeployed PVE storage classes 2025-07-27 04:45:26 +03:00
4dddb9622c frigate: id user 2025-07-26 14:29:54 +03:00
c51f7368e2 argocd: use traefik's wildcard tls
It was getting its own argocd.k-space.ee via CR.
and it probably failed to update it since
ingress.tls.enable was actually false. Possibly also
differing ArgoCD versions.
2025-07-26 11:57:06 +03:00
7adbf2476d gitea v1.24.3 2025-07-26 11:43:11 +03:00
c71be24984 grafana: add plugin Infinity 2025-07-24 11:08:09 +03:00
67c97adc96 grafana forbids having secrets in secrets
three layers deep — for God's sake, you wouldn't hand Grafana a secret that way
the key in the secret reference is probably getting flagged;
there is no error message, the value is just silently dropped,
yet it still overrides the env. This seems to be a problem again
since Jan/Feb, with the accepted workaround being to pass it as an env var.

Do as the docs don't say, and again, four times over?
2025-07-24 11:08:03 +03:00
ca4de329f7 kustomize grafana 2025-07-24 09:36:54 +03:00
b6098f92b0 hotfix: grafana upgrade
maintainer bot where are you
2025-07-24 05:08:22 +03:00
02bfe1dfa2 rm mktxp 2025-07-24 01:52:27 +03:00
541a060b6f Add mktxp 2025-07-24 01:29:31 +03:00
Arti Zirk
af3bd7bb41 Re-enable sw_core mikrotik-exporter
Hopefully this time, after RouterOS upgrades, they will not leak memory
anymore. (:
2025-07-22 18:11:50 +03:00
31800f8ffb proxmox-csi to argocd
The images were not pinned to a version before:
proxmox-csi was at :edge, and its dependencies were
pinned to outdated/incompatible versions.
2025-07-22 02:02:44 +03:00
24b57de126 trim spaces in codebase 2025-07-22 01:44:26 +03:00
Erki Aas
6317daefa1 wildduck: disable zonemta dns caching 2025-07-20 13:58:09 +03:00
Erki Aas
31558db1d4 wildduck: improve config and add healthchecks 2025-07-20 13:30:13 +03:00
efb467e425 dragonfly: scale to 1 instance (hotfix)
eaas: dragonfly has 1 rw instance, and applications
don't notice when the leader changes, so they fail to
switch over from ro to rw.
2025-07-17 10:52:21 +03:00
130839ff7f rm reloader: very outdated and no consumers 2025-07-17 00:51:21 +03:00
6e6b3743a0 disable freeswitch
reported not working and not tracked with anything
2025-07-16 23:29:44 +03:00
6f2220445d disable _asterisk and destroy namespace
decided together with eaas: currently broken, nobody has shown
interest, and maintaining kube comes first
2025-07-16 23:03:54 +03:00
bc731d98ec rm camtiler ecosystem in favour of frigate 2025-07-13 13:35:49 +03:00
3a0747d9b8 frigate: add remaining old cameras 2025-07-13 01:16:27 +03:00
792a0864a4 frigate: Fix TPU configuration 2025-07-12 22:04:34 +03:00
17f95e14cc frigate: WIP stuff until CEPH arrives 2025-07-12 21:39:39 +03:00
d3b85e4f24 frigate: Schedule frigate to dedicated nvr node 2025-07-12 20:48:02 +03:00
8525cef4fc frigate: argo autosync 2025-07-12 20:39:37 +03:00
c519fd3d6c frigate: rename rstp to rtsp 2025-07-12 20:38:44 +03:00
4408c22c5b argocd: Add Frigate 2025-07-12 20:28:58 +03:00
2041f5f80a frigate: Migrate to Kustomize 2025-07-12 20:27:50 +03:00
84b259ace4 argocd: metallb: Enable prune 2025-07-12 19:01:39 +03:00
f9fe0379da inventory: add refresh tokens 2025-07-12 18:55:06 +03:00
0359eedcb5 argocd: Add metallb 2025-07-12 18:39:44 +03:00
a03ea7d208 metallb: Migrate to kustomize + helm 2025-07-12 18:38:50 +03:00
c7cb495451 argo: Add Harbor 2025-07-12 16:39:15 +03:00
a6439a3bd1 harbor: Migrate to kustomize + ArgoCD
Still some stuff missing like proper DragonFly resources
2025-07-12 16:27:17 +03:00
754b2180fd goredirect: target k6.ee directly at traefik.k-space.ee
it will be interesting how the CNAME works out;
for ingress it must be in the same IP space traefik is on, otherwise DNS points to an IP with nothing behind it
2025-07-09 20:34:09 +03:00
4f35c87a6c tidy f0b78f7b17 migration leftovers 2025-07-09 13:39:14 +03:00
266b8ee6aa redeploy (and update) frigate
longhorn is stuck in a loop attaching/detaching its storage
2025-06-30 03:08:42 +03:00
f726f8886a longhorn v1.8.2 2025-06-29 23:50:11 +03:00
fe128cf65e longhorn v1.7.3 2025-06-29 23:44:07 +03:00
7232957a04 traefik: move to kustomize
Closes #102
2025-06-29 18:51:56 +03:00
43ad7586ce traefik: fix dashboard root redirect
Closes #70
2025-06-29 16:48:39 +03:00
1b34a48e81 rosdump: move (secretful) result to secretspace 2025-06-29 16:26:05 +03:00
0d18bfd7cc rosdump: ignore ssh host keys 2025-06-29 16:26:05 +03:00
94751c44f9 rosdump: update device targets
- devices moved to sec, update DNS
- fix kube net policy CIDR
2025-06-29 16:25:32 +03:00
de36d70e68 rosdump: fixup 7d71f1b29c 2025-06-29 16:24:44 +03:00
efc2598160 k6: add-slug changed to directly add-by-slug
See also: inventory-app: 9c6902c5a2a90a6bd6a8fa93554f4dc353d9f777^
2025-06-24 18:45:42 +03:00
db935de1a5 gitea does not go through traefik 2025-06-18 19:48:53 +03:00
885f4b505e revert 28daa56bad 2025-06-18 18:46:06 +03:00
aab40b012d cert-manager to argo kustomize helm 2025-06-18 18:21:35 +03:00
28daa56bad cert-manager: rename to default-cluster-cert-issuer
much easier, vs ctrl-f for 'default'
2025-06-18 17:53:28 +03:00
a1e1dcf827 traefik: drop harbor already default issuer 2025-06-18 16:55:50 +03:00
bb1c313a37 inventory: add MACADDRESS_OUTLINK_BASEURL env 2025-05-25 17:25:19 +03:00
d7d83b37f4 freescout: not quite OIDC 2025-05-21 21:29:58 +03:00
0ac4364157 passmower: disable NORMALIZE_EMAIL_ADDRESSES
see comment in file
2025-05-21 20:48:53 +03:00
b8e525c3e0 passmower: texts: K-SPACE in all capital 2025-05-21 19:53:11 +03:00
92db22fd09 docs: there is no keydb 2025-05-03 16:26:32 +03:00
4466878b54 docs: drone is replaced 2025-05-03 15:11:11 +03:00
9b93075543 move members repo to secretspace 2025-05-03 15:05:59 +03:00
ce2e6568b1 wildduck: add mailservice group
#65
2025-04-22 12:33:45 +03:00
f82caf1751 rm unused kdoorpi
- doors are outside of this cluster
- kdoorpi is superseded by godoor
- 0 pods running
2025-04-21 03:16:51 +03:00
d9877a9fc5 tigera-operator: v3.29.3 2025-04-20 22:03:54 +03:00
13cfeeff2b tigera-operator: v3.28.4 2025-04-20 22:03:54 +03:00
21e70685f3 tigera-operator: sync configuration drift 2025-04-20 22:03:50 +03:00
6d7cdbd9c6 tigera-operator to argo (v3.28.1) 2025-04-20 21:32:02 +03:00
10585c7aff dragonfly: v1.1.11 2025-04-20 19:27:28 +03:00
bc301104fe dragonfly: to argo (v1.1.6) 2025-04-20 19:27:24 +03:00
853c9717a9 rm unused opensearch
formerly about to be used by graylog,
which itself has been replaced twice over
2025-04-20 19:18:59 +03:00
ec81c34086 ripe87 to argo 2025-04-20 19:18:59 +03:00
0b713ab321 shared/minio is already dead 2025-04-20 19:18:59 +03:00
541607a7bd cnpg: v1.25.1 2025-04-20 19:18:59 +03:00
d9dce6cadf cnpg to argo (v1.24.1) 2025-04-20 19:18:59 +03:00
0447abecdc rm postgres-operator (4th competing postgres?) 2025-04-20 19:18:59 +03:00
61f7d724b5 argo: secret-claim-operator to git 2025-04-20 19:18:59 +03:00
f899283fdb argo: tidy 2025-04-20 19:18:59 +03:00
fb3123966e keydb (and redis) is dead 2025-04-20 19:18:54 +03:00
5b29fbe7cd prometheus-operator: v0.82.0 2025-04-20 19:06:37 +03:00
9fb356b5a6 prometheus-operator: v0.81.0 2025-04-20 19:06:37 +03:00
908f482396 prometheus-operator: v0.80.1 2025-04-20 19:06:37 +03:00
715cb5ce4b prometheus-operator: v0.79.2 2025-04-20 19:06:37 +03:00
48915ec26c prometheus-operator: v0.78.2 2025-04-20 19:06:37 +03:00
06324bb583 prometheus-operator: v0.77.2 2025-04-20 19:06:37 +03:00
877662445a prometheus-operator: v0.76.2 2025-04-20 19:06:37 +03:00
22b67fa4fc prometheus-operator: migrate to argo+kustomize
v0.75.1 - same as in cluster currently
2025-04-20 19:06:37 +03:00
006240ee1a sync cluster deviation: pve-csi storageclass provisioners
minio-clusters: kustomization; disable unused and outdated shared and dedicated
2025-04-20 19:06:37 +03:00
2a26b4e94c traefik: drop already-enforced router.tls=true annotation 2025-04-20 19:06:37 +03:00
4e59984fe4 woodpecker: fixup assumptions 2025-04-20 19:06:32 +03:00
7eadbee7a2 argo: enable helm in kustomize + update 2025-04-20 19:01:39 +03:00
a94fddff1e woodpecker: recreate to v3 on kustomize 2025-04-20 19:01:39 +03:00
bf44e4fa9b partial revert 3243ed1066786288956ecd7afbedf05104018721 2025-04-20 19:01:39 +03:00
f7f7d52e70 Revert "convert reloader to helm"
Failed sync attempt to 2.1.0: one or more objects failed to apply,
reason: Deployment.apps "reloader-reloader" is invalid:
spec.template.metadata.labels: Invalid value:
map[string]string{"app.kubernetes.io/instance":"reloader",
"app.kubernetes.io/managed-by":"Helm",
"app.kubernetes.io/name":"reloader",
"app.kubernetes.io/version":"v1.4.0", "group":"com.stakater.platform",
"helm.sh/chart":"reloader-2.1.0", "provider":"stakater",
"version":"v1.4.0"}: `selector` does not match template `labels`
(retried 5 times).

This reverts commit db1f33df6d28da34a973678ff576032a445dd39f.
2025-04-20 19:01:39 +03:00
cf9d686882 mirror.gcr.io
and explicit latest tag
2025-04-20 19:01:39 +03:00
5bd0a57417 explicitly use docker library 2025-04-20 19:01:39 +03:00
e22713b282 pin and update 2025-04-20 19:01:39 +03:00
37a8031bc4 minor version updates 2025-04-20 19:01:39 +03:00
095e00b516 nextcloud: 31.0.2 2025-04-20 19:01:39 +03:00
4d84a0a5ca nextcloud: 30.0.8 2025-04-20 19:01:39 +03:00
73f03dbb2a nextcloud: 29.0.14 2025-04-20 19:01:39 +03:00
0c5d2bc792 nextcloud: 28.0.14 2025-04-20 19:01:38 +03:00
6cf53505ad nextcloud: 27.1.13 2025-04-20 19:01:38 +03:00
a694463fad nextcloud 26.0.13 2025-04-20 19:01:38 +03:00
d1eeba377d nextcloud: current version 2025-04-20 19:01:38 +03:00
0628cb94e4 convert reloader to helm 2025-04-20 19:01:38 +03:00
376e74a985 harbor update 2025-04-20 19:01:38 +03:00
6eb0c20175 disable discourse
- posts and user list manually exported
- not in argo
- outdated version
- e-mail is broken
- nobody has accessed in 6mo
- no posts, apart from the initial admin
2025-04-20 19:01:38 +03:00
4bf08fdc7f disable camtiler 2025-04-20 19:01:30 +03:00
f05b1f1324 openebs already disabled 2025-04-18 23:10:38 +03:00
5fa3144e23 logging namespace already disabled 2025-04-18 23:10:38 +03:00
48054078e2 local-path-storage already unused, for 2y 2025-04-18 23:10:38 +03:00
4cf4aecea9 playground is already disabled 2025-04-18 23:10:38 +03:00
8d1c24b80f disable whoami-oidc (broken) 2025-04-18 23:10:38 +03:00
0dcd26fe4f traefik: combined tls 2025-04-18 19:21:24 +03:00
e33053bf79 goredirect: bind workaround 2025-04-18 19:18:56 +03:00
e632b90d2b bind: enable k6.ee 2025-04-18 18:47:22 +03:00
3b5df4cd43 bind: cleanup mail.k-space.ee present in wildduck/dns.yaml 2025-04-18 18:41:18 +03:00
a280a19772 inventory: k6 tls 2025-04-18 18:41:18 +03:00
19e6f53d96 inventory: rm namespace
provided by argo / kubectl command anyway
except for role-bindings, which don't get it
2025-04-18 18:41:16 +03:00
e9efee4853 inventory: fix orphaned selectors 2025-04-18 16:56:19 +03:00
a33d0d12b0 gitea: also disable passkeys to enforce OIDC 2025-04-18 14:46:58 +03:00
dc42a9612a gitea: update and disable passwd login
Closes #11
2025-04-18 14:38:49 +03:00
6f48e3a53a Inventory Minio Quota 1 → 10 Gi
Closes k-space/inventory-app#27
2025-04-11 16:28:58 +03:00
09423ace42 rm unneeded deprecated flag 2025-03-27 09:06:07 +02:00
bb802882ae add Aktiva to non-SSO listing 2025-02-25 23:10:51 +02:00
4a7dfd6435 fix passmower email login link 2025-01-09 13:02:54 +02:00
Erki Aas
fb7504cfee force traefik to all worker nodes 2025-01-02 20:35:22 +02:00
Erki Aas
a4b9bdf89d frigate: make config storage larger 2025-01-02 20:24:17 +02:00
602b4a03f6 frigate: use coral for detect, nvidia gpu for transcode and longhorn for config storage 2025-01-02 20:19:48 +02:00
f9ad582136 allow scheduling longhorn on nvr 2025-01-02 20:19:48 +02:00
305b8ec038 add nvidia-device-plugin to use nvr gpu 2025-01-02 20:19:48 +02:00
7d71f1b29c fix rosdump 2025-01-02 20:19:48 +02:00
0e79aa8f4e passmower: 4/4 replicas (for pve localhost) 2025-01-02 01:25:04 +02:00
a784f00c71 argo: autosync passmower 2025-01-02 01:19:22 +02:00
b71a872c09 argo: passmower helm + extras didn't work out
Kustomize should be able to auto-generate Helm as well.
2025-01-02 01:02:23 +02:00
21beb2332c argo: add passmower 2025-01-02 00:53:04 +02:00
8eed4f66c1 pve: add pve2 2025-01-02 00:24:56 +02:00
75b9948997 pve: fmt port.number on same line 2025-01-02 00:24:47 +02:00
e4dfde9562 argo docs 2 2024-12-15 06:34:47 +02:00
a82193f059 add argocd-image-updater 2024-12-15 06:28:42 +02:00
68a75b8389 migrate OIDC codemowers.io/v1alpha1 to v1beta1 2024-12-15 05:39:41 +02:00
5368fe90eb argo: add localhost callback for CLI login 2024-12-15 05:39:41 +02:00
cded6fde3f fixup argo docs 2024-12-15 05:39:41 +02:00
402ff86fde grafana: disable non-oauth login 2024-12-15 01:46:22 +02:00
272f60ab73 monitoring: mikrotik-exporter fix 2024-11-22 08:16:12 +02:00
9bcad2481b monitoring: Update node-exporter 2024-11-22 05:59:34 +02:00
c04a7b7f67 monitoring: Update mikrotik-exporter 2024-11-22 05:59:08 +02:00
c23fa07c5e monitoring: Update mikrotik-exporter 2024-11-19 15:48:31 +02:00
Erki Aas
c1822888ec dont compile discourse assets 2024-10-25 14:44:27 +03:00
Erki Aas
e26cac6d86 add discourse 2024-10-25 14:35:20 +03:00
Erki Aas
d7ba4bc90e upgrade cnpg 2024-10-25 14:03:50 +03:00
Erki Aas
da4df6c21d frigate: move storage to dedicated nfs share and offload transcoding to separate go2rtc deployment 2024-10-19 13:51:13 +03:00
2964034cd3 fix rosdump scheduling 2024-10-18 18:45:42 +03:00
ae525380b1 fix gitea oidc reg 2024-10-18 18:44:27 +03:00
4b9c3ad394 monitoring: Temporarily disable monitoring of core switches 2024-10-15 10:07:28 +03:00
dbebb39749 gitea: Bump version 2024-10-02 08:15:20 +03:00
Erki Aas
6f15e45402 freeswitch: fix network policy 2024-10-01 22:32:16 +03:00
Erki Aas
36bf431259 freeswitch: fix network policy 2024-10-01 20:27:08 +03:00
Erki Aas
c14a313c57 frigate: enable recording and use openvino 2024-09-29 23:06:41 +03:00
Erki Aas
15a2fd9375 add frigate 2024-09-29 21:34:31 +03:00
Erki Aas
5bd6cf2317 freeswitch: add gitignore 2024-09-29 19:05:42 +03:00
Erki Aas
407f691152 add freeswitch 2024-09-29 19:05:42 +03:00
Erki Aas
e931f490c2 asterisk: update network policy 2024-09-29 19:05:42 +03:00
Erki Aas
b96e8d16a6 expose harbor via traefik 2024-09-29 19:05:42 +03:00
Erki Aas
15d4d44be7 expose traefik via ingress 2024-09-29 19:05:42 +03:00
Erki Aas
52ce6eab0a expose harbor via traefik 2024-09-29 19:04:51 +03:00
e89d045f38 goredirect: add nopath env var 2024-09-13 21:54:49 +03:00
7e70315514 monitoring: Fix snmp-exporter 2024-09-12 22:15:10 +03:00
af5a048bcd replace ups 2024-09-12 21:54:46 +03:00
0005219f81 monitoring: Fix mikrotik-exporter formatting 2024-09-12 21:48:43 +03:00
813bb32e48 monitoring: Update UPS-es 2024-09-12 21:47:20 +03:00
0efae7baf9 unschedule harbor from storage nodes 2024-09-12 19:48:51 +03:00
be90b4e266 monitoring: Update mikrotik-exporter 2024-09-09 22:19:46 +03:00
999d17c384 rosdump: Use codemowers/git image 2024-09-09 08:45:21 +03:00
Erki Aas
bacef8d438 remove logmower 2024-09-08 23:54:32 +03:00
60d1ba9b18 monitoring: Bump mikrotik-exporter again 2024-09-06 12:10:45 +03:00
dcb80e6638 monitoring: Bump mikrotik-exporter 2024-09-06 11:55:49 +03:00
95e0f97db2 grafana: Specify OIDC scopes on Grafana side 2024-09-05 09:32:34 +03:00
f5a7b44ae6 grafana: Add groups OIDC scope 2024-09-05 09:29:16 +03:00
be7e1d9459 grafana: Assign editor role for hackerspace members 2024-09-05 09:23:41 +03:00
cd807ebcde grafana: Allow OIDC assignment to admin role 2024-09-05 09:04:02 +03:00
eaac7f61a7 monitoring: Pin specific mikrotik-exporter image 2024-09-04 23:29:37 +03:00
Erki Aas
a0d5a585e4 add and configure calico ippool 2024-09-04 23:12:35 +03:00
1f8f288f95 monitoring: Update Mikrotik exporter 2024-09-04 22:33:15 +03:00
9de1881647 monitoring: Enable Prometheus admin API 2024-09-04 22:28:01 +03:00
Erki Aas
28904cdd63 make calico use ipip encapsulation 2024-09-04 22:27:36 +03:00
0df188db36 monitoring: README syntax fix 2024-09-04 07:12:56 +03:00
a42b79b5ac monitoring: Add doc.crds.dev ref 2024-09-04 07:12:21 +03:00
Erki Aas
89875a66f8 update passmower config 2024-08-29 14:38:44 +03:00
927366a3d5 inventory: add groups scope 2024-08-28 16:15:07 +03:00
Erki Aas
29212d7f14 passmower: get charts from ghcr 2024-08-27 15:58:05 +03:00
1d8528b312 argocd: Move to DragonflyDB and add resource customizations 2024-08-27 12:41:24 +03:00
566beecb6a Create dummy/stub entries in auth.k-space.ee 2024-08-26 23:51:04 +03:00
Erki Aas
4c52ca88ef add proxmox-nas storage class 2024-08-25 11:34:31 +03:00
b5fceb0f35 Update storage classes 2024-08-25 09:26:57 +03:00
c609b1df04 wildduck: Restore MongoDB 2024-08-25 09:26:27 +03:00
22d65664b2 whoami: Set higher port 2024-08-25 00:25:49 +03:00
59db08e891 whoami: Fix memory limit 2024-08-25 00:22:54 +03:00
d8402bdec5 whoami: Drop privileges 2024-08-25 00:21:24 +03:00
a71bd5de37 whoami: Add resource limits 2024-08-25 00:17:34 +03:00
ce9891046f wildduck: Add resource limits 2024-08-25 00:12:25 +03:00
fea3e8ce66 nextcloud: Fix Dragonfly topology spread constraints 2024-08-25 00:02:51 +03:00
bfeba4017b monitoring: Add revisionHistoryLimit: 0 2024-08-24 23:58:07 +03:00
4b00d876ad nextcloud: Set resource limits 2024-08-24 23:31:22 +03:00
d1e8d8e356 bind: Fix resource limits 2024-08-24 23:28:28 +03:00
22c6fe1979 bind: Add resource limits 2024-08-24 23:25:40 +03:00
f53b31e030 bind: Use topology spread constraint instead of anti affinity rules 2024-08-24 23:22:34 +03:00
cb41b739cc passmower: Fix Dragonfly topology spread constraints 2024-08-24 23:05:13 +03:00
91af1911c4 rm users, now at k-space/members 2024-08-24 22:53:35 +03:00
Erki Aas
4532eccd6d proxy image artefacts through harbor 2024-08-24 19:36:10 +03:00
Erki Aas
d4913aacbf add netshoot container to debug network issues 2024-08-24 19:23:35 +03:00
Erki Aas
abe022eecc update argo readme 2024-08-24 19:23:17 +03:00
Erki Aas
4bcb0a8856 fix members argo app 2024-08-24 19:19:27 +03:00
Erki Aas
b849ac340e fix members argo app 2024-08-24 19:12:31 +03:00
Erki Aas
b922412417 fix members argo app 2024-08-24 19:10:14 +03:00
Erki Aas
2661fe211e manage members (oidcusers) with argocd 2024-08-24 19:05:53 +03:00
Erki Aas
a9406748c5 manage members (oidcusers) with argocd 2024-08-24 19:01:40 +03:00
Erki Aas
cc92ea67f4 upgrade wildduck components 2024-08-24 17:44:19 +03:00
Erki Aas
222d902ec2 cleanup old oidc-gateway 2024-08-24 16:29:24 +03:00
Erki Aas
65e30d5dec migrate most storage classes to proxmox-csi, allow it on masters 2024-08-24 16:29:24 +03:00
4210855827 freescout: Elaborate about mail sync 2024-08-24 15:49:05 +03:00
d7287018ac monitoring: Specify resource limits 2024-08-24 12:36:37 +03:00
3fbecab179 Move to PVE CSI provider 2024-08-24 12:15:30 +03:00
Erki Aas
024edc1c9b expose harbor via dedicated lb on storage nodes 2024-08-23 21:35:04 +03:00
Erki Aas
a94a3f829c expose harbor via dedicated lb on storage nodes 2024-08-23 21:35:04 +03:00
Erki Aas
36055cc869 migrate nextcloud to dragonfly 2024-08-23 21:35:04 +03:00
Erki Aas
aa91322ec6 remove grafana pv as it's using db now 2024-08-23 21:35:04 +03:00
c6c94b1901 test proxmox csi 2024-08-23 17:10:55 +03:00
67fb6c3727 Consolidate monitoring stack to Kube master nodes 2024-08-23 08:00:23 +03:00
Erki Aas
18483197c9 fix passmower image pulling 2024-08-22 14:12:54 +03:00
Erki Aas
a37d268574 temporarily disable middleware from pve 2024-08-22 14:12:54 +03:00
4b5e30f51f monitoring: Revert snmp-exporter because config file needs to be updated 2024-08-21 07:20:32 +03:00
78b0f1534a monitoring: Use gcr mirror for node exporter 2024-08-21 07:19:15 +03:00
0b03a720b3 monitoring: Bump versions, use gcr mirror 2024-08-21 07:17:05 +03:00
f1a2051838 monitoring: Move to topologySpreadConstraints 2024-08-21 07:11:06 +03:00
3280b25a83 Add more revisionHistoryLimit: 1 defs 2024-08-20 12:25:15 +03:00
0eec1fde8b gitea: Add revisionHistoryLimit 2024-08-20 12:21:36 +03:00
ede08c205b grafana: Use declarative data sources 2024-08-20 12:14:42 +03:00
666d900128 Restore minio storage class 2024-08-16 18:50:33 +03:00
bc31357d5b Integrate dos4dev PR #29: postgres-cluster docs 2024-08-16 18:07:45 +03:00
f3244afb20 woodpecker: Use RWX 2024-08-15 22:23:45 +03:00
Erki Aas
384a60244d update readme about network 2024-08-15 13:40:22 +03:00
Erki Aas
ed25720003 run traefik with 4 replicas 2024-08-15 12:43:08 +03:00
Erki Aas
5c1a894a43 add goredirect service manifest 2024-08-15 11:11:20 +03:00
0a9237fae9 wildduck: Limit CPU for Dragonfly 2024-08-15 10:58:34 +03:00
69dca7e1f2 wildduck: Add topologySpreadConstraints for Dragonfly 2024-08-15 09:52:38 +03:00
4d5c47e21b wildduck: Refined Dragonfly cleanup 2024-08-15 09:49:48 +03:00
b3f1eb069f wildduck: Cleanups 2024-08-15 09:37:24 +03:00
bbf421df63 wildduck: Use recreate strategy to avoid Kube scheduling deadlock 2024-08-15 09:24:16 +03:00
Erki Aas
9bf5e2408a migrate workers to infra vlan, use bgp for calico, use calico for lb service announcements 2024-08-14 18:16:21 +03:00
351f0ae746 Remove more Mongoose 2024-08-14 11:02:45 +03:00
84bb476812 Mongo migrated to external Mongo, removing in-cluster Mongo definitions temporarily 2024-08-14 11:00:26 +03:00
07a132748b Restore mongo storage class 2024-08-14 10:49:46 +03:00
656f28a34c Move yamllint config to separate file 2024-08-14 10:30:08 +03:00
12466b19b1 bind, cert-manager: More updates 2024-08-14 10:07:26 +03:00
1d39827375 bind, cert-manager: Cleanups 2024-08-14 10:04:41 +03:00
3f4d89b4b1 dragonfly-operator-system: Add grep example 2024-08-14 09:33:45 +03:00
474ae64156 tigera-operator: Update README 2024-08-14 09:19:00 +03:00
1fa0577ce4 passmower: Cleanup 2024-08-14 08:12:37 +03:00
f8cd93aa9c passmower: Fix Dragonfly topology spread constraints 2024-08-14 07:55:24 +03:00
e22bf78b2e dragonfly-operator-system: Add Redis license notice 2024-08-14 07:53:55 +03:00
be5b036ab8 longhorn-system: Reddit link 2024-08-14 07:42:24 +03:00
a75f703eaa longhorn-system: Update README 2024-08-14 07:41:25 +03:00
2708e48850 longhorn-system: README fix 2024-08-14 07:37:23 +03:00
cfc5a739a1 longhorn-system: Updates 2024-08-14 07:36:31 +03:00
e5e4a07d01 dragonfly-operator-system: Update README 2024-08-14 07:08:26 +03:00
f902bbfe02 dragonfly-operator-system: Update README 2024-08-14 07:00:16 +03:00
70e589ef45 etherpad: Cleanup 2024-08-14 06:58:28 +03:00
b0befbcd69 freescout: Cleanup 2024-08-14 06:57:36 +03:00
Erki Aas
a09f7d4f7e remove rawfile-csi 2024-08-13 20:27:16 +03:00
Erki Aas
2f2fa1a99f migrate inventory to external s3 2024-08-13 20:18:58 +03:00
Erki Aas
66fbf32088 migrate wildduck to external mongo 2024-08-13 20:18:47 +03:00
9b698ea197 freescout: Remove unused reset-oidc-config.yaml 2024-08-13 14:51:33 +03:00
7aa26ea236 passmower: Add topologySpreadConstraints 2024-08-13 14:50:25 +03:00
7c16f84200 monitoring: Elaborate more about operator 2024-08-12 22:15:32 +03:00
c2d08d8a80 monitoring: Update README.md 2024-08-12 22:06:28 +03:00
7c2b862ca8 Move Ansible directory to separate repo 2024-08-12 21:41:36 +03:00
Erki Aas
68e936463b chore: make tegra jetson a misc node 2024-08-12 11:45:35 +03:00
8a1b0b52af add new worker9 2024-08-08 22:39:35 +03:00
6b24ede7ac Upgrade to Kubernetes 1.30 2024-08-08 19:45:46 +03:00
e0cf532e42 Upgrade to Kubernetes 1.29 2024-08-08 18:55:02 +03:00
Erki Aas
59373041cc passmower: run in 3 replicas 2024-08-08 15:53:53 +03:00
4e80899c77 Prepare for separation of ansible Git repo 2024-08-08 12:56:25 +03:00
Erki Aas
9c2b5c39ee fix/update harbor 2024-08-08 12:45:57 +03:00
d3eb888d58 doc: inventory: reference rosdump 2024-08-08 12:40:54 +03:00
3714b174e7 camtiler: disable, it's broken 2024-08-03 09:03:14 +03:00
a1acb06e12 traefik: publish services (for argo healthy) 2024-08-03 09:03:13 +03:00
0b6ab650a2 argo: add apps (already) in argo to git (config drift) 2024-08-03 09:03:11 +03:00
35404464f4 argo: strongarm autosync to prevent further config drift
Commenting out empty syncPolicy, otherwise argocd sees it as a diff
2024-08-03 08:01:55 +03:00
41da5931f9 auth migra: whoami 2024-08-03 06:04:27 +03:00
6879a4e5a5 argo: drone no longer exists 2024-08-03 06:04:27 +03:00
9b2c655a02 camtiler: unify to cam.k-space.ee 2024-08-03 06:04:27 +03:00
8876300dc4 argo config drift: camtiler 2024-08-03 06:04:24 +03:00
8199b3b732 argo config drift: wildduck
Change for apps/StatefulSet/wildduck/wildduck-operator
caused by 2d25377090 applied by ArgoCD:
-      serviceAccountName: codemowers-io-wildduck-operator
+      serviceAccountName: codemowers-cloud-wildduck-operator
2024-08-03 05:35:31 +03:00
43c9b3aa93 argo config drift: woodpecker 2024-08-03 05:35:31 +03:00
504bd3012e argo config drift: doorboy 2024-08-03 04:27:31 +03:00
75b5d39880 signs: deploy with argo 2024-08-03 04:27:31 +03:00
7377b62b3f doc: readme tip + todo for argo 'user-facing' doc 2024-08-03 04:27:31 +03:00
cd13de6cee doc: Reword backlink warning
we already got more broken links :/
I don't really want it to be an aggressive warning.
2024-08-03 04:27:31 +03:00
13da9a8877 Add redirects sign.k-space.ee, members.k-space.ee
There are still dead inventory links to members.k-space.ee
2024-08-03 04:27:31 +03:00
490770485d fixup auth2 → auth rename 2024-08-03 04:27:20 +03:00
ba48643a37 inventory: tls host is k-space.ee, not codemowers
seems like a copy-pasta typo
2024-08-03 01:44:15 +03:00
Erki Aas
18a0079a21 chore: add eaas as contributor 2024-07-30 14:15:13 +03:00
Erki Aas
885b13ecd7 chore: move doorboy to hackerspace 2024-07-30 14:13:25 +03:00
Erki Aas
e17caa9c2d passmower: update login link template 2024-07-30 14:12:54 +03:00
Erki Aas
336ab2efa2 update readme 2024-07-30 12:40:01 +03:00
27a5fe14c7 docs: commit todo items 2024-07-30 11:03:00 +03:00
66034d2463 docs: mega refactor
Also bunch of edits at wiki.k-space.ee
2024-07-30 10:51:34 +03:00
186ea5d947 docs: hackerspace / Inventory-app 2024-07-30 10:33:25 +03:00
470d4f3459 docs: Slack bots 2024-07-30 10:32:57 +03:00
8ad6b989e5 Migrate signs.k-space.ee from GitLab to kube
copy from ripe87
2024-07-30 10:18:40 +03:00
b6bf3ab225 passmower users: list prefix before name 2024-07-30 08:00:14 +03:00
7cac31964d docs: camtiler & doors 2024-07-30 06:13:56 +03:00
a250363bb0 rm replaced-unused mysql-operator 2024-07-30 02:56:50 +03:00
Erki Aas
480ff4f426 update passmower deployment 2024-07-29 15:59:45 +03:00
b737d37b9c fmt ansible: compact and more readable 2024-07-28 22:28:30 +03:00
b4ad080e95 zrepl: enable prometheus for offsite 2024-07-28 21:46:26 +03:00
Simon
a5ad80d8cd Make login url clickable in emails 2024-07-28 18:42:38 +00:00
62be47c2e1 inventory: add ingress and other manifests 2024-07-28 20:58:25 +03:00
249ad2e9ed fix and update harbor install 2024-07-28 20:22:08 +03:00
0c38d2369b attempt to get kibana working 2024-07-28 20:22:08 +03:00
b07a5b9bc0 reconfigure grub only on x86 nodes 2024-07-28 20:22:08 +03:00
2d25377090 wildduck: migrate to dragonfly, disable network policies, upgrade wildduck-operator 2024-07-28 20:22:08 +03:00
73d185b2ee fix redirects 2024-07-28 20:22:08 +03:00
0eb2dc6503 deprecate crunchydata postgres operator 2024-07-28 20:22:08 +03:00
34f1b53544 zrepl: prometheus target 2024-07-28 20:00:51 +03:00
fd1aeaa1a3 Upgrade Calico 2024-07-28 10:38:25 +03:00
b8477de6a8 Upgrade cert-manager 2024-07-28 10:37:34 +03:00
2f712a935e fixup: nas root is not encrypted and failed 2024-07-28 03:32:11 +03:00
792ff38bea mv zrepl.yml to playbook.yml 2024-07-28 03:31:16 +03:00
e929b52e6d Fix ansible.cfg 2024-07-28 01:42:55 +03:00
b2b93879c2 mv to ansible/ 2024-07-27 23:55:16 +03:00
c222f22768 fix zrepl playbook 2024-07-27 23:54:29 +03:00
28ed62c40e migrate wildflock to new passmower 2024-07-27 23:51:04 +03:00
74600efb4c zrepl 2024-07-27 23:49:45 +03:00
79aaaf7498 add todo 2024-07-27 23:08:39 +03:00
f0b78f7b17 migrate grafana to new passmower and external db 2024-07-27 23:08:29 +03:00
ba520da57e update readme 2024-07-27 23:08:15 +03:00
30503ad121 update readme 2024-07-27 23:06:20 +03:00
fbe4a55251 migrate gitea to new passmower 2024-07-27 22:57:01 +03:00
37567eccf9 migrate wiki to new passmower 2024-07-27 22:57:01 +03:00
d3ba1cc05f add openebs-localpath 2024-07-27 22:57:01 +03:00
61b1b1d6ef migrate woodpecker to external mysql 2024-07-27 22:57:01 +03:00
1e8bccbfa3 migrate to new passmower 2024-07-27 22:57:01 +03:00
e89edca340 enable xfs quotas on worker node rootfs 2024-07-27 22:57:01 +03:00
2bb13ef505 manage kube-apiserver manifest with ansible 2024-07-27 22:57:01 +03:00
c44cfb8bc8 fix kubelogin 2024-07-27 22:57:01 +03:00
417f3ddcb8 Update storage nodes and readd Raspberry Pi 400 2024-07-27 22:11:38 +03:00
32fbd498cf Fix typo 2024-07-27 11:46:39 +03:00
97563e8092 Upgrade ECK operator 2024-07-27 10:50:17 +03:00
4141c6b8ae Add OpenSearch operator 2024-07-27 08:42:16 +03:00
bd26aa46b4 Upgrade Etherpad 2024-07-27 08:31:56 +03:00
92459ed68b Reorder SSH key update playbook 2024-07-27 08:30:53 +03:00
9cf57d8bc6 Upgrade MetalLB 2024-07-27 08:30:53 +03:00
af1c78dea6 deprecate members.k-space.ee 2024-07-27 03:17:24 +03:00
2e77813162 migrate to new passmower 2024-07-27 03:17:24 +03:00
ca623c11fd Update kubeadm, kubectl, kubelet deployment 2024-07-27 01:06:20 +03:00
047cbb5c6b traefik: upgrade to 3.1, migrate dashboard via ingressroute 2024-07-27 00:06:07 +03:00
3e52f37cde Add DragonflyDB operator 2024-07-26 17:46:45 +03:00
b955369e2a Upgrade CloudNativePG to 1.23.2 2024-07-26 17:35:42 +03:00
5e765e9788 Use Codemower's image for mikrotik-exporter 2024-07-26 14:15:18 +03:00
5d4f49409c Remove Keel annotations 2024-07-26 13:56:13 +03:00
de573721bd Deprecate Drone as its devs moved on to develop Gitness 2024-07-26 13:51:55 +03:00
c868a62ab7 Update to Woodpecker 2.7.0 2024-07-26 13:26:24 +03:00
7b6f6252a5 Update external-dns 2024-07-26 13:16:49 +03:00
9223c956c0 Update Bind 9.19 to 9.20 2024-07-26 13:16:22 +03:00
1d4e5051d8 Add Prusa 3D printer web endpoint 2024-07-26 13:03:20 +03:00
56bb5be8a9 grafana: Upgrade and fix ACL
2024-07-26 12:36:08 +03:00
d895360510 monitoring: Upgrade node-exporter 2024-07-25 19:17:24 +03:00
bc8de58ca8 monitoring: Upgrade blackbox-exporter 2024-07-25 19:17:24 +03:00
8d355ff9dc Update Prometheus operator 2024-07-25 19:17:24 +03:00
Erki Aas
dc2a08dc78 goredirect: fix mongo uri 2024-07-24 12:51:53 +03:00
19a0b70b9e woodpecker: fix agent 2024-07-19 19:49:32 +03:00
9c656b0ef9 woodpecker: restore storage from backup 2024-07-19 18:13:09 +03:00
278817249e Add Ansible tasks to update authorized SSH keys 2024-07-19 14:08:51 +03:00
cb5644c7f3 Ansible SSH multiplexing fixes 2024-07-19 12:55:40 +03:00
78ef148f83 Add Ansible playbook to update known_hosts and ssh_config 2024-07-19 11:49:47 +03:00
Erki Aas
c2b9ed0368 inventory: migrate to external mongo 2024-07-17 23:58:38 +03:00
Erki Aas
43abf125a9 pve: add pve-internal.k-space.ee for pve-csi in whitelisted codemowers.cloud cluster 2024-07-17 17:59:59 +03:00
Erki Aas
71d968a815 Upgrade longhorn to 1.6.2 2024-07-07 14:38:02 +03:00
Erki Aas
9b4976450f Upgrade longhorn to 1.5.5 2024-07-07 14:00:27 +03:00
27eb0aa6cc Bump Gitea to 1.22.1 2024-07-04 16:26:06 +03:00
f97a77e5aa rm dev.k-space.ee, VM deprecated 2024-06-20 17:27:35 +03:00
73faa9f89c argocd: Update Helm values for new Helm chart 2024-05-21 13:00:18 +03:00
51808b3c6b Update ansible-kubernetes.yml 2024-05-02 12:46:20 +00:00
07af1aa0bd mirror.gcr.io for harbor 2024-04-28 05:09:50 +03:00
f3cceca1c3 use gcr mirror for images with full docker.io path
cluster constantly failing due to rate limits,
please find a better solution
2024-04-28 05:01:02 +03:00
87bc4f1077 fix(camtiler): increase minio bucket quota to 150Gi 2024-02-23 15:54:52 +02:00
aa4ffcd1ad fix(camtiler): add minio console ingress 2024-02-23 15:54:24 +02:00
80ffdbbb80 fix(camtiler): disable broken egress network policies 2024-02-22 12:43:20 +02:00
51895a2a2b fix(camtiler): increase minio bucket quota 2024-02-22 12:43:20 +02:00
c6ea938214 fix(camtiler): add temporary ingress for camtiler dedicated s3 2024-02-22 12:43:20 +02:00
d40f7d2681 mongoexpress: fix usage 2024-02-22 12:43:20 +02:00
b990861040 mongodb: use mirror.gcr.io 2024-02-19 05:24:09 +02:00
477ba83ba4 mon: dc1 is decommissioned 2024-02-19 00:10:19 +02:00
3672197944 debug 2024-02-12 09:29:00 +02:00
0e884305cc change image for whoami-oidc
how about not using custom-patched 3yo stuff
2024-02-12 08:13:51 +02:00
4eb3649649 doc: adding new argocd app 2024-02-12 08:13:51 +02:00
d29a1a3531 whoami-oidc 2024-02-12 08:13:51 +02:00
a055c739c1 Revert "Add GH to Woodpecker"
This reverts commit ab3815e653.

https://github.com/woodpecker-ci/woodpecker/issues/138
2024-02-12 06:59:24 +02:00
ab3815e653 Add GH to Woodpecker 2024-02-12 06:57:27 +02:00
8d2ec43101 update woodpecker 2024-02-12 06:33:40 +02:00
a95f00aaf2 gitea: try fixing registration 2024-02-12 05:41:17 +02:00
3bcaa09004 whoami: update to non-deprecated image 2024-02-12 05:41:17 +02:00
b88165d2b3 fixup: int must be str 2024-02-12 03:40:21 +02:00
13d1f7bd88 gitea: upgrade
rm ENABLE_XORM_LOG: effectively replaced by LOG_SQL
MAILER: follow env deprecation
2024-02-12 03:38:21 +02:00
a6b1fb0752 nextcloud: migration done 2024-02-05 00:50:17 +02:00
4aec3b54ab nextcloud: add default phone region
Nextcloud complains if it is missing.
2024-02-05 00:49:48 +02:00
109855231b nextcloud: continue with migration 2024-02-05 00:02:21 +02:00
0bff249397 nextcloud: prepare for migration from ad.k-space.ee 2024-02-04 23:48:07 +02:00
d2b362f57d nextcloud: enable oidc registration 2024-02-04 21:06:02 +02:00
d92522b8e4 nextcloud: disable skeleton files 2024-02-04 20:42:19 +02:00
5b75e489e7 texts: remove unneeded <br/>
Paragraphs induce spacing automatically.
2024-02-04 20:00:59 +02:00
29c56b8d26 texts: auth → auth2 2024-02-04 19:56:43 +02:00
cf0650db06 hotfix: double camtiler storage
might work, might not
2024-02-03 12:31:51 +02:00
b9f1c376af Revert "nextcloud: use DNS for minio"
This reverts commit 290d1176fe.
2024-02-03 12:07:45 +02:00
290d1176fe nextcloud: use DNS for minio 2024-02-03 11:46:00 +02:00
ab7e4d10e4 Update README: Cluster access OIDC Client ID 2024-02-01 19:38:47 +02:00
776535d6d5 freescout: update image 2024-02-01 15:33:28 +02:00
f5bfc1c908 Revert "Add k-space:non-floor-nextcloud"
This reverts commit e6456b202d.
2024-01-31 23:02:11 +02:00
80370d1034 oidc: update image 2024-01-31 16:24:35 +02:00
e6456b202d Add k-space:non-floor-nextcloud
Temporary™ workaround
2024-01-31 14:48:19 +02:00
15606ee465 oidc: make k-space:onboarding members Passmower admins 2023-11-19 16:46:17 +02:00
0a9985edcc ripe87: add ripe87.k-space.ee website 2023-11-19 16:45:51 +02:00
Arti Zirk
9bcffbaff3 Fix godoor service restart 2023-10-13 16:08:06 +03:00
3f8f141d94 metallb-system: Switch Elisa, Eenet to ARP 2023-10-09 18:36:09 +03:00
09ff829c50 asterisk: update network policy 2023-10-09 13:45:23 +03:00
a76cfca7f2 monitoring: add ping-exporter 2023-10-04 20:46:25 +03:00
1e0bdf0559 monitoring: Switch Prometheus to local path provisioner 2023-09-23 11:55:56 +03:00
6f6a132e97 camtiler: Switch to dedicated Minio 2023-09-22 23:16:30 +03:00
5cf7cbb450 monitoring: Add BIND secondaries 2023-09-22 09:35:14 +03:00
98707c0d1c freescout: Update image 2023-09-21 07:49:29 +03:00
f0db5849c8 etherpad: Add network policy 2023-09-20 15:08:03 +03:00
efc76d7a10 wildduck: Add network policies for ZoneMTA and webmail 2023-09-17 11:52:52 +03:00
a0d48d4243 wildduck: Make sure Haraka uses DH params as well 2023-09-17 11:51:26 +03:00
3f5b90a546 wildduck: Make sure Haraka, Wildduck and ZoneMTA are scheduled on same hosts for MetalLB 2023-09-17 10:22:25 +03:00
13a2430e9d wildduck: configure hostname for haraka 2023-09-16 17:19:28 +03:00
4b76181210 wildduck: Fix mail submission from Wildduck and webmail 2023-09-16 15:08:58 +03:00
473a81521c wildduck: Bump replica count to 4 2023-09-16 14:49:01 +03:00
9a92c83b5a wildduck: Bump replica count to 3 2023-09-16 14:14:00 +03:00
f05cb6f9de wildduck: Switch to operator managed Mongo 2023-09-15 18:09:17 +03:00
671348a674 asterisk: allow prometheus in network policy 2023-09-15 13:23:23 +03:00
8482f77a47 asterisk: add network policies 2023-09-15 11:41:00 +03:00
0eafcfea18 Add inventory and k6.ee redirector 2023-09-15 10:24:36 +03:00
f40a61946d Remove minio.infra.k-space.ee 2023-09-15 02:29:01 +03:00
6dd2d17298 etherpad: Remove SSO requirement 2023-09-13 12:22:32 +03:00
4e1dbab080 freescout: fix cronjob, update images 2023-09-05 20:04:17 +03:00
1995358e99 openebs: Fix rawfile provisioner image digest 2023-09-02 11:49:13 +03:00
2c5721d5cf mysql-clusters: Fix external cluster 2023-09-02 11:49:13 +03:00
abb25a7eb0 mysql-clusters: Refer to generated phpMyAdmin config 2023-09-02 11:49:13 +03:00
36932bfcaa asterisk: forward voice ports from service 2023-09-01 13:59:45 +03:00
b11ac8bcae Updates and cleanups 2023-08-29 09:29:36 +03:00
4fa554da57 gitea: Allow access for k-space:friends 2023-08-28 21:11:43 +03:00
78931bbb4b oidc-gateway: Cleanups 2023-08-28 21:10:28 +03:00
c6eacfc9f2 metallb-system: Add Wildduck IP 2023-08-28 21:08:37 +03:00
f217f8eae7 monitoring: Clean up blackbox-exporter 2023-08-28 20:57:06 +03:00
fc92b0ce75 kube-system: Remove noisy KubernetesJobSlowCompletion alert 2023-08-28 20:55:28 +03:00
ae00e766d7 Merge remote-tracking branch 'origin/master' 2023-08-28 20:11:47 +03:00
912d15a23b nextcloud: add cron via readinessProbe; block external webcron; run as UID 1000 2023-08-28 20:11:40 +03:00
48567f0630 wildduck: Clean up configs 2023-08-27 20:24:36 +03:00
40445c299d wildduck: ZoneMTA config fixes 2023-08-27 16:55:48 +03:00
54207c482c rosdump: Easier to navigate commit messages 2023-08-26 08:54:04 +03:00
09a9bc4115 wildduck: Use toml files for ZoneMTA config 2023-08-25 09:40:03 +03:00
eafae2af3b Setup godoor before mjpeg-streamer for door controllers 2023-08-25 09:00:36 +03:00
3b31b9c94c Make image pull fail gracefully on door controllers 2023-08-25 08:59:00 +03:00
bec78de2f3 Deploy godoor from Docker image 2023-08-24 22:42:38 +03:00
9b2631f16c wildduck: Add README 2023-08-24 20:45:43 +03:00
f10ff329b7 wildduck: Update dedicated Mongo for Wildduck 2023-08-24 20:04:32 +03:00
a3539de9e0 wildduck: Remove haraka's redis as it's not used 2023-08-24 19:55:10 +03:00
0ed3010fed Migrate the rest of Wildduck stack 2023-08-24 19:53:07 +03:00
b98f173441 wildduck: Add operator 2023-08-24 08:48:33 +03:00
2500342e47 wildduck: Add ClamAV 2023-08-24 08:34:30 +03:00
430f5b0f0f wildduck: Add rspamd 2023-08-24 08:34:21 +03:00
e6a903cfef wildduck: Use updated image for wildflock 2023-08-22 10:14:18 +03:00
6752ca55ae wildduck: Allow sending only from @k-space.ee address in webmail 2023-08-22 07:29:18 +03:00
820c954319 Update mjpg-streamer service 2023-08-21 10:41:15 +03:00
cc51f3731a Elaborate how to configure additional domains for Bind 2023-08-20 09:35:26 +03:00
9dae1a832b gitea: Set imagePullPolicy to IfNotPresent 2023-08-20 08:04:13 +03:00
883da46a3b Update whole Bind setup 2023-08-19 23:39:13 +03:00
aacbb20e13 camtiler: Namespace change related fixes 2023-08-19 10:20:37 +03:00
90076f2dde wildduck: Updates 2023-08-19 10:01:09 +03:00
06757a81e5 logmower: Namespace change fixes 2023-08-19 09:50:53 +03:00
f67bd391bc kube-system: Bump kube-state-metrics to 2.9.2 2023-08-19 09:37:57 +03:00
e5e72de45b monitoring: Namespace change fixes 2023-08-19 09:30:10 +03:00
2e67269b5b prometheus-operator: Bump to 0.67.1 2023-08-19 09:26:13 +03:00
6e2f353916 Move Prometheus instance to monitoring namespace 2023-08-19 09:24:48 +03:00
62661efc42 prometheus-operator: Drop mfp-cyber.pub.k-space.ee 2023-08-19 08:41:44 +03:00
8f07b2ef89 camtiler: Fix event broker config 2023-08-18 16:40:18 +03:00
b80d566927 asterisk: Add pod monitor and alerting rules 2023-08-18 08:45:04 +03:00
b0fd37de01 asterisk: Disable colored logging and add metrics container port 2023-08-18 07:42:37 +03:00
95597c3103 prometheus-operator: Pin SNMP exporter 2023-08-18 00:50:27 +03:00
3a69c1a210 camtiler: Use external bucket 2023-08-17 23:53:24 +03:00
1361c9ec22 Migrate Asterisk 2023-08-17 23:38:26 +03:00
c8a7aecc2f camtiler: Allow hackerspace friends to see cams 2023-08-17 11:59:49 +03:00
bc5dcce5f7 wildflock: Limit ACL-s 2023-08-17 11:58:38 +03:00
d56348f9a6 wildduck: Add Prometheus exporter 2023-08-17 11:57:32 +03:00
a828b602d6 freescout: Add Job for resetting OIDC config 2023-08-16 22:41:41 +03:00
14a5d703cb wikijs: Add Job for resetting OIDC config 2023-08-16 22:18:30 +03:00
4fa49dbf8a mysql-clusters: Rename phpMyAdmin manifest 2023-08-16 15:56:29 +03:00
ebd723d8fd Remove keel 2023-08-16 11:38:05 +03:00
d4d44bc6d3 rosdump: Fix NetworkPolicies for in-kube Gitea 2023-08-16 11:35:41 +03:00
d6a1d48c03 logmower: Cleanups 2023-08-16 10:45:55 +03:00
6adcb53e96 gitea: Disable releases and wiki 2023-08-16 10:41:43 +03:00
af83e1783b Clean up operatorlib related stuff 2023-08-16 10:39:20 +03:00
49412781ea openebs: Pin specific image 2023-08-16 10:35:57 +03:00
d419ac56e1 Add CloudNativePG 2023-08-16 10:29:09 +03:00
5df71506cf etherpad: Switch to Passmower 2023-08-16 10:11:05 +03:00
508c03268e woodpecker-agent: Drop privileges 2023-08-16 10:10:21 +03:00
3dce3d07fd Work around unattended-upgrades quirk
https://github.com/kubernetes/kubernetes/issues/107043#issuecomment-997769940
2023-08-15 22:41:54 +03:00
f9393fd0da Add Ansible task to configure graceful shutdown of Kubelet 2023-08-15 21:58:23 +03:00
4c5a58f67d logmower: Reduce mongo agent noise 2023-08-15 13:54:49 +03:00
2e49b842a9 camtiler: Fix spammy mongo agent 2023-08-15 11:23:43 +03:00
46677df2a3 gitea: Switch to rootless image 2023-08-15 08:08:46 +03:00
ca4ded3d0d gitea: Cleanup config and rotate secrets 2023-08-14 23:38:01 +03:00
f0c4be9b7d prometheus-operator: Remove cone nodes 2023-08-14 22:25:56 +03:00
ce7f5f51fb prometheus-operator: Fix alertmanager config 2023-08-14 19:03:04 +03:00
e02a10b192 external-dns: Migrate k6.ee and kspace.ee 2023-08-14 18:59:15 +03:00
4d2071a5bd Move Kubernetes cluster bootstrap partially to Ansible 2023-08-13 20:21:15 +03:00
ecf9111f8f wildduck: Add session secret for wildflock 2023-08-13 18:48:45 +03:00
14617aad39 wildduck: Make wildflock HA 2023-08-13 18:38:26 +03:00
a00c85d5f6 Move whoami 2023-08-13 18:35:25 +03:00
0fce65b6a5 camtiler: Update cameras config 2023-08-13 08:17:32 +03:00
2ef01e2b28 camtiler: Require floor access ACL 2023-08-13 08:16:26 +03:00
d492b400fa Add door controller setup 2023-08-12 13:20:03 +03:00
612e788d9b gitea: Disable third party OIDC login 2023-08-11 15:02:36 +03:00
b3fe86ea90 drone: Clean up configs 2023-08-11 14:26:55 +03:00
ade71fffad gitea: Bump to 1.20.2 2023-08-11 14:10:40 +03:00
7a92a18bba gitea: Fix HTTP to HTTPS redirect and Git URI format 2023-08-11 14:05:05 +03:00
fe25d03989 Add Ansible config 2023-08-10 19:35:17 +03:00
d0bfdf5147 wildduck: Add ACL-s for webmail 2023-08-04 18:09:16 +03:00
66f2a9ada0 wildduck: Use upstream image for Wildduck webmail 2023-08-04 18:08:36 +03:00
c338ca3bed grafana: Use direct link for OIDC app listing
2023-08-04 18:06:36 +03:00
a97b664485 Update Kube API OIDC configuration 2023-08-03 17:05:11 +03:00
603b237091 Add Wikijs 2023-08-03 08:41:51 +03:00
29be7832c7 freescout: Fix S3 URL-s 2023-08-02 09:06:13 +03:00
06de7c53ba minio-clusters: Clean up ingresses 2023-08-01 21:11:13 +03:00
79f9704cf5 freescout: add deadline as workaround 2023-08-01 15:41:23 +03:00
7e1c99f12d freescout: refactor deployment for custom image and s3 support 2023-08-01 14:50:09 +03:00
cf8ca7457b oidc: update CRDs 2023-08-01 02:02:07 +03:00
5680b4df49 oidc-gateway: Better visualization for broken users 2023-07-31 12:38:33 +03:00
b01f073ced Add Woodpecker to application listing 2023-07-30 21:04:20 +03:00
222ba974e6 Direct OIDC login link for Gitea 2023-07-30 20:59:13 +03:00
1bf85cfd7b Add Freescout 2023-07-30 20:59:13 +03:00
aba2327740 Switch to Vanilla Redis 2023-07-30 20:59:13 +03:00
19ad42bd2b Rename alias generator to wildflock 2023-07-30 20:59:13 +03:00
a3b2f76652 oidc: update CRDs 2023-07-30 12:34:42 +03:00
fb55cd2ac7 Add Wildduck mail alias generator 2023-07-30 00:23:29 +03:00
c5cae07624 Migrate Nextcloud to Kube 2023-07-30 00:14:56 +03:00
21b583dc5b Remove irrelevant group membership checks 2023-07-29 15:02:15 +03:00
fe662dc408 More Gitea cleanups 2023-07-29 10:51:18 +03:00
6a9254da33 Clean up Etherpad 2023-07-29 09:19:42 +03:00
5259a7df04 gitea: Restore s6 init because of git zombie processes 2023-07-29 09:13:27 +03:00
8712786cfe oidc: update CRDs 2023-07-28 20:45:37 +03:00
b56376624e Migrate Gitea 2023-07-28 18:00:48 +03:00
5c8a166218 Set up Longhorn backups to ZFS box 2023-07-28 13:06:00 +03:00
c90a5bbf5e Deprecate Authelia 2023-07-28 12:23:29 +03:00
1db064a38a woodpecker: Pin specific Woodpecker Docker image 2023-07-28 12:22:05 +03:00
36a7eaa805 Bump Kubernetes deployment to 1.25.12 2023-07-28 12:22:05 +03:00
5d8670104a Upgrade to Longhorn 1.5.1 2023-07-28 12:22:05 +03:00
0b5c14903a Remove Woodpecker org membership limit 2023-07-28 12:22:05 +03:00
8d61764893 Bump Traefik from 2.9 to 2.10 2023-07-28 12:22:05 +03:00
2f1c0c3cc8 Bump metallb operator from v0.13.4 to v0.13.11 2023-07-28 12:22:05 +03:00
9a2fd034bb oidc: revert to Docker Hub images 2023-07-26 20:31:16 +03:00
6afda40b93 oidc: update CRDs 2023-07-26 20:25:39 +03:00
dd1ab10624 Merge remote-tracking branch 'origin/master' 2023-06-29 15:30:51 +03:00
2493266aed oidc: fix deployment 2023-06-29 15:30:40 +03:00
5a0821da0d Migrate whoami to new OIDC gateway 2023-06-29 14:49:10 +03:00
be330ad121 oidc: require custom username 2023-06-27 22:24:30 +03:00
045a8bb574 oidc: add oidc-gateway manifests 2023-06-27 14:01:44 +03:00
1d3d58f1a0 Add Woodpecker CI 2023-05-27 10:09:15 +03:00
5dc6dca28e external-dns: Enable support for DNSEndpoint CRD-s 2023-05-18 23:15:58 +03:00
e82fd3f543 authelia: Switch to KeyDB 2023-05-18 23:15:14 +03:00
8b0719234c camtiler: Updates 2023-05-18 22:55:40 +03:00
7abac4db0a nyancat: Move to internal IP 2023-05-18 22:54:50 +03:00
f14d2933d0 Upgrade to Longhorn 1.4.2 2023-05-18 22:46:54 +03:00
b415b8ca56 Upgrade to Grafana 8.5.24 2023-05-18 22:44:55 +03:00
8e796361c3 logmower: Switch to images from Docker Hub 2023-03-11 11:01:12 +02:00
a8bf83f9e5 Add minio-clusters namespace 2023-03-02 08:46:05 +02:00
0b0d9046d8 Add redis-clusters namespace 2023-03-02 07:53:37 +02:00
2343edbe6b Add mysql-clusters namespace 2023-02-26 11:15:48 +02:00
41b7b509f4 Add Crunchydata PGO 2023-02-26 11:09:11 +02:00
a51b041621 Upgrade to Kubernetes 1.24 and Longhorn 1.4.0 2023-02-20 11:16:12 +02:00
1d6cf0a521 camtiler: Restore cams on members site 2023-01-25 09:55:04 +02:00
19d66801df prometheus-operator: Update node-exporter and add pve2 2023-01-07 10:27:05 +02:00
d2a719af43 README: Improve cluster formation docs
- Begin code block with sudo to remind the following shall be run as root.
- Remove hardcoded key, instead copy from ubuntu user.
2023-01-03 16:09:17 +00:00
34369d211b Add nyancat server 2023-01-03 10:25:08 +02:00
cadb38126b prometheus-operator: Prevent scrape timeouts 2022-12-26 14:15:05 +02:00
414d044909 prometheus-operator: Less noisy alerting from node-exporter 2022-12-24 21:11:00 +02:00
ea23a52d6b prometheus-operator: Remove bundle.yml 2022-12-24 21:07:07 +02:00
3458cbd694 Update README 2022-12-24 21:01:49 +02:00
0a40686c16 logmower: Remove explicit command for event source 2022-12-24 00:02:01 +02:00
222fca8b8f camtiler: Fix scheduling issues 2022-12-23 23:32:18 +02:00
75df3e2a41 logmower: Fix Mongo affinity rules 2022-12-23 23:31:10 +02:00
5516ad195c Add descheduler 2022-12-23 23:30:39 +02:00
d0ac3b0361 prometheus-operator: Remove noisy kube-state-metrics alerts 2022-12-23 23:30:13 +02:00
c7daada4f4 Bump kube-state-metrics to v2.7.0 2022-12-22 20:04:05 +02:00
3a11207783 prometheus-operator: Remove useless KubernetesCronjobTooLong alert 2022-12-21 14:59:16 +02:00
3586309c4e prometheus-operator: Post only critical alerts to Slack 2022-12-21 14:13:57 +02:00
960103eb40 prometheus-operator: Bump bundle version 2022-12-21 14:08:23 +02:00
34b48308ff camtiler: Split up manifests 2022-12-18 16:28:45 +02:00
d8471da75f Migrate doorboy to Kubernetes 2022-12-17 17:49:57 +02:00
3dfa8e3203 camtiler: Clean ups 2022-12-14 19:50:55 +02:00
2a8c685345 camtiler: Scale down motion detectors 2022-12-14 18:58:32 +02:00
bccd2c6458 logmower: Updates 2022-12-14 18:56:08 +02:00
c65835c6a4 Update external-dns 2022-12-14 18:46:00 +02:00
76cfcd083b camtiler: Specify Mongo collection for event source 2022-12-13 13:10:11 +02:00
98ae369b41 camtiler: Fix event broker image name 2022-12-13 12:51:52 +02:00
4ccfd3d21a Replace old log viewer with Logmower + camera-event-broker 2022-12-13 12:43:38 +02:00
ea9b63b7cc camtiler: Dozen updates 2022-12-12 20:37:03 +02:00
b5ee891c97 Introduce separated storage classes per workload type 2022-12-06 09:06:07 +02:00
eccfb43aa1 Add rawfile-localpv 2022-12-02 00:10:04 +02:00
8f99b1b03d Source meta-operator from separate repo 2022-11-13 07:19:56 +02:00
024897a083 kube-system: Record pod labels with kube-state-metrics 2022-11-12 17:52:59 +02:00
18c4764687 prometheus-exporter: Fix antiaffinity rule for Mikrotik exporter 2022-11-12 16:50:31 +02:00
7b9cb6184b prometheus-operator: Reduce retention size 2022-11-12 16:07:42 +02:00
9dd32af3cb logmower: Update shipper arguments 2022-11-10 21:07:54 +02:00
a1cc066927 README: Bump sysctl limits 2022-11-10 07:56:13 +02:00
029572872e logmower: Update env vars 2022-11-09 11:49:13 +02:00
30f1c32815 harbor: Reduce logging verbosity 2022-11-05 22:43:00 +02:00
0c14283136 Add logmower 2022-11-05 20:55:52 +02:00
587748343d traefik: Namespace filtering breaks allowExternalNameServices 2022-11-04 12:20:30 +02:00
1bcfbed130 traefik: Bump version 2022-10-21 08:30:04 +03:00
3b1cda8a58 traefik: Pull resources only from trusted namespaces 2022-10-21 08:27:53 +03:00
2fd0112c28 elastic-system: Exclude logging ECK stack itself 2022-10-21 00:57:11 +03:00
9275f745ce elastic-system: Remove Filebeat's dependency on Kibana 2022-10-21 00:56:54 +03:00
3d86b6acde elastic-system: Bump to 8.4.3 2022-10-14 20:18:28 +03:00
4a94cd4af0 longhorn-system: Remove Prometheus annotation as we use PodMonitor already
2022-10-14 15:03:48 +03:00
a27f273c0b Add Grafana 2022-10-14 14:38:23 +03:00
4686108f42 Switch to wildcard *.k-space.ee certificate 2022-10-14 14:32:36 +03:00
30b7e50afb kube-system: Add metrics-server 2022-10-14 14:23:21 +03:00
e4c9675b99 tigera-operator: Remove unrelated files 2022-10-14 14:05:40 +03:00
017bdd9fd8 tigera-operator: Upgrade Calico 2022-10-14 14:03:34 +03:00
0fd0094ba0 playground: Initial commit 2022-10-14 00:14:35 +03:00
d20fdf350d drone: Switch templates to drone-kaniko plugin 2022-10-12 14:24:57 +03:00
bac5040d2a README: access/auth: collapse bootstrapping
For 'how to connect to cluster', server-side setup
is not needed from connecting clients.
Hiding the section makes the steps more concise.
2022-10-11 10:47:41 +03:00
Danyliuk
4d5851259d Update .gitignore file. Add IntelliJ IDEA part 2022-10-08 16:43:48 +00:00
8ee1896a55 harbor: Move to storage nodes 2022-10-04 13:39:25 +03:00
04b786b18d prometheus-operator: Bump blackbox exporter replica count to 3 2022-10-04 10:11:53 +03:00
1d1764093b prometheus-operator: Remove pulled UPS-es 2022-10-03 10:04:24 +03:00
df6e268eda elastic-system: Add PodMonitor for exporter 2022-09-30 10:33:41 +03:00
00f8bfef6c elastic-system: Update sharding, enable memory-mapped IO, move to Longhorn 2022-09-30 10:21:10 +03:00
109859e07b elastic-system: Reduce replica count for Kibana 2022-09-28 11:01:08 +03:00
7e518da638 elastic-system: Make Kibana healthcheck work with anonymous auth 2022-09-28 11:00:38 +03:00
5ef5e14866 prometheus-operator: Specify priorityClassName: system-node-critical for node-exporters 2022-09-28 10:33:44 +03:00
310b2faaef prometheus-operator: Add node label to node-exporters 2022-09-28 09:32:31 +03:00
6b65de65d4 Move kube-state-metrics 2022-09-26 15:50:58 +03:00
02d1236eba elastic-system: Add Syslog ingestion 2022-09-23 16:37:29 +03:00
610ce0d490 elastic-system: Bump version to 2.4.0 2022-09-23 16:16:22 +03:00
051e300359 Update tech mapping 2022-09-21 17:12:24 +03:00
5b11b7f3a6 phpmyadmin: Use 6446 for MySQL Operator instances 2022-09-21 11:38:13 +03:00
546dc71450 prometheus-operator: Fix SNMP for older HP printers 2022-09-20 23:26:09 +03:00
26a35cd0c3 prometheus-operator: Add snmp_ prefix 2022-09-20 17:09:26 +03:00
790ffa175b prometheus-operator: Fix Alertmanager integration
2022-09-20 12:22:49 +03:00
9a672d7ef3 logging: Bump ZincSearch memory limit 2022-09-18 10:05:54 +03:00
d1cb00ff83 Reduce Filebeat logging verbosity 2022-09-17 08:06:42 +03:00
9cc39fcd17 argocd: Add members repo 2022-09-17 08:06:19 +03:00
ae8d03ec03 argocd: Add elastic-system 2022-09-17 08:05:47 +03:00
bf9d063b2c mysql-operator: Bump to version 8.0.30-2.0.6 2022-09-16 08:41:07 +03:00
2efaf7b456 mysql-operator: Fix network policy 2022-09-16 08:40:31 +03:00
c4208037e2 logging: Replace Graylog with ZincSearch 2022-09-16 08:34:53 +03:00
edcb6399df elastic-system: Fixes and cleanups 2022-09-16 08:24:13 +03:00
296 changed files with 18930 additions and 80788 deletions


@@ -1,10 +0,0 @@
---
kind: pipeline
type: kubernetes
name: gitleaks
steps:
- name: gitleaks
  image: zricethezav/gitleaks
  commands:
  - gitleaks detect --source=/drone/src

.gitignore vendored

@@ -1,5 +1,14 @@
*.keys
*secrets.yml
*secret.yml
*.swp
*.save
*.1
# Kustomize with Helm and secrets:
charts/
*.env
### IntelliJ IDEA ###
.idea
*.iml

.yamllint Normal file

@@ -0,0 +1,4 @@
extends: default
ignore-from-file: .gitignore
rules:
line-length: disable

CLUSTER.md Normal file

@@ -0,0 +1,137 @@
# Kubernetes cluster
Kubernetes hosts run on [PVE Cluster](https://wiki.k-space.ee/en/hosting/proxmox). Hosts are listed in Ansible [inventory](ansible/inventory.yml).
## `kubectl`
- Authorization [ACLs](cluster-role-bindings.yml)
- [Troubleshooting `no such host`](#systemd-resolved-issues)
Authenticate to auth.k-space.ee:
```bash
kubectl krew install oidc-login
mkdir -p ~/.kube
cat << EOF > ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1EVXdNakEzTXpVMU1Wb1hEVE15TURReU9UQTNNelUxTVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS2J2CjY3UFlXVHJMc3ZCQTZuWHUvcm55SlVhNnppTnNWTVN6N2w4ekhxM2JuQnhqWVNPUDJhN1RXTnpUTmZDanZBWngKTmlNbXJya1hpb2dYQWpVVkhSUWZlYm81TFIrb0JBOTdLWlcrN01UMFVJRXBuWVVaaTdBRHlaS01vcEJFUXlMNwp1SlU5UDhnNUR1T29FRHZieGJSMXFuV1JZRXpteFNmSFpocllpMVA3bFd4emkxR243eGRETFZaMjZjNm0xR3Y1CnViRjZyaFBXK1JSVkhiQzFKakJGeTBwRXdhYlUvUTd0Z2dic0JQUjk5NVZvMktCeElBelRmbHhVanlYVkJ3MjEKU2d3ZGI1amlpemxEM0NSbVdZZ0ZrRzd0NTVZeGF3ZmpaQjh5bW4xYjhUVjkwN3dRcG8veU8zM3RaaEE3L3BFUwpBSDJYeDk5bkpMbFVGVUtSY1A4Q0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZKNnZKeVk1UlJ1aklQWGxIK2ZvU3g2QzFRT2RNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQ04zcGtCTVM3ekkrbUhvOWdTZQp6SzdXdjl3bXlCTVE5Q3crQXBSNnRBQXg2T1VIN0d1enc5TTV2bXNkYjkrYXBKMHBlZFB4SUg3YXZ1aG9SUXNMCkxqTzRSVm9BMG9aNDBZV3J3UStBR0dvdkZuaWNleXRNcFVSNEZjRXc0ZDRmcGl6V3d0TVNlRlRIUXR6WG84V2MKNFJGWC9xUXNVR1NWa01PaUcvcVVrSFpXQVgyckdhWXZ1Tkw2eHdSRnh5ZHpsRTFSUk56TkNvQzVpTXhjaVRNagpackEvK0pqVEFWU2FuNXZnODFOSmthZEphbmNPWmEwS3JEdkZzd1JJSG5CMGpMLzh3VmZXSTV6czZURU1VZUk1ClF6dU01QXUxUFZ4VXZJUGhlMHl6UXZjWDV5RlhnMkJGU3MzKzJBajlNcENWVTZNY2dSSTl5TTRicitFTUlHL0kKY0pjPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://master.kube.k-space.ee:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: oidc
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: oidc
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://auth.k-space.ee/
      - --oidc-client-id=passmower.kubelogin
      - --oidc-extra-scope=profile,email,groups
      - --listen-address=127.0.0.1:27890
      command: kubectl
      env: null
      provideClusterInfo: false
EOF
# Test it:
kubectl get nodes # opens browser for authentication
```
### systemd-resolved issues
```sh
Unable to connect to the server: dial tcp: lookup master.kube.k-space.ee on 127.0.0.53:53: no such host
```
Fix in NetworkManager (GUI settings for the VPN connection):
```
Network → VPN → `IPv4` → Other nameservers (Muud nimeserverid): `172.21.0.1`
Network → VPN → `IPv6` → Other nameservers (Muud nimeserverid): `2001:bb8:4008:21::1`
Network → VPN → `IPv4` → Search domains (Otsingudomeenid): `kube.k-space.ee`
Network → VPN → `IPv6` → Search domains (Otsingudomeenid): `kube.k-space.ee`
```
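
The same overrides can also be applied non-interactively with `resolvectl`. This is a sketch; `vpn0` is a placeholder for your actual VPN interface name, and the settings reset when the link goes down, so the NetworkManager route above is the persistent option:
```bash
# Find the VPN interface name first:
resolvectl status

# Route DNS for the cluster domain through the internal resolvers
# (interface name vpn0 is an assumption -- substitute your own):
sudo resolvectl dns vpn0 172.21.0.1 2001:bb8:4008:21::1
sudo resolvectl domain vpn0 kube.k-space.ee
```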
## Cluster formation
Created Ubuntu 22.04 VMs on Proxmox with local storage.
Added some arm64 workers using Ubuntu 22.04 Server on Raspberry Pi.
First master:
```
kubeadm init --token-ttl=120m --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint "master.kube.k-space.ee:6443" --upload-certs --apiserver-cert-extra-sans master.kube.k-space.ee --node-name master1.kube.k-space.ee
```
Joining nodes:
```
# On a master:
kubeadm token create --print-join-command
# Joining node:
<printed join command> --node-name "$(hostname -f)"
```
Set AZ labels:
```
for j in $(seq 1 9); do
  for t in master mon worker; do
    kubectl label nodes ${t}${j}.kube.k-space.ee topology.kubernetes.io/zone=node${j}
  done
done
```
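With those zone labels in place, workloads can ask the scheduler to spread replicas across physical hosts. A hypothetical Deployment sketch (the name and image are placeholders, not taken from this repo):
```bash
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      # Prefer one replica per zone, i.e. per physical node in this cluster
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app: example-app
      containers:
      - name: app
        image: mirror.gcr.io/library/nginx:alpine
EOF
```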
After forming the cluster add taints:
```bash
for j in $(seq 1 9); do
  kubectl label nodes worker${j}.kube.k-space.ee node-role.kubernetes.io/worker=''
done
for j in $(seq 1 4); do
  kubectl taint nodes mon${j}.kube.k-space.ee dedicated=monitoring:NoSchedule
  kubectl label nodes mon${j}.kube.k-space.ee dedicated=monitoring
done
```
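Only pods that both tolerate the taint and select the label land on the mon* nodes. A hedged sketch of the scheduling-relevant part of a pod spec (pod name and image are placeholders):
```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: example-exporter
spec:
  # Matches the dedicated=monitoring label and taint set above
  nodeSelector:
    dedicated: monitoring
  tolerations:
  - key: dedicated
    operator: Equal
    value: monitoring
    effect: NoSchedule
  containers:
  - name: main
    image: mirror.gcr.io/library/busybox:stable
    command: ["sleep", "infinity"]
EOF
```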
For `arm64` nodes add suitable taint to prevent scheduling non-multiarch images on them:
```bash
kubectl taint nodes worker9.kube.k-space.ee arch=arm64:NoSchedule
```
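A workload whose image is multi-arch can opt in to those nodes by tolerating the taint and pinning to the built-in `kubernetes.io/arch` node label; a sketch (pod name and image are placeholders):
```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: example-arm64
spec:
  nodeSelector:
    kubernetes.io/arch: arm64
  tolerations:
  - key: arch            # matches the taint added above
    operator: Equal
    value: arm64
    effect: NoSchedule
  containers:
  - name: main
    image: mirror.gcr.io/library/busybox:stable  # busybox publishes arm64 images
    command: ["sleep", "infinity"]
EOF
```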
For door controllers:
```
for j in ground front back; do
  kubectl taint nodes door-${j}.kube.k-space.ee dedicated=door:NoSchedule
  kubectl label nodes door-${j}.kube.k-space.ee dedicated=door
  kubectl taint nodes door-${j}.kube.k-space.ee arch=arm64:NoSchedule
done
```
## Technology mapping
Our self-hosted Kubernetes stack compared to AWS based deployments:
| Hipster startup | Self-hosted hackerspace | Purpose |
|-------------------|-------------------------------------|---------------------------------------------------------------------|
| AWS ALB | Traefik | Reverse proxy also known as ingress controller in Kubernetes jargon |
| AWS AMP | Prometheus Operator | Monitoring and alerting |
| AWS CloudTrail | ECK Operator | Log aggregation |
| AWS DocumentDB | MongoDB Community Operator | Highly available NoSQL database |
| AWS EBS | Longhorn | Block storage for arbitrary applications needing persistent storage |
| AWS EC2 | Proxmox | Virtualization layer |
| AWS ECR | Harbor | Docker registry |
| AWS EKS | kubeadm | Provision Kubernetes master nodes |
| AWS NLB | MetalLB | L2/L3 level load balancing |
| AWS RDS for MySQL | MySQL Operator | Provision highly available relational databases |
| AWS Route53 | Bind and RFC2136 | DNS records and Let's Encrypt DNS validation |
| AWS S3 | Minio Operator | Highly available object storage |
| AWS VPC | Calico | Overlay network |
| Dex | Passmower | ACL mapping and OIDC provider which integrates with GitHub/Samba |
| GitHub Actions | Woodpecker | Build Docker images |
| GitHub | Gitea | Source code management, issue tracking |
| GitHub OAuth2 | Samba (Active Directory compatible) | Source of truth for authentication and authorization |
| Gmail | Wildduck | E-mail |


@@ -10,3 +10,4 @@ this Git repository happen:
* Song Meo <songmeo@k-space.ee>
* Rasmus Kallas <rasmus@k-space.ee>
* Kristjan Kuusk <kkuusk@k-space.ee>
* Erki Aas <eaas@k-space.ee>

README.md

@@ -1,258 +1,72 @@
# Kubernetes cluster manifests
# k-space.ee infrastructure
Kubernetes manifests, Ansible [playbooks](ansible/README.md), and documentation for K-SPACE services.
## Introduction
<!-- TODO: Docs for adding to ArgoCD (auto-)sync -->
- Repo is deployed with [ArgoCD](https://argocd.k-space.ee). For `kubectl` access, see [CLUSTER.md](CLUSTER.md#kubectl).
- Debugging Kubernetes [on Wiki](https://wiki.k-space.ee/en/hosting/debugging-kubernetes)
- Need help? → [`#kube`](https://k-space-ee.slack.com/archives/C02EYV1NTM2)
This is the Kubernetes manifests of services running on k-space.ee domains:
Jump to docs: [inventory-app](hackerspace/README.md) / [cameras](_disabled/camtiler/README.md) / [doors](https://wiki.k-space.ee/en/hosting/doors) / [list of apps](https://auth.k-space.ee) // [all infra](ansible/inventory.yml) / [network](https://wiki.k-space.ee/en/hosting/network) / [retro](https://wiki.k-space.ee/en/hosting/retro) / [non-infra](https://wiki.k-space.ee)
- [Authelia](https://auth.k-space.ee) for authentication
- [Drone.io](https://drone.k-space.ee) for building Docker images
- [Harbor](https://harbor.k-space.ee) for hosting Docker images
- [ArgoCD](https://argocd.k-space.ee) for deploying Kubernetes manifests and
Helm charts into the cluster
- [camtiler](https://cams.k-space.ee) for cameras
- [Longhorn Dashboard](https://longhorn.k-space.ee) for administering
Longhorn storage
- [Kubernetes Dashboard](https://kubernetes-dashboard.k-space.ee/) for read-only overview
of the Kubernetes cluster
- [Wildduck Webmail](https://webmail.k-space.ee/)
Tip: Search the repo for `kind: xyz` for examples.
Most endpoints are protected by OIDC authentication or Authelia SSO middleware.
## Supporting services
- Build [Git](https://git.k-space.ee) repositories with [Woodpecker](https://woodpecker.k-space.ee)[^nodrone].
- Passmower: Authz with `kind: OIDCClient` (or `kind: OIDCMiddlewareClient`[^authz]).
- Traefik[^nonginx]: Expose services with `kind: Service` + `kind: Ingress` (TLS and DNS **included**).
[^nodrone]: Replaces Drone CI.
## Cluster access
### Additional
- bind: Manage _additional_ DNS records with `kind: DNSEndpoint`.
- [Prometheus](https://wiki.k-space.ee/en/hosting/monitoring): Collect metrics with `kind: PodMonitor` (alerts with `kind: PrometheusRule`).
- [Slack bots](SLACK.md) and Kubernetes [CLUSTER.md](CLUSTER.md) itself.
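As a sketch of the `kind: DNSEndpoint` usage above (the hostname and target are hypothetical; the schema is the standard external-dns CRD):

```yaml
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: example-record
spec:
  endpoints:
    # Hypothetical additional A record; substitute a real hostname and target.
    - dnsName: example.k-space.ee
      recordType: A
      recordTTL: 300
      targets:
        - 193.40.103.36
```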
<!-- TODO: Redirects: external-dns.alpha.kubernetes.io/hostname + in -extras.yaml: IngressRoute and Middleware -->
General discussion is happening in the `#kube` Slack channel.
[^nonginx]: No nginx annotations! Use `kind: Ingress` instead. `IngressRoute` is not used as it doesn't support [`external-dns`](bind/README.md) out of the box.
[^authz]: Applications should use OpenID Connect (`kind: OIDCClient`) for authentication, wherever possible. If not possible, use a `kind: OIDCMiddlewareClient`, which will provide authentication via a Traefik middleware (`traefik.ingress.kubernetes.io/router.middlewares: passmower-proxmox@kubernetescrd`). Sometimes you might use both for extra security.
For bootstrap access obtain `/etc/kubernetes/admin.conf` from one of the master
nodes and place it under `~/.kube/config` on your machine.
### Network
Once Authelia is working, OIDC access for others can be enabled by
running the following on the Kubernetes masters:
All nodes are in Infra VLAN 21. Routing is implemented with BGP; all nodes and the router form a full mesh. Both Service LB IPs and Pod IPs are advertised to the router. The router does NAT for outbound pod traffic.
See the [Calico installation](tigera-operator/application.yml) for Kube side and Routing / BGP in the router.
Static routes for 193.40.103.36/30 have been added on the PVE nodes to make their communication with Passmower via Traefik more stable; otherwise packets coming back to the PVE are routed directly via VLAN 21 internal IPs by the worker nodes, breaking TCP.
```bash
patch /etc/kubernetes/manifests/kube-apiserver.yaml - << EOF
@@ -23,6 +23,10 @@
- --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
- --etcd-servers=https://127.0.0.1:2379
+ - --oidc-issuer-url=https://auth.k-space.ee
+ - --oidc-client-id=kubelogin
+ - --oidc-username-claim=preferred_username
+ - --oidc-groups-claim=groups
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
- --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
EOF
sudo systemctl daemon-reload
systemctl restart kubelet
```
<!-- Linked to by https://wiki.k-space.ee/e/en/hosting/storage -->
### Databases / -stores:
- Dragonfly: `kind: Dragonfly` (replaces Redis[^redisdead])
- Longhorn: `storageClassName: longhorn` (filesystem storage)
- Mongo[^mongoproblems]: `kind: MongoDBCommunity` (NAS* `inventory-mongodb`)
- Minio S3: `kind: MinioBucketClaim` with `class: dedicated` (NAS*: `class: external`)
- MariaDB*: search for `mysql`, `mariadb`[^mariadb] (replaces MySQL)
- Postgres*: hardcoded to [harbor/application.yml](harbor/application.yml)
- Seeded secrets: `kind: SecretClaim` (generates random secret in templated format)
- Secrets in git: https://git.k-space.ee/secretspace (members personal info, API credentials, see argocd/deploy_key.pub comment)
Afterwards the following can be used to talk to the Kubernetes cluster using
OIDC credentials:
\* External, hosted directly on [nas.k-space.ee](https://wiki.k-space.ee/en/hosting/storage)
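A minimal `MinioBucketClaim` sketch for the datastore list above, assuming the codemowers.cloud CRD schema (field names beyond `class` are assumptions; the name and size are hypothetical):

```yaml
apiVersion: codemowers.cloud/v1beta1
kind: MinioBucketClaim
metadata:
  name: example-bucket
spec:
  capacity: 1Gi    # assumed field name for the bucket quota
  class: dedicated # use `class: external` for NAS-hosted buckets
```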
```bash
kubectl krew install oidc-login
mkdir -p ~/.kube
cat << EOF > ~/.kube/config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1EVXdNakEzTXpVMU1Wb1hEVE15TURReU9UQTNNelUxTVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS2J2CjY3UFlXVHJMc3ZCQTZuWHUvcm55SlVhNnppTnNWTVN6N2w4ekhxM2JuQnhqWVNPUDJhN1RXTnpUTmZDanZBWngKTmlNbXJya1hpb2dYQWpVVkhSUWZlYm81TFIrb0JBOTdLWlcrN01UMFVJRXBuWVVaaTdBRHlaS01vcEJFUXlMNwp1SlU5UDhnNUR1T29FRHZieGJSMXFuV1JZRXpteFNmSFpocllpMVA3bFd4emkxR243eGRETFZaMjZjNm0xR3Y1CnViRjZyaFBXK1JSVkhiQzFKakJGeTBwRXdhYlUvUTd0Z2dic0JQUjk5NVZvMktCeElBelRmbHhVanlYVkJ3MjEKU2d3ZGI1amlpemxEM0NSbVdZZ0ZrRzd0NTVZeGF3ZmpaQjh5bW4xYjhUVjkwN3dRcG8veU8zM3RaaEE3L3BFUwpBSDJYeDk5bkpMbFVGVUtSY1A4Q0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZKNnZKeVk1UlJ1aklQWGxIK2ZvU3g2QzFRT2RNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQ04zcGtCTVM3ekkrbUhvOWdTZQp6SzdXdjl3bXlCTVE5Q3crQXBSNnRBQXg2T1VIN0d1enc5TTV2bXNkYjkrYXBKMHBlZFB4SUg3YXZ1aG9SUXNMCkxqTzRSVm9BMG9aNDBZV3J3UStBR0dvdkZuaWNleXRNcFVSNEZjRXc0ZDRmcGl6V3d0TVNlRlRIUXR6WG84V2MKNFJGWC9xUXNVR1NWa01PaUcvcVVrSFpXQVgyckdhWXZ1Tkw2eHdSRnh5ZHpsRTFSUk56TkNvQzVpTXhjaVRNagpackEvK0pqVEFWU2FuNXZnODFOSmthZEphbmNPWmEwS3JEdkZzd1JJSG5CMGpMLzh3VmZXSTV6czZURU1VZUk1ClF6dU01QXUxUFZ4VXZJUGhlMHl6UXZjWDV5RlhnMkJGU3MzKzJBajlNcENWVTZNY2dSSTl5TTRicitFTUlHL0kKY0pjPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://master.kube.k-space.ee:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: oidc
name: default
current-context: default
kind: Config
preferences: {}
users:
- name: oidc
user:
exec:
apiVersion: client.authentication.k8s.io/v1beta1
args:
- oidc-login
- get-token
- --oidc-issuer-url=https://auth.k-space.ee
- --oidc-client-id=kubelogin
- --oidc-use-pkce
- --oidc-extra-scope=profile,email,groups
- --listen-address=127.0.0.1:27890
command: kubectl
env: null
provideClusterInfo: false
EOF
```
[^mariadb]: As of 2024-07-30 used by auth, authelia, bitwarden, etherpad, freescout, git, grafana, nextcloud, wiki, woodpecker
For access control mapping see [cluster-role-bindings.yml](cluster-role-bindings.yml)
[^redisdead]: Redis has been replaced as redis-operator couldn't handle itself: it didn't reconcile after reboots, the master URI was empty, and clients complained about missing masters. Dragonfly also replaces KeyDB.
[^mongoproblems]: Mongo problems: Incompatible with rawfile csi (wiredtiger.wt corrupts), complicated resizing (PVCs from statefulset PVC template).
# Technology mapping
***
_This page is referenced by wiki [front page](https://wiki.k-space.ee) as **the** technical documentation for infra._
Our self-hosted Kubernetes stack compared to AWS based deployments:
## nas.k-space.ee pre-migration who-uses listing
- S3: [minio-clusters](minio-clusters/README.md)
- postgres: only harbor, 172.20.43.1
| Hipster startup | Self-hosted hackerspace | Purpose |
|-----------------|-------------------------------------|---------------------------------------------------------------------|
| AWS EC2 | Proxmox | Virtualization layer |
| AWS EKS | kubeadm | Provision Kubernetes master nodes |
| AWS EBS | Longhorn | Block storage for arbitrary applications needing persistent storage |
| AWS NLB | MetalLB | L2/L3 level load balancing |
| AWS ALB | Traefik | Reverse proxy also known as ingress controller in Kubernetes jargon |
| AWS ECR | Harbor | Docker registry |
| AWS DocumentDB | MongoDB | NoSQL database |
| AWS S3 | Minio | Object storage |
| GitHub OAuth2 | Samba (Active Directory compatible) | Source of truth for authentication and authorization |
| Dex | Authelia | ACL mapping and OIDC provider which integrates with GitHub/Samba |
| GitHub | Gitea | Source code management, issue tracking |
| GitHub Actions | Drone | Build Docker images |
| Gmail | Wildduck | E-mail |
| AWS Route53 | Bind and RFC2136 | DNS records and Let's Encrypt DNS validation |
| AWS VPC | Calico | Overlay network |
### mongodb
- inventory
- wildduck
External dependencies running as classic virtual machines:
- Samba as Authelia's source of truth
- Bind as DNS server
## Adding applications
Deploy applications via [ArgoCD](https://argocd.k-space.ee)
We use Traefik with Authelia for Ingress.
Applications should use `Remote-User` authentication where possible and
applicable; this avoids exposing the application on the public Internet.
Otherwise use OpenID Connect for authentication;
see Argo itself as an example of how that is done.
See `kspace-camtiler/ingress.yml` for commented Ingress example.
Note that we do not use IngressRoute objects because they don't
support `external-dns` out of the box.
Do NOT add nginx annotations, we use Traefik.
Do NOT manually add DNS records, they are added by `external-dns`.
Do NOT manually create Certificate objects,
these should be handled by `tls:` section in Ingress.
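Pulling the rules above together, a typical Ingress looks like this (modeled on the Discourse manifest in this repo; the hostname and service name are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  annotations:
    external-dns.alpha.kubernetes.io/target: traefik.k-space.ee  # DNS record added by external-dns
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
spec:
  tls:
    - hosts:
        - "*.k-space.ee"  # TLS handled by the tls: section, no manual Certificate objects
  rules:
    - host: example.k-space.ee
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example
                port:
                  name: http
```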
## Cluster formation
Create Ubuntu 20.04 VMs on Proxmox with local storage.
After machines have booted up and you can reach them via SSH:
```bash
# Enable required kernel modules
cat > /etc/modules << EOF
overlay
br_netfilter
EOF
cat /etc/modules | xargs -L 1 -t modprobe
# Finetune sysctl:
cat > /etc/sysctl.d/99-k8s.conf << EOF
net.ipv4.conf.all.accept_redirects = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system
# Disable Ubuntu caching DNS resolver
systemctl disable systemd-resolved.service
systemctl stop systemd-resolved
rm -fv /etc/resolv.conf
cat > /etc/resolv.conf << EOF
nameserver 1.1.1.1
nameserver 8.8.8.8
EOF
# Disable multipathd as Longhorn handles that itself
systemctl mask multipathd
systemctl disable multipathd
systemctl stop multipathd
# Disable Snapcraft
systemctl mask snapd
systemctl disable snapd
systemctl stop snapd
# Permit root login
sed -i -e 's/PermitRootLogin no/PermitRootLogin without-password/' /etc/ssh/sshd_config
systemctl reload ssh
cat << EOF > /root/.ssh/authorized_keys
sk-ecdsa-sha2-nistp256@openssh.com AAAAInNrLWVjZHNhLXNoYTItbmlzdHAyNTZAb3BlbnNzaC5jb20AAAAIbmlzdHAyNTYAAABBBD4/e9SWYWYoNZMkkF+NirhbmHuUgjoCap42kAq0pLIXFwIqgVTCre03VPoChIwBClc8RspLKqr5W3j0fG8QwnQAAAAEc3NoOg== lauri@lauri-x13
EOF
userdel -f ubuntu
apt-get remove -yq cloud-init
```
Install packages; for Raspbian set `OS=Debian_11`:
```bash
OS=xUbuntu_20.04
VERSION=1.23
cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /
EOF
cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /
EOF
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers-cri-o.gpg add -
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -yqq apt-transport-https curl cri-o cri-o-runc kubelet=1.23.5-00 kubectl=1.23.5-00 kubeadm=1.23.5-00
sudo systemctl daemon-reload
sudo systemctl enable crio --now
apt-mark hold kubelet kubeadm kubectl
sed -i -e 's/unqualified-search-registries = .*/unqualified-search-registries = ["docker.io"]/' /etc/containers/registries.conf
```
On master:
```
kubeadm init --token-ttl=120m --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint "master.kube.k-space.ee:6443" --upload-certs --apiserver-cert-extra-sans master.kube.k-space.ee --node-name master1.kube.k-space.ee
```
For the `kubeadm join` command specify FQDN via `--node-name $(hostname -f)`.
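For example (a sketch; the token and CA hash are placeholders printed by `kubeadm token create --print-join-command` on a master):

```bash
# On a master: print a fresh join command
kubeadm token create --print-join-command
# On the new node: run it, appending the FQDN as the node name
kubeadm join master.kube.k-space.ee:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --node-name $(hostname -f)
```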
After forming the cluster add taints:
```bash
for j in $(seq 1 9); do
kubectl label nodes worker${j}.kube.k-space.ee node-role.kubernetes.io/worker=''
done
for j in $(seq 1 3); do
kubectl taint nodes mon${j}.kube.k-space.ee dedicated=monitoring:NoSchedule
kubectl label nodes mon${j}.kube.k-space.ee dedicated=monitoring
done
for j in $(seq 1 4); do
kubectl taint nodes storage${j}.kube.k-space.ee dedicated=storage:NoSchedule
kubectl label nodes storage${j}.kube.k-space.ee dedicated=storage
done
```
On Raspberry Pi you need to take additional steps:
* Manually enable cgroups by appending
`cgroup_memory=1 cgroup_enable=memory` to `/boot/cmdline.txt`,
* Disable swap with `swapoff -a; apt-get purge -y dphys-swapfile`
* For mounting Longhorn volumes on Raspbian install `open-iscsi`
For `arm64` nodes add suitable taint to prevent scheduling non-multiarch images on them:
```bash
kubectl taint nodes worker9.kube.k-space.ee arch=arm64:NoSchedule
```
### mariadb.infra.k-space.ee (DNS from ns1 to 172.20.36.1)
- freescout
- gitea NB! MYSQL_ROOT_PASSWORD seems to be invalid; might be OK to reset it upstream
- wiki
- nextcloud
- etherpad NB! probably NOT using kspace_etherpad_kube NB! does not take DNS likely due to netpol, hardcoded to 172.20.36.1
- grafana
- woodpecker

SLACK.md

@@ -0,0 +1,28 @@
## Slack bots
### Doorboy3
https://api.slack.com/apps/A05NDB6FVJQ
Slack app author: rasmus
Managed by inventory-app:
- Incoming (open-commands) to `/api/slack/doorboy`, inventory-app authorizes based on command originating from #members or #work-shop && oidc access group (floor, workshop).
- Posts logs to a private channel. Restricted to 193.40.103.0/24.
Secrets as `SLACK_DOORLOG_CALLBACK` and `SLACK_VERIFICATION_TOKEN`.
### oidc-gateway
https://api.slack.com/apps/A05DART9PP1
Slack app author: eaas
Managed by passmower:
- Links e-mail to slackId.
- Login via Slack (not enabled).
Secrets as `slackId` and `slack-client`.
### podi-podi uuenduste spämmikoobas
https://api.slack.com/apps/A033RE9TUFK
Slack app author: rasmus
Posts Prometheus alerts to a private channel.
Secret as `slack-secrets`.


@@ -0,0 +1,23 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: cnpg # aka in-cluster postgres
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: https://github.com/cloudnative-pg/cloudnative-pg
targetRevision: v1.25.1
path: releases
directory:
include: 'cnpg-1.25.1.yaml'
destination:
server: 'https://kubernetes.default.svc'
namespace: cnpg-system
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true
- ServerSideApply=true # Resource is too big to fit in 262144 bytes allowed annotation size.


@@ -0,0 +1,21 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: mongodb-operator
namespace: argocd
spec:
project: k-space.ee
source:
# also depends on git@git.k-space.ee:secretspace/kube.git
repoURL: git@git.k-space.ee:k-space/kube.git
targetRevision: HEAD
path: mongodb-operator
destination:
server: 'https://kubernetes.default.svc'
namespace: mongodb-operator
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: mysql-clusters
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: mysql-clusters
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: mysql-clusters
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

_disabled/asterisk/.gitignore

@@ -0,0 +1 @@
conf


@@ -0,0 +1,13 @@
# Asterisk
Asterisk is used as
This application is managed by [ArgoCD](https://argocd.k-space.ee/applications/argocd/asterisk)
Should ArgoCD be down, manifests here can be applied with:
```
kubectl apply -n asterisk -f application.yaml
```
asterisk-secrets was dumped to git.k-space.ee/secretspace/kube:_disabled/asterisk


@@ -0,0 +1,124 @@
---
apiVersion: v1
kind: Service
metadata:
name: asterisk
annotations:
external-dns.alpha.kubernetes.io/hostname: voip.k-space.ee
spec:
type: LoadBalancer
externalTrafficPolicy: Local
selector:
app: asterisk
ports:
- name: asterisk
protocol: UDP
port: 5060
- name: sip-data-10000
protocol: UDP
port: 10000
- name: sip-data-10001
protocol: UDP
port: 10001
- name: sip-data-10002
protocol: UDP
port: 10002
- name: sip-data-10003
protocol: UDP
port: 10003
- name: sip-data-10004
protocol: UDP
port: 10004
- name: sip-data-10005
protocol: UDP
port: 10005
- name: sip-data-10006
protocol: UDP
port: 10006
- name: sip-data-10007
protocol: UDP
port: 10007
- name: sip-data-10008
protocol: UDP
port: 10008
- name: sip-data-10009
protocol: UDP
port: 10009
- name: sip-data-10010
protocol: UDP
port: 10010
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: asterisk
labels:
app: asterisk
spec:
selector:
matchLabels:
app: asterisk
replicas: 1
template:
metadata:
labels:
app: asterisk
spec:
containers:
- name: asterisk
image: harbor.k-space.ee/k-space/asterisk
command:
- /usr/sbin/asterisk
args:
- -TWBpvvvdddf
volumeMounts:
- name: config
mountPath: /etc/asterisk
ports:
- containerPort: 8088
name: metrics
volumes:
- name: config
secret:
secretName: asterisk-secrets
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: asterisk
spec:
selector:
matchLabels:
app: asterisk
podMetricsEndpoints:
- port: metrics
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: asterisk
spec:
groups:
- name: asterisk
rules:
- alert: AsteriskPhoneNotRegistered
expr: asterisk_endpoints_state{resource=~"1.*"} < 2
for: 5m
labels:
severity: critical
annotations:
summary: "{{ $labels.resource }} is not registered."
- alert: AsteriskOutboundNumberNotRegistered
expr: asterisk_pjsip_outbound_registration_status == 0
for: 5m
labels:
severity: critical
annotations:
summary: "{{ $labels.username }} is not registered with provider."
- alert: AsteriskCallsPerMinuteLimitExceed
expr: asterisk_channels_duration_seconds > 10*60
for: 20m
labels:
severity: warning
annotations:
summary: "Call at channel {{ $labels.name }} is taking longer than 10m."


@@ -0,0 +1,39 @@
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: asterisk
spec:
podSelector:
matchLabels:
app: asterisk
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
- from:
- ipBlock:
cidr: 100.101.0.0/16
- from:
- ipBlock:
cidr: 100.102.0.0/16
- from:
- ipBlock:
cidr: 81.90.125.224/32 # Lauri home
- from:
- ipBlock:
cidr: 172.20.8.241/32 # Erki A
- from:
- ipBlock:
cidr: 212.47.211.10/32 # Elisa SIP
egress:
- to:
- ipBlock:
cidr: 212.47.211.10/32 # Elisa SIP


@@ -0,0 +1,24 @@
# proxmox-csi
1. create role in pve if it doesn't exist
2. create user and assign permissions, preferably at resource pool level
```
pveum user add ks-kubernetes-csi@pve
pveum aclmod /pool/kspace_pool -user ks-kubernetes-csi@pve -role CSI
pveum user token add ks-kubernetes-csi@pve cs -privsep 0
```
save the token!
3. apply `proxmox-csi-plugin.yml` and `storage-class.yaml`, delete proxmox-csi default storage classes from kube.
4. add the token from pve to `config.yaml` and create the secret: `kubectl -n csi-proxmox create secret generic proxmox-csi-plugin --from-file=config.yaml`
5. label the nodes according to allocation:
```
kubectl --kubeconfig ~/.kube/k-space label nodes worker1.kube.k-space.ee topology.kubernetes.io/region=pve-cluster topology.kubernetes.io/zone=pve1 --overwrite
kubectl --kubeconfig ~/.kube/k-space label nodes worker2.kube.k-space.ee topology.kubernetes.io/region=pve-cluster topology.kubernetes.io/zone=pve2 --overwrite
kubectl --kubeconfig ~/.kube/k-space label nodes worker3.kube.k-space.ee topology.kubernetes.io/region=pve-cluster topology.kubernetes.io/zone=pve8 --overwrite
kubectl --kubeconfig ~/.kube/k-space label nodes worker4.kube.k-space.ee topology.kubernetes.io/region=pve-cluster topology.kubernetes.io/zone=pve9 --overwrite
kubectl --kubeconfig ~/.kube/k-space label nodes master1.kube.k-space.ee topology.kubernetes.io/region=pve-cluster topology.kubernetes.io/zone=pve1 --overwrite
kubectl --kubeconfig ~/.kube/k-space label nodes master2.kube.k-space.ee topology.kubernetes.io/region=pve-cluster topology.kubernetes.io/zone=pve2 --overwrite
kubectl --kubeconfig ~/.kube/k-space label nodes master3.kube.k-space.ee topology.kubernetes.io/region=pve-cluster topology.kubernetes.io/zone=pve8 --overwrite
```


@@ -0,0 +1,31 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: csi-proxmox
helmCharts:
- includeCRDs: true
name: &name proxmox-csi-plugin
releaseName: *name
repo: oci://ghcr.io/sergelogvinov/charts
valuesInline:
node:
tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
storageClass:
- name: proxmox
fstype: xfs
storage: ks-pvs
cache: none
ssd: "true"
# Not in use, migrating off of NAS…
# - name: proxmox-nas
# fstype: xfs
# storage: ks-pvs-nas
# cache: none
# # ssd is false, https://github.com/sergelogvinov/proxmox-csi-plugin/issues/404
version: 0.3.12 # https://github.com/sergelogvinov/proxmox-csi-plugin/pkgs/container/charts%2Fproxmox-csi-plugin
resources:
- ssh://git@git.k-space.ee/secretspace/kube/proxmox-csi # secrets: proxmox-csi-plugin:config.yaml (cluster info)


@@ -0,0 +1,382 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: discourse
annotations:
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
spec:
tls:
- hosts:
- "*.k-space.ee"
secretName:
rules:
- host: "discourse.k-space.ee"
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: discourse
port:
name: http
---
apiVersion: v1
kind: Service
metadata:
name: discourse
spec:
type: ClusterIP
ipFamilyPolicy: SingleStack
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
selector:
app.kubernetes.io/instance: discourse
app.kubernetes.io/name: discourse
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: discourse
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: discourse
annotations:
reloader.stakater.com/auto: "true"
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/instance: discourse
app.kubernetes.io/name: discourse
strategy:
type: Recreate
template:
metadata:
labels:
app.kubernetes.io/instance: discourse
app.kubernetes.io/name: discourse
spec:
serviceAccountName: discourse
securityContext:
fsGroup: 0
fsGroupChangePolicy: Always
initContainers:
containers:
- name: discourse
image: docker.io/bitnami/discourse:3.3.2-debian-12-r0
imagePullPolicy: "IfNotPresent"
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- CHOWN
- SYS_CHROOT
- FOWNER
- SETGID
- SETUID
- DAC_OVERRIDE
drop:
- ALL
privileged: false
readOnlyRootFilesystem: false
runAsGroup: 0
runAsNonRoot: false
runAsUser: 0
seLinuxOptions: {}
seccompProfile:
type: RuntimeDefault
env:
- name: BITNAMI_DEBUG
value: "true"
- name: DISCOURSE_USERNAME
valueFrom:
secretKeyRef:
name: discourse-password
key: username
- name: DISCOURSE_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-password
key: password
- name: DISCOURSE_PORT_NUMBER
value: "8080"
- name: DISCOURSE_EXTERNAL_HTTP_PORT_NUMBER
value: "80"
- name: DISCOURSE_DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-postgresql
key: password
- name: POSTGRESQL_CLIENT_CREATE_DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-postgres-superuser
key: password
- name: POSTGRESQL_CLIENT_POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-postgres-superuser
key: password
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-redis
key: redis-password
envFrom:
- configMapRef:
name: discourse
- secretRef:
name: discourse-email
ports:
- name: http
containerPort: 8080
protocol: TCP
livenessProbe:
tcpSocket:
port: http
initialDelaySeconds: 500
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
readinessProbe:
httpGet:
path: /srv/status
port: http
initialDelaySeconds: 100
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
resources:
limits:
cpu: "6.0"
ephemeral-storage: 2Gi
memory: 12288Mi
requests:
cpu: "1.0"
ephemeral-storage: 50Mi
memory: 3072Mi
volumeMounts:
- name: discourse-data
mountPath: /bitnami/discourse
subPath: discourse
- name: sidekiq
image: docker.io/bitnami/discourse:3.3.2-debian-12-r0
imagePullPolicy: "IfNotPresent"
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- CHOWN
- SYS_CHROOT
- FOWNER
- SETGID
- SETUID
- DAC_OVERRIDE
drop:
- ALL
privileged: false
readOnlyRootFilesystem: false
runAsGroup: 0
runAsNonRoot: false
runAsUser: 0
seLinuxOptions: {}
seccompProfile:
type: RuntimeDefault
command:
- /opt/bitnami/scripts/discourse/entrypoint.sh
args:
- /opt/bitnami/scripts/discourse-sidekiq/run.sh
env:
- name: BITNAMI_DEBUG
value: "true"
- name: DISCOURSE_USERNAME
valueFrom:
secretKeyRef:
name: discourse-password
key: username
- name: DISCOURSE_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-password
key: password
- name: DISCOURSE_DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-postgresql
key: password
- name: DISCOURSE_POSTGRESQL_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-postgres-superuser
key: password
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-redis
key: redis-password
envFrom:
- configMapRef:
name: discourse
- secretRef:
name: discourse-email
livenessProbe:
exec:
command: ["/bin/sh", "-c", "pgrep -f ^sidekiq"]
initialDelaySeconds: 500
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
readinessProbe:
exec:
command: ["/bin/sh", "-c", "pgrep -f ^sidekiq"]
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
resources:
limits:
cpu: 750m
ephemeral-storage: 2Gi
memory: 768Mi
requests:
cpu: 500m
ephemeral-storage: 50Mi
memory: 512Mi
volumeMounts:
- name: discourse-data
mountPath: /bitnami/discourse
subPath: discourse
volumes:
- name: discourse-data
persistentVolumeClaim:
claimName: discourse-data
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: discourse-data
namespace: discourse
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "3Gi"
storageClassName: "proxmox-nas"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: discourse
namespace: discourse
data:
DISCOURSE_HOST: "discourse.k-space.ee"
DISCOURSE_SKIP_INSTALL: "yes"
DISCOURSE_PRECOMPILE_ASSETS: "no"
DISCOURSE_SITE_NAME: "K-Space Discourse"
DISCOURSE_USERNAME: "k-space"
DISCOURSE_EMAIL: "dos4dev@k-space.ee"
DISCOURSE_REDIS_HOST: "discourse-redis"
DISCOURSE_REDIS_PORT_NUMBER: "6379"
DISCOURSE_DATABASE_HOST: "discourse-postgres-rw"
DISCOURSE_DATABASE_PORT_NUMBER: "5432"
DISCOURSE_DATABASE_NAME: "discourse"
DISCOURSE_DATABASE_USER: "discourse"
POSTGRESQL_CLIENT_DATABASE_HOST: "discourse-postgres-rw"
POSTGRESQL_CLIENT_DATABASE_PORT_NUMBER: "5432"
POSTGRESQL_CLIENT_POSTGRES_USER: "postgres"
POSTGRESQL_CLIENT_CREATE_DATABASE_NAME: "discourse"
POSTGRESQL_CLIENT_CREATE_DATABASE_EXTENSIONS: "hstore,pg_trgm"
---
apiVersion: codemowers.cloud/v1beta1
kind: OIDCClient
metadata:
name: discourse
namespace: discourse
spec:
displayName: Discourse
uri: https://discourse.k-space.ee
redirectUris:
- https://discourse.k-space.ee/auth/oidc/callback
allowedGroups:
- k-space:floor
- k-space:friends
grantTypes:
- authorization_code
- refresh_token
responseTypes:
- code
availableScopes:
- openid
- profile
pkce: false
---
apiVersion: codemowers.cloud/v1beta1
kind: SecretClaim
metadata:
name: discourse-redis
namespace: discourse
spec:
size: 32
mapping:
- key: redis-password
value: "%(plaintext)s"
- key: REDIS_URI
value: "redis://:%(plaintext)s@discourse-redis"
---
apiVersion: dragonflydb.io/v1alpha1
kind: Dragonfly
metadata:
name: discourse-redis
namespace: discourse
spec:
authentication:
passwordFromSecret:
key: redis-password
name: discourse-redis
replicas: 3
resources:
limits:
cpu: 1000m
memory: 1Gi
topologySpreadConstraints:
- maxSkew: 1
topologyKey: topology.kubernetes.io/zone
whenUnsatisfiable: DoNotSchedule
labelSelector:
matchLabels:
app: discourse-redis
app.kubernetes.io/part-of: dragonfly
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
name: discourse-postgres
namespace: discourse
spec:
instances: 1
enableSuperuserAccess: true
bootstrap:
initdb:
database: discourse
owner: discourse
secret:
name: discourse-postgresql
dataChecksums: true
encoding: 'UTF8'
storage:
size: 10Gi
storageClass: postgres

_disabled/freeswitch/.gitignore

@@ -0,0 +1 @@
PASSWORDS.xml


@@ -0,0 +1,14 @@
<include>
<X-PRE-PROCESS cmd="set" data="default_password=">
<X-PRE-PROCESS cmd="set" data="ipcall_password="/>
<X-PRE-PROCESS cmd="set" data="1000_password="/>
<X-PRE-PROCESS cmd="set" data="1001_password="/>
<X-PRE-PROCESS cmd="set" data="1002_password="/>
<X-PRE-PROCESS cmd="set" data="1003_password="/>
<X-PRE-PROCESS cmd="set" data="1004_password="/>
<X-PRE-PROCESS cmd="set" data="1005_password="/>
<X-PRE-PROCESS cmd="set" data="1006_password="/>
<X-PRE-PROCESS cmd="set" data="1007_password="/>
<X-PRE-PROCESS cmd="set" data="1008_password="/>
<X-PRE-PROCESS cmd="set" data="1009_password="/>
</include>


@@ -0,0 +1,7 @@
```
kubectl -n freeswitch create secret generic freeswitch-passwords --from-file freeswitch/PASSWORDS.xml
```
PASSWORDS.xml is in git.k-space.ee/secretspace/kube:_disabled/freeswitch
freeswitch-sounds was extracted from http://files.freeswitch.org/releases/sounds/freeswitch-sounds-en-us-callie-32000-1.0.53.tar.gz (with /us/ at the root of the volume)


@@ -0,0 +1,567 @@
apiVersion: v1
kind: Service
metadata:
name: freeswitch
namespace: freeswitch
annotations:
external-dns.alpha.kubernetes.io/hostname: freeswitch.k-space.ee
metallb.universe.tf/address-pool: eenet
metallb.universe.tf/ip-allocated-from-pool: eenet
spec:
ports:
- name: sip-internal-udp
protocol: UDP
port: 5060
targetPort: 5060
nodePort: 31787
- name: sip-nat-udp
protocol: UDP
port: 5070
targetPort: 5070
nodePort: 32241
- name: sip-external-udp
protocol: UDP
port: 5080
targetPort: 5080
nodePort: 31354
- name: sip-data-10000
protocol: UDP
port: 10000
targetPort: 10000
nodePort: 30786
- name: sip-data-10001
protocol: UDP
port: 10001
targetPort: 10001
nodePort: 31788
- name: sip-data-10002
protocol: UDP
port: 10002
targetPort: 10002
nodePort: 30247
- name: sip-data-10003
protocol: UDP
port: 10003
targetPort: 10003
nodePort: 32389
- name: sip-data-10004
protocol: UDP
port: 10004
targetPort: 10004
nodePort: 30723
- name: sip-data-10005
protocol: UDP
port: 10005
targetPort: 10005
nodePort: 30295
- name: sip-data-10006
protocol: UDP
port: 10006
targetPort: 10006
nodePort: 30782
- name: sip-data-10007
protocol: UDP
port: 10007
targetPort: 10007
nodePort: 32165
- name: sip-data-10008
protocol: UDP
port: 10008
targetPort: 10008
nodePort: 30282
- name: sip-data-10009
protocol: UDP
port: 10009
targetPort: 10009
nodePort: 31325
- name: sip-data-10010
protocol: UDP
port: 10010
targetPort: 10010
nodePort: 31234
selector:
app: freeswitch
type: LoadBalancer
externalTrafficPolicy: Local
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
internalTrafficPolicy: Cluster
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: freeswitch-sounds
namespace: freeswitch
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 2Gi
storageClassName: longhorn
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: freeswitch
namespace: freeswitch
labels:
app: freeswitch
annotations:
reloader.stakater.com/auto: "true" # reloader is disabled in cluster, (re)deploy it to use
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: freeswitch
template:
metadata:
labels:
app: freeswitch
spec:
volumes:
- name: config
configMap:
name: freeswitch-config
defaultMode: 420
- name: directory
configMap:
name: freeswitch-directory
defaultMode: 420
- name: sounds
persistentVolumeClaim:
claimName: freeswitch-sounds
- name: passwords
secret:
secretName: freeswitch-passwords
containers:
- name: freeswitch
image: mirror.gcr.io/dheaps/freeswitch:latest
env:
- name: SOUND_TYPES
value: en-us-callie
- name: SOUND_RATES
value: "32000"
resources: {}
volumeMounts:
- name: config
mountPath: /etc/freeswitch/sip_profiles/external/ipcall.xml
subPath: ipcall.xml
- name: config
mountPath: /etc/freeswitch/dialplan/default/00_outbound_ipcall.xml
subPath: 00_outbound_ipcall.xml
- name: config
mountPath: /etc/freeswitch/dialplan/public.xml
subPath: dialplan.xml
- name: config
mountPath: /etc/freeswitch/autoload_configs/switch.conf.xml
subPath: switch.xml
- name: config
mountPath: /etc/freeswitch/vars.xml
subPath: vars.xml
- name: passwords
mountPath: /etc/freeswitch/PASSWORDS.xml
subPath: PASSWORDS.xml
- name: directory
mountPath: /etc/freeswitch/directory/default
- name: sounds
mountPath: /usr/share/freeswitch/sounds
---
apiVersion: v1
kind: ConfigMap
metadata:
name: freeswitch-config
namespace: freeswitch
data:
dialplan.xml: |
<!--
NOTICE:
This context is usually accessed via the external sip profile listening on port 5080.
It is recommended to have separate inbound and outbound contexts. Not only for security
but clearing up why you would need to do such a thing. You don't want outside un-authenticated
callers hitting your default context which allows dialing calls thru your providers and results
in Toll Fraud.
-->
<!-- http://wiki.freeswitch.org/wiki/Dialplan_XML -->
<include>
<context name="public">
<extension name="unloop">
<condition field="${unroll_loops}" expression="^true$"/>
<condition field="${sip_looped_call}" expression="^true$">
<action application="deflect" data="${destination_number}"/>
</condition>
</extension>
<!--
Tag anything pass thru here as an outside_call so you can make sure not
to create any routing loops based on the conditions that it came from
the outside of the switch.
-->
<extension name="outside_call" continue="true">
<condition>
<action application="set" data="outside_call=true"/>
<action application="export" data="RFC2822_DATE=${strftime(%a, %d %b %Y %T %z)}"/>
</condition>
</extension>
<extension name="call_debug" continue="true">
<condition field="${call_debug}" expression="^true$" break="never">
<action application="info"/>
</condition>
</extension>
<extension name="public_extensions">
<condition field="destination_number" expression="^(10[01][0-9])$">
<action application="transfer" data="$1 XML default"/>
</condition>
</extension>
<extension name="public_conference_extensions">
<condition field="destination_number" expression="^(3[5-8][01][0-9])$">
<action application="transfer" data="$1 XML default"/>
</condition>
</extension>
<!--
You can place files in the public directory to get included.
-->
<X-PRE-PROCESS cmd="include" data="public/*.xml"/>
<!--
If you have made it this far lets challenge the caller and if they authenticate
lets try what they dialed in the default context. (commented out by default)
-->
<!-- TODO:
<extension name="check_auth" continue="true">
<condition field="${sip_authorized}" expression="^true$" break="never">
<anti-action application="respond" data="407"/>
</condition>
</extension>
-->
<extension name="transfer_to_default">
<condition>
<!-- TODO: proper ring grouping -->
<action application="bridge" data="user/1004@freeswitch.k-space.ee,user/1003@freeswitch.k-space.ee,sofia/gateway/ipcall/53543824"/>
</condition>
</extension>
</context>
</include>
ipcall.xml: |
<include>
<gateway name="ipcall">
<param name="proxy" value="sip.ipcall.ee"/>
<param name="register" value="true"/>
<param name="realm" value="sip.ipcall.ee"/>
<param name="username" value="6659652"/>
<param name="password" value="$${ipcall_password}"/>
<param name="from-user" value="6659652"/>
<param name="from-domain" value="sip.ipcall.ee"/>
<param name="extension" value="ring_group/default"/>
</gateway>
</include>
00_outbound_ipcall.xml: |
<extension name="outbound">
<!-- TODO: check toll_allow ? -->
<condition field="destination_number" expression="^(\d+)$">
<action application="set" data="sip_invite_domain=sip.ipcall.ee"/>
<action application="bridge" data="sofia/gateway/ipcall/${destination_number}"/>
</condition>
</extension>
switch.xml: |
<configuration name="switch.conf" description="Core Configuration">
<cli-keybindings>
<key name="1" value="help"/>
<key name="2" value="status"/>
<key name="3" value="show channels"/>
<key name="4" value="show calls"/>
<key name="5" value="sofia status"/>
<key name="6" value="reloadxml"/>
<key name="7" value="console loglevel 0"/>
<key name="8" value="console loglevel 7"/>
<key name="9" value="sofia status profile internal"/>
<key name="10" value="sofia profile internal siptrace on"/>
<key name="11" value="sofia profile internal siptrace off"/>
<key name="12" value="version"/>
</cli-keybindings>
<default-ptimes>
</default-ptimes>
<settings>
<param name="colorize-console" value="true"/>
<param name="dialplan-timestamps" value="false"/>
<param name="max-db-handles" value="50"/>
<param name="db-handle-timeout" value="10"/>
<param name="max-sessions" value="1000"/>
<param name="sessions-per-second" value="30"/>
<param name="loglevel" value="debug"/>
<param name="mailer-app" value="sendmail"/>
<param name="mailer-app-args" value="-t"/>
<param name="dump-cores" value="yes"/>
<param name="rtp-start-port" value="10000"/>
<param name="rtp-end-port" value="10010"/>
</settings>
</configuration>
vars.xml: |
<include>
<X-PRE-PROCESS cmd="set" data="disable_system_api_commands=true"/>
<X-PRE-PROCESS cmd="set" data="sound_prefix=$${sounds_dir}/en/us/callie"/>
<X-PRE-PROCESS cmd="set" data="domain=freeswitch.k-space.ee"/>
<X-PRE-PROCESS cmd="set" data="domain_name=$${domain}"/>
<X-PRE-PROCESS cmd="set" data="hold_music=local_stream://moh"/>
<X-PRE-PROCESS cmd="set" data="use_profile=external"/>
<X-PRE-PROCESS cmd="set" data="rtp_sdes_suites=AEAD_AES_256_GCM_8|AEAD_AES_128_GCM_8|AES_CM_256_HMAC_SHA1_80|AES_CM_192_HMAC_SHA1_80|AES_CM_128_HMAC_SHA1_80|AES_CM_256_HMAC_SHA1_32|AES_CM_192_HMAC_SHA1_32|AES_CM_128_HMAC_SHA1_32|AES_CM_128_NULL_AUTH"/>
<X-PRE-PROCESS cmd="set" data="global_codec_prefs=OPUS,G722,PCMU,PCMA,H264,VP8"/>
<X-PRE-PROCESS cmd="set" data="outbound_codec_prefs=OPUS,G722,PCMU,PCMA,H264,VP8"/>
<X-PRE-PROCESS cmd="set" data="xmpp_client_profile=xmppc"/>
<X-PRE-PROCESS cmd="set" data="xmpp_server_profile=xmpps"/>
<X-PRE-PROCESS cmd="set" data="bind_server_ip=auto"/>
<X-PRE-PROCESS cmd="stun-set" data="external_rtp_ip=host:freeswitch.k-space.ee"/>
<X-PRE-PROCESS cmd="stun-set" data="external_sip_ip=host:freeswitch.k-space.ee"/>
<X-PRE-PROCESS cmd="set" data="unroll_loops=true"/>
<X-PRE-PROCESS cmd="set" data="outbound_caller_name=FreeSWITCH"/>
<X-PRE-PROCESS cmd="set" data="outbound_caller_id=0000000000"/>
<X-PRE-PROCESS cmd="set" data="call_debug=false"/>
<X-PRE-PROCESS cmd="set" data="console_loglevel=info"/>
<X-PRE-PROCESS cmd="set" data="default_areacode=372"/>
<X-PRE-PROCESS cmd="set" data="default_country=EE"/>
<X-PRE-PROCESS cmd="set" data="presence_privacy=false"/>
<X-PRE-PROCESS cmd="set" data="au-ring=%(400,200,383,417);%(400,2000,383,417)"/>
<X-PRE-PROCESS cmd="set" data="be-ring=%(1000,3000,425)"/>
<X-PRE-PROCESS cmd="set" data="ca-ring=%(2000,4000,440,480)"/>
<X-PRE-PROCESS cmd="set" data="cn-ring=%(1000,4000,450)"/>
<X-PRE-PROCESS cmd="set" data="cy-ring=%(1500,3000,425)"/>
<X-PRE-PROCESS cmd="set" data="cz-ring=%(1000,4000,425)"/>
<X-PRE-PROCESS cmd="set" data="de-ring=%(1000,4000,425)"/>
<X-PRE-PROCESS cmd="set" data="dk-ring=%(1000,4000,425)"/>
<X-PRE-PROCESS cmd="set" data="dz-ring=%(1500,3500,425)"/>
<X-PRE-PROCESS cmd="set" data="eg-ring=%(2000,1000,475,375)"/>
<X-PRE-PROCESS cmd="set" data="es-ring=%(1500,3000,425)"/>
<X-PRE-PROCESS cmd="set" data="fi-ring=%(1000,4000,425)"/>
<X-PRE-PROCESS cmd="set" data="fr-ring=%(1500,3500,440)"/>
<X-PRE-PROCESS cmd="set" data="hk-ring=%(400,200,440,480);%(400,3000,440,480)"/>
<X-PRE-PROCESS cmd="set" data="hu-ring=%(1250,3750,425)"/>
<X-PRE-PROCESS cmd="set" data="il-ring=%(1000,3000,400)"/>
<X-PRE-PROCESS cmd="set" data="in-ring=%(400,200,425,375);%(400,2000,425,375)"/>
<X-PRE-PROCESS cmd="set" data="jp-ring=%(1000,2000,420,380)"/>
<X-PRE-PROCESS cmd="set" data="ko-ring=%(1000,2000,440,480)"/>
<X-PRE-PROCESS cmd="set" data="pk-ring=%(1000,2000,400)"/>
<X-PRE-PROCESS cmd="set" data="pl-ring=%(1000,4000,425)"/>
<X-PRE-PROCESS cmd="set" data="ro-ring=%(1850,4150,475,425)"/>
<X-PRE-PROCESS cmd="set" data="rs-ring=%(1000,4000,425)"/>
<X-PRE-PROCESS cmd="set" data="ru-ring=%(800,3200,425)"/>
<X-PRE-PROCESS cmd="set" data="sa-ring=%(1200,4600,425)"/>
<X-PRE-PROCESS cmd="set" data="tr-ring=%(2000,4000,450)"/>
<X-PRE-PROCESS cmd="set" data="uk-ring=%(400,200,400,450);%(400,2000,400,450)"/>
<X-PRE-PROCESS cmd="set" data="us-ring=%(2000,4000,440,480)"/>
<X-PRE-PROCESS cmd="set" data="bong-ring=v=-7;%(100,0,941.0,1477.0);v=-7;>=2;+=.1;%(1400,0,350,440)"/>
<X-PRE-PROCESS cmd="set" data="beep=%(1000,0,640)"/>
<X-PRE-PROCESS cmd="set" data="sit=%(274,0,913.8);%(274,0,1370.6);%(380,0,1776.7)"/>
<X-PRE-PROCESS cmd="set" data="df_us_ssn=(?!219099999|078051120)(?!666|000|9\d{2})\d{3}(?!00)\d{2}(?!0{4})\d{4}"/>
<X-PRE-PROCESS cmd="set" data="df_luhn=?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|6(?:011|5[0-9]{2})[0-9]{12}|(?:2131|1800|35\d{3})\d{11}"/>
<XX-PRE-PROCESS cmd="set" data="digits_dialed_filter=(($${df_luhn})|($${df_us_ssn}))"/>
<X-PRE-PROCESS cmd="set" data="default_provider=sip.ipcall.ee"/>
<X-PRE-PROCESS cmd="set" data="default_provider_username="/>
<X-PRE-PROCESS cmd="set" data="default_provider_password="/>
<X-PRE-PROCESS cmd="set" data="default_provider_from_domain=sip.ipcall.ee"/>
<X-PRE-PROCESS cmd="set" data="default_provider_register=true"/>
<X-PRE-PROCESS cmd="set" data="default_provider_contact=1004"/>
<X-PRE-PROCESS cmd="set" data="sip_tls_version=tlsv1,tlsv1.1,tlsv1.2"/>
<X-PRE-PROCESS cmd="set" data="sip_tls_ciphers=ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH"/>
<X-PRE-PROCESS cmd="set" data="internal_auth_calls=true"/>
<X-PRE-PROCESS cmd="set" data="internal_sip_port=5060"/>
<X-PRE-PROCESS cmd="set" data="internal_tls_port=5061"/>
<X-PRE-PROCESS cmd="set" data="internal_ssl_enable=false"/>
<X-PRE-PROCESS cmd="set" data="external_auth_calls=false"/>
<X-PRE-PROCESS cmd="set" data="external_sip_port=5080"/>
<X-PRE-PROCESS cmd="set" data="external_tls_port=5081"/>
<X-PRE-PROCESS cmd="set" data="external_ssl_enable=false"/>
<X-PRE-PROCESS cmd="set" data="rtp_video_max_bandwidth_in=3mb"/>
<X-PRE-PROCESS cmd="set" data="rtp_video_max_bandwidth_out=3mb"/>
<X-PRE-PROCESS cmd="set" data="suppress_cng=true"/>
<X-PRE-PROCESS cmd="set" data="rtp_liberal_dtmf=true"/>
<X-PRE-PROCESS cmd="set" data="video_mute_png=$${images_dir}/default-mute.png"/>
<X-PRE-PROCESS cmd="set" data="video_no_avatar_png=$${images_dir}/default-avatar.png"/>
<X-PRE-PROCESS cmd="include" data="PASSWORDS.xml"/>
</include>
---
apiVersion: v1
kind: ConfigMap
metadata:
name: freeswitch-directory
namespace: freeswitch
data:
1000.xml: |
<include>
<user id="1000">
<params>
<param name="password" value="$${1000_password}"/>
</params>
<variables>
<variable name="toll_allow" value="domestic,local"/>
<variable name="accountcode" value="1000"/>
<variable name="user_context" value="default"/>
<variable name="effective_caller_id_name" value=""/>
<variable name="effective_caller_id_number" value="1000"/>
<variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
<variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
</variables>
</user>
</include>
1001.xml: |
<include>
<user id="1001">
<params>
<param name="password" value="$${1001_password}"/>
</params>
<variables>
<variable name="toll_allow" value="domestic,local"/>
<variable name="accountcode" value="1001"/>
<variable name="user_context" value="default"/>
<variable name="effective_caller_id_name" value=""/>
<variable name="effective_caller_id_number" value="1001"/>
<variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
<variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
</variables>
</user>
</include>
1002.xml: |
<include>
<user id="1002">
<params>
<param name="password" value="$${1002_password}"/>
</params>
<variables>
<variable name="toll_allow" value="domestic,local"/>
<variable name="accountcode" value="1002"/>
<variable name="user_context" value="default"/>
<variable name="effective_caller_id_name" value=""/>
<variable name="effective_caller_id_number" value="1002"/>
<variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
<variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
</variables>
</user>
</include>
1003.xml: |
<include>
<user id="1003">
<params>
<param name="password" value="$${1003_password}"/>
</params>
<variables>
<variable name="toll_allow" value="domestic,local"/>
<variable name="accountcode" value="1003"/>
<variable name="user_context" value="default"/>
<variable name="effective_caller_id_name" value="Erki A"/>
<variable name="effective_caller_id_number" value="1003"/>
<variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
<variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
</variables>
</user>
</include>
1004.xml: |
<include>
<user id="1004">
<params>
<param name="password" value="$${1004_password}"/>
</params>
<variables>
<variable name="toll_allow" value="domestic,local"/>
<variable name="accountcode" value="1004"/>
<variable name="user_context" value="default"/>
<variable name="effective_caller_id_name" value="Erki A"/>
<variable name="effective_caller_id_number" value="1004"/>
<variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
<variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
</variables>
</user>
</include>
1005.xml: |
<include>
<user id="1005">
<params>
<param name="password" value="$${1005_password}"/>
</params>
<variables>
<variable name="toll_allow" value="domestic,local"/>
<variable name="accountcode" value="1005"/>
<variable name="user_context" value="default"/>
<variable name="effective_caller_id_name" value=""/>
<variable name="effective_caller_id_number" value="1005"/>
<variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
<variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
</variables>
</user>
</include>
1006.xml: |
<include>
<user id="1006">
<params>
<param name="password" value="$${1006_password}"/>
</params>
<variables>
<variable name="toll_allow" value="domestic,local"/>
<variable name="accountcode" value="1006"/>
<variable name="user_context" value="default"/>
<variable name="effective_caller_id_name" value=""/>
<variable name="effective_caller_id_number" value="1006"/>
<variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
<variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
</variables>
</user>
</include>
1007.xml: |
<include>
<user id="1007">
<params>
<param name="password" value="$${1007_password}"/>
</params>
<variables>
<variable name="toll_allow" value="domestic,local"/>
<variable name="accountcode" value="1007"/>
<variable name="user_context" value="default"/>
<variable name="effective_caller_id_name" value=""/>
<variable name="effective_caller_id_number" value="1007"/>
<variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
<variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
</variables>
</user>
</include>
1008.xml: |
<include>
<user id="1008">
<params>
<param name="password" value="$${1008_password}"/>
</params>
<variables>
<variable name="toll_allow" value="domestic,local"/>
<variable name="accountcode" value="1008"/>
<variable name="user_context" value="default"/>
<variable name="effective_caller_id_name" value=""/>
<variable name="effective_caller_id_number" value="1008"/>
<variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
<variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
</variables>
</user>
</include>
1009.xml: |
<include>
<user id="1009">
<params>
<param name="password" value="$${1009_password}"/>
</params>
<variables>
<variable name="toll_allow" value="domestic,local"/>
<variable name="accountcode" value="1009"/>
<variable name="user_context" value="default"/>
<variable name="effective_caller_id_name" value=""/>
<variable name="effective_caller_id_number" value="1009"/>
<variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
<variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
</variables>
</user>
</include>


@@ -0,0 +1,49 @@
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: freeswitch
spec:
podSelector:
matchLabels:
app: freeswitch
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
- from:
- ipBlock:
cidr: 100.101.0.0/16
- from:
- ipBlock:
cidr: 100.102.0.0/16
- from:
- ipBlock:
cidr: 81.90.125.224/32 # Lauri home
- from:
- ipBlock:
cidr: 172.20.8.241/32 # Erki A
- from:
- ipBlock:
cidr: 212.47.211.10/32 # Elisa SIP
egress:
- to:
- ipBlock:
cidr: 212.47.211.10/32 # Elisa SIP
- to:
- ipBlock:
cidr: 195.222.16.38/32 # Elisa SIP
- to:
ports:
- port: 53
protocol: UDP


@@ -62,7 +62,7 @@ spec:
serviceAccountName: local-path-provisioner-service-account
containers:
- name: local-path-provisioner
image: rancher/local-path-provisioner:v0.0.22
image: mirror.gcr.io/rancher/local-path-provisioner:v0.0.22
imagePullPolicy: IfNotPresent
command:
- local-path-provisioner
@@ -151,7 +151,7 @@ data:
spec:
containers:
- name: helper-pod
image: busybox
image: mirror.gcr.io/library/busybox
imagePullPolicy: IfNotPresent


@@ -1,5 +1,7 @@
# Logging infrastructure
Note: This is deprecated since we moved to [Logmower stack](https://github.com/logmower)
## Background
Fluent Bit picks up the logs from Kubernetes workers and sends them to Graylog
@@ -14,7 +16,7 @@ To deploy:
```
kubectl create namespace logging
kubectl apply -n logging -f mongodb-support.yml -f application.yml -f filebeat.yml -f networkpolicy-base.yml
kubectl apply -n logging -f zinc.yml -f application.yml -f filebeat.yml -f networkpolicy-base.yml
kubectl rollout restart -n logging daemonset.apps/filebeat
```


@@ -0,0 +1,185 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-config
namespace: logging
data:
filebeat.yml: |-
logging:
level: warning
setup:
ilm:
enabled: false
template:
name: filebeat
pattern: filebeat-*
http.enabled: true
filebeat.autodiscover:
providers:
- type: kubernetes
host: ${NODE_NAME}
hints.enabled: true
hints.default_config:
type: container
paths:
- /var/log/containers/*${data.kubernetes.container.id}.log
output:
elasticsearch:
hosts:
- http://zinc:4080
path: "/es/"
index: "filebeat-%{+yyyy.MM.dd}"
username: "${ZINC_FIRST_ADMIN_USER}"
password: "${ZINC_FIRST_ADMIN_PASSWORD}"
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: filebeat
spec:
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 50%
selector:
matchLabels:
app: filebeat
template:
metadata:
labels:
app: filebeat
annotations:
co.elastic.logs/json.keys_under_root: "true"
spec:
serviceAccountName: filebeat
containers:
- name: filebeat
image: docker.elastic.co/beats/filebeat:8.4.1
args:
- -c
- /etc/filebeat.yml
- -e
securityContext:
runAsUser: 0
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: ZINC_FIRST_ADMIN_USER
value: admin
- name: ZINC_FIRST_ADMIN_PASSWORD
value: salakala
ports:
- containerPort: 5066
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: filebeat-config
mountPath: /etc/filebeat.yml
readOnly: true
subPath: filebeat.yml
- name: data
mountPath: /usr/share/filebeat/data
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: varlog
mountPath: /var/log
readOnly: true
- name: exporter
image: sepa/beats-exporter
args:
- -p=5066
ports:
- containerPort: 8080
name: exporter
protocol: TCP
volumes:
- name: filebeat-config
configMap:
defaultMode: 0600
name: filebeat-config
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: varlog
hostPath:
path: /var/log
- name: data
hostPath:
path: /var/lib/filebeat-data
type: DirectoryOrCreate
tolerations:
- operator: "Exists"
effect: "NoExecute"
- operator: "Exists"
effect: "NoSchedule"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: logging-filebeat
subjects:
- kind: ServiceAccount
name: filebeat
namespace: logging
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: filebeat
namespace: logging
labels:
app: filebeat
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: filebeat
spec:
podSelector:
matchLabels:
app: filebeat
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: prometheus-operator
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
ports:
- protocol: TCP
port: 8080
egress:
- to:
- podSelector:
matchLabels:
app: zinc
ports:
- protocol: TCP
port: 4080
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: filebeat
spec:
selector:
matchLabels:
app: filebeat
podMetricsEndpoints:
- port: exporter

_disabled/logging/zinc.yml Normal file

@@ -0,0 +1,122 @@
apiVersion: v1
kind: Service
metadata:
name: zinc
spec:
clusterIP: None
selector:
app: zinc
ports:
- name: http
port: 4080
targetPort: 4080
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: zinc
spec:
serviceName: zinc
replicas: 1
selector:
matchLabels:
app: zinc
template:
metadata:
labels:
app: zinc
spec:
securityContext:
fsGroup: 2000
runAsUser: 10000
runAsGroup: 3000
runAsNonRoot: true
containers:
- name: zinc
image: public.ecr.aws/zinclabs/zinc:latest
env:
- name: GIN_MODE
value: release
- name: ZINC_FIRST_ADMIN_USER
value: admin
- name: ZINC_FIRST_ADMIN_PASSWORD
value: salakala
- name: ZINC_DATA_PATH
value: /data
imagePullPolicy: Always
resources:
limits:
cpu: "4"
memory: 4Gi
requests:
cpu: 32m
memory: 50Mi
ports:
- containerPort: 4080
name: http
volumeMounts:
- name: data
mountPath: /data
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes:
- ReadWriteOnce
storageClassName: longhorn
resources:
requests:
storage: 20Gi
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: zinc
annotations:
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
spec:
rules:
- host: zinc.k-space.ee
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: zinc
port:
number: 4080
tls:
- hosts:
- zinc.k-space.ee
secretName: zinc-tls
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: zinc
spec:
podSelector:
matchLabels:
app: zinc
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
app: filebeat
ports:
- protocol: TCP
port: 4080
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik


@@ -0,0 +1,21 @@
# MongoDB Community Kubernetes Operator
## Deployment
With ArgoCD. Render it locally:
```sh
kustomize build . --enable-helm
```
## Instantiating databases
For each application, include mongodb-netpol.yaml and the operator RBAC in the kustomization resources:
```yaml
resources:
- https://git.k-space.ee/k-space/kube//mongodb-operator/mongodb-netpol.yaml
- https://github.com/mongodb/mongodb-kubernetes-operator//config/rbac/?ref=v0.13.0
```
```
kubectl create secret generic -n <application> mongodb-application-user-password --from-literal="password=$(cat /dev/urandom | base64 | head -c 30)"
```
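Once the secret exists, a database instance is declared with the operator's `MongoDBCommunity` custom resource. A minimal sketch (the resource name, user name, database name, and version below are assumptions for illustration; only the secret name comes from the command above):

```yaml
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: mongodb          # hypothetical
  namespace: <application>
spec:
  members: 3
  type: ReplicaSet
  version: "6.0.5"       # pick a supported MongoDB version
  security:
    authentication:
      modes: ["SCRAM"]
  users:
    - name: application  # hypothetical user
      db: application    # hypothetical database
      passwordSecretRef:
        name: mongodb-application-user-password
      roles:
        - name: readWrite
          db: application
      scramCredentialsSecretName: mongodb-application
```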


@@ -0,0 +1,13 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: mongodb-operator
# spec: https://kubectl.docs.kubernetes.io/references/kustomize/builtins/#_helmchartinflationgenerator_
helmCharts:
- includeCRDs: true
name: &name community-operator
releaseName: *name
repo: https://mongodb.github.io/helm-charts
valuesFile: values.yaml
version: 0.13.0 # helm search repo mongodb/community-operator --versions


@@ -0,0 +1,25 @@
# Allow any pod in this namespace to connect to MongoDB and
# allow cluster members to talk to each other
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: mongodb-operator
spec:
podSelector:
matchLabels:
app: mongodb-svc
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector: {}
ports:
- port: 27017
egress:
- to:
- podSelector:
matchLabels:
app: mongodb-svc
ports:
- port: 27017


@@ -0,0 +1,10 @@
# MariaDB clusters
This is the namespace for MariaDB clusters managed by Codemowers' sample
[mysql-database-operator](https://github.com/codemowers/operatorlib/tree/main/samples/mysql-database-operator)
which is deployed via [ArgoCD](https://argocd.k-space.ee/applications/argocd/mysql-database-operator)
```
kubectl create namespace mysql-clusters
kubectl apply -n mysql-clusters -f application.yaml
```
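Applications then claim a database through the operator's claim resource. A rough sketch, assuming the operatorlib sample exposes a `MysqlDatabase` claim kind with `capacity` and `class` fields (verify against the operator's CRDs before use):

```yaml
apiVersion: codemowers.cloud/v1beta1
kind: MysqlDatabase
metadata:
  name: myapp            # hypothetical claim name
  namespace: myapp       # hypothetical application namespace
spec:
  capacity: 1Gi
  class: shared          # one of the MysqlDatabaseClass names: shared, dedicated, external
```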


@@ -0,0 +1,24 @@
---
apiVersion: codemowers.cloud/v1beta1
kind: MysqlDatabaseClass
metadata:
name: dedicated
annotations:
kubernetes.io/description: "Dedicated MySQL cluster"
spec:
reclaimPolicy: Retain
replicas: 3
routers: 2
storageClass: mysql
podSpec:
containers:
- name: mariadb
image: mirror.gcr.io/library/mariadb:10.9.7@sha256:198c7a5fea3d7285762042a628fe8f83f0a7ccef559605b4cc9502e65210880b
imagePullPolicy: IfNotPresent
nodeSelector:
dedicated: storage
tolerations:
- effect: NoSchedule
key: dedicated
operator: Equal
value: storage


@@ -0,0 +1,40 @@
---
apiVersion: codemowers.cloud/v1beta1
kind: MysqlDatabaseClass
metadata:
name: external
annotations:
kubernetes.io/description: "External MySQL cluster"
spec:
reclaimPolicy: Retain
shared: true
---
apiVersion: v1
kind: Service
metadata:
name: primary-external
spec:
clusterIP: None
---
apiVersion: v1
kind: Service
metadata:
name: external
spec:
clusterIP: None
---
kind: Endpoints
apiVersion: v1
metadata:
name: primary-external
subsets:
- addresses:
- ip: 172.20.36.1
---
kind: Endpoints
apiVersion: v1
metadata:
name: external
subsets:
- addresses:
- ip: 172.20.36.1


@@ -1,9 +1,21 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: phpmyadmin
namespace: mysql-clusters
data:
config.user.inc.php: |
<?php
for ($i = 1; isset($hosts[$i - 1]); $i++) {
$cfg['Servers'][$i]['ssl'] = true;
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: phpmyadmin
labels:
app: phpmyadmin
namespace: mysql-clusters
spec:
# phpMyAdmin session handling is not really compatible with more replicas
replicas: 1
@@ -17,32 +29,56 @@ spec:
spec:
containers:
- name: phpmyadmin
image: phpmyadmin/phpmyadmin
image: mirror.gcr.io/phpmyadmin/phpmyadmin
ports:
- name: web
containerPort: 80
protocol: TCP
env:
- name: PMA_ARBITRARY
value: "1"
- name: PMA_HOSTS
value: mysql-cluster.etherpad.svc.cluster.local,mariadb.authelia,mariadb.nextcloud,172.20.36.1
valueFrom:
configMapKeyRef:
name: phpmyadmin-connections
key: PMA_HOSTS
- name: PMA_PORTS
valueFrom:
configMapKeyRef:
name: phpmyadmin-connections
key: PMA_PORTS
- name: PMA_ABSOLUTE_URI
value: https://phpmyadmin.k-space.ee/
- name: UPLOAD_LIMIT
value: 10G
volumes:
- name: config
configMap:
name: phpmyadmin
---
apiVersion: codemowers.cloud/v1beta1
kind: OIDCMiddlewareClient
metadata:
name: phpmyadmin
spec:
displayName: phpMyAdmin
uri: 'https://phpmyadmin.k-space.ee'
headerMapping:
email: Remote-Email
groups: Remote-Groups
name: Remote-Name
user: Remote-Username
allowedGroups:
- k-space:floor
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: phpmyadmin
namespace: mysql-clusters
annotations:
kubernetes.io/ingress.class: traefik
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
traefik.ingress.kubernetes.io/router.middlewares: mysql-clusters-phpmyadmin@kubernetescrd
spec:
rules:
- host: phpmyadmin.k-space.ee
@@ -57,15 +93,13 @@ spec:
number: 80
tls:
- hosts:
- phpmyadmin.k-space.ee
secretName: phpmyadmin-tls
- "*.k-space.ee"
---
apiVersion: v1
kind: Service
metadata:
name: phpmyadmin
labels:
app: phpmyadmin
namespace: mysql-clusters
spec:
selector:
app: phpmyadmin
@@ -73,36 +107,3 @@ spec:
- protocol: TCP
port: 80
targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: phpmyadmin
spec:
podSelector:
matchLabels:
app: phpmyadmin
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
ports:
- protocol: TCP
port: 80
egress:
- # Allow connecting to MySQL instance in any namespace
to:
- namespaceSelector: {}
ports:
- port: 3306
- # Allow connecting to any MySQL instance outside the cluster
to:
- ipBlock:
cidr: 0.0.0.0/0
ports:
- protocol: TCP
port: 3306


@@ -0,0 +1,25 @@
---
apiVersion: codemowers.cloud/v1beta1
kind: MysqlDatabaseClass
metadata:
name: shared
annotations:
kubernetes.io/description: "Shared MySQL cluster"
spec:
reclaimPolicy: Retain
shared: true
replicas: 3
routers: 2
storageClass: mysql
podSpec:
containers:
- name: mariadb
image: mirror.gcr.io/library/mariadb:10.9.7@sha256:198c7a5fea3d7285762042a628fe8f83f0a7ccef559605b4cc9502e65210880b
imagePullPolicy: IfNotPresent
nodeSelector:
dedicated: storage
tolerations:
- effect: NoSchedule
key: dedicated
operator: Equal
value: storage


@@ -0,0 +1,20 @@
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: mysql
annotations:
kubernetes.io/description: |
Storage class for MySQL, MariaDB and similar applications that
implement high availability in application layer.
This storage class uses XFS, has no block level redundancy and
has block device level caching disabled.
provisioner: csi.proxmox.sinextra.dev
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
csi.storage.k8s.io/fstype: xfs
storage: ks-pvs
cache: none
ssd: "true"


@@ -0,0 +1,20 @@
# XFS hostpath based local PV-s
```
wget https://openebs.github.io/charts/openebs-operator-lite.yaml
kubectl apply -f openebs-operator-lite.yaml -f storage-class.yaml
```
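Once the operator and storage class are applied, workloads request the class through an ordinary PVC. A minimal sketch, assuming the `openebs-hostpath-xfs` class from `storage-class.yaml` (claim name and size are illustrative):
```
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data            # illustrative name
spec:
  storageClassName: openebs-hostpath-xfs
  accessModes:
    - ReadWriteOnce             # hostpath volumes are node-local
  resources:
    requests:
      storage: 1Gi
```
Because the class uses `volumeBindingMode: WaitForFirstConsumer`, the volume is provisioned only once a pod referencing the claim is scheduled.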
# Raw file based local PV-s
### TO BE DEPRECATED
The manifests were rendered with `helm template` from https://github.com/openebs/rawfile-localpv
and subsequently modified.
```
kubectl create namespace openebs
kubectl apply -n openebs -f rawfile.yaml
```


@@ -0,0 +1,937 @@
# This manifest deploys the OpenEBS control plane components, with associated CRs & RBAC rules
# NOTE: On GKE, deploy the openebs-operator.yaml in admin context
# Create the OpenEBS namespace
apiVersion: v1
kind: Namespace
metadata:
name: openebs
---
# Create Maya Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
name: openebs-maya-operator
namespace: openebs
---
# Define Role that allows operations on K8s pods/deployments
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: openebs-maya-operator
rules:
- apiGroups: ["*"]
resources: ["nodes", "nodes/proxy"]
verbs: ["*"]
- apiGroups: ["*"]
resources: ["namespaces", "services", "pods", "pods/exec", "deployments", "deployments/finalizers", "replicationcontrollers", "replicasets", "events", "endpoints", "configmaps", "secrets", "jobs", "cronjobs"]
verbs: ["*"]
- apiGroups: ["*"]
resources: ["statefulsets", "daemonsets"]
verbs: ["*"]
- apiGroups: ["*"]
resources: ["resourcequotas", "limitranges"]
verbs: ["list", "watch"]
- apiGroups: ["*"]
resources: ["ingresses", "horizontalpodautoscalers", "verticalpodautoscalers", "poddisruptionbudgets", "certificatesigningrequests"]
verbs: ["list", "watch"]
- apiGroups: ["*"]
resources: ["storageclasses", "persistentvolumeclaims", "persistentvolumes"]
verbs: ["*"]
- apiGroups: ["apiextensions.k8s.io"]
resources: ["customresourcedefinitions"]
verbs: [ "get", "list", "create", "update", "delete", "patch"]
- apiGroups: ["openebs.io"]
resources: [ "*"]
verbs: ["*"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["get", "create", "update"]
- nonResourceURLs: ["/metrics"]
verbs: ["get"]
---
# Bind the Service Account with the Role Privileges.
# TODO: Check if default account also needs to be there
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: openebs-maya-operator
subjects:
- kind: ServiceAccount
name: openebs-maya-operator
namespace: openebs
roleRef:
kind: ClusterRole
name: openebs-maya-operator
apiGroup: rbac.authorization.k8s.io
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.5.0
creationTimestamp: null
name: blockdevices.openebs.io
spec:
group: openebs.io
names:
kind: BlockDevice
listKind: BlockDeviceList
plural: blockdevices
shortNames:
- bd
singular: blockdevice
scope: Namespaced
versions:
- additionalPrinterColumns:
- jsonPath: .spec.nodeAttributes.nodeName
name: NodeName
type: string
- jsonPath: .spec.path
name: Path
priority: 1
type: string
- jsonPath: .spec.filesystem.fsType
name: FSType
priority: 1
type: string
- jsonPath: .spec.capacity.storage
name: Size
type: string
- jsonPath: .status.claimState
name: ClaimState
type: string
- jsonPath: .status.state
name: Status
type: string
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1alpha1
schema:
openAPIV3Schema:
description: BlockDevice is the Schema for the blockdevices API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: DeviceSpec defines the properties and runtime status of a BlockDevice
properties:
aggregateDevice:
description: AggregateDevice was intended to store the hierarchical information in cases of LVM. However this is currently not implemented and may need to be re-looked into for better design. To be deprecated
type: string
capacity:
description: Capacity
properties:
logicalSectorSize:
description: LogicalSectorSize is blockdevice logical-sector size in bytes
format: int32
type: integer
physicalSectorSize:
description: PhysicalSectorSize is blockdevice physical-Sector size in bytes
format: int32
type: integer
storage:
description: Storage is the blockdevice capacity in bytes
format: int64
type: integer
required:
- storage
type: object
claimRef:
description: ClaimRef is the reference to the BDC which has claimed this BD
properties:
apiVersion:
description: API version of the referent.
type: string
fieldPath:
description: 'If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.'
type: string
kind:
description: 'Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names'
type: string
namespace:
description: 'Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/'
type: string
resourceVersion:
description: 'Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency'
type: string
uid:
description: 'UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids'
type: string
type: object
details:
description: Details contain static attributes of BD like model,serial, and so forth
properties:
compliance:
description: Compliance is standards/specifications version implemented by device firmware such as SPC-1, SPC-2, etc
type: string
deviceType:
description: DeviceType represents the type of device like sparse, disk, partition, lvm, crypt
enum:
- disk
- partition
- sparse
- loop
- lvm
- crypt
- dm
- mpath
type: string
driveType:
description: DriveType is the type of backing drive, HDD/SSD
enum:
- HDD
- SSD
- Unknown
- ""
type: string
firmwareRevision:
description: FirmwareRevision is the disk firmware revision
type: string
hardwareSectorSize:
description: HardwareSectorSize is the hardware sector size in bytes
format: int32
type: integer
logicalBlockSize:
description: LogicalBlockSize is the logical block size in bytes reported by /sys/class/block/sda/queue/logical_block_size
format: int32
type: integer
model:
description: Model is model of disk
type: string
physicalBlockSize:
description: PhysicalBlockSize is the physical block size in bytes reported by /sys/class/block/sda/queue/physical_block_size
format: int32
type: integer
serial:
description: Serial is serial number of disk
type: string
vendor:
description: Vendor is vendor of disk
type: string
type: object
devlinks:
description: DevLinks contains soft links of a block device like /dev/by-id/... /dev/by-uuid/...
items:
description: DeviceDevLink holds the mapping between type and links like by-id type or by-path type link
properties:
kind:
description: Kind is the type of link like by-id or by-path.
enum:
- by-id
- by-path
type: string
links:
description: Links are the soft links
items:
type: string
type: array
type: object
type: array
filesystem:
description: FileSystem contains mountpoint and filesystem type
properties:
fsType:
description: Type represents the FileSystem type of the block device
type: string
mountPoint:
description: MountPoint represents the mountpoint of the block device.
type: string
type: object
nodeAttributes:
description: NodeAttributes has the details of the node on which BD is attached
properties:
nodeName:
description: NodeName is the name of the Kubernetes node resource on which the device is attached
type: string
type: object
parentDevice:
description: "ParentDevice was intended to store the UUID of the parent Block Device as is the case for partitioned block devices. \n For example: /dev/sda is the parent for /dev/sda1 To be deprecated"
type: string
partitioned:
description: Partitioned represents if BlockDevice has partitions or not (Yes/No) Currently always default to No. To be deprecated
enum:
- "Yes"
- "No"
type: string
path:
description: Path contain devpath (e.g. /dev/sdb)
type: string
required:
- capacity
- devlinks
- nodeAttributes
- path
type: object
status:
description: DeviceStatus defines the observed state of BlockDevice
properties:
claimState:
description: ClaimState represents the claim state of the block device
enum:
- Claimed
- Unclaimed
- Released
type: string
state:
description: State is the current state of the blockdevice (Active/Inactive/Unknown)
enum:
- Active
- Inactive
- Unknown
type: string
required:
- claimState
- state
type: object
type: object
served: true
storage: true
subresources: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.5.0
creationTimestamp: null
name: blockdeviceclaims.openebs.io
spec:
group: openebs.io
names:
kind: BlockDeviceClaim
listKind: BlockDeviceClaimList
plural: blockdeviceclaims
shortNames:
- bdc
singular: blockdeviceclaim
scope: Namespaced
versions:
- additionalPrinterColumns:
- jsonPath: .spec.blockDeviceName
name: BlockDeviceName
type: string
- jsonPath: .status.phase
name: Phase
type: string
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1alpha1
schema:
openAPIV3Schema:
description: BlockDeviceClaim is the Schema for the blockdeviceclaims API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: DeviceClaimSpec defines the request details for a BlockDevice
properties:
blockDeviceName:
description: BlockDeviceName is the reference to the block-device backing this claim
type: string
blockDeviceNodeAttributes:
description: BlockDeviceNodeAttributes is the attributes on the node from which a BD should be selected for this claim. It can include nodename, failure domain etc.
properties:
hostName:
description: HostName represents the hostname of the Kubernetes node resource where the BD should be present
type: string
nodeName:
description: NodeName represents the name of the Kubernetes node resource where the BD should be present
type: string
type: object
deviceClaimDetails:
description: Details of the device to be claimed
properties:
allowPartition:
description: AllowPartition represents whether to claim a full block device or a device that is a partition
type: boolean
blockVolumeMode:
description: 'BlockVolumeMode represents whether to claim a device in Block mode or Filesystem mode. These are use cases of BlockVolumeMode: 1) Not specified: VolumeMode check will not be effective 2) VolumeModeBlock: BD should not have any filesystem or mountpoint 3) VolumeModeFileSystem: BD should have a filesystem and mountpoint. If DeviceFormat is specified then the format should match with the FSType in BD'
type: string
formatType:
description: Format of the device required, eg:ext4, xfs
type: string
type: object
deviceType:
description: DeviceType represents the type of drive like SSD, HDD etc.,
nullable: true
type: string
hostName:
description: Node name from where blockdevice has to be claimed. To be deprecated. Use NodeAttributes.HostName instead
type: string
resources:
description: Resources will help with placing claims on Capacity, IOPS
properties:
requests:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: 'Requests describes the minimum resources required. eg: if storage resource of 10G is requested minimum capacity of 10G should be available TODO for validating'
type: object
required:
- requests
type: object
selector:
description: Selector is used to find block devices to be considered for claiming
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
type: object
status:
description: DeviceClaimStatus defines the observed state of BlockDeviceClaim
properties:
phase:
description: Phase represents the current phase of the claim
type: string
required:
- phase
type: object
type: object
served: true
storage: true
subresources: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
# This is the node-disk-manager related config.
# It can be used to customize the disks probes and filters
apiVersion: v1
kind: ConfigMap
metadata:
name: openebs-ndm-config
namespace: openebs
labels:
openebs.io/component-name: ndm-config
data:
# udev-probe is the default/primary probe; it should be enabled for ndm to run
# filterconfigs contains the configs of filters. To provide a group of include
# and exclude values, add them as a comma-separated string
node-disk-manager.config: |
probeconfigs:
- key: udev-probe
name: udev probe
state: true
- key: seachest-probe
name: seachest probe
state: false
- key: smart-probe
name: smart probe
state: true
filterconfigs:
- key: os-disk-exclude-filter
name: os disk exclude filter
state: true
exclude: "/,/etc/hosts,/boot"
- key: vendor-filter
name: vendor filter
state: true
include: ""
exclude: "CLOUDBYT,OpenEBS"
- key: path-filter
name: path filter
state: true
include: ""
exclude: "/dev/loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/md,/dev/dm-,/dev/rbd,/dev/zd"
# metaconfigs can be used to decorate the block device with different types of labels
# that are available on the node or come in the device properties.
# node labels - the node where the bd is discovered. A whitelist of label prefixes.
# attribute labels - a property of the BD can be added as a ndm label as ndm.io/<property>=<property-value>
metaconfigs:
- key: node-labels
name: node labels
pattern: ""
- key: device-labels
name: device labels
type: ""
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: openebs-ndm
namespace: openebs
labels:
name: openebs-ndm
openebs.io/component-name: ndm
openebs.io/version: 3.5.0
spec:
selector:
matchLabels:
name: openebs-ndm
openebs.io/component-name: ndm
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
name: openebs-ndm
openebs.io/component-name: ndm
openebs.io/version: 3.5.0
spec:
# By default the node-disk-manager will be run on all kubernetes nodes
# If you would like to limit this to only some nodes, say the nodes
# that have storage attached, you could label those node and use
# nodeSelector.
#
# e.g. label the storage nodes with - "openebs.io/nodegroup"="storage-node"
# kubectl label node <node-name> "openebs.io/nodegroup"="storage-node"
#nodeSelector:
# "openebs.io/nodegroup": "storage-node"
serviceAccountName: openebs-maya-operator
hostNetwork: true
# host PID is used to check status of iSCSI Service when the NDM
# API service is enabled
#hostPID: true
containers:
- name: node-disk-manager
image: openebs/node-disk-manager:2.1.0
args:
- -v=4
# The feature-gate is used to enable the new UUID algorithm.
- --feature-gates="GPTBasedUUID"
# Use the partition table UUID instead of creating a single partition to get
# a partition UUID. Requires `GPTBasedUUID` to be enabled as well.
# - --feature-gates="PartitionTableUUID"
# Detect changes to device size, filesystem and mount-points without restart.
# - --feature-gates="ChangeDetection"
# The feature gate is used to start the gRPC API service. The gRPC server
# starts at 9115 port by default. This feature is currently in Alpha state
# - --feature-gates="APIService"
# The feature gate is used to enable NDM, to create blockdevice resources
# for unused partitions on the OS disk
# - --feature-gates="UseOSDisk"
imagePullPolicy: IfNotPresent
securityContext:
privileged: true
volumeMounts:
- name: config
mountPath: /host/node-disk-manager.config
subPath: node-disk-manager.config
readOnly: true
# make udev database available inside container
- name: udev
mountPath: /run/udev
- name: procmount
mountPath: /host/proc
readOnly: true
- name: devmount
mountPath: /dev
- name: basepath
mountPath: /var/openebs/ndm
- name: sparsepath
mountPath: /var/openebs/sparse
env:
# namespace in which NDM is installed will be passed to NDM Daemonset
# as environment variable
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
# pass hostname as env variable using downward API to the NDM container
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# Specify the directory where the sparse files need to be created.
# If not specified, sparse files will not be created.
- name: SPARSE_FILE_DIR
value: "/var/openebs/sparse"
# Size(bytes) of the sparse file to be created.
- name: SPARSE_FILE_SIZE
value: "10737418240"
# Specify the number of sparse files to be created
- name: SPARSE_FILE_COUNT
value: "0"
livenessProbe:
exec:
command:
- pgrep
- "ndm"
initialDelaySeconds: 30
periodSeconds: 60
volumes:
- name: config
configMap:
name: openebs-ndm-config
- name: udev
hostPath:
path: /run/udev
type: Directory
# mount /proc (to access mount file of process 1 of host) inside container
# to read mount-point of disks and partitions
- name: procmount
hostPath:
path: /proc
type: Directory
- name: devmount
# the /dev directory is mounted so that we have access to the devices that
# are connected at runtime of the pod.
hostPath:
path: /dev
type: Directory
- name: basepath
hostPath:
path: /var/openebs/ndm
type: DirectoryOrCreate
- name: sparsepath
hostPath:
path: /var/openebs/sparse
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: openebs-ndm-operator
namespace: openebs
labels:
name: openebs-ndm-operator
openebs.io/component-name: ndm-operator
openebs.io/version: 3.5.0
spec:
selector:
matchLabels:
name: openebs-ndm-operator
openebs.io/component-name: ndm-operator
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
name: openebs-ndm-operator
openebs.io/component-name: ndm-operator
openebs.io/version: 3.5.0
spec:
serviceAccountName: openebs-maya-operator
containers:
- name: node-disk-operator
image: openebs/node-disk-operator:2.1.0
imagePullPolicy: IfNotPresent
env:
- name: WATCH_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
# the service account of the ndm-operator pod
- name: SERVICE_ACCOUNT
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
- name: OPERATOR_NAME
value: "node-disk-operator"
- name: CLEANUP_JOB_IMAGE
value: "openebs/linux-utils:3.5.0"
# OPENEBS_IO_IMAGE_PULL_SECRETS environment variable is used to pass the image pull secrets
# to the cleanup pod launched by NDM operator
#- name: OPENEBS_IO_IMAGE_PULL_SECRETS
# value: ""
livenessProbe:
httpGet:
path: /healthz
port: 8585
initialDelaySeconds: 15
periodSeconds: 20
readinessProbe:
httpGet:
path: /readyz
port: 8585
initialDelaySeconds: 5
periodSeconds: 10
---
# Create NDM cluster exporter deployment.
# This is an optional component and is not required for the basic
# functioning of NDM
apiVersion: apps/v1
kind: Deployment
metadata:
name: openebs-ndm-cluster-exporter
namespace: openebs
labels:
name: openebs-ndm-cluster-exporter
openebs.io/component-name: ndm-cluster-exporter
openebs.io/version: 3.5.0
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
name: openebs-ndm-cluster-exporter
openebs.io/component-name: ndm-cluster-exporter
template:
metadata:
labels:
name: openebs-ndm-cluster-exporter
openebs.io/component-name: ndm-cluster-exporter
openebs.io/version: 3.5.0
spec:
serviceAccountName: openebs-maya-operator
containers:
- name: ndm-cluster-exporter
image: openebs/node-disk-exporter:2.1.0
command:
- /usr/local/bin/exporter
args:
- "start"
- "--mode=cluster"
- "--port=$(METRICS_LISTEN_PORT)"
- "--metrics=/metrics"
ports:
- containerPort: 9100
protocol: TCP
name: metrics
imagePullPolicy: IfNotPresent
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: METRICS_LISTEN_PORT
value: :9100
---
# Create NDM cluster exporter service
# This is optional and required only when
# ndm-cluster-exporter deployment is used
apiVersion: v1
kind: Service
metadata:
name: openebs-ndm-cluster-exporter-service
namespace: openebs
labels:
name: openebs-ndm-cluster-exporter-service
openebs.io/component-name: ndm-cluster-exporter
app: openebs-ndm-exporter
spec:
clusterIP: None
ports:
- name: metrics
port: 9100
targetPort: 9100
selector:
name: openebs-ndm-cluster-exporter
---
# Create NDM node exporter daemonset.
# This is an optional component used for getting disk level
# metrics from each of the storage nodes
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: openebs-ndm-node-exporter
namespace: openebs
labels:
name: openebs-ndm-node-exporter
openebs.io/component-name: ndm-node-exporter
openebs.io/version: 3.5.0
spec:
updateStrategy:
type: RollingUpdate
selector:
matchLabels:
name: openebs-ndm-node-exporter
openebs.io/component-name: ndm-node-exporter
template:
metadata:
labels:
name: openebs-ndm-node-exporter
openebs.io/component-name: ndm-node-exporter
openebs.io/version: 3.5.0
spec:
serviceAccountName: openebs-maya-operator
containers:
- name: node-disk-exporter
image: openebs/node-disk-exporter:2.1.0
command:
- /usr/local/bin/exporter
args:
- "start"
- "--mode=node"
- "--port=$(METRICS_LISTEN_PORT)"
- "--metrics=/metrics"
ports:
- containerPort: 9101
protocol: TCP
name: metrics
imagePullPolicy: IfNotPresent
securityContext:
privileged: true
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: METRICS_LISTEN_PORT
value: :9101
---
# Create NDM node exporter service
# This is optional and required only when
# ndm-node-exporter daemonset is used
apiVersion: v1
kind: Service
metadata:
name: openebs-ndm-node-exporter-service
namespace: openebs
labels:
name: openebs-ndm-node-exporter
openebs.io/component: openebs-ndm-node-exporter
app: openebs-ndm-exporter
spec:
clusterIP: None
ports:
- name: metrics
port: 9101
targetPort: 9101
selector:
name: openebs-ndm-node-exporter
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: openebs-localpv-provisioner
namespace: openebs
labels:
name: openebs-localpv-provisioner
openebs.io/component-name: openebs-localpv-provisioner
openebs.io/version: 3.5.0
spec:
selector:
matchLabels:
name: openebs-localpv-provisioner
openebs.io/component-name: openebs-localpv-provisioner
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
name: openebs-localpv-provisioner
openebs.io/component-name: openebs-localpv-provisioner
openebs.io/version: 3.5.0
spec:
serviceAccountName: openebs-maya-operator
containers:
- name: openebs-provisioner-hostpath
imagePullPolicy: IfNotPresent
image: openebs/provisioner-localpv:3.5.0
args:
- "--bd-time-out=$(BDC_BD_BIND_RETRIES)"
env:
# OPENEBS_IO_K8S_MASTER enables openebs provisioner to connect to K8s
# based on this address. This is ignored if empty.
# This is supported for openebs provisioner version 0.5.2 onwards
#- name: OPENEBS_IO_K8S_MASTER
# value: "http://10.128.0.12:8080"
# OPENEBS_IO_KUBE_CONFIG enables openebs provisioner to connect to K8s
# based on this config. This is ignored if empty.
# This is supported for openebs provisioner version 0.5.2 onwards
#- name: OPENEBS_IO_KUBE_CONFIG
# value: "/home/ubuntu/.kube/config"
# This sets the number of times the provisioner should try
# with a polling interval of 5 seconds, to get the Blockdevice
# Name from a BlockDeviceClaim, before the BlockDeviceClaim
# is deleted. E.g. 12 * 5 seconds = 60 seconds timeout
- name: BDC_BD_BIND_RETRIES
value: "12"
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: OPENEBS_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
# OPENEBS_SERVICE_ACCOUNT provides the service account of this pod as
# environment variable
- name: OPENEBS_SERVICE_ACCOUNT
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
- name: OPENEBS_IO_ENABLE_ANALYTICS
value: "true"
- name: OPENEBS_IO_INSTALLER_TYPE
value: "openebs-operator-lite"
- name: OPENEBS_IO_HELPER_IMAGE
value: "openebs/linux-utils:3.5.0"
- name: OPENEBS_IO_BASE_PATH
value: "/var/openebs/local"
# LEADER_ELECTION_ENABLED is used to enable/disable leader election. By default
# leader election is enabled.
#- name: LEADER_ELECTION_ENABLED
# value: "true"
# OPENEBS_IO_IMAGE_PULL_SECRETS environment variable is used to pass the image pull secrets
# to the helper pod launched by local-pv hostpath provisioner
#- name: OPENEBS_IO_IMAGE_PULL_SECRETS
# value: ""
# The process name used for matching is limited to the 15 characters
# present in the pgrep output.
# So the full name (>15 chars) can't be used here with pgrep. A regular
# expression that matches the truncated command name has to be specified.
# Anchor `^`: matches any string that starts with `provisioner-loc`
# `.*`: matches any string that has `provisioner-loc` followed by zero or more chars
livenessProbe:
exec:
command:
- sh
- -c
- test `pgrep -c "^provisioner-loc.*"` = 1
initialDelaySeconds: 30
periodSeconds: 60
---
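The 15-character truncation described in the liveness probe comment above can be reproduced locally; a small shell sketch (the binary name `provisioner-localpv` is taken from the image used above):
```shell
# Linux truncates a process's comm name to 15 characters, and pgrep matches
# against comm, so the full binary name would never match.
full="provisioner-localpv"                      # binary name from the image above
comm="$(printf '%s' "$full" | cut -c1-15)"      # what pgrep actually sees
echo "$comm"                                    # -> provisioner-loc
# The anchored pattern from the liveness probe matches the truncated name:
echo "$comm" | grep -cE '^provisioner-loc.*'    # -> 1
```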


@@ -0,0 +1,16 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: openebs-hostpath-xfs
annotations:
openebs.io/cas-type: local
cas.openebs.io/config: |
- name: StorageType
value: "hostpath"
- name: BasePath
value: "/var/openebs/local/"
- name: XFSQuota
enabled: "true"
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete


@@ -0,0 +1,10 @@
# Playground
The Playground namespace is accessible to the `Developers` AD group.
A novel log aggregator is being developed in this namespace:
```
kubectl create secret generic -n playground mongodb-application-readwrite-password --from-literal="password=$(cat /dev/urandom | base64 | head -c 30)"
kubectl create secret generic -n playground mongodb-application-readonly-password --from-literal="password=$(cat /dev/urandom | base64 | head -c 30)"
kubectl apply -n playground -f logging.yml -f mongodb-support.yml -f mongoexpress.yml -f networkpolicy-base.yml
```
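The password generation used above can be sanity-checked on its own; a quick sketch confirming the 30-character length (no cluster required):
```shell
# base64-encode a stream of random bytes and keep the first 30 characters,
# exactly as in the kubectl commands above.
password="$(cat /dev/urandom | base64 | head -c 30)"
printf '%s\n' "${#password}"   # -> 30
```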


@@ -0,0 +1,263 @@
---
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
name: mongodb
spec:
additionalMongodConfig:
systemLog:
quiet: true
members: 3
type: ReplicaSet
version: "5.0.13"
security:
authentication:
modes: ["SCRAM"]
users:
- name: readwrite
db: application
passwordSecretRef:
name: mongodb-application-readwrite-password
roles:
- name: readWrite
db: application
scramCredentialsSecretName: mongodb-application-readwrite
- name: readonly
db: application
passwordSecretRef:
name: mongodb-application-readonly-password
roles:
- name: readOnly
db: application
scramCredentialsSecretName: mongodb-application-readonly
statefulSet:
spec:
logLevel: WARN
template:
spec:
containers:
- name: mongod
resources:
requests:
cpu: 100m
memory: 2Gi
limits:
cpu: 2000m
memory: 2Gi
- name: mongodb-agent
resources:
requests:
cpu: 1m
memory: 100Mi
limits: {}
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- mongodb-svc
topologyKey: kubernetes.io/hostname
nodeSelector:
dedicated: monitoring
tolerations:
- key: dedicated
operator: Equal
value: monitoring
effect: NoSchedule
volumeClaimTemplates:
- metadata:
name: logs-volume
spec:
storageClassName: local-path
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 512Mi
- metadata:
name: data-volume
spec:
storageClassName: local-path
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-shipper
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 50%
  selector:
    matchLabels:
      app: log-shipper
  template:
    metadata:
      labels:
        app: log-shipper
    spec:
      serviceAccountName: log-shipper
      containers:
        - name: log-shipper
          image: harbor.k-space.ee/k-space/log-shipper
          securityContext:
            runAsUser: 0
          env:
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: MONGODB_HOST
              valueFrom:
                secretKeyRef:
                  name: mongodb-application-readwrite
                  key: connectionString.standard
          ports:
            - containerPort: 8000
              name: metrics
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: etcmachineid
              mountPath: /etc/machine-id
              readOnly: true
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: etcmachineid
          hostPath:
            path: /etc/machine-id
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: varlog
          hostPath:
            path: /var/log
      tolerations:
        - operator: "Exists"
          effect: "NoExecute"
        - operator: "Exists"
          effect: "NoSchedule"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: logging-log-shipper
subjects:
  - kind: ServiceAccount
    name: log-shipper
    namespace: playground
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: log-shipper
  labels:
    app: log-shipper
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: log-shipper
spec:
  podSelector:
    matchLabels:
      app: log-shipper
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: prometheus-operator
          podSelector:
            matchLabels:
              app.kubernetes.io/name: prometheus
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: mongodb-svc
      ports:
        - port: 27017
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: log-viewer-backend
spec:
  podSelector:
    matchLabels:
      app: log-viewer-backend
  policyTypes:
    - Ingress
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: mongodb-svc
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: traefik
          podSelector:
            matchLabels:
              app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: log-viewer-frontend
spec:
  podSelector:
    matchLabels:
      app: log-viewer-frontend
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: traefik
          podSelector:
            matchLabels:
              app.kubernetes.io/name: traefik
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: log-shipper
spec:
  selector:
    matchLabels:
      app: log-shipper
  podMetricsEndpoints:
    - port: metrics


@@ -0,0 +1,103 @@
---
apiVersion: codemowers.cloud/v1beta1
kind: OIDCClient
metadata:
  name: whoami-oidc
  namespace: whoami-oidc
spec:
  displayName: Whoami OIDC
  uri: https://whoami-oidc.k-space.ee
  redirectUris:
    - https://whoami-oidc.k-space.ee/auth/callback
  grantTypes:
    - authorization_code
    - refresh_token
  responseTypes:
    - code
  availableScopes:
    - openid
    - profile
  pkce: false
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami-oidc
  labels:
    app.kubernetes.io/name: whoami-oidc
spec:
  replicas: 1
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      app.kubernetes.io/name: whoami-oidc
  template:
    metadata:
      labels:
        app.kubernetes.io/name: whoami-oidc
    spec:
      containers:
        - name: whoami-oidc
          image: harbor.k-space.ee/rasmus/oidctest:latest@sha256:55927b9a50580fb087277af25fbc492b5ab4abcc1926c29ed40c190a99ced77b
          env:
            - name: OIDC_ROOT_URL
              value: https://whoami-oidc.k-space.ee
            - name: OIDC_PROVIDER
              valueFrom:
                secretKeyRef:
                  name: oidc-client-whoami-oidc-owner-secrets
                  key: OIDC_GATEWAY_URI
            - name: OIDC_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: oidc-client-whoami-oidc-owner-secrets
                  key: OIDC_CLIENT_ID
            - name: OIDC_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: oidc-client-whoami-oidc-owner-secrets
                  key: OIDC_CLIENT_SECRET
          ports:
            - containerPort: 9009
              name: http
          resources:
            limits:
              cpu: "1"
              memory: "512Mi"
---
apiVersion: v1
kind: Service
metadata:
  name: whoami-oidc
spec:
  selector:
    app.kubernetes.io/name: whoami-oidc
  ports:
    - port: 80
      name: http
      targetPort: http
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami-oidc
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
    external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
  rules:
    - host: whoami-oidc.k-space.ee
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: whoami-oidc
                port:
                  name: http
  tls:
    - hosts:
        - "*.k-space.ee"


@@ -0,0 +1 @@
argocd/appications/argocd-image-updater.yaml


@@ -1,46 +1,58 @@
# Workflow
Most applications in our Kubernetes cluster are managed by ArgoCD.
Most notably, operators are NOT managed by ArgoCD.
## Managing applications
Update apps (see TODO below):
```
for j in asterisk bind camtiler etherpad freescout gitea grafana hackerspace nextcloud nyancat rosdump traefik wiki wildduck; do
cat << EOF >> applications/$j.yaml
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: $j
  namespace: argocd
  annotations:
    # Works with only Kustomize and Helm. Kustomize is easy, see https://github.com/argoproj-labs/argocd-image-updater/tree/master/manifests/base for an example.
    argocd-image-updater.argoproj.io/image-list: TODO:^2 # semver 2.*.*
    argocd-image-updater.argoproj.io/write-back-method: git
spec:
  project: k-space.ee
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: $j
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: $j
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true
EOF
done
find applications -name "*.yaml" -exec kubectl apply -n argocd -f {} \;
```
### Repository secrets
1. Generate keys locally with `ssh-keygen -f argo`
2. Add `argo.pub` in `git.k-space.ee/<your>/<repo>` → Settings → Deploy keys
3. Add `argo` (private key) at https://argocd.k-space.ee/settings/repos along with referenced repo.
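Instead of the web UI, the repository can also be registered declaratively: ArgoCD picks up credentials from any Secret in the `argocd` namespace labeled `argocd.argoproj.io/secret-type: repository` (the same mechanism the Gitea secrets below rely on). A sketch; the secret name `kube-repo` is illustrative:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: kube-repo  # illustrative name
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: git@git.k-space.ee:k-space/kube.git
  sshPrivateKey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    ...contents of the generated `argo` private key...
    -----END OPENSSH PRIVATE KEY-----
```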
## Argo Deployment
To deploy ArgoCD itself:
```bash
helm repo add argo-cd https://argoproj.github.io/argo-helm
kubectl create secret -n argocd generic argocd-secret # Empty secret for sessions
kubectl label -n argocd secret oidc-client-argocd-owner-secrets app.kubernetes.io/part-of=argocd
helm template -n argocd --release-name k6 argo-cd/argo-cd --include-crds -f values.yaml > argocd.yml
kubectl apply -f argocd.yml -f application-extras.yml -f redis.yaml -f monitoring.yml -n argocd
kubectl -n argocd rollout restart deployment/k6-argocd-redis deployment/k6-argocd-repo-server deployment/k6-argocd-server deployment/k6-argocd-notifications-controller statefulset/k6-argocd-application-controller
```
Note: Refer to Authelia README for OIDC secret setup
# Setting up Git secrets
Generate an SSH key to access Gitea:
```
ssh-keygen -t ecdsa -f id_ecdsa -C argocd.k-space.ee -P ''
kubectl -n argocd create secret generic gitea-kube \
    --from-literal=type=git \
    --from-literal=url=git@git.k-space.ee:k-space/kube \
    --from-file=sshPrivateKey=id_ecdsa
kubectl -n argocd create secret generic gitea-kube-staging \
    --from-literal=type=git \
    --from-literal=url=git@git.k-space.ee:k-space/kube-staging \
    --from-file=sshPrivateKey=id_ecdsa
kubectl label -n argocd secret gitea-kube argocd.argoproj.io/secret-type=repository
kubectl label -n argocd secret gitea-kube-staging argocd.argoproj.io/secret-type=repository
rm -fv id_ecdsa
```
Have a Gitea admin reset the password for the `argocd` user and log in with that account.
Add the SSH key from `id_ecdsa.pub` for the `argocd` user.
Delete any other SSH keys associated with the `argocd` Gitea user.
WARN: ArgoCD doesn't host its own Redis; Dragonfly must be able to cold-start independently.
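The warning above concerns the `externalRedis` wiring in values.yaml, which points ArgoCD at the Dragonfly instance defined in redis.yaml; the relevant stanza looks roughly like this:

```yaml
dex:
  enabled: false  # OIDC is handled by the upstream provider, not Dex
redis:
  enabled: false  # no bundled Redis
redis-ha:
  enabled: false
externalRedis:
  host: argocd-redis            # Dragonfly service from redis.yaml
  existingSecret: argocd-redis  # SecretClaim provides the `redis-password` key
```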


@@ -0,0 +1,38 @@
---
apiVersion: codemowers.cloud/v1beta1
kind: OIDCClient
metadata:
  name: argocd
  namespace: argocd
spec:
  displayName: Argo CD
  uri: https://argocd.k-space.ee
  redirectUris:
    - https://argocd.k-space.ee/auth/callback
    - http://localhost:8085/auth/callback
  allowedGroups:
    - k-space:kubernetes:admins
  grantTypes:
    - authorization_code
    - refresh_token
  responseTypes:
    - code
  availableScopes:
    - openid
    - profile
  pkce: false
---
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  namespace: argocd
  name: k-space.ee
spec:
  clusterResourceWhitelist:
    - group: '*'
      kind: '*'
  destinations:
    - namespace: '*'
      server: '*'
  sourceRepos:
    - '*'

@@ -0,0 +1,18 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd-applications
  namespace: argocd
spec:
  project: k-space.ee
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: argocd/applications
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: argocd
  syncPolicy:
    automated:
      prune: false

@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd-image-updater
  namespace: argocd
spec:
  project: k-space.ee
  source:
    repoURL: 'https://github.com/argoproj-labs/argocd-image-updater.git'
    path: manifests/base
    targetRevision: stable
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true

@@ -1,17 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: bind
  namespace: argocd
spec:
  project: k-space.ee
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: bind
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: bind
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true

@@ -0,0 +1,21 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cert-manager
  namespace: argocd
spec:
  project: k-space.ee
  source:
    # also depends on git@git.k-space.ee:secretspace/kube.git
    repoURL: git@git.k-space.ee:k-space/kube.git
    targetRevision: HEAD
    path: cert-manager
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: cert-manager
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true

@@ -0,0 +1,23 @@
# See [/dragonfly/README.md](/dragonfly-operator-system/README.md)
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: dragonfly # replaces redis and keydb
  namespace: argocd
spec:
  project: k-space.ee
  source:
    repoURL: https://github.com/dragonflydb/dragonfly-operator
    targetRevision: v1.1.11 # https://github.com/dragonflydb/dragonfly-operator/releases
    path: manifests
    directory:
      include: 'dragonfly-operator.yaml'
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: dragonfly-operator-system
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true

@@ -1,23 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: elastic-system
  namespace: argocd
spec:
  project: default
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: elastic-system
  syncPolicy:
    automated: {}
    syncOptions:
      - CreateNamespace=true
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: elastic-system
    targetRevision: HEAD
  ignoreDifferences:
    - group: admissionregistration.k8s.io
      kind: ValidatingWebhookConfiguration
      jqPathExpressions:
        - '.webhooks[]?.clientConfig.caBundle'

@@ -1,10 +1,11 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: etherpad
  namespace: argocd
spec:
  project: k-space.ee
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: etherpad
@@ -13,5 +14,7 @@ spec:
    server: 'https://kubernetes.default.svc'
    namespace: etherpad
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true

@@ -1,17 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: external-dns
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: external-dns
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: external-dns
  syncPolicy:
    syncOptions:
      - CreateNamespace=true

@@ -0,0 +1,21 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: external-snapshotter
  namespace: argocd
spec:
  project: k-space.ee
  source:
    # also depends on git@git.k-space.ee:secretspace/kube.git
    repoURL: git@git.k-space.ee:k-space/kube.git
    targetRevision: HEAD
    path: external-snapshotter
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: kube-system
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true

@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: freescout
  namespace: argocd
spec:
  project: k-space.ee
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: freescout
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: freescout
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true

@@ -0,0 +1,21 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: frigate
  namespace: argocd
spec:
  project: k-space.ee
  source:
    # also depends on git@git.k-space.ee:secretspace/kube.git
    repoURL: git@git.k-space.ee:k-space/kube.git
    targetRevision: HEAD
    path: frigate
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: frigate
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true

@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gitea
  namespace: argocd
spec:
  project: k-space.ee
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: gitea
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: gitea
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true

@@ -0,0 +1,21 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: grafana
  namespace: argocd
spec:
  project: k-space.ee
  source:
    # also depends on git@git.k-space.ee:secretspace/kube.git
    repoURL: git@git.k-space.ee:k-space/kube.git
    targetRevision: HEAD
    path: grafana
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: grafana
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true

@@ -1,17 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: hackerspace
  namespace: argocd
spec:
  project: k-space.ee
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: hackerspace
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: hackerspace
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true

@@ -0,0 +1,21 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: harbor-operator
  namespace: argocd
spec:
  project: k-space.ee
  source:
    # also depends on git@git.k-space.ee:secretspace/kube.git
    repoURL: git@git.k-space.ee:k-space/kube.git
    targetRevision: HEAD
    path: harbor-operator
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: harbor-operator
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true

@@ -1,17 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: harbor
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: harbor
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: harbor
  syncPolicy:
    syncOptions:
      - CreateNamespace=true

@@ -1,17 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: keel
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: keel
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: keel
  syncPolicy:
    syncOptions:
      - CreateNamespace=true

@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kube-system
  namespace: argocd
spec:
  project: k-space.ee
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: kube-system
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: kube-system
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true

@@ -1,10 +1,11 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kubernetes-dashboard
  namespace: argocd
spec:
  project: k-space.ee
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: kubernetes-dashboard
@@ -13,5 +14,7 @@ spec:
    server: 'https://kubernetes.default.svc'
    namespace: kubernetes-dashboard
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true

@@ -1,17 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: logging
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: logging
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: logging
  syncPolicy:
    syncOptions:
      - CreateNamespace=true

@@ -1,22 +1,21 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: metallb-system
  namespace: argocd
spec:
  project: k-space.ee
  source:
    # also depends on git@git.k-space.ee:secretspace/kube.git
    repoURL: git@git.k-space.ee:k-space/kube.git
    targetRevision: HEAD
    path: metallb-system
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: metallb-system
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true

@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: minio-clusters
  namespace: argocd
spec:
  project: k-space.ee
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: minio-clusters
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: minio-clusters
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true

@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: monitoring
  namespace: argocd
spec:
  project: k-space.ee
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: monitoring
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: monitoring
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true

@@ -1,17 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: mysql-operator
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: mysql-operator
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: mysql-operator
  syncPolicy:
    syncOptions:
      - CreateNamespace=true

@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nextcloud
  namespace: argocd
spec:
  project: k-space.ee
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: nextcloud
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: nextcloud
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true

@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nyancat
  namespace: argocd
spec:
  project: k-space.ee
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: nyancat
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: nyancat
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true

@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: members
  namespace: argocd
spec:
  project: k-space.ee
  source:
    repoURL: 'git@git.k-space.ee:secretspace/members.git'
    path: members
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: passmower
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true

@@ -1,17 +1,18 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: passmower
  namespace: argocd
spec:
  project: k-space.ee
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: passmower
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: passmower
  syncPolicy:
    automated:
      prune: true

@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: pgweb
  namespace: argocd
spec:
  project: k-space.ee
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: pgweb
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: pgweb
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true

@@ -1,17 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: phpmyadmin
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: phpmyadmin
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: phpmyadmin
  syncPolicy:
    syncOptions:
      - CreateNamespace=true

@@ -0,0 +1,24 @@
# Note: Do not put any Prometheus instances or exporters in this namespace, instead have them in `monitoring` namespace
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: prometheus-operator
  namespace: argocd
spec:
  project: k-space.ee
  source:
    repoURL: https://github.com/prometheus-operator/prometheus-operator.git
    targetRevision: v0.82.0
    path: .
    kustomize:
      namespace: prometheus-operator
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: prometheus-operator
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true
      - ServerSideApply=true # Resource is too big to fit in 262144 bytes allowed annotation size.

@@ -1,14 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: prometheus-operator
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: prometheus-operator
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: prometheus-operator

@@ -1,17 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: reloader
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: reloader
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: reloader
  syncPolicy:
    syncOptions:
      - CreateNamespace=true

@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: ripe87
  namespace: argocd
spec:
  project: k-space.ee
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: ripe87
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: ripe87
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true

@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: rook-ceph
  namespace: argocd
spec:
  project: k-space.ee
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: rook-ceph
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: rook-ceph
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true

@@ -1,10 +1,11 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: rosdump
  namespace: argocd
spec:
  project: k-space.ee
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: rosdump
@@ -13,5 +14,7 @@ spec:
    server: 'https://kubernetes.default.svc'
    namespace: rosdump
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true

@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: secret-claim-operator
  namespace: argocd
spec:
  project: k-space.ee
  source:
    repoURL: https://github.com/codemowers/operatorlib
    path: samples/secret-claim-operator
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: secret-claim-operator
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true

@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: signs
  namespace: argocd
spec:
  project: k-space.ee
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: signs
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: signs
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true

@@ -0,0 +1,24 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: tigera-operator
  namespace: argocd
spec:
  project: k-space.ee
  source:
    # also depends on git@git.k-space.ee:secretspace/kube.git
    repoURL: git@git.k-space.ee:k-space/kube.git
    targetRevision: HEAD
    path: tigera-operator
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: tigera-operator
  # also houses calico-system and calico-apiserver
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true
      - ServerSideApply=true # Resource is too big to fit in 262144 bytes allowed annotation size.
      - Force=true # `--force-conflicts`, according to https://docs.tigera.io/calico/latest/operations/upgrading/kubernetes-upgrade

@@ -0,0 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: traefik
  namespace: argocd
spec:
  project: k-space.ee
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: traefik
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: traefik
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true

@@ -1,17 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: whoami
  namespace: argocd
spec:
  project: k-space.ee
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: whoami
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: whoami
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true

@@ -1,17 +1,20 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: wiki
  namespace: argocd
spec:
  project: k-space.ee
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: wiki
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: wiki
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true

@@ -1,10 +1,11 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: wildduck
  namespace: argocd
spec:
  project: k-space.ee
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: wildduck
@@ -13,5 +14,7 @@ spec:
    server: 'https://kubernetes.default.svc'
    namespace: wildduck
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true

@@ -0,0 +1,21 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: woodpecker
  namespace: argocd
spec:
  project: k-space.ee
  source:
    # also depends on git@git.k-space.ee:secretspace/kube.git
    repoURL: git@git.k-space.ee:k-space/kube.git
    targetRevision: HEAD
    path: woodpecker
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: woodpecker
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true

argocd/deploy_key.pub Normal file

@@ -0,0 +1,2 @@
# used for git.k-space: k-space/kube, secretspace/kube, secretspace/members
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOxYpFf85Vnxw7WNb/V5dtZT0PJ4VbBhdBNscDd8TVv/ argocd.k-space.ee

argocd/redis.yaml Normal file

@@ -0,0 +1,50 @@
---
apiVersion: codemowers.cloud/v1beta1
kind: SecretClaim
metadata:
  name: argocd-redis
  namespace: argocd
spec:
  size: 32
  mapping:
    - key: redis-password
      value: "%(plaintext)s"
    - key: REDIS_URI
      value: "redis://:%(plaintext)s@argocd-redis"
---
apiVersion: dragonflydb.io/v1alpha1
kind: Dragonfly
metadata:
  name: argocd-redis
  namespace: argocd
spec:
  authentication:
    passwordFromSecret:
      key: redis-password
      name: argocd-redis
  replicas: 3
  resources:
    limits:
      cpu: 1000m
      memory: 1Gi
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: argocd-redis
          app.kubernetes.io/part-of: dragonfly
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: argocd-redis
  namespace: argocd
spec:
  selector:
    matchLabels:
      app: argocd-redis
      app.kubernetes.io/part-of: dragonfly
  podMetricsEndpoints:
    - port: admin


@@ -1,79 +1,29 @@
global:
logLevel: warn
domain: argocd.k-space.ee
# We use Authelia OIDC instead of Dex
dex:
enabled: false
# Maybe one day switch to Redis HA?
redis:
enabled: false
redis-ha:
enabled: false
externalRedis:
host: argocd-redis
existingSecret: argocd-redis
server:
# HTTPS is implemented by Traefik
extraArgs:
- --insecure
ingress:
enabled: true
annotations:
cert-manager.io/cluster-issuer: default
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
hosts:
- argocd.k-space.ee
tls:
extraTls:
- hosts:
- argocd.k-space.ee
secretName: argocd-server-tls
configEnabled: true
config:
admin.enabled: "false"
url: https://argocd.k-space.ee
application.instanceLabelKey: argocd.argoproj.io/instance
oidc.config: |
name: Authelia
issuer: https://auth.k-space.ee
clientID: argocd
cliClientID: argocd
clientSecret: $oidc.config.clientSecret
requestedIDTokenClaims:
groups:
essential: true
requestedScopes:
- openid
- profile
- email
- groups
resource.customizations: |
# https://github.com/argoproj/argo-cd/issues/1704
networking.k8s.io/Ingress:
health.lua: |
hs = {}
hs.status = "Healthy"
return hs
# Members of ArgoCD Admins group in AD/Samba are allowed to administer Argo
rbacConfig:
policy.default: role:readonly
policy.csv: |
# Map AD groups to ArgoCD roles
g, Developers, role:developers
g, ArgoCD Admins, role:admin
# Allow developers to read objects
p, role:developers, applications, get, */*, allow
p, role:developers, certificates, get, *, allow
p, role:developers, clusters, get, *, allow
p, role:developers, repositories, get, *, allow
p, role:developers, projects, get, *, allow
p, role:developers, accounts, get, *, allow
p, role:developers, gpgkeys, get, *, allow
p, role:developers, logs, get, */*, allow
p, role:developers, applications, restart, default/camtiler, allow
p, role:developers, applications, override, default/camtiler, allow
p, role:developers, applications, action/apps/Deployment/restart, default/camtiler, allow
p, role:developers, applications, sync, default/camtiler, allow
p, role:developers, applications, update, default/camtiler, allow
- "*.k-space.ee"
metrics:
enabled: true
@@ -95,11 +45,64 @@ controller:
enabled: true
configs:
params:
server.insecure: true
rbac:
policy.default: role:admin
policy.csv: |
# Map AD groups to ArgoCD roles
g, Developers, role:developers
g, ArgoCD Admins, role:admin
# Allow developers to read objects
p, role:developers, applications, get, */*, allow
p, role:developers, certificates, get, *, allow
p, role:developers, clusters, get, *, allow
p, role:developers, repositories, get, *, allow
p, role:developers, projects, get, *, allow
p, role:developers, accounts, get, *, allow
p, role:developers, gpgkeys, get, *, allow
p, role:developers, logs, get, */*, allow
p, role:developers, applications, restart, default/camtiler, allow
p, role:developers, applications, override, default/camtiler, allow
p, role:developers, applications, action/apps/Deployment/restart, default/camtiler, allow
p, role:developers, applications, sync, default/camtiler, allow
p, role:developers, applications, update, default/camtiler, allow
# argocd-image-updater
p, role:image-updater, applications, get, */*, allow
p, role:image-updater, applications, update, */*, allow
g, image-updater, role:image-updater
cm:
kustomize.buildOptions: --enable-helm
admin.enabled: "false"
resource.customizations: |
# https://github.com/argoproj/argo-cd/issues/1704
networking.k8s.io/Ingress:
health.lua: |
hs = {}
hs.status = "Healthy"
return hs
apiextensions.k8s.io/CustomResourceDefinition:
ignoreDifferences: |
jsonPointers:
- "x-kubernetes-validations"
oidc.config: |
name: OpenID Connect
issuer: https://auth.k-space.ee/
clientID: $oidc-client-argocd-owner-secrets:OIDC_CLIENT_ID
cliClientID: $oidc-client-argocd-owner-secrets:OIDC_CLIENT_ID
clientSecret: $oidc-client-argocd-owner-secrets:OIDC_CLIENT_SECRET
requestedIDTokenClaims:
groups:
essential: true
requestedScopes:
- openid
- profile
- email
- groups
secret:
createSecret: false
knownHosts:
data:
ssh_known_hosts: |
ssh:
knownHosts: |
# Copy-pasted from `ssh-keyscan git.k-space.ee`
git.k-space.ee ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCF1+/TDRXuGwsu4SZQQwQuJusb7W1OciGAQp/ZbTTvKD+0p7fV6dXyUlWjdFmITrFNYDreDnMiOS+FvE62d2Z0=
git.k-space.ee ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDsLyRuubdIUnTKEqOipu+9x+FforrC8+oxulVrl0ECgdIRBQnLQXIspTNwuC3MKJ4z+DPbndSt8zdN33xWys8UNEs3V5/W6zsaW20tKiaX75WK5eOL4lIDJi/+E97+c0aZBXamhxTrgkRVJ5fcAkY6C5cKEmVM5tlke3v3ihLq78/LpJYv+P947NdnthYE2oc+XGp/elZ0LNfWRPnd///+ykbwWirvQm+iiDz7PMVKkb+Q7l3vw4+zneKJWAyFNrm+aewyJV9lFZZJuHliwlHGTriSf6zhMAWyJzvYqDAN6iT5yi9KGKw60J6vj2GLuK4ULVblTyP9k9+3iELKSWW5

authelia/.gitignore vendored

@@ -1,2 +0,0 @@
application-secrets.y*ml
oidc-secrets.y*ml
