Compare commits


No commits in common. "master" and "master" have entirely different histories.

172 changed files with 1951 additions and 3618 deletions

.gitignore (vendored, 4 changes)

@@ -5,10 +5,6 @@
 *.save
 *.1
-# Kustomize with Helm and secrets:
-charts/
-*.env
 ### IntelliJ IDEA ###
 .idea
 *.iml

@@ -35,6 +35,7 @@ users:
         - get-token
         - --oidc-issuer-url=https://auth.k-space.ee/
         - --oidc-client-id=passmower.kubelogin
+        - --oidc-use-pkce
         - --oidc-extra-scope=profile,email,groups
         - --listen-address=127.0.0.1:27890
       command: kubectl
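
For orientation, the flag added above belongs to a kubelogin `exec` credential plugin entry in the user's kubeconfig. A minimal sketch of the surrounding user block, assuming the stock kubelogin setup (the `oidc-login` subcommand and the user name are assumptions, not visible in this hunk):

```yaml
users:
  - name: oidc@k6                 # illustrative name
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: kubectl
        args:
          - oidc-login            # assumed: kubelogin installed as a kubectl plugin
          - get-token
          - --oidc-issuer-url=https://auth.k-space.ee/
          - --oidc-client-id=passmower.kubelogin
          - --oidc-use-pkce
          - --oidc-extra-scope=profile,email,groups
          - --listen-address=127.0.0.1:27890
```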

@@ -6,17 +6,15 @@ Kubernetes manifests, Ansible [playbooks](ansible/README.md), and documentation
 - Debugging Kubernetes [on Wiki](https://wiki.k-space.ee/en/hosting/debugging-kubernetes)
 - Need help? → [`#kube`](https://k-space-ee.slack.com/archives/C02EYV1NTM2)
-Jump to docs: [inventory-app](hackerspace/README.md) / [cameras](_disabled/camtiler/README.md) / [doors](https://wiki.k-space.ee/en/hosting/doors) / [list of apps](https://auth.k-space.ee) // [all infra](ansible/inventory.yml) / [network](https://wiki.k-space.ee/en/hosting/network/sensitive) / [retro](https://wiki.k-space.ee/en/hosting/retro) / [non-infra](https://wiki.k-space.ee)
+Jump to docs: [inventory-app](hackerspace/README.md) / [cameras](camtiler/README.md) / [doors](https://wiki.k-space.ee/en/hosting/doors) / [list of apps](https://auth.k-space.ee) // [all infra](ansible/inventory.yml) / [network](https://wiki.k-space.ee/en/hosting/network/sensitive) / [retro](https://wiki.k-space.ee/en/hosting/retro) / [non-infra](https://wiki.k-space.ee)
 Tip: Search the repo for `kind: xyz` for examples.
 ## Supporting services
-- Build [Git](https://git.k-space.ee) repositories with [Woodpecker](https://woodpecker.k-space.ee)[^nodrone].
+- Build [Git](https://git.k-space.ee) repositories with [Woodpecker](https://woodpecker.k-space.ee).
 - Passmower: Authz with `kind: OIDCClient` (or `kind: OIDCMiddlewareClient`[^authz]).
 - Traefik[^nonginx]: Expose services with `kind: Service` + `kind: Ingress` (TLS and DNS **included**).
-[^nodrone]: Replaces Drone CI.
 ### Additional
 - bind: Manage _additional_ DNS records with `kind: DNSEndpoint`.
 - [Prometheus](https://wiki.k-space.ee/en/hosting/monitoring): Collect metrics with `kind: PodMonitor` (alerts with `kind: PrometheusRule`).
@@ -34,20 +32,19 @@ Static routes for 193.40.103.36/30 have been added in pve nodes to make them com
 <!-- Linked to by https://wiki.k-space.ee/e/en/hosting/storage -->
 ### Databases / -stores:
-- KeyDB: `kind: KeydbClaim` (replaces Redis[^redisdead])
 - Dragonfly: `kind: Dragonfly` (replaces Redis[^redisdead])
 - Longhorn: `storageClassName: longhorn` (filesystem storage)
 - Mongo[^mongoproblems]: `kind: MongoDBCommunity` (NAS* `inventory-mongodb`)
 - Minio S3: `kind: MinioBucketClaim` with `class: dedicated` (NAS*: `class: external`)
 - MariaDB*: search for `mysql`, `mariadb`[^mariadb] (replaces MySQL)
 - Postgres*: hardcoded to [harbor/application.yml](harbor/application.yml)
-- Seeded secrets: `kind: SecretClaim` (generates random secret in templated format)
-- Secrets in git: https://git.k-space.ee/secretspace (members personal info, API credentials, see argocd/deploy_key.pub comment)
 \* External, hosted directly on [nas.k-space.ee](https://wiki.k-space.ee/en/hosting/storage)
 [^mariadb]: As of 2024-07-30 used by auth, authelia, bitwarden, etherpad, freescout, git, grafana, nextcloud, wiki, woodpecker
-[^redisdead]: Redis has been replaced as redis-operatori couldn't handle itself: didn't reconcile after reboots, master URI was empty, and clients complained about missing masters. Dragonfly replaces KeyDB.
+[^redisdead]: Redis has been replaced as redis-operatori couldn't handle itself: didn't reconcile after reboots, master URI was empty, and clients complained about missing masters. ArgoCD still hosts its own Redis.
 [^mongoproblems]: Mongo problems: Incompatible with rawfile csi (wiredtiger.wt corrupts), complicated resizing (PVCs from statefulset PVC template).
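
To illustrate the claim pattern in the list above, a minimal `MinioBucketClaim` sketch; the spec layout follows the codemowers operator convention and the `capacity` field is an assumption, not taken from this diff:

```yaml
# Hypothetical bucket claim; verify field names against the operator's CRD.
apiVersion: codemowers.cloud/v1beta1
kind: MinioBucketClaim
metadata:
  name: example
spec:
  capacity: 1Gi      # assumed field
  class: dedicated   # `class: external` targets the NAS-hosted Minio
```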

@@ -1,15 +0,0 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: camtiler
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: camtiler
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: camtiler

@@ -1,382 +0,0 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: discourse
annotations:
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
spec:
tls:
- hosts:
- "*.k-space.ee"
secretName:
rules:
- host: "discourse.k-space.ee"
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: discourse
port:
name: http
---
apiVersion: v1
kind: Service
metadata:
name: discourse
spec:
type: ClusterIP
ipFamilyPolicy: SingleStack
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
selector:
app.kubernetes.io/instance: discourse
app.kubernetes.io/name: discourse
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: discourse
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: discourse
annotations:
reloader.stakater.com/auto: "true"
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/instance: discourse
app.kubernetes.io/name: discourse
strategy:
type: Recreate
template:
metadata:
labels:
app.kubernetes.io/instance: discourse
app.kubernetes.io/name: discourse
spec:
serviceAccountName: discourse
securityContext:
fsGroup: 0
fsGroupChangePolicy: Always
initContainers:
containers:
- name: discourse
image: docker.io/bitnami/discourse:3.3.2-debian-12-r0
imagePullPolicy: "IfNotPresent"
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- CHOWN
- SYS_CHROOT
- FOWNER
- SETGID
- SETUID
- DAC_OVERRIDE
drop:
- ALL
privileged: false
readOnlyRootFilesystem: false
runAsGroup: 0
runAsNonRoot: false
runAsUser: 0
seLinuxOptions: {}
seccompProfile:
type: RuntimeDefault
env:
- name: BITNAMI_DEBUG
value: "true"
- name: DISCOURSE_USERNAME
valueFrom:
secretKeyRef:
name: discourse-password
key: username
- name: DISCOURSE_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-password
key: password
- name: DISCOURSE_PORT_NUMBER
value: "8080"
- name: DISCOURSE_EXTERNAL_HTTP_PORT_NUMBER
value: "80"
- name: DISCOURSE_DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-postgresql
key: password
- name: POSTGRESQL_CLIENT_CREATE_DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-postgres-superuser
key: password
- name: POSTGRESQL_CLIENT_POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-postgres-superuser
key: password
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-redis
key: redis-password
envFrom:
- configMapRef:
name: discourse
- secretRef:
name: discourse-email
ports:
- name: http
containerPort: 8080
protocol: TCP
livenessProbe:
tcpSocket:
port: http
initialDelaySeconds: 500
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
readinessProbe:
httpGet:
path: /srv/status
port: http
initialDelaySeconds: 100
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
resources:
limits:
cpu: "6.0"
ephemeral-storage: 2Gi
memory: 12288Mi
requests:
cpu: "1.0"
ephemeral-storage: 50Mi
memory: 3072Mi
volumeMounts:
- name: discourse-data
mountPath: /bitnami/discourse
subPath: discourse
- name: sidekiq
image: docker.io/bitnami/discourse:3.3.2-debian-12-r0
imagePullPolicy: "IfNotPresent"
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- CHOWN
- SYS_CHROOT
- FOWNER
- SETGID
- SETUID
- DAC_OVERRIDE
drop:
- ALL
privileged: false
readOnlyRootFilesystem: false
runAsGroup: 0
runAsNonRoot: false
runAsUser: 0
seLinuxOptions: {}
seccompProfile:
type: RuntimeDefault
command:
- /opt/bitnami/scripts/discourse/entrypoint.sh
args:
- /opt/bitnami/scripts/discourse-sidekiq/run.sh
env:
- name: BITNAMI_DEBUG
value: "true"
- name: DISCOURSE_USERNAME
valueFrom:
secretKeyRef:
name: discourse-password
key: username
- name: DISCOURSE_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-password
key: password
- name: DISCOURSE_DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-postgresql
key: password
- name: DISCOURSE_POSTGRESQL_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-postgres-superuser
key: password
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-redis
key: redis-password
envFrom:
- configMapRef:
name: discourse
- secretRef:
name: discourse-email
livenessProbe:
exec:
command: ["/bin/sh", "-c", "pgrep -f ^sidekiq"]
initialDelaySeconds: 500
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
readinessProbe:
exec:
command: ["/bin/sh", "-c", "pgrep -f ^sidekiq"]
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
resources:
limits:
cpu: 750m
ephemeral-storage: 2Gi
memory: 768Mi
requests:
cpu: 500m
ephemeral-storage: 50Mi
memory: 512Mi
volumeMounts:
- name: discourse-data
mountPath: /bitnami/discourse
subPath: discourse
volumes:
- name: discourse-data
persistentVolumeClaim:
claimName: discourse-data
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: discourse-data
namespace: discourse
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "3Gi"
storageClassName: "proxmox-nas"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: discourse
namespace: discourse
data:
DISCOURSE_HOST: "discourse.k-space.ee"
DISCOURSE_SKIP_INSTALL: "yes"
DISCOURSE_PRECOMPILE_ASSETS: "no"
DISCOURSE_SITE_NAME: "K-Space Discourse"
DISCOURSE_USERNAME: "k-space"
DISCOURSE_EMAIL: "dos4dev@k-space.ee"
DISCOURSE_REDIS_HOST: "discourse-redis"
DISCOURSE_REDIS_PORT_NUMBER: "6379"
DISCOURSE_DATABASE_HOST: "discourse-postgres-rw"
DISCOURSE_DATABASE_PORT_NUMBER: "5432"
DISCOURSE_DATABASE_NAME: "discourse"
DISCOURSE_DATABASE_USER: "discourse"
POSTGRESQL_CLIENT_DATABASE_HOST: "discourse-postgres-rw"
POSTGRESQL_CLIENT_DATABASE_PORT_NUMBER: "5432"
POSTGRESQL_CLIENT_POSTGRES_USER: "postgres"
POSTGRESQL_CLIENT_CREATE_DATABASE_NAME: "discourse"
POSTGRESQL_CLIENT_CREATE_DATABASE_EXTENSIONS: "hstore,pg_trgm"
---
apiVersion: codemowers.cloud/v1beta1
kind: OIDCClient
metadata:
name: discourse
namespace: discourse
spec:
displayName: Discourse
uri: https://discourse.k-space.ee
redirectUris:
- https://discourse.k-space.ee/auth/oidc/callback
allowedGroups:
- k-space:floor
- k-space:friends
grantTypes:
- authorization_code
- refresh_token
responseTypes:
- code
availableScopes:
- openid
- profile
pkce: false
---
apiVersion: codemowers.cloud/v1beta1
kind: SecretClaim
metadata:
name: discourse-redis
namespace: discourse
spec:
size: 32
mapping:
- key: redis-password
value: "%(plaintext)s"
- key: REDIS_URI
value: "redis://:%(plaintext)s@discourse-redis"
---
apiVersion: dragonflydb.io/v1alpha1
kind: Dragonfly
metadata:
name: discourse-redis
namespace: discourse
spec:
authentication:
passwordFromSecret:
key: redis-password
name: discourse-redis
replicas: 3
resources:
limits:
cpu: 1000m
memory: 1Gi
topologySpreadConstraints:
- maxSkew: 1
topologyKey: topology.kubernetes.io/zone
whenUnsatisfiable: DoNotSchedule
labelSelector:
matchLabels:
app: discourse-redis
app.kubernetes.io/part-of: dragonfly
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
name: discourse-postgres
namespace: discourse
spec:
instances: 1
enableSuperuserAccess: true
bootstrap:
initdb:
database: discourse
owner: discourse
secret:
name: discourse-postgresql
dataChecksums: true
encoding: 'UTF8'
storage:
size: 10Gi
storageClass: postgres

@@ -1,11 +1,68 @@
+# Workflow
 Most applications in our Kubernetes cluster are managed by ArgoCD.
 Most notably operators are NOT managed by ArgoCD.
-## Managing applications
-Update apps (see TODO below):
+Adding to `applications/`: `kubectl apply -f newapp.yaml`
+# Deployment
+To deploy ArgoCD:
+```bash
+helm repo add argo-cd https://argoproj.github.io/argo-helm
+kubectl create secret -n argocd generic argocd-secret # Initialize empty secret for sessions
+helm template -n argocd --release-name k6 argo-cd/argo-cd --include-crds -f values.yaml > argocd.yml
+kubectl apply -f argocd.yml -f application-extras.yml -n argocd
+kubectl -n argocd rollout restart deployment/k6-argocd-redis
+kubectl -n argocd rollout restart deployment/k6-argocd-repo-server
+kubectl -n argocd rollout restart deployment/k6-argocd-server
+kubectl -n argocd rollout restart deployment/k6-argocd-notifications-controller
+kubectl -n argocd rollout restart statefulset/k6-argocd-application-controller
+kubectl label -n argocd secret oidc-client-argocd-owner-secrets app.kubernetes.io/part-of=argocd
+```
+# Setting up Git secrets
+Generate SSH key to access Gitea:
 ```
-for j in asterisk bind camtiler etherpad freescout gitea grafana hackerspace nextcloud nyancat rosdump traefik wiki wildduck; do
+ssh-keygen -t ecdsa -f id_ecdsa -C argocd.k-space.ee -P ''
+kubectl -n argocd create secret generic gitea-kube \
+  --from-literal=type=git \
+  --from-literal=url=git@git.k-space.ee:k-space/kube \
+  --from-file=sshPrivateKey=id_ecdsa
+kubectl -n argocd create secret generic gitea-kube-staging \
+  --from-literal=type=git \
+  --from-literal=url=git@git.k-space.ee:k-space/kube-staging \
+  --from-file=sshPrivateKey=id_ecdsa
+kubectl -n argocd create secret generic gitea-kube-members \
+  --from-literal=type=git \
+  --from-literal=url=git@git.k-space.ee:k-space/kube-members \
+  --from-file=sshPrivateKey=id_ecdsa
+kubectl -n argocd create secret generic gitea-members \
+  --from-literal=type=git \
+  --from-literal=url=git@git.k-space.ee:k-space/kube-members \
+  --from-file=sshPrivateKey=id_ecdsa
+kubectl label -n argocd secret gitea-kube argocd.argoproj.io/secret-type=repository
+kubectl label -n argocd secret gitea-kube-staging argocd.argoproj.io/secret-type=repository
+kubectl label -n argocd secret gitea-kube-members argocd.argoproj.io/secret-type=repository
+kubectl label -n argocd secret gitea-members argocd.argoproj.io/secret-type=repository
+rm -fv id_ecdsa
+```
+Have Gitea admin reset password for user `argocd` and log in with that account.
+Add the SSH key for user `argocd` from file `id_ecdsa.pub`.
+Delete any other SSH keys associated with Gitea user `argocd`.
+# Managing applications
+To update apps:
+```
+for j in asterisk bind camtiler etherpad freescout gitea grafana hackerspace nextcloud nyancat rosdump traefik wiki wildduck woodpecker; do
 cat << EOF >> applications/$j.yaml
 ---
 apiVersion: argoproj.io/v1alpha1
@@ -13,10 +70,6 @@ kind: Application
 metadata:
   name: $j
   namespace: argocd
-  annotations:
-    # Works with only Kustomize and Helm. Kustomize is easy, see https://github.com/argoproj-labs/argocd-image-updater/tree/master/manifests/base for an example.
-    argocd-image-updater.argoproj.io/image-list: TODO:^2 # semver 2.*.*
-    argocd-image-updater.argoproj.io/write-back-method: git
 spec:
   project: k-space.ee
   source:
@@ -35,24 +88,3 @@ EOF
 done
 find applications -name "*.yaml" -exec kubectl apply -n argocd -f {} \;
 ```
-### Repository secrets
-1. Generate keys locally with `ssh-keygen -f argo`
-2. Add `argo.pub` in `git.k-space.ee/<your>/<repo>` → Settings → Deploy keys
-3. Add `argo` (private key) at https://argocd.k-space.ee/settings/repos along with referenced repo.
-## Argo Deployment
-To deploy ArgoCD itself:
-```bash
-helm repo add argo-cd https://argoproj.github.io/argo-helm
-kubectl create secret -n argocd generic argocd-secret # Empty secret for sessions
-kubectl label -n argocd secret oidc-client-argocd-owner-secrets app.kubernetes.io/part-of=argocd
-helm template -n argocd --release-name k6 argo-cd/argo-cd --include-crds -f values.yaml > argocd.yml
-kubectl apply -f argocd.yml -f application-extras.yml -f redis.yaml -f monitoring.yml -n argocd
-kubectl -n argocd rollout restart deployment/k6-argocd-redis deployment/k6-argocd-repo-server deployment/k6-argocd-server deployment/k6-argocd-notifications-controller statefulset/k6-argocd-application-controller
-```
-WARN: ArgoCD doesn't host its own redis, Dragonfly must be able to independently cold-start.
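
The `kubectl create secret` plus `kubectl label` pairs above (and the repository-secrets steps they replace) can equally be expressed as one declarative Secret per repository; a sketch for the first repo, with the key material elided:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: gitea-kube
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository   # marks the Secret as an ArgoCD repo
stringData:
  type: git
  url: git@git.k-space.ee:k-space/kube
  sshPrivateKey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    ...elided...
    -----END OPENSSH PRIVATE KEY-----
```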

@@ -9,7 +9,6 @@ spec:
   uri: https://argocd.k-space.ee
   redirectUris:
     - https://argocd.k-space.ee/auth/callback
-    - http://localhost:8085/auth/callback
   allowedGroups:
     - k-space:kubernetes:admins
   grantTypes:

@@ -1,20 +0,0 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: argocd-image-updater
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'https://github.com/argoproj-labs/argocd-image-updater.git'
path: manifests/base
targetRevision: stable
destination:
server: 'https://kubernetes.default.svc'
namespace: argocd
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

@@ -0,0 +1,15 @@
# ---
# apiVersion: argoproj.io/v1alpha1
# kind: Application
# metadata:
# name: camtiler
# namespace: argocd
# spec:
# project: k-space.ee
# source:
# repoURL: 'git@git.k-space.ee:k-space/kube.git'
# path: camtiler
# targetRevision: HEAD
# destination:
# server: 'https://kubernetes.default.svc'
# namespace: camtiler

@@ -1,21 +0,0 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: cert-manager
namespace: argocd
spec:
project: k-space.ee
source:
# also depends on git@git.k-space.ee:secretspace/kube.git
repoURL: git@git.k-space.ee:k-space/kube.git
targetRevision: HEAD
path: cert-manager
destination:
server: 'https://kubernetes.default.svc'
namespace: cert-manager
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

@@ -1,23 +0,0 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: cnpg # aka in-cluster postgres
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: https://github.com/cloudnative-pg/cloudnative-pg
targetRevision: v1.25.1
path: releases
directory:
include: 'cnpg-1.25.1.yaml'
destination:
server: 'https://kubernetes.default.svc'
namespace: cnpg-system
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true
- ServerSideApply=true # Resource is too big to fit in 262144 bytes allowed annotation size.

@@ -1,23 +0,0 @@
# See [/dragonfly/README.md](/dragonfly-operator-system/README.md)
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: dragonfly # replaces redis and keydb
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: https://github.com/dragonflydb/dragonfly-operator
targetRevision: v1.1.11
path: manifests
directory:
include: 'dragonfly-operator.yaml'
destination:
server: 'https://kubernetes.default.svc'
namespace: dragonfly-operator-system
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

@@ -5,7 +5,7 @@ metadata:
   name: kubernetes-dashboard
   namespace: argocd
 spec:
-  project: k-space.ee
+  project: default
   source:
     repoURL: 'git@git.k-space.ee:k-space/kube.git'
     path: kubernetes-dashboard

@@ -2,17 +2,17 @@
 apiVersion: argoproj.io/v1alpha1
 kind: Application
 metadata:
-  name: ripe87
+  name: logmower
   namespace: argocd
 spec:
   project: k-space.ee
   source:
     repoURL: 'git@git.k-space.ee:k-space/kube.git'
-    path: ripe87
+    path: logmower
     targetRevision: HEAD
   destination:
     server: 'https://kubernetes.default.svc'
-    namespace: ripe87
+    namespace: logmower
   syncPolicy:
     automated:
       prune: true

@@ -7,7 +7,7 @@ metadata:
 spec:
   project: k-space.ee
   source:
-    repoURL: 'git@git.k-space.ee:secretspace/members.git'
+    repoURL: 'git@git.k-space.ee:k-space/members.git'
     path: members
     targetRevision: HEAD
   destination:

@@ -2,17 +2,19 @@
 apiVersion: argoproj.io/v1alpha1
 kind: Application
 metadata:
-  name: passmower
+  name: postgres-clusters
   namespace: argocd
 spec:
   project: k-space.ee
   source:
     repoURL: 'git@git.k-space.ee:k-space/kube.git'
-    path: passmower
+    path: postgres-clusters
     targetRevision: HEAD
   destination:
     server: 'https://kubernetes.default.svc'
-    namespace: passmower
+    namespace: postgres-clusters
   syncPolicy:
     automated:
       prune: true
+    syncOptions:
+      - CreateNamespace=true

@@ -1,24 +0,0 @@
# Note: Do not put any Prometheus instances or exporters in this namespace, instead have them in `monitoring` namespace
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: prometheus-operator
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: https://github.com/prometheus-operator/prometheus-operator.git
targetRevision: v0.82.0
path: .
kustomize:
namespace: prometheus-operator
destination:
server: 'https://kubernetes.default.svc'
namespace: prometheus-operator
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true
- ServerSideApply=true # Resource is too big to fit in 262144 bytes allowed annotation size.

@@ -2,17 +2,17 @@
 apiVersion: argoproj.io/v1alpha1
 kind: Application
 metadata:
-  name: pgweb
+  name: redis-clusters
   namespace: argocd
 spec:
   project: k-space.ee
   source:
     repoURL: 'git@git.k-space.ee:k-space/kube.git'
-    path: pgweb
+    path: redis-clusters
     targetRevision: HEAD
   destination:
     server: 'https://kubernetes.default.svc'
-    namespace: pgweb
+    namespace: redis-clusters
   syncPolicy:
     automated:
       prune: true

@@ -1,20 +0,0 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: secret-claim-operator
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: https://github.com/codemowers/operatorlib
path: samples/secret-claim-operator
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: secret-claim-operator
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

@@ -1,24 +0,0 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: tigera-operator
namespace: argocd
spec:
project: k-space.ee
source:
# also depends on git@git.k-space.ee:secretspace/kube.git
repoURL: git@git.k-space.ee:k-space/kube.git
targetRevision: HEAD
path: tigera-operator
destination:
server: 'https://kubernetes.default.svc'
namespace: tigera-operator
# also houses calico-system and calico-apiserver
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true
- ServerSideApply=true # Resource is too big to fit in 262144 bytes allowed annotation size.
- Force=true # `--force-conflicts`, according to https://docs.tigera.io/calico/latest/operations/upgrading/kubernetes-upgrade

@@ -5,7 +5,7 @@ metadata:
   name: whoami
   namespace: argocd
 spec:
-  project: k-space.ee
+  project: default
   source:
     repoURL: 'git@git.k-space.ee:k-space/kube.git'
     path: whoami

@@ -7,10 +7,9 @@ metadata:
 spec:
   project: k-space.ee
   source:
-    # also depends on git@git.k-space.ee:secretspace/kube.git
-    repoURL: git@git.k-space.ee:k-space/kube.git
-    targetRevision: HEAD
+    repoURL: 'git@git.k-space.ee:k-space/kube.git'
     path: woodpecker
+    targetRevision: HEAD
   destination:
     server: 'https://kubernetes.default.svc'
     namespace: woodpecker

@@ -1,2 +0,0 @@
# used for git.k-space: k-space/kube, secretspace/kube, secretspace/members
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOxYpFf85Vnxw7WNb/V5dtZT0PJ4VbBhdBNscDd8TVv/ argocd.k-space.ee

@@ -14,11 +14,13 @@ externalRedis:
   existingSecret: argocd-redis
 server:
+  # HTTPS is implemented by Traefik
   ingress:
     enabled: true
     annotations:
       external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
       traefik.ingress.kubernetes.io/router.entrypoints: websecure
+      traefik.ingress.kubernetes.io/router.tls: "true"
     hosts:
       - argocd.k-space.ee
     tls:
@@ -67,12 +69,7 @@ configs:
     p, role:developers, applications, action/apps/Deployment/restart, default/camtiler, allow
     p, role:developers, applications, sync, default/camtiler, allow
     p, role:developers, applications, update, default/camtiler, allow
-    # argocd-image-updater
-    p, role:image-updater, applications, get, */*, allow
-    p, role:image-updater, applications, update, */*, allow
-    g, image-updater, role:image-updater
   cm:
-    kustomize.buildOptions: --enable-helm
     admin.enabled: "false"
     resource.customizations: |
       # https://github.com/argoproj/argo-cd/issues/1704

@@ -32,8 +32,14 @@ spec:
         cidr: 172.20.8.241/32 # Erki A
   - from:
     - ipBlock:
-        cidr: 212.47.211.10/32 # Elisa SIP
+        cidr: 195.222.16.36/32 # Elisa SIP
+  - from:
+    - ipBlock:
+        cidr: 195.222.16.38/32 # Elisa SIP
   egress:
   - to:
     - ipBlock:
-        cidr: 212.47.211.10/32 # Elisa SIP
+        cidr: 195.222.16.36/32 # Elisa SIP
+  - to:
+    - ipBlock:
+        cidr: 195.222.16.38/32 # Elisa SIP

@@ -36,7 +36,7 @@ which are internally exposed IPs of the secondaries.
 To configure TSIG secrets:
-```sh
+```
 kubectl create secret generic -n bind bind-readonly-secret \
   --from-file=readonly.key
 kubectl create secret generic -n bind bind-readwrite-secret \
@@ -45,8 +45,9 @@ kubectl create secret generic -n bind external-dns
 kubectl -n bind delete secret tsig-secret
 kubectl -n bind create secret generic tsig-secret \
   --from-literal=TSIG_SECRET=$(cat readwrite.key | grep secret | cut -d '"' -f 2)
-# ^ same tsig-secret is in git.k-space.ee/secretspace/kube cert-manager
+kubectl -n cert-manager delete secret tsig-secret
+kubectl -n cert-manager create secret generic tsig-secret \
+  --from-literal=TSIG_SECRET=$(cat readwrite.key | grep secret | cut -d '"' -f 2)
 ```
# Serving additional zones # Serving additional zones

@@ -50,7 +50,7 @@ spec:
     spec:
       containers:
         - name: bind-secondary
-          image: mirror.gcr.io/internetsystemsconsortium/bind9:9.20
+          image: internetsystemsconsortium/bind9:9.20
          resources:
            limits:
              cpu: 100m

@@ -17,7 +17,7 @@ spec:
       serviceAccountName: external-dns
       containers:
         - name: external-dns
-          image: registry.k8s.io/external-dns/external-dns:v0.16.1
+          image: registry.k8s.io/external-dns/external-dns:v0.14.2
          resources:
            limits:
              cpu: 100m

@@ -17,7 +17,7 @@ spec:
       serviceAccountName: external-dns
       containers:
         - name: external-dns
-          image: registry.k8s.io/external-dns/external-dns:v0.16.1
+          image: registry.k8s.io/external-dns/external-dns:v0.14.2
          resources:
            limits:
              cpu: 100m
@@ -29,10 +29,10 @@ spec:
           - secretRef:
               name: tsig-secret
           args:
-            - --log-level=debug
            - --events
            - --registry=noop
            - --provider=rfc2136
+            - --source=ingress
            - --source=service
            - --source=crd
            - --domain-filter=k6.ee
@@ -73,3 +73,8 @@ spec:
       recordType: A
       targets:
         - 62.65.250.2
+  - dnsName: k-space.ee
+    recordTTL: 300
+    recordType: MX
+    targets:
+      - 10 mail.k-space.ee

@@ -17,7 +17,7 @@ spec:
       serviceAccountName: external-dns
       containers:
         - name: external-dns
-          image: registry.k8s.io/external-dns/external-dns:v0.16.1
+          image: registry.k8s.io/external-dns/external-dns:v0.14.2
          resources:
            limits:
              cpu: 100m

(binary image changed: 7.8 KiB before, 7.8 KiB after)

cert-manager/.gitignore (vendored, new file, 1 change)

@@ -0,0 +1 @@
cert-manager.yaml

@@ -7,7 +7,7 @@ Refer to the [Bind primary Ansible playbook](https://git.k-space.ee/k-space/ansi
 [Bind namespace on Kubernetes cluster](https://git.k-space.ee/k-space/kube/src/branch/master/bind)
 for more details
-# For developer
+# For user
 Use `Certificate` CRD of cert-manager, refer to
 [official documentation](https://cert-manager.io/docs/usage/certificate/).
@@ -15,14 +15,23 @@ Use `Certificate` CRD of cert-manager, refer to
 To find usage examples in this repository use
 `grep -r -A10 "^kind: Certificate" .`
-# Deployment
-With ArgoCD. Render it locally:
-```sh
-kustomize build . --enable-helm
-```
+# For administrator
+Deployed with:
+```
+curl -L https://github.com/jetstack/cert-manager/releases/download/v1.15.1/cert-manager.yaml -O
+kubectl apply -f cert-manager.yaml
+```
+To update the issuer configuration or TSIG secret:
+```
+kubectl apply -f default-issuer.yml
+kubectl -n cert-manager create secret generic tsig-secret \
+  --from-literal=TSIG_SECRET=<secret>
+```
+## Webhook timeout
 Workaround for webhook timeout issue https://github.com/jetstack/cert-manager/issues/2602
 It's not very clear why this is happening, deserves further investigation - presumably Calico related somehow:
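
Following the README's `grep` hint, a minimal `Certificate` sketch; the resource names and the issuer reference are illustrative (the actual ClusterIssuer is whatever `default-issuer.yml` defines):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example
  namespace: example
spec:
  secretName: example-tls    # Secret the signed certificate is written into
  dnsNames:
    - example.k-space.ee
  issuerRef:
    kind: ClusterIssuer
    name: default            # assumed name; see default-issuer.yml
```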

@@ -9,7 +9,7 @@ spec:
   email: info@k-space.ee
   server: https://acme-v02.api.letsencrypt.org/directory
   privateKeySecretRef:
-    name: example-issuer-account-key # auto-generated by cert-manager
+    name: example-issuer-account-key
   solvers:
     - dns01:
         rfc2136:

@@ -1,21 +0,0 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: cert-manager
# spec: https://kubectl.docs.kubernetes.io/references/kustomize/builtins/#_helmchartinflationgenerator_
helmCharts:
- includeCRDs: true
name: &name cert-manager
releaseName: *name
repo: https://charts.jetstack.io
valuesInline:
namespace: *name
global:
leaderElection:
namespace: *name
version: v1.18.1
resources:
- ssh://git@git.k-space.ee/secretspace/kube/cert-manager # secrets (.env): tsig-secret
- ./default.yaml

cnpg-system/README.md (new file, 8 changes)

@@ -0,0 +1,8 @@
# CloudNativePG
To deploy:
```
kubectl apply --server-side -f \
https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.23/releases/cnpg-1.23.2.yaml
```
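
Once the operator is running, each database is requested with the `Cluster` CRD (the deleted discourse manifest earlier in this diff is a complete example); a minimal sketch with illustrative names:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: example-postgres
  namespace: example
spec:
  instances: 1          # single instance; raise for HA
  bootstrap:
    initdb:
      database: example
      owner: example
  storage:
    size: 5Gi
```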

@@ -15,7 +15,7 @@ spec:
     spec:
       containers:
         - name: netshoot
-          image: mirror.gcr.io/nicolaka/netshoot:latest
+          image: nicolaka/netshoot
          command:
            - /bin/bash
          args:

@@ -26,7 +26,12 @@ To achieve high availability use 2+ replicas with correctly configured
 `topologySpreadConstraints`.
 # For administrators
-See [/argocd/applications/dragonfly.yaml](/argocd/applications/dragonfly.yaml)
+The operator was deployed with the following snippet:
+```
+kubectl apply -f https://raw.githubusercontent.com/dragonflydb/dragonfly-operator/v1.1.6/manifests/dragonfly-operator.yaml
+```
 To upgrade refer to
 [github.com/dragonflydb/dragonfly-operator](https://github.com/dragonflydb/dragonfly-operator/releases),

@@ -57,7 +57,7 @@ spec:
           cpu: 100m
           memory: 100Mi
       - name: exporter
-        image: mirror.gcr.io/sepa/beats-exporter:latest
+        image: sepa/beats-exporter
        args:
          - -p=5066
        ports:
@@ -129,7 +129,7 @@ spec:
         - name: filebeat-registry
           mountPath: /usr/share/filebeat/data
       - name: exporter
-        image: mirror.gcr.io/sepa/beats-exporter:latest
+        image: sepa/beats-exporter
        args:
          - -p=5066
        ports:

@@ -1,8 +1,9 @@
 ---
-apiVersion: codemowers.cloud/v1beta1
-kind: OIDCMiddlewareClient
+apiVersion: codemowers.io/v1alpha1
+kind: OIDCGWMiddlewareClient
 metadata:
-  name: etherpad
+  name: sso
+  namespace: etherpad
 spec:
   displayName: Etherpad
   uri: 'https://pad.k-space.ee/'
@@ -28,7 +29,7 @@ spec:
     spec:
       containers:
         - name: etherpad
-          image: mirror.gcr.io/etherpad/etherpad:2
+          image: etherpad/etherpad:2
          securityContext:
            # Etherpad writes session key during start
            readOnlyRootFilesystem: false
@@ -87,6 +88,7 @@ metadata:
   annotations:
     kubernetes.io/ingress.class: traefik
     traefik.ingress.kubernetes.io/router.entrypoints: websecure
+    traefik.ingress.kubernetes.io/router.tls: "true"
    external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
 spec:
   rules:

@@ -13,10 +13,6 @@ Forwarding to personal eg. `@gmail.com` mailbox can be configured via
 [Wildduck webmail](https://webmail.k-space.ee/account/profile)
-> Whoops, looks like something went wrong — check logs in /storage/logs
-The paid(!) OIDC plugin still requires creation of local account by an administrator. This probably means the OIDC user tried to log in before an account (with matching <username>@k-space.ee mail) existed in Freescout local users.
 # For administrator
 This application is managed by [ArgoCD](https://argocd.k-space.ee/applications/argocd/freescout)

@@ -45,7 +45,8 @@ spec:
           emptyDir: {}
       initContainers:
         - name: jq
-          image: mirror.gcr.io/alpine/k8s:1.31.76@sha256:2a3fdd639c71c6cad69fbc8cac2467648855dac29961efec3b155466cc4fa730
+          image: >-
+            alpine/k8s:1.24.16@sha256:06f8942d87fa17b40795bb9a8eff029a9be3fc3c9bcc13d62071de4cc3324153
          command:
            - /bin/bash
            - '-c'
@@ -80,7 +81,7 @@ spec:
           imagePullPolicy: IfNotPresent
       containers:
         - name: mysql
-          image: mirror.gcr.io/library/mysql:latest
+          image: mysql
          command:
            - /bin/bash
            - '-c'
@@ -110,6 +111,7 @@ metadata:
   annotations:
     kubernetes.io/ingress.class: traefik
     traefik.ingress.kubernetes.io/router.entrypoints: websecure
+    traefik.ingress.kubernetes.io/router.tls: "true"
    external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
    traefik.ingress.kubernetes.io/router.middlewares: freescout-freescout@kubernetescrd
 spec:

@@ -1 +0,0 @@
PASSWORDS.xml

@@ -1,14 +0,0 @@
<include>
<X-PRE-PROCESS cmd="set" data="default_password=">
<X-PRE-PROCESS cmd="set" data="ipcall_password="/>
<X-PRE-PROCESS cmd="set" data="1000_password="/>
<X-PRE-PROCESS cmd="set" data="1001_password="/>
<X-PRE-PROCESS cmd="set" data="1002_password="/>
<X-PRE-PROCESS cmd="set" data="1003_password="/>
<X-PRE-PROCESS cmd="set" data="1004_password="/>
<X-PRE-PROCESS cmd="set" data="1005_password="/>
<X-PRE-PROCESS cmd="set" data="1006_password="/>
<X-PRE-PROCESS cmd="set" data="1007_password="/>
<X-PRE-PROCESS cmd="set" data="1008_password="/>
<X-PRE-PROCESS cmd="set" data="1009_password="/>
</include>

@@ -1,3 +0,0 @@
```
kubectl -n freeswitch create secret generic freeswitch-passwords --from-file freeswitch/PASSWORDS.xml
```

@@ -1,567 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: freeswitch
namespace: freeswitch
annotations:
external-dns.alpha.kubernetes.io/hostname: freeswitch.k-space.ee
metallb.universe.tf/address-pool: eenet
metallb.universe.tf/ip-allocated-from-pool: eenet
spec:
ports:
- name: sip-internal-udp
protocol: UDP
port: 5060
targetPort: 5060
nodePort: 31787
- name: sip-nat-udp
protocol: UDP
port: 5070
targetPort: 5070
nodePort: 32241
- name: sip-external-udp
protocol: UDP
port: 5080
targetPort: 5080
nodePort: 31354
- name: sip-data-10000
protocol: UDP
port: 10000
targetPort: 10000
nodePort: 30786
- name: sip-data-10001
protocol: UDP
port: 10001
targetPort: 10001
nodePort: 31788
- name: sip-data-10002
protocol: UDP
port: 10002
targetPort: 10002
nodePort: 30247
- name: sip-data-10003
protocol: UDP
port: 10003
targetPort: 10003
nodePort: 32389
- name: sip-data-10004
protocol: UDP
port: 10004
targetPort: 10004
nodePort: 30723
- name: sip-data-10005
protocol: UDP
port: 10005
targetPort: 10005
nodePort: 30295
- name: sip-data-10006
protocol: UDP
port: 10006
targetPort: 10006
nodePort: 30782
- name: sip-data-10007
protocol: UDP
port: 10007
targetPort: 10007
nodePort: 32165
- name: sip-data-10008
protocol: UDP
port: 10008
targetPort: 10008
nodePort: 30282
- name: sip-data-10009
protocol: UDP
port: 10009
targetPort: 10009
nodePort: 31325
- name: sip-data-10010
protocol: UDP
port: 10010
targetPort: 10010
nodePort: 31234
selector:
app: freeswitch
type: LoadBalancer
externalTrafficPolicy: Local
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
internalTrafficPolicy: Cluster
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: freeswitch-sounds
namespace: freeswitch
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 2Gi
storageClassName: longhorn
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: freeswitch
namespace: freeswitch
labels:
app: freeswitch
annotations:
reloader.stakater.com/auto: "true"
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: freeswitch
template:
metadata:
labels:
app: freeswitch
spec:
volumes:
- name: config
configMap:
name: freeswitch-config
defaultMode: 420
- name: directory
configMap:
name: freeswitch-directory
defaultMode: 420
- name: sounds
persistentVolumeClaim:
claimName: freeswitch-sounds
- name: passwords
secret:
secretName: freeswitch-passwords
containers:
- name: freeswitch
image: mirror.gcr.io/dheaps/freeswitch:latest
env:
- name: SOUND_TYPES
value: en-us-callie
- name: SOUND_RATES
value: "32000"
resources: {}
volumeMounts:
- name: config
mountPath: /etc/freeswitch/sip_profiles/external/ipcall.xml
subPath: ipcall.xml
- name: config
mountPath: /etc/freeswitch/dialplan/default/00_outbound_ipcall.xml
subPath: 00_outbound_ipcall.xml
- name: config
mountPath: /etc/freeswitch/dialplan/public.xml
subPath: dialplan.xml
- name: config
mountPath: /etc/freeswitch/autoload_configs/switch.conf.xml
subPath: switch.xml
- name: config
mountPath: /etc/freeswitch/vars.xml
subPath: vars.xml
- name: passwords
mountPath: /etc/freeswitch/PASSWORDS.xml
subPath: PASSWORDS.xml
- name: directory
mountPath: /etc/freeswitch/directory/default
- name: sounds
mountPath: /usr/share/freeswitch/sounds
---
apiVersion: v1
kind: ConfigMap
metadata:
name: freeswitch-config
namespace: freeswitch
data:
dialplan.xml: |
<!--
NOTICE:
This context is usually accessed via the external sip profile listening on port 5080.
It is recommended to have separate inbound and outbound contexts. Not only for security
but clearing up why you would need to do such a thing. You don't want outside un-authenticated
callers hitting your default context which allows dialing calls thru your providers and results
in Toll Fraud.
-->
<!-- http://wiki.freeswitch.org/wiki/Dialplan_XML -->
<include>
<context name="public">
<extension name="unloop">
<condition field="${unroll_loops}" expression="^true$"/>
<condition field="${sip_looped_call}" expression="^true$">
<action application="deflect" data="${destination_number}"/>
</condition>
</extension>
<!--
Tag anything pass thru here as an outside_call so you can make sure not
to create any routing loops based on the conditions that it came from
the outside of the switch.
-->
<extension name="outside_call" continue="true">
<condition>
<action application="set" data="outside_call=true"/>
<action application="export" data="RFC2822_DATE=${strftime(%a, %d %b %Y %T %z)}"/>
</condition>
</extension>
<extension name="call_debug" continue="true">
<condition field="${call_debug}" expression="^true$" break="never">
<action application="info"/>
</condition>
</extension>
<extension name="public_extensions">
<condition field="destination_number" expression="^(10[01][0-9])$">
<action application="transfer" data="$1 XML default"/>
</condition>
</extension>
<extension name="public_conference_extensions">
<condition field="destination_number" expression="^(3[5-8][01][0-9])$">
<action application="transfer" data="$1 XML default"/>
</condition>
</extension>
<!--
You can place files in the public directory to get included.
-->
<X-PRE-PROCESS cmd="include" data="public/*.xml"/>
<!--
If you have made it this far lets challenge the caller and if they authenticate
lets try what they dialed in the default context. (commented out by default)
-->
<!-- TODO:
<extension name="check_auth" continue="true">
<condition field="${sip_authorized}" expression="^true$" break="never">
<anti-action application="respond" data="407"/>
</condition>
</extension>
-->
<extension name="transfer_to_default">
<condition>
<!-- TODO: proper ring grouping -->
<action application="bridge" data="user/1004@freeswitch.k-space.ee,user/1003@freeswitch.k-space.ee,sofia/gateway/ipcall/53543824"/>
</condition>
</extension>
</context>
</include>
ipcall.xml: |
<include>
<gateway name="ipcall">
<param name="proxy" value="sip.ipcall.ee"/>
<param name="register" value="true"/>
<param name="realm" value="sip.ipcall.ee"/>
<param name="username" value="6659652"/>
<param name="password" value="$${ipcall_password}"/>
<param name="from-user" value="6659652"/>
<param name="from-domain" value="sip.ipcall.ee"/>
<param name="extension" value="ring_group/default"/>
</gateway>
</include>
00_outbound_ipcall.xml: |
<extension name="outbound">
<!-- TODO: check toll_allow ? -->
<condition field="destination_number" expression="^(\d+)$">
<action application="set" data="sip_invite_domain=sip.ipcall.ee"/>
<action application="bridge" data="sofia/gateway/ipcall/${destination_number}"/>
</condition>
</extension>
switch.xml: |
<configuration name="switch.conf" description="Core Configuration">
<cli-keybindings>
<key name="1" value="help"/>
<key name="2" value="status"/>
<key name="3" value="show channels"/>
<key name="4" value="show calls"/>
<key name="5" value="sofia status"/>
<key name="6" value="reloadxml"/>
<key name="7" value="console loglevel 0"/>
<key name="8" value="console loglevel 7"/>
<key name="9" value="sofia status profile internal"/>
<key name="10" value="sofia profile internal siptrace on"/>
<key name="11" value="sofia profile internal siptrace off"/>
<key name="12" value="version"/>
</cli-keybindings>
<default-ptimes>
</default-ptimes>
<settings>
<param name="colorize-console" value="true"/>
<param name="dialplan-timestamps" value="false"/>
<param name="max-db-handles" value="50"/>
<param name="db-handle-timeout" value="10"/>
<param name="max-sessions" value="1000"/>
<param name="sessions-per-second" value="30"/>
<param name="loglevel" value="debug"/>
<param name="mailer-app" value="sendmail"/>
<param name="mailer-app-args" value="-t"/>
<param name="dump-cores" value="yes"/>
<param name="rtp-start-port" value="10000"/>
<param name="rtp-end-port" value="10010"/>
</settings>
</configuration>
vars.xml: |
<include>
<X-PRE-PROCESS cmd="set" data="disable_system_api_commands=true"/>
<X-PRE-PROCESS cmd="set" data="sound_prefix=$${sounds_dir}/en/us/callie"/>
<X-PRE-PROCESS cmd="set" data="domain=freeswitch.k-space.ee"/>
<X-PRE-PROCESS cmd="set" data="domain_name=$${domain}"/>
<X-PRE-PROCESS cmd="set" data="hold_music=local_stream://moh"/>
<X-PRE-PROCESS cmd="set" data="use_profile=external"/>
<X-PRE-PROCESS cmd="set" data="rtp_sdes_suites=AEAD_AES_256_GCM_8|AEAD_AES_128_GCM_8|AES_CM_256_HMAC_SHA1_80|AES_CM_192_HMAC_SHA1_80|AES_CM_128_HMAC_SHA1_80|AES_CM_256_HMAC_SHA1_32|AES_CM_192_HMAC_SHA1_32|AES_CM_128_HMAC_SHA1_32|AES_CM_128_NULL_AUTH"/>
<X-PRE-PROCESS cmd="set" data="global_codec_prefs=OPUS,G722,PCMU,PCMA,H264,VP8"/>
<X-PRE-PROCESS cmd="set" data="outbound_codec_prefs=OPUS,G722,PCMU,PCMA,H264,VP8"/>
<X-PRE-PROCESS cmd="set" data="xmpp_client_profile=xmppc"/>
<X-PRE-PROCESS cmd="set" data="xmpp_server_profile=xmpps"/>
<X-PRE-PROCESS cmd="set" data="bind_server_ip=auto"/>
<X-PRE-PROCESS cmd="stun-set" data="external_rtp_ip=host:freeswitch.k-space.ee"/>
<X-PRE-PROCESS cmd="stun-set" data="external_sip_ip=host:freeswitch.k-space.ee"/>
<X-PRE-PROCESS cmd="set" data="unroll_loops=true"/>
<X-PRE-PROCESS cmd="set" data="outbound_caller_name=FreeSWITCH"/>
<X-PRE-PROCESS cmd="set" data="outbound_caller_id=0000000000"/>
<X-PRE-PROCESS cmd="set" data="call_debug=false"/>
<X-PRE-PROCESS cmd="set" data="console_loglevel=info"/>
<X-PRE-PROCESS cmd="set" data="default_areacode=372"/>
<X-PRE-PROCESS cmd="set" data="default_country=EE"/>
<X-PRE-PROCESS cmd="set" data="presence_privacy=false"/>
<X-PRE-PROCESS cmd="set" data="au-ring=%(400,200,383,417);%(400,2000,383,417)"/>
<X-PRE-PROCESS cmd="set" data="be-ring=%(1000,3000,425)"/>
<X-PRE-PROCESS cmd="set" data="ca-ring=%(2000,4000,440,480)"/>
<X-PRE-PROCESS cmd="set" data="cn-ring=%(1000,4000,450)"/>
<X-PRE-PROCESS cmd="set" data="cy-ring=%(1500,3000,425)"/>
<X-PRE-PROCESS cmd="set" data="cz-ring=%(1000,4000,425)"/>
<X-PRE-PROCESS cmd="set" data="de-ring=%(1000,4000,425)"/>
<X-PRE-PROCESS cmd="set" data="dk-ring=%(1000,4000,425)"/>
<X-PRE-PROCESS cmd="set" data="dz-ring=%(1500,3500,425)"/>
<X-PRE-PROCESS cmd="set" data="eg-ring=%(2000,1000,475,375)"/>
<X-PRE-PROCESS cmd="set" data="es-ring=%(1500,3000,425)"/>
<X-PRE-PROCESS cmd="set" data="fi-ring=%(1000,4000,425)"/>
<X-PRE-PROCESS cmd="set" data="fr-ring=%(1500,3500,440)"/>
<X-PRE-PROCESS cmd="set" data="hk-ring=%(400,200,440,480);%(400,3000,440,480)"/>
<X-PRE-PROCESS cmd="set" data="hu-ring=%(1250,3750,425)"/>
<X-PRE-PROCESS cmd="set" data="il-ring=%(1000,3000,400)"/>
<X-PRE-PROCESS cmd="set" data="in-ring=%(400,200,425,375);%(400,2000,425,375)"/>
<X-PRE-PROCESS cmd="set" data="jp-ring=%(1000,2000,420,380)"/>
<X-PRE-PROCESS cmd="set" data="ko-ring=%(1000,2000,440,480)"/>
<X-PRE-PROCESS cmd="set" data="pk-ring=%(1000,2000,400)"/>
<X-PRE-PROCESS cmd="set" data="pl-ring=%(1000,4000,425)"/>
<X-PRE-PROCESS cmd="set" data="ro-ring=%(1850,4150,475,425)"/>
<X-PRE-PROCESS cmd="set" data="rs-ring=%(1000,4000,425)"/>
<X-PRE-PROCESS cmd="set" data="ru-ring=%(800,3200,425)"/>
<X-PRE-PROCESS cmd="set" data="sa-ring=%(1200,4600,425)"/>
<X-PRE-PROCESS cmd="set" data="tr-ring=%(2000,4000,450)"/>
<X-PRE-PROCESS cmd="set" data="uk-ring=%(400,200,400,450);%(400,2000,400,450)"/>
<X-PRE-PROCESS cmd="set" data="us-ring=%(2000,4000,440,480)"/>
<X-PRE-PROCESS cmd="set" data="bong-ring=v=-7;%(100,0,941.0,1477.0);v=-7;>=2;+=.1;%(1400,0,350,440)"/>
<X-PRE-PROCESS cmd="set" data="beep=%(1000,0,640)"/>
<X-PRE-PROCESS cmd="set" data="sit=%(274,0,913.8);%(274,0,1370.6);%(380,0,1776.7)"/>
<X-PRE-PROCESS cmd="set" data="df_us_ssn=(?!219099999|078051120)(?!666|000|9\d{2})\d{3}(?!00)\d{2}(?!0{4})\d{4}"/>
<X-PRE-PROCESS cmd="set" data="df_luhn=?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|6(?:011|5[0-9]{2})[0-9]{12}|(?:2131|1800|35\d{3})\d{11}"/>
<XX-PRE-PROCESS cmd="set" data="digits_dialed_filter=(($${df_luhn})|($${df_us_ssn}))"/>
<X-PRE-PROCESS cmd="set" data="default_provider=sip.ipcall.ee"/>
<X-PRE-PROCESS cmd="set" data="default_provider_username="/>
<X-PRE-PROCESS cmd="set" data="default_provider_password="/>
<X-PRE-PROCESS cmd="set" data="default_provider_from_domain=sip.ipcall.ee"/>
<X-PRE-PROCESS cmd="set" data="default_provider_register=true"/>
<X-PRE-PROCESS cmd="set" data="default_provider_contact=1004"/>
<X-PRE-PROCESS cmd="set" data="sip_tls_version=tlsv1,tlsv1.1,tlsv1.2"/>
<X-PRE-PROCESS cmd="set" data="sip_tls_ciphers=ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH"/>
<X-PRE-PROCESS cmd="set" data="internal_auth_calls=true"/>
<X-PRE-PROCESS cmd="set" data="internal_sip_port=5060"/>
<X-PRE-PROCESS cmd="set" data="internal_tls_port=5061"/>
<X-PRE-PROCESS cmd="set" data="internal_ssl_enable=false"/>
<X-PRE-PROCESS cmd="set" data="external_auth_calls=false"/>
<X-PRE-PROCESS cmd="set" data="external_sip_port=5080"/>
<X-PRE-PROCESS cmd="set" data="external_tls_port=5081"/>
<X-PRE-PROCESS cmd="set" data="external_ssl_enable=false"/>
<X-PRE-PROCESS cmd="set" data="rtp_video_max_bandwidth_in=3mb"/>
<X-PRE-PROCESS cmd="set" data="rtp_video_max_bandwidth_out=3mb"/>
<X-PRE-PROCESS cmd="set" data="suppress_cng=true"/>
<X-PRE-PROCESS cmd="set" data="rtp_liberal_dtmf=true"/>
<X-PRE-PROCESS cmd="set" data="video_mute_png=$${images_dir}/default-mute.png"/>
<X-PRE-PROCESS cmd="set" data="video_no_avatar_png=$${images_dir}/default-avatar.png"/>
<X-PRE-PROCESS cmd="include" data="PASSWORDS.xml"/>
</include>
---
apiVersion: v1
kind: ConfigMap
metadata:
name: freeswitch-directory
namespace: freeswitch
data:
1000.xml: |
<include>
<user id="1000">
<params>
<param name="password" value="$${1000_password}"/>
</params>
<variables>
<variable name="toll_allow" value="domestic,local"/>
<variable name="accountcode" value="1000"/>
<variable name="user_context" value="default"/>
<variable name="effective_caller_id_name" value=""/>
<variable name="effective_caller_id_number" value="1000"/>
<variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
<variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
</variables>
</user>
</include>
1001.xml: |
<include>
<user id="1001">
<params>
<param name="password" value="$${1001_password}"/>
</params>
<variables>
<variable name="toll_allow" value="domestic,local"/>
<variable name="accountcode" value="1001"/>
<variable name="user_context" value="default"/>
<variable name="effective_caller_id_name" value=""/>
<variable name="effective_caller_id_number" value="1001"/>
<variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
<variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
</variables>
</user>
</include>
1002.xml: |
<include>
<user id="1002">
<params>
<param name="password" value="$${1002_password}"/>
</params>
<variables>
<variable name="toll_allow" value="domestic,local"/>
<variable name="accountcode" value="1002"/>
<variable name="user_context" value="default"/>
<variable name="effective_caller_id_name" value=""/>
<variable name="effective_caller_id_number" value="1002"/>
<variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
<variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
</variables>
</user>
</include>
1003.xml: |
<include>
<user id="1003">
<params>
<param name="password" value="$${1003_password}"/>
</params>
<variables>
<variable name="toll_allow" value="domestic,local"/>
<variable name="accountcode" value="1003"/>
<variable name="user_context" value="default"/>
<variable name="effective_caller_id_name" value="Erki A"/>
<variable name="effective_caller_id_number" value="1003"/>
<variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
<variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
</variables>
</user>
</include>
1004.xml: |
<include>
<user id="1004">
<params>
<param name="password" value="$${1004_password}"/>
</params>
<variables>
<variable name="toll_allow" value="domestic,local"/>
<variable name="accountcode" value="1004"/>
<variable name="user_context" value="default"/>
<variable name="effective_caller_id_name" value="Erki A"/>
<variable name="effective_caller_id_number" value="1004"/>
<variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
<variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
</variables>
</user>
</include>
1005.xml: |
<include>
<user id="1005">
<params>
<param name="password" value="$${1005_password}"/>
</params>
<variables>
<variable name="toll_allow" value="domestic,local"/>
<variable name="accountcode" value="1005"/>
<variable name="user_context" value="default"/>
<variable name="effective_caller_id_name" value=""/>
<variable name="effective_caller_id_number" value="1005"/>
<variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
<variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
</variables>
</user>
</include>
1006.xml: |
<include>
<user id="1006">
<params>
<param name="password" value="$${1006_password}"/>
</params>
<variables>
<variable name="toll_allow" value="domestic,local"/>
<variable name="accountcode" value="1006"/>
<variable name="user_context" value="default"/>
<variable name="effective_caller_id_name" value=""/>
<variable name="effective_caller_id_number" value="1006"/>
<variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
<variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
</variables>
</user>
</include>
1007.xml: |
<include>
<user id="1007">
<params>
<param name="password" value="$${1007_password}"/>
</params>
<variables>
<variable name="toll_allow" value="domestic,local"/>
<variable name="accountcode" value="1007"/>
<variable name="user_context" value="default"/>
<variable name="effective_caller_id_name" value=""/>
<variable name="effective_caller_id_number" value="1007"/>
<variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
<variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
</variables>
</user>
</include>
1008.xml: |
<include>
<user id="1008">
<params>
<param name="password" value="$${1008_password}"/>
</params>
<variables>
<variable name="toll_allow" value="domestic,local"/>
<variable name="accountcode" value="1008"/>
<variable name="user_context" value="default"/>
<variable name="effective_caller_id_name" value=""/>
<variable name="effective_caller_id_number" value="1008"/>
<variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
<variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
</variables>
</user>
</include>
1009.xml: |
<include>
<user id="1009">
<params>
<param name="password" value="$${1009_password}"/>
</params>
<variables>
<variable name="toll_allow" value="domestic,local"/>
<variable name="accountcode" value="1009"/>
<variable name="user_context" value="default"/>
<variable name="effective_caller_id_name" value=""/>
<variable name="effective_caller_id_number" value="1009"/>
<variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
<variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
</variables>
</user>
</include>
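
Every extension from 1000 through 1009 follows the same directory template; only the id, the password variable, and the caller ID fields vary. A minimal sketch of adding one more extension (1010 is a hypothetical id, and `1010_password` would have to be defined alongside the other password variables):

```
  1010.xml: |
    <include>
      <user id="1010">
        <params>
          <param name="password" value="$${1010_password}"/>
        </params>
        <variables>
          <variable name="toll_allow" value="domestic,local"/>
          <variable name="accountcode" value="1010"/>
          <variable name="user_context" value="default"/>
          <variable name="effective_caller_id_name" value=""/>
          <variable name="effective_caller_id_number" value="1010"/>
          <variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
          <variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
        </variables>
      </user>
    </include>
```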

@ -1,49 +0,0 @@
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: freeswitch
spec:
podSelector:
matchLabels:
app: freeswitch
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
- from:
- ipBlock:
cidr: 100.101.0.0/16
- from:
- ipBlock:
cidr: 100.102.0.0/16
- from:
- ipBlock:
cidr: 81.90.125.224/32 # Lauri home
- from:
- ipBlock:
cidr: 172.20.8.241/32 # Erki A
- from:
- ipBlock:
cidr: 212.47.211.10/32 # Elisa SIP
egress:
- to:
- ipBlock:
cidr: 212.47.211.10/32 # Elisa SIP
- to:
- ipBlock:
cidr: 195.222.16.38/32 # Elisa SIP
- to:
ports:
- port: 53
protocol: UDP
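
The rules above admit the Elisa SIP peer on every port. If tighter scoping were ever wanted, the same ipBlock can carry a port list; a sketch, assuming SIP signaling on the conventional 5060/UDP (an assumption, not stated in this repo):

```
  - from:
    - ipBlock:
        cidr: 212.47.211.10/32 # Elisa SIP
    ports:
    - port: 5060       # conventional SIP signaling port, assumed
      protocol: UDP
```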

@ -1,5 +0,0 @@
```
helm repo add blakeblackshear https://blakeblackshear.github.io/blakeshome-charts/
helm template -n frigate --release-name frigate blakeblackshear/frigate --include-crds -f values.yaml > application.yml
kubectl apply -n frigate -f application.yml -f auth.yml -f rabbitmq.yml -f storage-class.yml -f storage.yml -f transcode.yml
```

@ -1,309 +0,0 @@
---
# Source: frigate/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: frigate
namespace: frigate
labels:
app.kubernetes.io/name: frigate
helm.sh/chart: frigate-7.8.0
app.kubernetes.io/instance: frigate
app.kubernetes.io/managed-by: Helm
data:
config.yml: |
mqtt:
host: frigate-mqtt
port: 1883
topic_prefix: frigate
client_id: frigate
user: '{FRIGATE_MQTT_USERNAME}'
password: '{FRIGATE_MQTT_PASSWORD}'
stats_interval: 60
detectors:
coral:
type: edgetpu
device: usb
#cpu1:
#type: cpu
#ov:
# type: openvino
# device: CPU
model:
width: 300
height: 300
input_tensor: nhwc
input_pixel_format: bgr
path: /openvino-model/ssdlite_mobilenet_v2.xml
labelmap_path: /openvino-model/coco_91cl_bkgr.txt
record:
enabled: True
retain:
days: 3
mode: motion
events:
retain:
default: 30
mode: motion
cameras:
server_room:
ffmpeg:
inputs:
- path: rtsp://go2rtc:8554/server_room
roles:
- detect
- rtmp
- record
chaos:
ffmpeg:
inputs:
- path: rtsp://go2rtc:8554/chaos
roles:
- detect
- rtmp
- record
cyber:
ffmpeg:
inputs:
- path: rtsp://go2rtc:8554/cyber
roles:
- detect
- rtmp
- record
workshop:
ffmpeg:
inputs:
- path: rtsp://go2rtc:8554/workshop
roles:
- detect
- rtmp
- record
---
# Source: frigate/templates/config-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: frigate-config
labels:
app.kubernetes.io/name: frigate
helm.sh/chart: frigate-7.8.0
app.kubernetes.io/instance: frigate
app.kubernetes.io/managed-by: Helm
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "1000Mi"
storageClassName: "longhorn"
---
# Source: frigate/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: frigate
labels:
app.kubernetes.io/name: frigate
helm.sh/chart: frigate-7.8.0
app.kubernetes.io/instance: frigate
app.kubernetes.io/version: "0.14.1"
app.kubernetes.io/managed-by: Helm
spec:
type: ClusterIP
ipFamilyPolicy: SingleStack
ports:
- name: http
port: 5000
protocol: TCP
targetPort: http
- name: http-auth
port: 8971
protocol: TCP
targetPort: http-auth
- name: rtmp
port: 1935
protocol: TCP
targetPort: rtmp
- name: rtsp
port: 8554
protocol: TCP
targetPort: rtsp
- name: webrtc-tcp
port: 8555
protocol: TCP
targetPort: webrtc-tcp
- name: webrtc-udp
port: 8555
protocol: UDP
targetPort: webrtc-udp
selector:
app.kubernetes.io/name: frigate
app.kubernetes.io/instance: frigate
---
# Source: frigate/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: frigate
labels:
app.kubernetes.io/name: frigate
helm.sh/chart: frigate-7.8.0
app.kubernetes.io/instance: frigate
app.kubernetes.io/version: "0.14.1"
app.kubernetes.io/managed-by: Helm
spec:
replicas: 1
revisionHistoryLimit: 3
strategy:
type: Recreate
selector:
matchLabels:
app.kubernetes.io/name: frigate
app.kubernetes.io/instance: frigate
template:
metadata:
labels:
app.kubernetes.io/name: frigate
app.kubernetes.io/instance: frigate
annotations:
checksum/configmap: c03d767c7ef736f9d27d13a90ca868c5d4666b6e3e37b73b3e3b74be088dfff2
spec:
initContainers:
- name: copyconfig
image: "ghcr.io/blakeblackshear/frigate:0.14.1"
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /config.yml
subPath: config.yml
name: configmap
- mountPath: /config
name: config
command: [ "cp" ]
args: [ "-v", "/config.yml", "/config/config.yml" ]
containers:
- name: frigate
image: "ghcr.io/blakeblackshear/frigate:0.14.1"
imagePullPolicy: IfNotPresent
securityContext:
privileged: true
ports:
- name: http
containerPort: 5000
protocol: TCP
- name: http-auth
containerPort: 8971
protocol: TCP
- name: rtmp
containerPort: 1935
protocol: TCP
- name: rtsp
containerPort: 8554
protocol: TCP
- name: webrtc-udp
containerPort: 8555
protocol: UDP
- name: webrtc-tcp
containerPort: 8555
protocol: TCP
- name: go2rtc-admin
containerPort: 1984
protocol: TCP
livenessProbe:
httpGet:
path: /
port: http
scheme: HTTP
initialDelaySeconds: 30
failureThreshold: 5
timeoutSeconds: 10
readinessProbe:
httpGet:
path: /
port: http
scheme: HTTP
initialDelaySeconds: 30
failureThreshold: 5
timeoutSeconds: 10
env:
envFrom:
- secretRef:
name: frigate-rstp-credentials
- secretRef:
name: frigate-mqtt-credentials
volumeMounts:
- mountPath: /dev/bus/usb
name: coral-dev
- mountPath: /config
name: config
- mountPath: /data
name: data
- mountPath: /media
name: media
- name: dshm
mountPath: /dev/shm
- name: tmp
mountPath: /tmp
resources:
{}
volumes:
- name: configmap
configMap:
name: frigate
- name: coral-dev
hostPath:
path: /dev/bus/usb
- name: config
persistentVolumeClaim:
claimName: frigate-config
- name: data
emptyDir: {}
- name: media
persistentVolumeClaim:
claimName: frigate-storage
- name: dshm
emptyDir:
medium: Memory
sizeLimit: 4Gi
- name: tmp
emptyDir:
medium: Memory
sizeLimit: 4Gi
---
# Source: frigate/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: frigate
labels:
app.kubernetes.io/name: frigate
helm.sh/chart: frigate-7.8.0
app.kubernetes.io/instance: frigate
app.kubernetes.io/version: "0.14.1"
app.kubernetes.io/managed-by: Helm
annotations:
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: frigate-frigate@kubernetescrd
spec:
tls:
- hosts:
- "*.k-space.ee"
secretName:
rules:
- host: "frigate.k-space.ee"
http:
paths:
- path: /
pathType: "ImplementationSpecific"
backend:
service:
name: frigate
port:
name: http

@ -1,10 +0,0 @@
---
apiVersion: codemowers.cloud/v1beta1
kind: OIDCMiddlewareClient
metadata:
name: frigate
spec:
displayName: Frigate
uri: 'https://frigate.k-space.ee/'
allowedGroups:
- k-space:legalmember
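
The Ingress above references this client through `traefik.ingress.kubernetes.io/router.middlewares: frigate-frigate@kubernetescrd`. Traefik's kubernetescrd provider expects middleware references as `<namespace>-<name>@kubernetescrd`, so an OIDCMiddlewareClient named `frigate` in the `frigate` namespace is wired in like this (assuming, as the annotation implies, that the operator materializes a Traefik Middleware of the same name):

```
metadata:
  annotations:
    # <namespace>-<middleware name>@kubernetescrd
    traefik.ingress.kubernetes.io/router.middlewares: frigate-frigate@kubernetescrd
```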

@ -1,12 +0,0 @@
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
name: frigate-mqtt
spec:
replicas: 3
persistence:
storageClassName: rabbitmq
storage: 10Gi
rabbitmq:
additionalPlugins:
- rabbitmq_mqtt
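
Frigate substitutes `{FRIGATE_MQTT_USERNAME}`/`{FRIGATE_MQTT_PASSWORD}` in its config from environment variables, which the deployment above injects via the `frigate-mqtt-credentials` secretRef. A hedged sketch of that Secret (values are placeholders; how the RabbitMQ user itself is provisioned is not shown in this diff):

```
apiVersion: v1
kind: Secret
metadata:
  name: frigate-mqtt-credentials
  namespace: frigate
stringData:
  FRIGATE_MQTT_USERNAME: frigate    # placeholder
  FRIGATE_MQTT_PASSWORD: changeme   # placeholder
```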

@ -1,28 +0,0 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: frigate-config
provisioner: csi.proxmox.sinextra.dev
parameters:
cache: none
csi.storage.k8s.io/fstype: xfs
ssd: 'true'
storage: ks-pvs
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: frigate-data
provisioner: csi.proxmox.sinextra.dev
parameters:
cache: none
csi.storage.k8s.io/fstype: xfs
shared: 'true'
ssd: 'false'
storage: ks-pvs-nas
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer

@ -1,32 +0,0 @@
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: frigate-storage
spec:
persistentVolumeReclaimPolicy: Retain
capacity:
storage: 1Ti
accessModes:
- ReadWriteMany
storageClassName: ""
nfs:
server: 172.21.0.7
path: /nas/k6/frigate
mountOptions:
- vers=4
- minorversion=1
- noac
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: frigate-storage
spec:
volumeName: frigate-storage
storageClassName: ""
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Ti

@ -1,81 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: go2rtc
labels:
app.kubernetes.io/name: go2rtc
app.kubernetes.io/instance: go2rtc
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app.kubernetes.io/name: go2rtc
app.kubernetes.io/instance: go2rtc
template:
metadata:
labels:
app.kubernetes.io/name: go2rtc
app.kubernetes.io/instance: go2rtc
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values:
- go2rtc
topologyKey: "kubernetes.io/hostname"
nodeSelector:
dedicated: nvr
tolerations:
- key: dedicated
operator: Equal
value: nvr
effect: NoSchedule
containers:
- name: go2rtc
image: alexxit/go2rtc
ports:
- name: rtsp
containerPort: 8554
protocol: TCP
- name: api
containerPort: 1984
protocol: TCP
volumeMounts:
- mountPath: /config/go2rtc.yaml
subPath: config.yml
name: config
resources:
limits:
nvidia.com/gpu: 1
volumes:
- name: config
secret:
secretName: go2rtc-config
items:
- key: config.yml
path: config.yml
---
apiVersion: v1
kind: Service
metadata:
name: go2rtc
labels:
app.kubernetes.io/name: go2rtc
app.kubernetes.io/instance: go2rtc
spec:
type: ClusterIP
ipFamilyPolicy: SingleStack
ports:
- name: rtsp
port: 8554
protocol: TCP
targetPort: rtsp
selector:
app.kubernetes.io/name: go2rtc
app.kubernetes.io/instance: go2rtc
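
All of Frigate's camera inputs point at `rtsp://go2rtc:8554/<name>`, i.e. this Service, and go2rtc itself is configured from the `config.yml` key of the `go2rtc-config` Secret mounted above. A minimal sketch of that key's content, using go2rtc's top-level `streams:` map (the camera source URLs are illustrative, not from this repo):

```
config.yml: |
  streams:
    server_room: rtsp://user:pass@10.0.0.11/stream1   # hypothetical camera source
    chaos: rtsp://user:pass@10.0.0.12/stream1         # hypothetical camera source
```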

@ -1,177 +0,0 @@
# Default values for frigate.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
# -- upgrade strategy type (e.g. Recreate or RollingUpdate)
strategyType: Recreate
image:
# -- Docker registry/repository to pull the image from
repository: ghcr.io/blakeblackshear/frigate
# -- Overrides the default tag (appVersion) used in Chart.yaml ([Docker Hub](https://hub.docker.com/r/blakeblackshear/frigate/tags?page=1))
tag:
# -- Docker image pull policy
pullPolicy: IfNotPresent
# -- Docker image pull policy
imagePullSecrets: []
# -- additional ENV variables to set. Prefix with FRIGATE_ to target Frigate configuration values
env: {}
# TZ: UTC
# -- set environment variables from Secret(s)
envFromSecrets:
# secrets are required before `helm install`
- frigate-rstp-credentials
- frigate-mqtt-credentials
coral:
# -- enables the use of a Coral device
enabled: true
# -- path on the host to which to mount the Coral device
hostPath: /dev/bus/usb
gpu:
nvidia:
# -- Enables NVIDIA GPU compatibility. Must also use the "amd64nvidia" tagged image
enabled: false
# -- Overrides the default runtimeClassName
runtimeClassName:
# -- amount of shared memory to use for caching
shmSize: 4Gi
# -- use memory for tmpfs (mounted to /tmp)
tmpfs:
enabled: true
sizeLimit: 4Gi
# -- frigate configuration - see [Docs](https://docs.frigate.video/configuration/index) for more info
config: |
mqtt:
host: frigate-mqtt
port: 1883
topic_prefix: frigate
client_id: frigate
user: '{FRIGATE_MQTT_USERNAME}'
password: '{FRIGATE_MQTT_PASSWORD}'
stats_interval: 60
detectors:
coral:
type: edgetpu
device: usb
#cpu1:
#type: cpu
#ov:
# type: openvino
# device: CPU
model:
width: 300
height: 300
input_tensor: nhwc
input_pixel_format: bgr
path: /openvino-model/ssdlite_mobilenet_v2.xml
labelmap_path: /openvino-model/coco_91cl_bkgr.txt
record:
enabled: True
retain:
days: 3
mode: motion
events:
retain:
default: 30
mode: motion
cameras:
server_room:
ffmpeg:
inputs:
- path: rtsp://go2rtc:8554/server_room
roles:
- detect
- rtmp
- record
chaos:
ffmpeg:
inputs:
- path: rtsp://go2rtc:8554/chaos
roles:
- detect
- rtmp
- record
cyber:
ffmpeg:
inputs:
- path: rtsp://go2rtc:8554/cyber
roles:
- detect
- rtmp
- record
workshop:
ffmpeg:
inputs:
- path: rtsp://go2rtc:8554/workshop
roles:
- detect
- rtmp
- record
# Probes configuration
probes:
liveness:
enabled: true
initialDelaySeconds: 30
failureThreshold: 5
timeoutSeconds: 10
readiness:
enabled: true
initialDelaySeconds: 30
failureThreshold: 5
timeoutSeconds: 10
startup:
enabled: false
failureThreshold: 30
periodSeconds: 10
service:
type: ClusterIP
port: 5000
annotations: {}
labels: {}
loadBalancerIP:
ipFamilyPolicy: SingleStack
ipFamilies: []
ingress:
enabled: true
annotations:
traefik.ingress.kubernetes.io/router.entrypoints: websecure
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
traefik.ingress.kubernetes.io/router.middlewares: frigate-frigate@kubernetescrd
hosts:
- host: frigate.k-space.ee
paths:
- path: '/'
portName: http
tls:
- hosts:
- "*.k-space.ee"
persistence:
config:
enabled: true
storageClass: "longhorn"
accessMode: ReadWriteOnce
size: 1000Mi
skipuninstall: false
media:
enabled: true
existingClaim: "frigate-storage"
skipuninstall: true

@ -7,5 +7,3 @@ Should ArgoCD be down manifests here can be applied with:
``` ```
kubectl apply -n gitea -f application.yaml kubectl apply -n gitea -f application.yaml
``` ```
Gitea DOES NOT go through Traefik. It has its own IP because SSH on :22 would clash with SSH on the kube workers; on that IP it bypasses Traefik entirely and serves its own certificate.
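
The manifest for that dedicated IP is not part of this hunk, but the pattern used elsewhere in this repo (goredirect, harbor) is a MetalLB LoadBalancer Service; a hedged sketch of what Gitea's could look like, with pool name, labels and target ports all assumptions:

```
apiVersion: v1
kind: Service
metadata:
  name: gitea
  annotations:
    metallb.universe.tf/address-pool: elisa   # assumed pool, as used elsewhere in this repo
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: gitea          # assumed label
  ports:
  - name: ssh
    port: 22
    targetPort: 22      # assumed
  - name: https
    port: 443
    targetPort: 3000    # assumed Gitea listen port, serving the git-tls certificate
```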

@ -8,13 +8,10 @@ spec:
dnsNames: dnsNames:
- git.k-space.ee - git.k-space.ee
issuerRef: issuerRef:
group: cert-manager.io
kind: ClusterIssuer kind: ClusterIssuer
name: default name: default
secretName: git-tls secretName: git-tls
revisionHistoryLimit: 1 revisionHistoryLimit: 1
# Gitea DOES NOT go through Traefik. It has its own IP because SSH on :22 would clash with SSH on the kube workers; on that IP it bypasses Traefik entirely and serves its own certificate.
--- ---
apiVersion: codemowers.cloud/v1beta1 apiVersion: codemowers.cloud/v1beta1
kind: SecretClaim kind: SecretClaim
@ -56,7 +53,6 @@ spec:
availableScopes: availableScopes:
- openid - openid
- profile - profile
overrideIncomingScopes: true
pkce: false pkce: false
secretRefreshPod: secretRefreshPod:
apiVersion: v1 apiVersion: v1
@ -69,7 +65,7 @@ spec:
emptyDir: {} emptyDir: {}
initContainers: initContainers:
- name: jq - name: jq
image: mirror.gcr.io/alpine/k8s:1.31.76@sha256:2a3fdd639c71c6cad69fbc8cac2467648855dac29961efec3b155466cc4fa730 image: alpine/k8s:1.24.16@sha256:06f8942d87fa17b40795bb9a8eff029a9be3fc3c9bcc13d62071de4cc3324153
imagePullPolicy: IfNotPresent imagePullPolicy: IfNotPresent
volumeMounts: volumeMounts:
- mountPath: /tmp - mountPath: /tmp
@ -83,7 +79,7 @@ spec:
- jq '{"strategyKey":"OpenID","config":{"Provider":"openidConnect","ClientID":$ENV.OIDC_CLIENT_ID,"ClientSecret":$ENV.OIDC_CLIENT_SECRET,"OpenIDConnectAutoDiscoveryURL":"https://auth.k-space.ee/.well-known/openid-configuration","CustomURLMapping":null,"IconURL":"","Scopes":null,"RequiredClaimName":"","RequiredClaimValue":"","GroupClaimName":"","AdminGroup":"","GroupTeamMap":"","GroupTeamMapRemoval":false,"RestrictedGroup":""}} | "UPDATE login_source SET cfg=\(.config|tostring|@sh) WHERE name=\(.strategyKey|tostring|@sh) LIMIT 1"' -n -r > /tmp/update.sql - jq '{"strategyKey":"OpenID","config":{"Provider":"openidConnect","ClientID":$ENV.OIDC_CLIENT_ID,"ClientSecret":$ENV.OIDC_CLIENT_SECRET,"OpenIDConnectAutoDiscoveryURL":"https://auth.k-space.ee/.well-known/openid-configuration","CustomURLMapping":null,"IconURL":"","Scopes":null,"RequiredClaimName":"","RequiredClaimValue":"","GroupClaimName":"","AdminGroup":"","GroupTeamMap":"","GroupTeamMapRemoval":false,"RestrictedGroup":""}} | "UPDATE login_source SET cfg=\(.config|tostring|@sh) WHERE name=\(.strategyKey|tostring|@sh) LIMIT 1"' -n -r > /tmp/update.sql
containers: containers:
- name: mysql - name: mysql
image: mirror.gcr.io/library/mysql:latest image: mysql
imagePullPolicy: IfNotPresent imagePullPolicy: IfNotPresent
volumeMounts: volumeMounts:
- mountPath: /tmp - mountPath: /tmp
@ -125,7 +121,7 @@ spec:
runAsNonRoot: true runAsNonRoot: true
containers: containers:
- name: gitea - name: gitea
image: docker.gitea.com/gitea:1.23.7-rootless image: gitea/gitea:1.22.1-rootless
imagePullPolicy: IfNotPresent imagePullPolicy: IfNotPresent
securityContext: securityContext:
readOnlyRootFilesystem: true readOnlyRootFilesystem: true
@ -174,11 +170,6 @@ spec:
value: "false" value: "false"
- name: GITEA__SECURITY__INSTALL_LOCK - name: GITEA__SECURITY__INSTALL_LOCK
value: "true" value: "true"
# Disable bypassing (disabled) OIDC account. Password-based app tokens remain enabled.
- name: GITEA__SERVICE__ENABLE_PASSWORD_SIGNIN_FORM
value: "false"
- name: GITEA__SERVICE__ENABLE_PASSKEY_AUTHENTICATION
value: "false"
- name: GITEA__SERVICE__REGISTER_EMAIL_CONFIRM - name: GITEA__SERVICE__REGISTER_EMAIL_CONFIRM
value: "true" value: "true"
- name: GITEA__SERVICE__DISABLE_REGISTRATION - name: GITEA__SERVICE__DISABLE_REGISTRATION

@ -18,7 +18,6 @@ spec:
availableScopes: availableScopes:
- openid - openid
- profile - profile
- groups
tokenEndpointAuthMethod: none tokenEndpointAuthMethod: none
--- ---
apiVersion: v1 apiVersion: v1
@ -50,17 +49,14 @@ data:
root_url = https://%(domain)s/ root_url = https://%(domain)s/
[auth] [auth]
oauth_allow_insecure_email_lookup=true oauth_allow_insecure_email_lookup=true
[auth.basic]
enabled = false
[auth.generic_oauth] [auth.generic_oauth]
name = OAuth name = OAuth
icon = signin icon = signin
enabled = true enabled = true
scopes = openid profile groups empty_scopes = false
allow_sign_up = true allow_sign_up = true
use_pkce = true use_pkce = true
role_attribute_path = contains(groups[*], 'k-space:kubernetes:admins') && 'Admin' || contains(groups[*], 'k-space:floor') && 'Editor' || Viewer role_attribute_path = contains(groups[*], 'k-space:kubernetes:admins') && 'Admin' || 'Viewer'
allow_assign_grafana_admin = true
[security] [security]
disable_initial_admin_creation = true disable_initial_admin_creation = true
--- ---
@ -85,7 +81,7 @@ spec:
fsGroup: 472 fsGroup: 472
containers: containers:
- name: grafana - name: grafana
image: mirror.gcr.io/grafana/grafana:11.6.0 image: grafana/grafana:11.1.0
securityContext: securityContext:
readOnlyRootFilesystem: true readOnlyRootFilesystem: true
runAsNonRoot: true runAsNonRoot: true
@ -203,6 +199,7 @@ metadata:
name: grafana name: grafana
annotations: annotations:
traefik.ingress.kubernetes.io/router.entrypoints: websecure traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec: spec:
rules: rules:

@ -82,6 +82,7 @@ metadata:
annotations: annotations:
kubernetes.io/ingress.class: traefik kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec: spec:
rules: rules:
@ -109,3 +110,57 @@ spec:
app.kubernetes.io/name: doorboy-proxy app.kubernetes.io/name: doorboy-proxy
podMetricsEndpoints: podMetricsEndpoints:
- port: http - port: http
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kdoorpi
spec:
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: kdoorpi
template:
metadata:
labels: *selectorLabels
spec:
containers:
- name: kdoorpi
image: harbor.k-space.ee/k-space/kdoorpi:latest
env:
- name: KDOORPI_API_ALLOWED
value: https://doorboy-proxy.k-space.ee/allowed
- name: KDOORPI_API_LONGPOLL
value: https://doorboy-proxy.k-space.ee/longpoll
- name: KDOORPI_API_SWIPE
value: http://172.21.99.98/swipe
- name: KDOORPI_DOOR
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: KDOORPI_API_KEY
valueFrom:
secretKeyRef:
name: doorboy-api
key: DOORBOY_SECRET
- name: KDOORPI_UID_SALT
valueFrom:
secretKeyRef:
name: doorboy-uid-hash-salt
key: KDOORPI_UID_SALT
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
nodeSelector:
dedicated: door
tolerations:
- key: dedicated
operator: Equal
value: door
effect: NoSchedule
- key: arch
operator: Equal
value: arm64
effect: NoSchedule
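
`KDOORPI_DOOR` is taken from `spec.nodeName`, so each door controller is identified by the node it runs on, and scheduling is confined to those nodes by the `dedicated: door` selector plus the matching tolerations. A new door Pi would need the corresponding label and taint on its Node; a sketch of the relevant fields (`door-front` is a hypothetical node name):

```
apiVersion: v1
kind: Node
metadata:
  name: door-front        # hypothetical door Pi
  labels:
    dedicated: door
spec:
  taints:
  - key: dedicated
    value: door
    effect: NoSchedule
```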

@ -2,27 +2,38 @@ apiVersion: apps/v1
kind: Deployment kind: Deployment
metadata: metadata:
name: goredirect name: goredirect
namespace: hackerspace
spec: spec:
replicas: 2 replicas: 2
revisionHistoryLimit: 0 revisionHistoryLimit: 0
selector: selector:
matchLabels: matchLabels:
app: goredirect app.kubernetes.io/name: goredirect
template: template:
metadata: metadata:
labels: labels:
app: goredirect app.kubernetes.io/name: goredirect
spec: spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values:
- goredirect
topologyKey: topology.kubernetes.io/zone
weight: 100
containers: containers:
- image: harbor.k-space.ee/k-space/goredirect:latest - image: harbor.k-space.ee/k-space/goredirect:latest
imagePullPolicy: Always imagePullPolicy: Always
env: env:
- name: GOREDIRECT_NOT_FOUND - name: GOREDIRECT_NOT_FOUND
value: https://inventory.k-space.ee/m/inventory/add-by-slug/%s value: https://inventory.k-space.ee/m/inventory/add-slug/%s
- name: GOREDIRECT_FOUND - name: GOREDIRECT_FOUND
value: https://inventory.k-space.ee/m/inventory/%s/view value: https://inventory.k-space.ee/m/inventory/%s/view
- name: GOREDIRECT_NOPATH
value: https://inventory.k-space.ee/m/inventory
- name: MONGO_URI - name: MONGO_URI
valueFrom: valueFrom:
secretKeyRef: secretKeyRef:
@ -31,6 +42,7 @@ spec:
name: goredirect name: goredirect
ports: ports:
- containerPort: 8080 - containerPort: 8080
name: http
protocol: TCP protocol: TCP
resources: resources:
limits: limits:
@ -46,38 +58,19 @@ spec:
--- ---
apiVersion: v1 apiVersion: v1
kind: Service kind: Service
metadata:
name: goredirect
spec:
type: ClusterIP
selector:
app: goredirect
ports:
- protocol: TCP
port: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata: metadata:
name: goredirect name: goredirect
annotations: annotations:
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
# external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
# ^ omitting for a direct IP; apex domains can't have a CNAME. # ^ omitting for a direct IP; apex domains can't have a CNAME.
external-dns.alpha.kubernetes.io/hostname: k6.ee external-dns.alpha.kubernetes.io/hostname: k6.ee
metallb.universe.tf/address-pool: elisa
spec: spec:
rules: ports:
- host: k6.ee - name: http
http: protocol: TCP
paths: port: 80
- pathType: Prefix targetPort: 8080
path: "/" nodePort: 32120
backend: selector:
service: app.kubernetes.io/name: goredirect
name: goredirect type: LoadBalancer
port: externalTrafficPolicy: Local
number: 8080
tls:
- hosts:
- "k6.ee"

@ -39,5 +39,5 @@ metadata:
name: inventory-external name: inventory-external
namespace: hackerspace namespace: hackerspace
spec: spec:
capacity: 10Gi capacity: 1Gi
class: external class: external

@ -2,19 +2,18 @@
apiVersion: apps/v1 apiVersion: apps/v1
kind: Deployment kind: Deployment
metadata: metadata:
name: inventory-app name: inventory
labels: namespace: hackerspace
app: signs-webpage
spec: spec:
replicas: 1 replicas: 1
revisionHistoryLimit: 0 revisionHistoryLimit: 0
selector: selector:
matchLabels: matchLabels:
app: inventory-app app.kubernetes.io/name: inventory
template: template:
metadata: metadata:
labels: labels:
app: inventory-app app.kubernetes.io/name: inventory
spec: spec:
containers: containers:
- image: harbor.k-space.ee/k-space/inventory-app:latest - image: harbor.k-space.ee/k-space/inventory-app:latest
@ -26,8 +25,6 @@ spec:
value: "1" value: "1"
- name: INVENTORY_ASSETS_BASE_URL - name: INVENTORY_ASSETS_BASE_URL
value: https://external.minio-clusters.k-space.ee/hackerspace-701d9303-0f27-4829-a2be-b1084021ad91/ value: https://external.minio-clusters.k-space.ee/hackerspace-701d9303-0f27-4829-a2be-b1084021ad91/
- name: MACADDRESS_OUTLINK_BASEURL
value: https://grafana.k-space.ee/d/ddwyidbtbc16oa/ip-usage?orgId=1&from=now-2y&to=now&timezone=browser&var-Filters=mac%7C%3D%7C
- name: OIDC_USERS_NAMESPACE - name: OIDC_USERS_NAMESPACE
value: passmower value: passmower
- name: SECRET_KEY - name: SECRET_KEY
@ -57,7 +54,7 @@ spec:
name: oidc-client-inventory-app-owner-secrets name: oidc-client-inventory-app-owner-secrets
- secretRef: - secretRef:
name: inventory-mongodb name: inventory-mongodb
name: inventory-app name: inventory
ports: ports:
- containerPort: 5000 - containerPort: 5000
name: http name: http
@ -81,7 +78,8 @@ spec:
dnsPolicy: ClusterFirst dnsPolicy: ClusterFirst
restartPolicy: Always restartPolicy: Always
schedulerName: default-scheduler schedulerName: default-scheduler
serviceAccountName: inventory-svcacc serviceAccount: inventory
serviceAccountName: inventory
terminationGracePeriodSeconds: 30 terminationGracePeriodSeconds: 30
volumes: volumes:
- name: tmp - name: tmp
@ -90,8 +88,9 @@ apiVersion: v1
kind: Service kind: Service
metadata: metadata:
name: inventory-app name: inventory-app
labels:
app: inventory-app
spec: spec:
type: ClusterIP
selector: selector:
app: inventory-app app: inventory-app
ports: ports:
@ -103,11 +102,12 @@ kind: Ingress
metadata: metadata:
name: inventory-app name: inventory-app
annotations: annotations:
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
external-dns.alpha.kubernetes.io/hostname: inventory.k-space.ee,members.k-space.ee external-dns.alpha.kubernetes.io/hostname: members.k-space.ee,inventory.k-space.ee
spec: spec:
ingressClassName: shared
rules: rules:
- host: inventory.k-space.ee - host: inventory.k-space.ee
http: http:
@ -145,8 +145,7 @@ spec:
apiVersion: rbac.authorization.k8s.io/v1 apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole kind: ClusterRole
metadata: metadata:
name: inventory-role name: inventory
namespace: hackerspace
rules: rules:
- verbs: - verbs:
- get - get
@ -161,18 +160,17 @@ rules:
apiVersion: rbac.authorization.k8s.io/v1 apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding kind: ClusterRoleBinding
metadata: metadata:
name: inventory-roles name: inventory
namespace: hackerspace
roleRef: roleRef:
apiGroup: rbac.authorization.k8s.io apiGroup: rbac.authorization.k8s.io
kind: ClusterRole kind: ClusterRole
name: inventory-role name: inventory
subjects: subjects:
- kind: ServiceAccount - kind: ServiceAccount
name: inventory-svcacc name: inventory
namespace: hackerspace namespace: hackerspace
--- ---
apiVersion: v1 apiVersion: v1
kind: ServiceAccount kind: ServiceAccount
metadata: metadata:
name: inventory-svcacc name: inventory

File diff suppressed because it is too large.

@ -1,19 +1,21 @@
expose: expose:
type: ingress type: loadBalancer
tls: tls:
# the harbor Helm chart needs a PR to support TLS for hosts other than the core host (e.g. the *.k-space.ee wildcard); currently it provisions its own certificate (harbor.k-space.ee)
enabled: true enabled: true
certSource: secret certSource: secret
secret: secret:
secretName: wildcard-tls secretName: "harbor-ingress"
ingress: loadBalancer:
hosts: name: harbor
core: harbor.k-space.ee ports:
httpPort: 80
httpsPort: 443
annotations: annotations:
kubernetes.io/ingress.class: traefik cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure external-dns.alpha.kubernetes.io/hostname: harbor.k-space.ee
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee metallb.universe.tf/address-pool: elisa
labels: {} labels: {}
sourceRanges: []
externalURL: https://harbor.k-space.ee externalURL: https://harbor.k-space.ee
@ -46,7 +48,7 @@ persistence:
# Refer to # Refer to
# https://github.com/distribution/distribution/blob/main/docs/configuration.md#redirect # https://github.com/distribution/distribution/blob/main/docs/configuration.md#redirect
# for the detail. # for the detail.
disableredirect: false disableredirect: true
type: s3 type: s3
s3: s3:
# Set an existing secret for S3 accesskey and secretkey # Set an existing secret for S3 accesskey and secretkey
@ -120,8 +122,6 @@ metrics:
trivy: trivy:
enabled: false enabled: false
notary:
enabled: false
database: database:
type: "external" type: "external"
@ -143,3 +143,49 @@ redis:
addr: "dragonfly:6379" addr: "dragonfly:6379"
username: "" username: ""
password: "MvYcuU0RaIu1SX7fY1m1JrgLUSaZJjge" password: "MvYcuU0RaIu1SX7fY1m1JrgLUSaZJjge"
nginx:
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
portal:
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
core:
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
jobservice:
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
registry:
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule

@ -273,6 +273,7 @@ metadata:
kubernetes.io/ingress.class: traefik kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: kubernetes-dashboard-sso@kubernetescrd traefik.ingress.kubernetes.io/router.middlewares: kubernetes-dashboard-sso@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
spec: spec:
rules: rules:
- host: dashboard.k-space.ee - host: dashboard.k-space.ee

@ -62,7 +62,7 @@ spec:
serviceAccountName: local-path-provisioner-service-account serviceAccountName: local-path-provisioner-service-account
containers: containers:
- name: local-path-provisioner - name: local-path-provisioner
image: mirror.gcr.io/rancher/local-path-provisioner:v0.0.22 image: rancher/local-path-provisioner:v0.0.22
imagePullPolicy: IfNotPresent imagePullPolicy: IfNotPresent
command: command:
- local-path-provisioner - local-path-provisioner
@ -151,7 +151,7 @@ data:
spec: spec:
containers: containers:
- name: helper-pod - name: helper-pod
image: mirror.gcr.io/library/busybox image: busybox
imagePullPolicy: IfNotPresent imagePullPolicy: IfNotPresent

logmower/application.yml (new file, 382 lines)

@ -0,0 +1,382 @@
---
apiVersion: codemowers.cloud/v1beta1
kind: OIDCMiddlewareClient
metadata:
name: frontend
spec:
displayName: Kubernetes pod log aggregator
uri: 'https://log.k-space.ee'
allowedGroups:
- k-space:kubernetes:developers
- k-space:kubernetes:admins
headerMapping:
email: Remote-Email
groups: Remote-Groups
name: Remote-Name
user: Remote-Username
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: logmower-shipper
spec:
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 50%
selector:
matchLabels:
app: logmower-shipper
template:
metadata:
labels:
app: logmower-shipper
spec:
serviceAccountName: logmower-shipper
containers:
- name: logmower-shipper
image: logmower/shipper:latest
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: MONGO_URI
valueFrom:
secretKeyRef:
name: logmower-mongodb-application-readwrite
key: connectionString.standard
ports:
- containerPort: 8000
name: metrics
securityContext:
readOnlyRootFilesystem: true
command:
- /app/log_shipper.py
- --parse-json
- --normalize-log-level
- --stream-to-log-level
- --merge-top-level
- --max-collection-size
- "10000000000"
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: etcmachineid
mountPath: /etc/machine-id
readOnly: true
- name: varlog
mountPath: /var/log
readOnly: true
volumes:
- name: etcmachineid
hostPath:
path: /etc/machine-id
- name: varlog
hostPath:
path: /var/log
tolerations:
- operator: "Exists"
effect: "NoSchedule"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: logging-logmower-shipper
subjects:
- kind: ServiceAccount
name: logmower-shipper
namespace: logmower
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: logmower-shipper
labels:
app: logmower-shipper
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-shipper
spec:
podSelector:
matchLabels:
app: logmower-shipper
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
egress:
- to:
- podSelector:
matchLabels:
app: logmower-mongodb-svc
ports:
- port: 27017
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-eventsource
spec:
podSelector:
matchLabels:
app: logmower-eventsource
policyTypes:
- Ingress
- Egress
egress:
- to:
- podSelector:
matchLabels:
app: logmower-mongodb-svc
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-frontend
spec:
podSelector:
matchLabels:
app: logmower-frontend
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: logmower-shipper
spec:
selector:
matchLabels:
app: logmower-shipper
podMetricsEndpoints:
- port: metrics
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: logmower-shipper
spec:
groups:
- name: logmower-shipper
rules:
- alert: LogmowerSingleInsertionErrors
annotations:
summary: Logmower shipper is having issues submitting log records
to database
expr: rate(logmower_insertion_error_count_total[30m]) > 0
for: 0m
labels:
severity: warning
- alert: LogmowerBulkInsertionErrors
annotations:
summary: Logmower shipper is having issues submitting log records
to database
expr: rate(logmower_bulk_insertion_error_count_total[30m]) > 0
for: 0m
labels:
severity: warning
- alert: LogmowerHighDatabaseLatency
annotations:
summary: Database operations are slow
expr: histogram_quantile(0.95, logmower_database_operation_latency_bucket) > 10
for: 1m
labels:
severity: warning
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: logmower
annotations:
kubernetes.io/ingress.class: traefik
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: logmower-frontend@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
rules:
- host: log.k-space.ee
http:
paths:
- pathType: Prefix
path: "/events"
backend:
service:
name: logmower-eventsource
port:
number: 3002
- pathType: Prefix
path: "/"
backend:
service:
name: logmower-frontend
port:
number: 8080
tls:
- hosts:
- "*.k-space.ee"
---
apiVersion: v1
kind: Service
metadata:
name: logmower-eventsource
spec:
type: ClusterIP
selector:
app: logmower-eventsource
ports:
- protocol: TCP
port: 3002
---
apiVersion: v1
kind: Service
metadata:
name: logmower-frontend
spec:
type: ClusterIP
selector:
app: logmower-frontend
ports:
- protocol: TCP
port: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: logmower-frontend
spec:
selector:
matchLabels:
app: logmower-frontend
template:
metadata:
labels:
app: logmower-frontend
spec:
containers:
- name: logmower-frontend
image: logmower/frontend:latest
ports:
- containerPort: 8080
name: http
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
resources:
limits:
memory: 50Mi
requests:
cpu: 1m
memory: 20Mi
volumeMounts:
          - name: nginx-cache
            mountPath: /var/cache/nginx/
          - name: nginx-config
            mountPath: /var/config/nginx/
- name: var-run
mountPath: /var/run/
volumes:
- emptyDir: {}
name: nginx-cache
- emptyDir: {}
name: nginx-config
- emptyDir: {}
name: var-run
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: logmower-eventsource
spec:
selector:
matchLabels:
app: logmower-eventsource
template:
metadata:
labels:
app: logmower-eventsource
spec:
containers:
- name: logmower-eventsource
image: logmower/eventsource:latest
ports:
- containerPort: 3002
name: nodejs
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
resources:
limits:
cpu: 500m
memory: 200Mi
requests:
cpu: 10m
memory: 100Mi
env:
- name: MONGODB_HOST
valueFrom:
secretKeyRef:
name: logmower-mongodb-application-readonly
key: connectionString.standard
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-mongodb
spec:
podSelector:
matchLabels:
app: logmower-mongodb-svc
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector: {}
ports:
- port: 27017
egress:
- to:
- podSelector:
matchLabels:
app: logmower-mongodb-svc
ports:
- port: 27017
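
Both the shipper and the eventsource pull their connection strings from `logmower-mongodb-application-readwrite` / `-readonly` Secrets carrying a `connectionString.standard` key. That naming matches the MongoDB community operator's `<resource>-<auth-db>-<user>` secret convention, which suggests a backing cluster declared roughly along these lines (entirely an assumption; the resource itself is not in this file):

```
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: logmower-mongodb
spec:
  type: ReplicaSet
  members: 3                                        # assumed replica count
  version: "6.0.5"                                  # assumed MongoDB version
  security:
    authentication:
      modes: ["SCRAM"]
  users:
  - name: readwrite
    db: application
    passwordSecretRef:
      name: logmower-mongodb-readwrite-password     # assumed secret name
    roles:
    - name: readWrite
      db: application
    scramCredentialsSecretName: logmower-readwrite
```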

@ -28,9 +28,9 @@ the Minio S3 bucket hosted at nas.k-space.ee
Longhorn was last upgraded with following snippet: Longhorn was last upgraded with following snippet:
``` ```
wget https://raw.githubusercontent.com/longhorn/longhorn/v1.8.2/deploy/longhorn.yaml wget https://raw.githubusercontent.com/longhorn/longhorn/v1.6.2/deploy/longhorn.yaml
patch -p0 < changes.diff patch -p0 < changes.diff
kubectl -n longhorn-system apply -f longhorn.yaml -f application-extras.yml -f backup.yaml kubectl -n longhorn-system apply -f longhorn.yml -f application-extras.yml -f backup.yaml
``` ```
After initial deployment `dedicated=storage:NoSchedule` was specified After initial deployment `dedicated=storage:NoSchedule` was specified
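
Longhorn ships its backups to the Minio S3 bucket on nas.k-space.ee mentioned above; in Longhorn that is wired up through the `backup-target` setting. A hedged sketch (bucket, region and the credential secret are placeholders, not taken from this repo):

```
apiVersion: longhorn.io/v1beta2
kind: Setting
metadata:
  name: backup-target
  namespace: longhorn-system
value: s3://backup-bucket@us-east-1/   # placeholder; the real bucket lives on nas.k-space.ee
```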

@ -24,6 +24,7 @@ metadata:
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
traefik.ingress.kubernetes.io/router.entrypoints: websecure traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: longhorn-system-ui@kubernetescrd traefik.ingress.kubernetes.io/router.middlewares: longhorn-system-ui@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
spec: spec:
rules: rules:
- host: longhorn.k-space.ee - host: longhorn.k-space.ee

@ -26,7 +26,7 @@
+ tolerations: + tolerations:
+ - key: dedicated + - key: dedicated
+ operator: Equal + operator: Equal
+ value: nvr + value: storage
+ effect: NoSchedule + effect: NoSchedule
+ - key: arch + - key: arch
+ operator: Equal + operator: Equal
@ -42,7 +42,7 @@
+ tolerations: + tolerations:
+ - key: dedicated + - key: dedicated
+ operator: Equal + operator: Equal
+ value: nvr + value: storage
+ effect: NoSchedule + effect: NoSchedule
+ - key: arch + - key: arch
+ operator: Equal + operator: Equal

@ -8,7 +8,6 @@ spec:
dnsNames: dnsNames:
- "*.minio-clusters.k-space.ee" - "*.minio-clusters.k-space.ee"
issuerRef: issuerRef:
group: cert-manager.io
kind: ClusterIssuer kind: ClusterIssuer
name: default name: default
secretName: wildcard-tls secretName: wildcard-tls

@ -43,6 +43,7 @@ metadata:
annotations: annotations:
kubernetes.io/ingress.class: traefik kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec: spec:
rules: rules:
@ -69,6 +70,7 @@ metadata:
annotations: annotations:
kubernetes.io/ingress.class: traefik kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec: spec:
rules: rules:

@ -32,9 +32,6 @@ Sample queries:
* [Disk space left](https://prom.k-space.ee/graph?g0.range_input=1h&g0.expr=node_filesystem_avail_bytes&g0.tab=1) * [Disk space left](https://prom.k-space.ee/graph?g0.range_input=1h&g0.expr=node_filesystem_avail_bytes&g0.tab=1)
* Minio [s3 egress](https://prom.k-space.ee/graph?g0.expr=rate(minio_s3_traffic_sent_bytes%5B3m%5D)&g0.tab=0&g0.display_mode=lines&g0.show_exemplars=0&g0.range_input=6h), [internode egress](https://prom.k-space.ee/graph?g0.expr=rate(minio_inter_node_traffic_sent_bytes%5B2m%5D)&g0.tab=0&g0.display_mode=lines&g0.show_exemplars=0&g0.range_input=6h), [storage used](https://prom.k-space.ee/graph?g0.expr=minio_node_disk_used_bytes&g0.tab=0&g0.display_mode=lines&g0.show_exemplars=0&g0.range_input=6h) * Minio [s3 egress](https://prom.k-space.ee/graph?g0.expr=rate(minio_s3_traffic_sent_bytes%5B3m%5D)&g0.tab=0&g0.display_mode=lines&g0.show_exemplars=0&g0.range_input=6h), [internode egress](https://prom.k-space.ee/graph?g0.expr=rate(minio_inter_node_traffic_sent_bytes%5B2m%5D)&g0.tab=0&g0.display_mode=lines&g0.show_exemplars=0&g0.range_input=6h), [storage used](https://prom.k-space.ee/graph?g0.expr=minio_node_disk_used_bytes&g0.tab=0&g0.display_mode=lines&g0.show_exemplars=0&g0.range_input=6h)
Another useful tool for exploring Prometheus operator custom resources is
[doc.crds.dev/github.com/prometheus-operator/prometheus-operator](https://doc.crds.dev/github.com/prometheus-operator/prometheus-operator@v0.75.0)
# For administrators # For administrators
To reconfigure SNMP targets etc: To reconfigure SNMP targets etc:
@ -55,14 +52,7 @@ To set Mikrotik secrets:
``` ```
kubectl create -n monitoring secret generic mikrotik-exporter \ kubectl create -n monitoring secret generic mikrotik-exporter \
--from-literal=username=netpoller \ --from-literal=MIKROTIK_PASSWORD='f7W!H*Pu' \
--from-literal=password=... --from-literal=PROMETHEUS_BEARER_TOKEN=$(cat /dev/urandom | base64 | head -c 30)
``` ```
To wipe timeseries:
```
for replica in $(seq 0 2); do
kubectl exec -n monitoring prometheus-prometheus-$replica -- wget --post-data='match[]={__name__=~"mikrotik_.*"}' http://127.0.0.1:9090/api/v1/admin/tsdb/delete_series -O -
done
```
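
The `delete_series` call above only works when the admin API is enabled on the Prometheus resource; the same (removed) side of this diff carries that flag, further down. The relevant fragment:

```
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
  namespace: monitoring
spec:
  enableAdminAPI: true   # removed in this diff together with the wipe snippet above
```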

@ -169,7 +169,7 @@ spec:
spec: spec:
containers: containers:
- name: blackbox-exporter - name: blackbox-exporter
image: mirror.gcr.io/prom/blackbox-exporter:v0.26.0 image: mirror.gcr.io/prom/blackbox-exporter:v0.25.0
ports: ports:
- name: http - name: http
containerPort: 9115 containerPort: 9115

@ -4,29 +4,25 @@ kind: Probe
metadata: metadata:
name: mikrotik name: mikrotik
spec: spec:
basicAuth: bearerTokenSecret:
username: name: mikrotik-exporter
name: mikrotik-exporter key: PROMETHEUS_BEARER_TOKEN
key: username
password:
name: mikrotik-exporter
key: password
prober: prober:
path: /metrics
url: mikrotik-exporter url: mikrotik-exporter
module: full
targets: targets:
staticConfig: staticConfig:
static: static:
- 172.23.0.1 - router.mgmt.k-space.ee
- 172.23.0.100 - sw_chaos.mgmt.k-space.ee
#- 100.102.1.111 - sw_poe.mgmt.k-space.ee
#- 100.102.1.112 - sw_mgmt.mgmt.k-space.ee
- 100.102.1.114 - sw_core02.mgmt.k-space.ee
- 100.102.1.115 - sw_cyber.mgmt.k-space.ee
- 100.102.1.121 - sw_ha.mgmt.k-space.ee
- 100.102.1.131 - sw_asocial.mgmt.k-space.ee
- 100.102.1.141 - sw_kitchen.mgmt.k-space.ee
- 100.102.1.151 - sw_core01.mgmt.k-space.ee
--- ---
apiVersion: monitoring.coreos.com/v1 apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule kind: PrometheusRule
@ -36,30 +32,22 @@ spec:
groups: groups:
- name: mikrotik - name: mikrotik
rules: rules:
- alert: MikrotikBondRedundancyLost - alert: MikrotikUplinkRedundancyLost
expr: mikrotik_bond_port_active == 0 expr: mikrotik_interface_running{port=~"sfp-sfpplus[12]", instance!~"sw_core.*", instance!~"sw_mgmt.*"} == 0
for: 2m for: 0m
labels: labels:
severity: error severity: error
annotations: annotations:
summary: Switch uplink high availability lost summary: Switch uplink high availability lost
description: One of the two bonds has inactive member interface description: One of the two 10Gb optical links is malfunctioning
- alert: MikrotikLinkRateDegraded - alert: MikrotikLinkRateDegraded
expr: mikrotik_interface_link_rate_bps{interface=~"sfp-sfpplus.*"} < 10000000000 expr: mikrotik_interface_rate{port=~"sfp-sfpplus.*"} < 10000000000
for: 2m for: 0m
labels: labels:
severity: error severity: error
annotations: annotations:
summary: SFP+ link degraded summary: 10Gb link degraded
description: One of the SFP+ (10G) links is running at lower speed description: One of the 10Gb links is running at lower speed
- alert: MikrotikLinkRateDegraded
expr: mikrotik_interface_link_rate_bps{interface=~"qsfpplus.*"} < 40000000000
for: 2m
labels:
severity: error
annotations:
summary: QSFP+ link degraded
description: One of the QSFP+ (40G) links is running at lower speed
--- ---
apiVersion: apps/v1 apiVersion: apps/v1
kind: Deployment kind: Deployment
@ -75,10 +63,20 @@ spec:
metadata: metadata:
labels: labels:
app: mikrotik-exporter app: mikrotik-exporter
annotations:
co.elastic.logs/multiline.pattern: '^ '
co.elastic.logs/multiline.negate: "false"
co.elastic.logs/multiline.match: after
spec: spec:
containers: containers:
- name: mikrotik-exporter - name: mikrotik-exporter
image: mirror.gcr.io/codemowers/mikrotik-exporter:latest@sha256:895ed4a96364aa6f37aa049eb7882779529dce313360e78b01dee7d6f9b3e0bb image: mirror.gcr.io/codemowers/mikrotik-exporter:latest
env:
- name: MIKROTIK_USER
value: netpoller
envFrom:
- secretRef:
name: mikrotik-exporter
topologySpreadConstraints: topologySpreadConstraints:
- maxSkew: 1 - maxSkew: 1
topologyKey: topology.kubernetes.io/zone topologyKey: topology.kubernetes.io/zone
@ -96,13 +94,13 @@ spec:
affinity: affinity:
podAntiAffinity: podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution: requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector: - labelSelector:
matchExpressions: matchExpressions:
- key: app - key: app
operator: In operator: In
values: values:
- mikrotik-exporter - mikrotik-exporter
topologyKey: "kubernetes.io/hostname" topologyKey: "kubernetes.io/hostname"
--- ---
kind: Service kind: Service
apiVersion: v1 apiVersion: v1
@ -114,6 +112,6 @@ spec:
- name: http - name: http
port: 80 port: 80
protocol: TCP protocol: TCP
targetPort: 8728 targetPort: 3001
selector: selector:
app: mikrotik-exporter app: mikrotik-exporter

@ -33,7 +33,7 @@ spec:
groups: groups:
- name: node-exporter - name: node-exporter
rules: rules:
- alert: ZfsDegradedPool - alert: ZfsOfflinePool
expr: node_zfs_zpool_state{state!="online"} > 0 expr: node_zfs_zpool_state{state!="online"} > 0
for: 1m for: 1m
labels: labels:
@ -377,20 +377,14 @@ spec:
- name: node-exporter - name: node-exporter
args: args:
- --web.listen-address=0.0.0.0:9101 - --web.listen-address=0.0.0.0:9101
- --no-collector.bonding - --path.sysfs=/host/sys
- --no-collector.fibrechannel - --path.rootfs=/host/root
- --no-collector.infiniband
- --no-collector.nfs
- --no-collector.nfsd
- --no-collector.nvme
- --no-collector.zfs
- --no-collector.tapestats
- --no-collector.wifi - --no-collector.wifi
- --collector.filesystem.mount-points-exclude=^/(dev|proc|sys|var/lib/docker|var/lib/kubelet/pods|run)(/.+)?$ - --collector.filesystem.mount-points-exclude=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/pods/.+)($|/)
- --collector.netclass.ignored-devices=^(veth|cali|vxlan|cni|vnet|tap|lo|wg) - --collector.netclass.ignored-devices=^(veth|cali|vxlan|cni|vnet|tap|lo|wg)
- --collector.netdev.device-exclude=^(veth|cali|vxlan|cni|vnet|tap|lo|wg) - --collector.netdev.device-exclude=^(veth|cali|vxlan|cni|vnet|tap|lo|wg)
- --collector.diskstats.ignored-devices=^(sr|loop)[0-9][0-9]*$ - --collector.diskstats.ignored-devices=^(sr[0-9][0-9]*)$
image: mirror.gcr.io/prom/node-exporter:v1.9.1 image: mirror.gcr.io/prom/node-exporter:v1.8.2
resources: resources:
limits: limits:
cpu: 50m cpu: 50m
@ -399,11 +393,13 @@ spec:
cpu: 5m cpu: 5m
memory: 20Mi memory: 20Mi
volumeMounts: volumeMounts:
- name: sys - mountPath: /host/sys
mountPath: /sys mountPropagation: HostToContainer
name: sys
readOnly: true readOnly: true
- name: proc - mountPath: /host/root
mountPath: /proc mountPropagation: HostToContainer
name: root
readOnly: true readOnly: true
ports: ports:
- containerPort: 9101 - containerPort: 9101
@ -423,9 +419,9 @@ spec:
tolerations: tolerations:
- operator: Exists - operator: Exists
volumes: volumes:
- name: sys - hostPath:
hostPath:
path: /sys path: /sys
- name: proc name: sys
hostPath: - hostPath:
path: /proc path: /
name: root

@ -17,7 +17,6 @@ metadata:
name: prometheus name: prometheus
namespace: monitoring namespace: monitoring
spec: spec:
enableAdminAPI: true
topologySpreadConstraints: topologySpreadConstraints:
- maxSkew: 1 - maxSkew: 1
topologyKey: topology.kubernetes.io/zone topologyKey: topology.kubernetes.io/zone
@ -384,6 +383,7 @@ metadata:
namespace: monitoring namespace: monitoring
annotations: annotations:
traefik.ingress.kubernetes.io/router.entrypoints: websecure traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
traefik.ingress.kubernetes.io/router.middlewares: monitoring-prometheus@kubernetescrd traefik.ingress.kubernetes.io/router.middlewares: monitoring-prometheus@kubernetescrd
spec: spec:
@ -409,6 +409,7 @@ metadata:
namespace: monitoring namespace: monitoring
annotations: annotations:
traefik.ingress.kubernetes.io/router.entrypoints: websecure traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
traefik.ingress.kubernetes.io/router.middlewares: monitoring-alertmanager@kubernetescrd traefik.ingress.kubernetes.io/router.middlewares: monitoring-alertmanager@kubernetescrd
spec: spec:

@ -86,8 +86,8 @@ spec:
staticConfig: staticConfig:
static: static:
- ups-4.mgmt.k-space.ee - ups-4.mgmt.k-space.ee
- ups-6.mgmt.k-space.ee
- ups-7.mgmt.k-space.ee - ups-7.mgmt.k-space.ee
- ups-8.mgmt.k-space.ee
- ups-9.mgmt.k-space.ee - ups-9.mgmt.k-space.ee
--- ---
apiVersion: monitoring.coreos.com/v1 apiVersion: monitoring.coreos.com/v1

@ -13,7 +13,7 @@ spec:
podSpec: podSpec:
containers: containers:
- name: mariadb - name: mariadb
image: mirror.gcr.io/library/mariadb:10.9.7@sha256:198c7a5fea3d7285762042a628fe8f83f0a7ccef559605b4cc9502e65210880b image: mariadb:10.9.7@sha256:198c7a5fea3d7285762042a628fe8f83f0a7ccef559605b4cc9502e65210880b
imagePullPolicy: IfNotPresent imagePullPolicy: IfNotPresent
nodeSelector: nodeSelector:
dedicated: storage dedicated: storage

@ -29,7 +29,7 @@ spec:
spec: spec:
containers: containers:
- name: phpmyadmin - name: phpmyadmin
image: mirror.gcr.io/phpmyadmin/phpmyadmin image: phpmyadmin/phpmyadmin
ports: ports:
- name: web - name: web
containerPort: 80 containerPort: 80
@ -77,6 +77,7 @@ metadata:
annotations: annotations:
kubernetes.io/ingress.class: traefik kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
traefik.ingress.kubernetes.io/router.middlewares: mysql-clusters-phpmyadmin@kubernetescrd traefik.ingress.kubernetes.io/router.middlewares: mysql-clusters-phpmyadmin@kubernetescrd
spec: spec:

@ -14,7 +14,7 @@ spec:
podSpec: podSpec:
containers: containers:
- name: mariadb - name: mariadb
image: mirror.gcr.io/library/mariadb:10.9.7@sha256:198c7a5fea3d7285762042a628fe8f83f0a7ccef559605b4cc9502e65210880b image: mariadb:10.9.7@sha256:198c7a5fea3d7285762042a628fe8f83f0a7ccef559605b4cc9502e65210880b
imagePullPolicy: IfNotPresent imagePullPolicy: IfNotPresent
nodeSelector: nodeSelector:
dedicated: storage dedicated: storage

@ -3,18 +3,9 @@ apiVersion: storage.k8s.io/v1
kind: StorageClass kind: StorageClass
metadata: metadata:
name: mysql name: mysql
annotations: provisioner: rawfile.csi.openebs.io
kubernetes.io/description: |
Storage class for MySQL, MariaDB and similar applications that
implement high availability in application layer.
This storage class uses XFS, has no block level redundancy and
has block device level caching disabled.
provisioner: csi.proxmox.sinextra.dev
reclaimPolicy: Retain reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true allowVolumeExpansion: true
parameters: parameters:
csi.storage.k8s.io/fstype: xfs fsType: "xfs"
storage: ks-pvs
cache: none
ssd: "true"

Some files were not shown because too many files have changed in this diff.