forked from k-space/kube

Compare commits (1 commit)
6b635b6dc7 proxmox: first attempt to move to ingressroute 2022-11-04 11:39:36 +02:00
190 changed files with 32875 additions and 41643 deletions

README.md

@ -2,8 +2,21 @@
## Introduction
-This is the Kubernetes manifests of services running on k-space.ee domains.
+This is the Kubernetes manifests of services running on k-space.ee domains:
The applications are listed on https://auth2.k-space.ee for authenticated users.
- [Authelia](https://auth.k-space.ee) for authentication
- [Drone.io](https://drone.k-space.ee) for building Docker images
- [Harbor](https://harbor.k-space.ee) for hosting Docker images
- [ArgoCD](https://argocd.k-space.ee) for deploying Kubernetes manifests and
Helm charts into the cluster
- [camtiler](https://cams.k-space.ee) for cameras
- [Longhorn Dashboard](https://longhorn.k-space.ee) for administering
Longhorn storage
- [Kubernetes Dashboard](https://kubernetes-dashboard.k-space.ee/) for read-only overview
of the Kubernetes cluster
- [Wildduck Webmail](https://webmail.k-space.ee/)
Most endpoints are protected by OIDC authentication or the Authelia SSO middleware.
## Cluster access
@ -14,7 +27,7 @@ General discussion is happening in the `#kube` Slack channel.
For bootstrap access obtain `/etc/kubernetes/admin.conf` from one of the master
nodes and place it under `~/.kube/config` on your machine.
-Once Passmower is working, OIDC access for others can be enabled with
+Once Authelia is working, OIDC access for others can be enabled with
running the following on the Kubernetes masters:
```bash
@ -23,9 +36,9 @@ patch /etc/kubernetes/manifests/kube-apiserver.yaml - << EOF
- --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
- --etcd-servers=https://127.0.0.1:2379
-+ - --oidc-issuer-url=https://auth2.k-space.ee/
++ - --oidc-issuer-url=https://auth.k-space.ee
-+ - --oidc-client-id=oidc-gateway.kubelogin
++ - --oidc-client-id=kubelogin
-+ - --oidc-username-claim=sub
++ - --oidc-username-claim=preferred_username
+ - --oidc-groups-claim=groups
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
- --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
@ -64,8 +77,8 @@ users:
args:
- oidc-login
- get-token
-- --oidc-issuer-url=https://auth2.k-space.ee/
+- --oidc-issuer-url=https://auth.k-space.ee
-- --oidc-client-id=oidc-gateway.kubelogin
+- --oidc-client-id=kubelogin
- --oidc-use-pkce
- --oidc-extra-scope=profile,email,groups
- --listen-address=127.0.0.1:27890
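If the `oidc-login` plugin is missing on your workstation it can be installed via krew; a minimal sketch, assuming krew is already set up and reusing the issuer and client ID from the snippet above:

```bash
# Install the kubelogin plugin (assumes the krew plugin manager is present)
kubectl krew install oidc-login
# Trigger the browser-based login once to verify the flow end to end
kubectl oidc-login get-token \
  --oidc-issuer-url=https://auth.k-space.ee \
  --oidc-client-id=kubelogin \
  --oidc-use-pkce \
  --oidc-extra-scope=profile,email,groups
```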
@ -107,7 +120,7 @@ Our self-hosted Kubernetes stack compared to AWS based deployments:
| AWS Route53 | Bind and RFC2136 | DNS records and Let's Encrypt DNS validation |
| AWS S3 | Minio Operator | Highly available object storage |
| AWS VPC | Calico | Overlay network |
-| Dex | Passmower | ACL mapping and OIDC provider which integrates with GitHub/Samba |
+| Dex | Authelia | ACL mapping and OIDC provider which integrates with GitHub/Samba |
| GitHub Actions | Drone | Build Docker images |
| GitHub | Gitea | Source code management, issue tracking |
| GitHub OAuth2 | Samba (Active Directory compatible) | Source of truth for authentication and authorization |
@ -116,6 +129,7 @@ Our self-hosted Kubernetes stack compared to AWS based deployments:
External dependencies running as classic virtual machines:
- Samba as Authelia's source of truth
- Bind as DNS server
@ -123,13 +137,13 @@ External dependencies running as classic virtual machines:
Deploy applications via [ArgoCD](https://argocd.k-space.ee)
-We use Traefik with Passmower for Ingress.
+We use Traefik with Authelia for Ingress.
Applications where possible and where applicable should use `Remote-User`
authentication. This prevents application exposure on the public Internet.
Otherwise use OpenID Connect for authentication,
see Argo itself as an example of how that is done.
-See `camtiler/ingress.yml` for a commented Ingress example.
+See `kspace-camtiler/ingress.yml` for a commented Ingress example.
Note that we do not use IngressRoute objects because they don't
support `external-dns` out of the box.
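For reference, a minimal sketch of such a protected Ingress; hostname, namespace, service and middleware names are placeholders, the canonical commented example is the camtiler ingress referenced above:

```bash
kubectl apply -n example -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  annotations:
    kubernetes.io/ingress.class: traefik
    kubernetes.io/tls-acme: "true"
    external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
    traefik.ingress.kubernetes.io/router.entryPoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
    # SSO middleware; use whatever middleware the auth stack currently publishes
    traefik.ingress.kubernetes.io/router.middlewares: authelia-chain-k6-authelia-auth@kubernetescrd
spec:
  rules:
    - host: example.k-space.ee
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example
                port:
                  number: 80
  tls:
    - hosts:
        - "*.k-space.ee"
EOF
```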
@ -141,12 +155,27 @@ these should be handled by `tls:` section in Ingress.
## Cluster formation
-Created Ubuntu 22.04 VM-s on Proxmox with local storage.
+Create Ubuntu 20.04 VM-s on Proxmox with local storage.
Added some ARM64 workers by using Ubuntu 22.04 server on Raspberry Pi.
After machines have booted up and you can reach them via SSH:
```bash
# Enable required kernel modules
cat > /etc/modules << EOF
overlay
br_netfilter
EOF
cat /etc/modules | xargs -L 1 -t modprobe
# Finetune sysctl:
cat > /etc/sysctl.d/99-k8s.conf << EOF
net.ipv4.conf.all.accept_redirects = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system
# Disable Ubuntu caching DNS resolver
systemctl disable systemd-resolved.service
systemctl stop systemd-resolved
@ -157,16 +186,50 @@ nameserver 8.8.8.8
EOF
# Disable multipathd as Longhorn handles that itself
-systemctl mask multipathd snapd
+systemctl mask multipathd
-systemctl disable --now multipathd snapd bluetooth ModemManager hciuart wpa_supplicant packagekit
+systemctl disable multipathd
systemctl stop multipathd
# Disable Snapcraft
systemctl mask snapd
systemctl disable snapd
systemctl stop snapd
# Permit root login
sed -i -e 's/PermitRootLogin no/PermitRootLogin without-password/' /etc/ssh/sshd_config
systemctl reload ssh
-cat ~ubuntu/.ssh/authorized_keys > /root/.ssh/authorized_keys
+cat << EOF > /root/.ssh/authorized_keys
sk-ecdsa-sha2-nistp256@openssh.com AAAAInNrLWVjZHNhLXNoYTItbmlzdHAyNTZAb3BlbnNzaC5jb20AAAAIbmlzdHAyNTYAAABBBD4/e9SWYWYoNZMkkF+NirhbmHuUgjoCap42kAq0pLIXFwIqgVTCre03VPoChIwBClc8RspLKqr5W3j0fG8QwnQAAAAEc3NoOg== lauri@lauri-x13
EOF
userdel -f ubuntu
-apt-get install -yqq linux-image-generic
+apt-get remove -yq cloud-init
apt-get remove -yq cloud-init linux-image-*-kvm
```
Install packages, for Raspbian set `OS=Debian_11`
```bash
OS=xUbuntu_20.04
VERSION=1.23
cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /
EOF
cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /
EOF
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers-cri-o.gpg add -
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -yqq apt-transport-https curl cri-o cri-o-runc kubelet=1.23.5-00 kubectl=1.23.5-00 kubeadm=1.23.5-00
sudo systemctl daemon-reload
sudo systemctl enable crio --now
apt-mark hold kubelet kubeadm kubectl
sed -i -e 's/unqualified-search-registries = .*/unqualified-search-registries = ["docker.io"]/' /etc/containers/registries.conf
```
On master:
@ -177,16 +240,6 @@ kubeadm init --token-ttl=120m --pod-network-cidr=10.244.0.0/16 --control-plane-e
For the `kubeadm join` command specify FQDN via `--node-name $(hostname -f)`.
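A sketch of the resulting join command on a worker; token and CA hash are placeholders, `kubeadm token create --print-join-command` on a master prints the real values:

```bash
kubeadm join <control-plane-endpoint>:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --node-name $(hostname -f)
```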
Set AZ labels:
```
for j in $(seq 1 9); do
for t in master mon worker storage; do
kubectl label nodes ${t}${j}.kube.k-space.ee topology.kubernetes.io/zone=node${j}
done
done
```
After forming the cluster add taints:
```bash
@ -194,7 +247,7 @@ for j in $(seq 1 9); do
kubectl label nodes worker${j}.kube.k-space.ee node-role.kubernetes.io/worker=''
done
-for j in $(seq 1 4); do
+for j in $(seq 1 3); do
kubectl taint nodes mon${j}.kube.k-space.ee dedicated=monitoring:NoSchedule
kubectl label nodes mon${j}.kube.k-space.ee dedicated=monitoring
done
@ -205,26 +258,15 @@ for j in $(seq 1 4); do
done
```
On Raspberry Pi you need to take additional steps:
* Manually enable cgroups by appending
`cgroup_memory=1 cgroup_enable=memory` to `/boot/cmdline.txt`,
* Disable swap with `swapoff -a; apt-get purge -y dphys-swapfile`
* For mounting Longhorn volumes on Raspbian install `open-iscsi`
For `arm64` nodes add suitable taint to prevent scheduling non-multiarch images on them:
```bash
kubectl taint nodes worker9.kube.k-space.ee arch=arm64:NoSchedule
```
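Workloads that do ship multi-arch images can opt back in by tolerating that taint; an illustrative sketch with placeholder names:

```bash
kubectl -n example patch deployment example --type merge \
  -p '{"spec":{"template":{"spec":{"tolerations":[{"key":"arch","operator":"Equal","value":"arm64","effect":"NoSchedule"}]}}}}'
```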
For door controllers:
```
for j in ground front back; do
kubectl taint nodes door-${j}.kube.k-space.ee dedicated=door:NoSchedule
kubectl label nodes door-${j}.kube.k-space.ee dedicated=door
kubectl taint nodes door-${j}.kube.k-space.ee arch=arm64:NoSchedule
done
```
To reduce wear on storage:
```
echo StandardOutput=null >> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
systemctl restart kubelet
```


@ -1,76 +0,0 @@
- name: Setup primary nameserver
hosts: ns1.k-space.ee
tasks:
- name: Make sure bind9 is installed
ansible.builtin.apt:
name: bind9
state: present
- name: Configure Bind
register: bind
copy:
dest: /etc/bind/named.conf
content: |
# This file is managed by Ansible
# https://git.k-space.ee/k-space/kube/src/branch/master/ansible-bind-primary.yml
# Do NOT modify manually
include "/etc/bind/named.conf.local";
include "/etc/bind/readwrite.key";
include "/etc/bind/readonly.key";
options {
directory "/var/cache/bind";
version "";
listen-on { any; };
listen-on-v6 { any; };
pid-file "/var/run/named/named.pid";
notify explicit; also-notify { 172.20.53.1; 172.20.53.2; 172.20.53.3; };
allow-recursion { none; };
recursion no;
check-names master ignore;
dnssec-validation no;
auth-nxdomain no;
};
# https://kb.isc.org/docs/aa-00723
acl allowed {
172.20.3.0/24;
172.20.4.0/24;
};
acl rejected { !allowed; any; };
zone "." {
type hint;
file "/var/lib/bind/db.root";
};
zone "k-space.ee" {
type master;
file "/var/lib/bind/db.k-space.ee";
allow-update { !rejected; key readwrite; };
allow-transfer { !rejected; key readonly; key readwrite; };
};
zone "k6.ee" {
type master;
file "/var/lib/bind/db.k6.ee";
allow-update { !rejected; key readwrite; };
allow-transfer { !rejected; key readonly; key readwrite; };
};
zone "kspace.ee" {
type master;
file "/var/lib/bind/db.kspace.ee";
allow-update { !rejected; key readwrite; };
allow-transfer { !rejected; key readonly; key readwrite; };
};
- name: Check Bind config
ansible.builtin.shell: "named-checkconf"
- name: Reload Bind config
service:
name: bind9
state: reloaded
when: bind.changed


@ -1,63 +0,0 @@
# ansible doors -m shell -a "ctr image pull harbor.k-space.ee/k-space/mjpg-streamer:latest"
# journalctl -u mjpg_streamer@video0.service -f
- name: Setup doors
hosts: doors
tasks:
- name: Make sure containerd is installed
ansible.builtin.apt:
name: containerd
state: present
- name: Copy systemd service for Doorboy controller
copy:
dest: /etc/systemd/system/godoor.service
content: |
[Unit]
Description=Doorboy service
Documentation=https://git.k-space.ee/k-space/godoor
After=network.target
[Service]
Environment=IMAGE=harbor.k-space.ee/k-space/godoor:latest
ExecStartPre=-ctr task kill --signal=9 %N
ExecStartPre=-ctr task rm %N
ExecStartPre=-ctr c rm %N
ExecStartPre=-ctr image pull $IMAGE
ExecStart=ctr run --rm --pid-file=/run/%N.pid --privileged --read-only --env-file=/etc/godoor --env=KDOORPI_API_ALLOWED=https://doorboy-proxy.k-space.ee/allowed --env=KDOORPI_API_LONGPOLL=https://doorboy-proxy.k-space.ee/longpoll --env=KDOORPI_API_SWIPE=https://doorboy-proxy.k-space.ee/swipe --env=KDOORPI_DOOR=%H --net-host --net-host --cwd /app $IMAGE %N /godoor
ExecStopPost=ctr task rm %N
ExecStopPost=ctr c rm %N
Restart=always
[Install]
WantedBy=multi-user.target
- name: Enable Doorboy controller
ansible.builtin.systemd:
state: restarted
daemon_reload: yes
name: godoor.service
- name: Copy systemd service for mjpg-streamer
copy:
dest: /etc/systemd/system/mjpg_streamer@.service
content: |
[Unit]
Description=A server for streaming Motion-JPEG from a video capture device
After=network.target
ConditionPathExists=/dev/%I
[Service]
Environment=IMAGE=harbor.k-space.ee/k-space/mjpg-streamer:latest
StandardOutput=tty
Type=forking
ExecStartPre=-ctr task kill --signal=9 %p_%i
ExecStartPre=-ctr task rm %p_%i
ExecStartPre=-ctr c rm %p_%i
ExecStartPre=-ctr image pull $IMAGE
ExecStart=ctr run --tty -d --rm --pid-file=/run/%i.pid --privileged --read-only --net-host $IMAGE %p_%i /usr/local/bin/mjpg_streamer -i 'input_uvc.so -d /dev/%I -r 1280x720 -f 10' -o 'output_http.so -w /usr/share/mjpg_streamer/www'
ExecStopPost=ctr task rm %p_%i
ExecStopPost=ctr c rm %p_%i
PIDFile=/run/%i.pid
[Install]
WantedBy=multi-user.target
- name: Enable mjpg-streamer
ansible.builtin.systemd:
state: restarted
daemon_reload: yes
name: mjpg_streamer@video0.service


@ -1,81 +0,0 @@
---
- name: Reconfigure graceful shutdown for kubelet
hosts: kubernetes
tasks:
- name: Reconfigure shutdownGracePeriod
ansible.builtin.lineinfile:
path: /var/lib/kubelet/config.yaml
regexp: '^shutdownGracePeriod:'
line: 'shutdownGracePeriod: 5m'
- name: Reconfigure shutdownGracePeriodCriticalPods
ansible.builtin.lineinfile:
path: /var/lib/kubelet/config.yaml
regexp: '^shutdownGracePeriodCriticalPods:'
line: 'shutdownGracePeriodCriticalPods: 5m'
- name: Work around unattended-upgrades
ansible.builtin.lineinfile:
path: /lib/systemd/logind.conf.d/unattended-upgrades-logind-maxdelay.conf
regexp: '^InhibitDelayMaxSec='
line: 'InhibitDelayMaxSec=5m0s'
- name: Pin kube components
hosts: kubernetes
tasks:
- name: Pin packages
loop:
- kubeadm
- kubectl
- kubelet
ansible.builtin.copy:
dest: "/etc/apt/preferences.d/{{ item }}"
content: |
Package: {{ item }}
Pin: version 1.26.*
Pin-Priority: 1001
- name: Reset /etc/containers/registries.conf
hosts: kubernetes
tasks:
- name: Copy /etc/containers/registries.conf
ansible.builtin.copy:
content: "unqualified-search-registries = [\"docker.io\"]\n"
dest: /etc/containers/registries.conf
register: registries
- name: Restart CRI-O
service:
name: cri-o
state: restarted
when: registries.changed
- name: Reset /etc/modules
hosts: kubernetes
tasks:
- name: Copy /etc/modules
ansible.builtin.copy:
content: |
overlay
br_netfilter
dest: /etc/modules
register: kernel_modules
- name: Load kernel modules
ansible.builtin.shell: "cat /etc/modules | xargs -L 1 -t modprobe"
when: kernel_modules.changed
- name: Reset /etc/sysctl.d/99-k8s.conf
hosts: kubernetes
tasks:
- name: Copy /etc/sysctl.d/99-k8s.conf
ansible.builtin.copy:
content: |
net.ipv4.conf.all.accept_redirects = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
vm.max_map_count = 524288
fs.inotify.max_user_instances = 1280
fs.inotify.max_user_watches = 655360
dest: /etc/sysctl.d/99-k8s.conf
register: sysctl
- name: Reload sysctl config
ansible.builtin.shell: "sysctl --system"
when: sysctl.changed


@ -1,12 +0,0 @@
[defaults]
ansible_managed = This file is managed by Ansible, manual changes will be overwritten.
inventory = inventory.yml
nocows = 1
pipelining = True
pattern =
deprecation_warnings = False
fact_caching = jsonfile
fact_caching_connection = ~/.ansible/k-space-fact-cache
[ssh_connection]
ssh_args = -F ssh_config


@ -1,9 +1,7 @@
# Workflow
Most applications in our Kubernetes cluster are managed by ArgoCD.
Most notably operators are NOT managed by ArgoCD.
Adding to `applications/`: `kubectl apply -f newapp.yaml`
# Deployment
@ -13,15 +11,16 @@ To deploy ArgoCD:
helm repo add argo-cd https://argoproj.github.io/argo-helm
kubectl create secret -n argocd generic argocd-secret # Initialize empty secret for sessions
helm template -n argocd --release-name k6 argo-cd/argo-cd --include-crds -f values.yaml > argocd.yml
-kubectl apply -f argocd.yml -f application-extras.yml -n argocd
+kubectl apply -f argocd.yml -n argocd
kubectl -n argocd rollout restart deployment/k6-argocd-redis
kubectl -n argocd rollout restart deployment/k6-argocd-repo-server
kubectl -n argocd rollout restart deployment/k6-argocd-server
kubectl -n argocd rollout restart deployment/k6-argocd-notifications-controller
kubectl -n argocd rollout restart statefulset/k6-argocd-application-controller
kubectl label -n argocd secret oidc-client-argocd-owner-secrets app.kubernetes.io/part-of=argocd
```
Note: Refer to Authelia README for OIDC secret setup
# Setting up Git secrets
@ -50,32 +49,3 @@ rm -fv id_ecdsa
Have a Gitea admin reset the password for user `argocd` and log in with that account.
Add the SSH key for user `argocd` from file `id_ecdsa.pub`.
Delete any other SSH keys associated with Gitea user `argocd`.
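For reference, the key pair these steps refer to can be (re)generated and registered on the ArgoCD side roughly as follows; the secret name is an assumption, only the public half goes to Gitea:

```
ssh-keygen -t ecdsa -f id_ecdsa -N '' -C argocd@k-space.ee
kubectl -n argocd create secret generic gitea-kube \
  --from-literal=type=git \
  --from-literal=url=git@git.k-space.ee:k-space/kube.git \
  --from-file=sshPrivateKey=id_ecdsa
kubectl -n argocd label secret gitea-kube argocd.argoproj.io/secret-type=repository
rm -fv id_ecdsa
```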
# Managing applications
To update apps:
```
for j in asterisk bind camtiler drone drone-execution etherpad freescout gitea grafana hackerspace nextcloud nyancat rosdump traefik wiki wildduck woodpecker; do
cat << EOF >> applications/$j.yaml
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: $j
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: $j
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: $j
syncPolicy: {}
EOF
done
find applications -name "*.yaml" -exec kubectl apply -n argocd -f {} \;
```


@ -1,37 +0,0 @@
---
apiVersion: codemowers.io/v1alpha1
kind: OIDCGWClient
metadata:
name: argocd
namespace: argocd
spec:
displayName: Argo CD
uri: https://argocd.k-space.ee
redirectUris:
- https://argocd.k-space.ee/auth/callback
allowedGroups:
- k-space:kubernetes:admins
grantTypes:
- authorization_code
- refresh_token
responseTypes:
- code
availableScopes:
- openid
- profile
pkce: false
---
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
namespace: argocd
name: k-space.ee
spec:
clusterResourceWhitelist:
- group: '*'
kind: '*'
destinations:
- namespace: '*'
server: '*'
sourceRepos:
- '*'


@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: authelia
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: authelia
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: authelia
syncPolicy:
syncOptions:
- CreateNamespace=true


@ -1,11 +1,10 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: camtiler
namespace: argocd
spec:
-project: k-space.ee
+project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: camtiler
@ -13,4 +12,6 @@ spec:
destination:
server: 'https://kubernetes.default.svc'
namespace: camtiler
-syncPolicy: {}
+syncPolicy:
syncOptions:
- CreateNamespace=true


@ -1,11 +1,10 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: drone-execution
namespace: argocd
spec:
-project: k-space.ee
+project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: drone-execution
@ -13,4 +12,6 @@ spec:
destination:
server: 'https://kubernetes.default.svc'
namespace: drone-execution
-syncPolicy: {}
+syncPolicy:
syncOptions:
- CreateNamespace=true


@ -1,11 +1,10 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: drone
namespace: argocd
spec:
-project: k-space.ee
+project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: drone
@ -13,4 +12,6 @@ spec:
destination:
server: 'https://kubernetes.default.svc'
namespace: drone
-syncPolicy: {}
+syncPolicy:
syncOptions:
- CreateNamespace=true


@ -0,0 +1,22 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: elastic-system
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: elastic-system
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: elastic-system
syncPolicy:
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- group: admissionregistration.k8s.io
kind: ValidatingWebhookConfiguration
jqPathExpressions:
- '.webhooks[]?.clientConfig.caBundle'


@ -1,11 +1,10 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: etherpad
namespace: argocd
spec:
-project: k-space.ee
+project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: etherpad
@ -13,4 +12,6 @@ spec:
destination:
server: 'https://kubernetes.default.svc'
namespace: etherpad
-syncPolicy: {}
+syncPolicy:
syncOptions:
- CreateNamespace=true


@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: external-dns
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: external-dns
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: external-dns
syncPolicy:
syncOptions:
- CreateNamespace=true


@ -1,16 +0,0 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: freescout
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: freescout
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: freescout
syncPolicy: {}


@ -1,16 +0,0 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: gitea
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: gitea
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: gitea
syncPolicy: {}


@ -1,11 +1,10 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: grafana
namespace: argocd
spec:
-project: k-space.ee
+project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: grafana
@ -13,4 +12,6 @@ spec:
destination:
server: 'https://kubernetes.default.svc'
namespace: grafana
-syncPolicy: {}
+syncPolicy:
syncOptions:
- CreateNamespace=true


@ -1,16 +0,0 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: hackerspace
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: hackerspace
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: hackerspace
syncPolicy: {}


@ -1,16 +1,17 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
-name: bind
+name: harbor
namespace: argocd
spec:
-project: k-space.ee
+project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
-path: bind
+path: harbor
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
-namespace: bind
+namespace: harbor
-syncPolicy: {}
+syncPolicy:
syncOptions:
- CreateNamespace=true


@ -1,17 +1,17 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
-name: whoami-oidc
+name: keel
namespace: argocd
spec:
-project: k-space.ee
+project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
-path: whoami-oidc
+path: keel
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
-namespace: whoami-oidc
+namespace: keel
syncPolicy:
-automated: {}
+syncOptions:
- CreateNamespace=true


@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: kubernetes-dashboard
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: kubernetes-dashboard
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: kubernetes-dashboard
syncPolicy:
syncOptions:
- CreateNamespace=true


@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: logging
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: logging
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: logging
syncPolicy:
syncOptions:
- CreateNamespace=true


@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: members
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube-members.git'
path: .
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: members
syncPolicy:
syncOptions:
- CreateNamespace=true


@ -0,0 +1,22 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: metallb-system
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: metallb-system
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: metallb-system
syncPolicy:
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- group: apiextensions.k8s.io
kind: CustomResourceDefinition
jqPathExpressions:
- '.spec.conversion.webhook.clientConfig.caBundle'


@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: mysql-operator
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: mysql-operator
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: mysql-operator
syncPolicy:
syncOptions:
- CreateNamespace=true


@ -1,16 +0,0 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: nextcloud
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: nextcloud
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: nextcloud
syncPolicy: {}


@ -1,16 +0,0 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: nyancat
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: nyancat
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: nyancat
syncPolicy: {}


@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: phpmyadmin
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: phpmyadmin
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: phpmyadmin
syncPolicy:
syncOptions:
- CreateNamespace=true


@ -1,16 +1,14 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
-name: asterisk
+name: prometheus-operator
namespace: argocd
spec:
-project: k-space.ee
+project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
-path: asterisk
+path: prometheus-operator
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
-namespace: asterisk
+namespace: prometheus-operator
syncPolicy: {}


@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: reloader
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: reloader
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: reloader
syncPolicy:
syncOptions:
- CreateNamespace=true


@ -1,11 +1,10 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: rosdump
namespace: argocd
spec:
-project: k-space.ee
+project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: rosdump
@ -13,4 +12,6 @@ spec:
destination:
server: 'https://kubernetes.default.svc'
namespace: rosdump
-syncPolicy: {}
+syncPolicy:
syncOptions:
- CreateNamespace=true


@ -1,16 +0,0 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: traefik
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: traefik
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: traefik
syncPolicy: {}


@ -1,16 +0,0 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: wiki
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: wiki
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: wiki
syncPolicy: {}


@ -1,11 +1,10 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: wildduck
namespace: argocd
spec:
-project: k-space.ee
+project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: wildduck
@ -13,4 +12,6 @@ spec:
destination:
server: 'https://kubernetes.default.svc'
namespace: wildduck
-syncPolicy: {}
+syncPolicy:
syncOptions:
- CreateNamespace=true


@ -1,16 +0,0 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: woodpecker
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: woodpecker
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: woodpecker
syncPolicy: {}


@ -1,7 +1,7 @@
global:
logLevel: warn
domain: argocd.k-space.ee
# We use Authelia OIDC instead of Dex
dex:
enabled: false
@ -11,6 +11,8 @@ redis-ha:
server:
# HTTPS is implemented by Traefik
extraArgs:
- --insecure
ingress:
enabled: true
annotations:
@ -22,8 +24,25 @@ server:
tls:
- hosts:
- "*.k-space.ee"
configEnabled: true
-configfucked:
+config:
admin.enabled: "false"
url: https://argocd.k-space.ee
application.instanceLabelKey: argocd.argoproj.io/instance
oidc.config: |
name: Authelia
issuer: https://auth.k-space.ee
clientID: argocd
cliClientID: argocd
clientSecret: $oidc.config.clientSecret
requestedIDTokenClaims:
groups:
essential: true
requestedScopes:
- openid
- profile
- email
- groups
resource.customizations: |
# https://github.com/argoproj/argo-cd/issues/1704
networking.k8s.io/Ingress:
@ -31,11 +50,28 @@ server:
hs = {}
hs.status = "Healthy"
return hs
apiextensions.k8s.io/CustomResourceDefinition:
ignoreDifferences: |
jsonPointers:
- "x-kubernetes-validations"
# Members of ArgoCD Admins group in AD/Samba are allowed to administer Argo
rbacConfig:
policy.default: role:readonly
policy.csv: |
# Map AD groups to ArgoCD roles
g, Developers, role:developers
g, ArgoCD Admins, role:admin
# Allow developers to read objects
p, role:developers, applications, get, */*, allow
p, role:developers, certificates, get, *, allow
p, role:developers, clusters, get, *, allow
p, role:developers, repositories, get, *, allow
p, role:developers, projects, get, *, allow
p, role:developers, accounts, get, *, allow
p, role:developers, gpgkeys, get, *, allow
p, role:developers, logs, get, */*, allow
p, role:developers, applications, restart, default/camtiler, allow
p, role:developers, applications, override, default/camtiler, allow
p, role:developers, applications, action/apps/Deployment/restart, default/camtiler, allow
p, role:developers, applications, sync, default/camtiler, allow
p, role:developers, applications, update, default/camtiler, allow
metrics:
enabled: true
@ -57,49 +93,11 @@ controller:
enabled: true
configs:
params:
server.insecure: true
rbac:
policy.default: role:admin
policy.csv: |
# Map AD groups to ArgoCD roles
g, Developers, role:developers
g, ArgoCD Admins, role:admin
# Allow developers to read objects
p, role:developers, applications, get, */*, allow
p, role:developers, certificates, get, *, allow
p, role:developers, clusters, get, *, allow
p, role:developers, repositories, get, *, allow
p, role:developers, projects, get, *, allow
p, role:developers, accounts, get, *, allow
p, role:developers, gpgkeys, get, *, allow
p, role:developers, logs, get, */*, allow
p, role:developers, applications, restart, default/camtiler, allow
p, role:developers, applications, override, default/camtiler, allow
p, role:developers, applications, action/apps/Deployment/restart, default/camtiler, allow
p, role:developers, applications, sync, default/camtiler, allow
p, role:developers, applications, update, default/camtiler, allow
cm:
admin.enabled: "false"
oidc.config: |
name: OpenID Connect
issuer: https://auth2.k-space.ee/
clientID: $oidc-client-argocd-owner-secrets:OIDC_CLIENT_ID
cliClientID: $oidc-client-argocd-owner-secrets:OIDC_CLIENT_ID
clientSecret: $oidc-client-argocd-owner-secrets:OIDC_CLIENT_SECRET
requestedIDTokenClaims:
groups:
essential: true
requestedScopes:
- openid
- profile
- email
- groups
secret:
createSecret: false
-ssh:
+knownHosts:
-knownHosts: |
+data:
ssh_known_hosts: |
# Copy-pasted from `ssh-keyscan git.k-space.ee`
git.k-space.ee ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCF1+/TDRXuGwsu4SZQQwQuJusb7W1OciGAQp/ZbTTvKD+0p7fV6dXyUlWjdFmITrFNYDreDnMiOS+FvE62d2Z0=
git.k-space.ee ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDsLyRuubdIUnTKEqOipu+9x+FforrC8+oxulVrl0ECgdIRBQnLQXIspTNwuC3MKJ4z+DPbndSt8zdN33xWys8UNEs3V5/W6zsaW20tKiaX75WK5eOL4lIDJi/+E97+c0aZBXamhxTrgkRVJ5fcAkY6C5cKEmVM5tlke3v3ihLq78/LpJYv+P947NdnthYE2oc+XGp/elZ0LNfWRPnd///+ykbwWirvQm+iiDz7PMVKkb+Q7l3vw4+zneKJWAyFNrm+aewyJV9lFZZJuHliwlHGTriSf6zhMAWyJzvYqDAN6iT5yi9KGKw60J6vj2GLuK4ULVblTyP9k9+3iELKSWW5

asterisk/.gitignore

@ -1 +0,0 @@
conf


@ -1,11 +0,0 @@
# Asterisk
Asterisk is used as the VoIP/PBX service exposed at voip.k-space.ee.
This application is managed by [ArgoCD](https://argocd.k-space.ee/applications/argocd/asterisk)
Should ArgoCD be down manifests here can be applied with:
```
kubectl apply -n asterisk -f application.yaml
```


@ -1,124 +0,0 @@
---
apiVersion: v1
kind: Service
metadata:
name: asterisk
annotations:
external-dns.alpha.kubernetes.io/hostname: voip.k-space.ee
spec:
type: LoadBalancer
externalTrafficPolicy: Local
selector:
app: asterisk
ports:
- name: asterisk
protocol: UDP
port: 5060
- name: sip-data-10000
protocol: UDP
port: 10000
- name: sip-data-10001
protocol: UDP
port: 10001
- name: sip-data-10002
protocol: UDP
port: 10002
- name: sip-data-10003
protocol: UDP
port: 10003
- name: sip-data-10004
protocol: UDP
port: 10004
- name: sip-data-10005
protocol: UDP
port: 10005
- name: sip-data-10006
protocol: UDP
port: 10006
- name: sip-data-10007
protocol: UDP
port: 10007
- name: sip-data-10008
protocol: UDP
port: 10008
- name: sip-data-10009
protocol: UDP
port: 10009
- name: sip-data-10010
protocol: UDP
port: 10010
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: asterisk
labels:
app: asterisk
spec:
selector:
matchLabels:
app: asterisk
replicas: 1
template:
metadata:
labels:
app: asterisk
spec:
containers:
- name: asterisk
image: harbor.k-space.ee/k-space/asterisk
command:
- /usr/sbin/asterisk
args:
- -TWBpvvvdddf
volumeMounts:
- name: config
mountPath: /etc/asterisk
ports:
- containerPort: 8088
name: metrics
volumes:
- name: config
secret:
secretName: asterisk-secrets
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: asterisk
spec:
selector:
matchLabels:
app: asterisk
podMetricsEndpoints:
- port: metrics
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: asterisk
spec:
groups:
- name: asterisk
rules:
- alert: AsteriskPhoneNotRegistered
expr: asterisk_endpoints_state{resource=~"1.*"} < 2
for: 5m
labels:
severity: critical
annotations:
summary: "{{ $labels.resource }} is not registered."
- alert: AsteriskOutboundNumberNotRegistered
expr: asterisk_pjsip_outbound_registration_status == 0
for: 5m
labels:
severity: critical
annotations:
summary: "{{ $labels.username }} is not registered with provider."
- alert: AsteriskCallsPerMinuteLimitExceed
expr: asterisk_channels_duration_seconds > 10*60
for: 20m
labels:
severity: warning
annotations:
summary: "Call at channel {{ $labels.name }} is taking longer than 10m."


@ -1,45 +0,0 @@
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: asterisk
spec:
podSelector:
matchLabels:
app: asterisk
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
- from:
- ipBlock:
cidr: 100.101.0.0/16
- from:
- ipBlock:
cidr: 100.102.0.0/16
- from:
- ipBlock:
cidr: 81.90.125.224/32 # Lauri home
- from:
- ipBlock:
cidr: 172.20.8.241/32 # Erki A
- from:
- ipBlock:
cidr: 195.222.16.36/32 # Elisa SIP
- from:
- ipBlock:
cidr: 195.222.16.38/32 # Elisa SIP
egress:
- to:
- ipBlock:
cidr: 195.222.16.36/32 # Elisa SIP
- to:
- ipBlock:
cidr: 195.222.16.38/32 # Elisa SIP

authelia/.gitignore

@ -0,0 +1,2 @@
application-secrets.y*ml
oidc-secrets.y*ml

authelia/README.md

@ -0,0 +1,171 @@
# Authelia
## Background
Authelia works in conjunction with Traefik to provide SSO with
credentials stored in Samba (Active Directory compatible) directory tree.
Samba resides outside the Kubernetes cluster as it's difficult to containerize
while keeping it usable from outside the cluster due to Samba's networking.
The MariaDB instance is used to store MFA tokens.
KeyDB is used to store session info.
## Deployment
Inspect changes with `git diff` and proceed to deploy:
```
kubectl apply -n authelia -f application.yml
kubectl create secret generic -n authelia mysql-secrets \
--from-literal=rootPassword=$(cat /dev/urandom | base64 | head -c 30)
kubectl create secret generic -n authelia mariadb-secrets \
--from-literal=MYSQL_ROOT_PASSWORD=$(cat /dev/urandom | base64 | head -c 30) \
--from-literal=MYSQL_PASSWORD=$(cat /dev/urandom | base64 | head -c 30)
kubectl -n authelia rollout restart deployment/authelia
```
To change secrets create `secret.yml`:
```
---
apiVersion: v1
kind: Secret
type: Opaque
metadata:
name: application-secrets
data:
JWT_TOKEN: ...
SESSION_ENCRYPTION_KEY: ...
STORAGE_PASSWORD: ...
STORAGE_ENCRYPTION_KEY: ...
LDAP_PASSWORD: ...
STORAGE_PASSWORD: ...
SMTP_PASSWORD: ...
```
Apply with:
```
kubectl apply -n authelia -f application-secrets.yml
kubectl annotate -n authelia secret application-secrets reloader.stakater.com/match=true
```
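Alternatively the same secret can be created imperatively; a sketch using the key names listed above (LDAP and SMTP passwords come from the respective external services and cannot be generated):

```
kubectl -n authelia create secret generic application-secrets \
  --from-literal=JWT_TOKEN=$(cat /dev/urandom | base64 | head -c 30) \
  --from-literal=SESSION_ENCRYPTION_KEY=$(cat /dev/urandom | base64 | head -c 30) \
  --from-literal=STORAGE_ENCRYPTION_KEY=$(cat /dev/urandom | base64 | head -c 30) \
  --from-literal=STORAGE_PASSWORD=$(cat /dev/urandom | base64 | head -c 30) \
  --from-literal=LDAP_PASSWORD=... \
  --from-literal=SMTP_PASSWORD=...
kubectl annotate -n authelia secret application-secrets reloader.stakater.com/match=true
```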
## OIDC secrets
OIDC secrets are separated from the main configuration until
Authelia adds CRDs for these.
Generally speaking, for untrusted applications, that is anything running
outside the Kubernetes cluster, e.g. web browser based (JS) clients and
local command line clients, one
should use `public: true` and omit `secret: ...`.
Populate `oidc-secrets.yml` with approximately the following:
```
identity_providers:
oidc:
clients:
- id: kubelogin
description: Kubernetes cluster
secret: ...
authorization_policy: two_factor
redirect_uris:
- http://localhost:27890
scopes:
- openid
- groups
- email
- profile
- id: proxmox
description: Proxmox Virtual Environment
secret: ...
authorization_policy: two_factor
redirect_uris:
- https://pve.k-space.ee
scopes:
- openid
- groups
- email
- profile
- id: argocd
description: ArgoCD
secret: ...
authorization_policy: two_factor
redirect_uris:
- https://argocd.k-space.ee/auth/callback
scopes:
- openid
- groups
- email
- profile
- id: harbor
description: Harbor
secret: ...
authorization_policy: two_factor
redirect_uris:
- https://harbor.k-space.ee/c/oidc/callback
scopes:
- openid
- groups
- email
- profile
- id: gitea
description: Gitea
secret: ...
authorization_policy: one_factor
redirect_uris:
- https://git.k-space.ee/user/oauth2/authelia/callback
scopes:
- openid
- profile
- email
- groups
grant_types:
- refresh_token
- authorization_code
response_types:
- code
userinfo_signing_algorithm: none
- id: grafana
description: Grafana
secret: ...
authorization_policy: one_factor
redirect_uris:
- https://grafana.k-space.ee/login/generic_oauth
scopes:
- openid
- groups
- email
- profile
```
To upload the file to Kubernetes secrets:
```
kubectl -n authelia delete secret oidc-secrets
kubectl -n authelia create secret generic oidc-secrets \
--from-file=oidc-secrets.yml=oidc-secrets.yml
kubectl annotate -n authelia secret oidc-secrets reloader.stakater.com/match=true
kubectl -n authelia rollout restart deployment/authelia
```
Synchronize OIDC secrets:
```
kubectl -n argocd delete secret argocd-secret
kubectl -n argocd create secret generic argocd-secret \
--from-literal=server.secretkey=$(cat /dev/urandom | base64 | head -c 30) \
--from-literal=oidc.config.clientSecret=$( \
kubectl get secret -n authelia oidc-secrets -o json \
| jq '.data."oidc-secrets.yml"' -r | base64 -d | yq -o json \
| jq '.identity_providers.oidc.clients[] | select(.id == "argocd") | .secret' -r)
kubectl -n grafana delete secret oidc-secret
kubectl -n grafana create secret generic oidc-secret \
--from-literal=GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET=$( \
kubectl get secret -n authelia oidc-secrets -o json \
| jq '.data."oidc-secrets.yml"' -r | base64 -d | yq -o json \
| jq '.identity_providers.oidc.clients[] | select(.id == "grafana") | .secret' -r)
```

authelia/application.yml

@ -0,0 +1,414 @@
---
apiVersion: v1
kind: Secret
type: Opaque
metadata:
name: authelia-certificates
labels:
app.kubernetes.io/name: authelia
data:
ldaps.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZJekNDQXd1Z0F3SUJBZ0lVRzNaYnI0MGVVMlRHak1Lek5XaDhOTDJkRDRZd0RRWUpLb1pJaHZjTkFRRUwKQlFBd0lURWZNQjBHQTFVRUF3d1dVMkZ0WW1FZ1lYUWdZV1F1YXkxemNHRmpaUzVsWlRBZUZ3MHlNVEV5TVRRdwpOekk0TlRGYUZ3MHlOakV5TVRNd056STROVEZhTUNFeEh6QWRCZ05WQkFNTUZsTmhiV0poSUdGMElHRmtMbXN0CmMzQmhZMlV1WldVd2dnSWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUNEd0F3Z2dJS0FvSUNBUURub3hEZFlicjAKVFJHOEErdk0xT0I1Rzg4Z05YL1pXeFNLb0VaM2p0ekF0NEc3blV0aUhoVzI1cUhEeXZGVGEzTzJiMUFVSEhzbwpVQXpVTWNVV1FRb3J2RjF4L1VsYitpcnk0QkxFTklaVTdYMVpxb2ZKYXgwZTcrbit1YVM3R015dnB4VXliNGlYCkd3djdZZEh5SmM4WjZROHd2MTdNV2F2ejNaOE5CWFdoeG1xc3ljTlphVkl2S1lNRVpGazNUTnA3T20vSTFpdkYKWDJuNVNtb2d2NmdBVmpVODhSeWc2NlRFVStiaGY5QWdiU0VxWjhMaVd6c20xdHc0WnJXMDVVK25JVjRzTHdlaQp2SXppblFMYmFMTkc2ZUl0cUtQZGVsWWhRNHlCeHM3QXpTOCtieVVBZk9jRktzUTI5alFVdUxNbE1pUmt6MjV5Cnc5UUZxSGVuRjNIYXJqU1JTL3ZZV3J3K0RNbmo2Tit3QVdtd21SR3NzVmxPMjFSLzAzNThBK0h5VzhyLzlsTm8KV1FoMmt3VGRPdjdxMzFwRmZQQUhHUkFITnZUN0dRKzVCeFFjdG83cG1GQ2t2OTdpbmhiZG50d2ViSmM1VWI3NQpBeHNWVC9uNk9aTjJSU09NV0RKY1pjVkpXYjQxdTNTL2lBVHlvbDBuOEZMRlRRZm9xdXdvVkQ1UnpwU0NsVm50Cjd1eENyaGNsYXhTYnhUUDhqa29ERXQzc1NycWoySm5PNlhtQ3R2VlZkMmQvWVZQQ21qQm54TWc1bld1WEwwZTgKNkh3MTd5TGtYeFgzVERkdjF2VThvYTdpTmZyNmc3Vlcrd2ZsUkJoVW5WRUluNXZEdm80STVSdWRXaEJxcHN6VQo3bGQrUDVjZE5GWEdjUlRQdFFlbXkxUllKMG5ZejkybGtRSURBUUFCbzFNd1VUQWRCZ05WSFE0RUZnUVVjZ1JrCnZ4U3V1QnNFaktzbXQvN3dpRHIxbHVRd0h3WURWUjBqQkJnd0ZvQVVjZ1JrdnhTdXVCc0VqS3NtdC83d2lEcjEKbHVRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBTkJna3Foa2lHOXcwQkFRc0ZBQU9DQWdFQVNlNXM1aU04QjQ2agp6bXZMOUQ4dUJrQ0VIOW9mMnc1VFluL1NPZkFRVnhBOGxBYndORitlWmgyakdGSUN6citNYmlTMlhZdkxJNnVrClZ5cFJrN28vdExmdmY0alpqZnRpeEliWEM1MjYrUk1xOEcvV2xGbzJnWFZ0eW5BcXp5bXJVYjV1MVZJcG53QWYKNTBzNHlDOURFUXF1aGErYzJCWTBRQ3ZySnUvYy9KTUs3QTdYOFdRSzVDUy8wZkNPdzBPY2xkZzA0c3VWVlU2eQp0MEZmV0kvTlhURFFrU2JWVXN5OElmaXd4a0o5NmNsTjFNWVArQ015Mkh1eWF0aTZySnhVZFBEbS9tYzdRWXNPClNTSzQyNXJQOFFZMmduNlNXUXJXdUJic2dLSEpoVzRBYjdTTldkb0Q0QytwVDA2V1MzVXphMnhZd09TV1IvTWMKR1V5YXRwLzlxR05tOWM1d2RFQ3FtdkVQc2twQkp5ZWR6MUk2V2lxdjRuK0UvRk9qRGl0VVpFd3BFZXRUQktXZgoyRnZRa1pGRmpRU3VIdG5KT040cVRvWmlaNW4vbis4Z1k2Z1Y5Wnd0NHM5OGlpdnUwTFc4UlZGSTNkS0tiYm5lCkY1KzltNE9vMjF0SlU2QThXVGpqVXpLUnFKdEZSa1JpWGtOTGRoY2MrdTdMOFFlZTFOUjIyalg5N2NaVDNGUGoKYmpOUlpId3k5K1dhMG1zcC9EYUR5RnlaOStPUUhReUJJazdCSS9LdU0rT2dta3dlSHBNSE5CMUs1NHZQenZKawpHaFN1QUNIeTRybmdvQTBvMzNhZzJ6a3lEY3NocVRtK2Q3UXFWOWUzU2pONFpUUXlTeWNpa0I1bFJKVHAydVFkCk5jVjBtcG5nREl1aFVlSFRKWkJ0SVZCZnp4bHdHd2c9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
---
apiVersion: v1
kind: ConfigMap
metadata:
name: authelia-config
labels:
app.kubernetes.io/name: authelia
annotations:
reloader.stakater.com/match: "true"
data:
authelia-config.yml: |
---
log:
level: warn
certificates_directory: /certificates
theme: light
default_redirection_url: https://members.k-space.ee
totp:
issuer: K-SPACE
authentication_backend:
ldap:
implementation: activedirectory
url: ldaps://ad.k-space.ee
base_dn: dc=ad,dc=k-space,dc=ee
username_attribute: sAMAccountName
additional_users_dn: ou=Membership
users_filter: (&({username_attribute}={input})(objectCategory=person)(objectClass=user))
additional_groups_dn: cn=Users
groups_filter: (&(member={dn})(objectclass=group))
group_name_attribute: cn
mail_attribute: mail
display_name_attribute: displayName
user: cn=authelia,cn=Users,dc=ad,dc=k-space,dc=ee
session:
domain: k-space.ee
same_site: lax
expiration: 1M
inactivity: 120h
remember_me_duration: "0"
redis:
host: redis
port: 6379
regulation:
ban_time: 5m
find_time: 2m
max_retries: 3
storage:
mysql:
host: mariadb
database: authelia
username: authelia
notifier:
disable_startup_check: true
smtp:
host: mail.k-space.ee
port: 465
username: authelia
sender: authelia@k-space.ee
subject: "[Authelia] {title}"
startup_check_address: lauri@k-space.ee
access_control:
default_policy: deny
rules:
# Longhorn dashboard
- domain: longhorn.k-space.ee
policy: two_factor
subject: group:Longhorn Admins
- domain: longhorn.k-space.ee
policy: deny
# Members site
- domain: members.k-space.ee
policy: bypass
resources:
- ^/?$
- domain: members.k-space.ee
policy: two_factor
resources:
- ^/login/authelia/?$
- domain: members.k-space.ee
policy: bypass
# Webmail
- domain: webmail.k-space.ee
policy: two_factor
# Etherpad
- domain: pad.k-space.ee
policy: two_factor
resources:
- ^/p/board-
subject: group:Board Members
- domain: pad.k-space.ee
policy: deny
resources:
- ^/p/board-
- domain: pad.k-space.ee
policy: two_factor
resources:
- ^/p/members-
- domain: pad.k-space.ee
policy: deny
resources:
- ^/p/members-
- domain: pad.k-space.ee
policy: bypass
# phpMyAdmin
- domain: phpmyadmin.k-space.ee
policy: two_factor
# Require login for everything else protected by traefik-sso middleware
- domain: '*.k-space.ee'
policy: one_factor
...
---
apiVersion: v1
kind: Service
metadata:
name: authelia
labels:
app.kubernetes.io/name: authelia
spec:
type: ClusterIP
sessionAffinity: None
selector:
app.kubernetes.io/name: authelia
ports:
- name: http
protocol: TCP
port: 80
targetPort: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: authelia
labels:
app.kubernetes.io/name: authelia
annotations:
reloader.stakater.com/search: "true"
spec:
selector:
matchLabels:
app.kubernetes.io/name: authelia
replicas: 2
revisionHistoryLimit: 0
template:
metadata:
labels:
app.kubernetes.io/name: authelia
spec:
enableServiceLinks: false
containers:
- name: authelia
image: authelia/authelia:4
command:
- authelia
- --config=/config/authelia-config.yml
- --config=/config/oidc-secrets.yml
resources:
limits:
cpu: "4.00"
memory: 125Mi
requests:
cpu: "0.25"
memory: 50Mi
env:
- name: AUTHELIA_SERVER_DISABLE_HEALTHCHECK
value: "true"
- name: AUTHELIA_JWT_SECRET_FILE
value: /secrets/JWT_TOKEN
- name: AUTHELIA_SESSION_SECRET_FILE
value: /secrets/SESSION_ENCRYPTION_KEY
- name: AUTHELIA_AUTHENTICATION_BACKEND_LDAP_PASSWORD_FILE
value: /secrets/LDAP_PASSWORD
- name: AUTHELIA_SESSION_REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: redis-secrets
key: REDIS_PASSWORD
- name: AUTHELIA_STORAGE_ENCRYPTION_KEY_FILE
value: /secrets/STORAGE_ENCRYPTION_KEY
- name: AUTHELIA_STORAGE_MYSQL_PASSWORD_FILE
value: /mariadb-secrets/MYSQL_PASSWORD
- name: AUTHELIA_IDENTITY_PROVIDERS_OIDC_HMAC_SECRET_FILE
value: /secrets/OIDC_HMAC_SECRET
- name: AUTHELIA_IDENTITY_PROVIDERS_OIDC_ISSUER_PRIVATE_KEY_FILE
value: /secrets/OIDC_PRIVATE_KEY
- name: AUTHELIA_NOTIFIER_SMTP_PASSWORD_FILE
value: /secrets/SMTP_PASSWORD
- name: TZ
value: Europe/Tallinn
startupProbe:
failureThreshold: 6
httpGet:
path: /api/health
port: http
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 5
livenessProbe:
failureThreshold: 5
httpGet:
path: /api/health
port: http
scheme: HTTP
initialDelaySeconds: 0
periodSeconds: 30
successThreshold: 1
timeoutSeconds: 5
readinessProbe:
failureThreshold: 5
httpGet:
path: /api/health
port: http
scheme: HTTP
initialDelaySeconds: 0
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 5
ports:
- name: http
containerPort: 9091
protocol: TCP
volumeMounts:
- mountPath: /config/authelia-config.yml
name: authelia-config
readOnly: true
subPath: authelia-config.yml
- mountPath: /config/oidc-secrets.yml
name: oidc-secrets
readOnly: true
subPath: oidc-secrets.yml
- mountPath: /secrets
name: secrets
readOnly: true
- mountPath: /certificates
name: certificates
readOnly: true
- mountPath: /mariadb-secrets
name: mariadb-secrets
readOnly: true
volumes:
- name: authelia-config
configMap:
name: authelia-config
- name: secrets
secret:
secretName: application-secrets
items:
- key: JWT_TOKEN
path: JWT_TOKEN
- key: SESSION_ENCRYPTION_KEY
path: SESSION_ENCRYPTION_KEY
- key: STORAGE_ENCRYPTION_KEY
path: STORAGE_ENCRYPTION_KEY
- key: STORAGE_PASSWORD
path: STORAGE_PASSWORD
- key: LDAP_PASSWORD
path: LDAP_PASSWORD
- key: OIDC_PRIVATE_KEY
path: OIDC_PRIVATE_KEY
- key: OIDC_HMAC_SECRET
path: OIDC_HMAC_SECRET
- key: SMTP_PASSWORD
path: SMTP_PASSWORD
- name: certificates
secret:
secretName: authelia-certificates
- name: mariadb-secrets
secret:
secretName: mariadb-secrets
- name: redis-secrets
secret:
secretName: redis-secrets
- name: oidc-secrets
secret:
secretName: oidc-secrets
items:
- key: oidc-secrets.yml
path: oidc-secrets.yml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: authelia
labels:
app.kubernetes.io/name: authelia
annotations:
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
kubernetes.io/tls-acme: "true"
traefik.ingress.kubernetes.io/router.entryPoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: authelia-chain-k6-authelia@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
spec:
rules:
- host: auth.k-space.ee
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: authelia
port:
number: 80
tls:
- hosts:
- "*.k-space.ee"
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: forwardauth-k6-authelia
labels:
app.kubernetes.io/name: authelia
spec:
forwardAuth:
address: http://authelia.authelia.svc.cluster.local/api/verify?rd=https://auth.k-space.ee/
trustForwardHeader: true
authResponseHeaders:
- Remote-User
- Remote-Name
- Remote-Email
- Remote-Groups
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: headers-k6-authelia
labels:
app.kubernetes.io/name: authelia
spec:
headers:
browserXssFilter: true
customFrameOptionsValue: "SAMEORIGIN"
customResponseHeaders:
Cache-Control: "no-store"
Pragma: "no-cache"
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: chain-k6-authelia-auth
labels:
app.kubernetes.io/name: authelia
spec:
chain:
middlewares:
- name: forwardauth-k6-authelia
namespace: authelia
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: chain-k6-authelia
labels:
app.kubernetes.io/name: authelia
spec:
chain:
middlewares:
- name: headers-k6-authelia
namespace: authelia
---
apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
name: mysql-cluster
spec:
secretName: mysql-secrets
instances: 3
router:
instances: 2
tlsUseSelfSigned: true
datadirVolumeClaimTemplate:
storageClassName: local-path
accessModes:
- ReadWriteOnce
resources:
requests:
storage: "1Gi"
podSpec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/managed-by
operator: In
values:
- mysql-operator
topologyKey: kubernetes.io/hostname
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
---
apiVersion: codemowers.io/v1alpha1
kind: KeyDBCluster
metadata:
name: redis
spec:
replicas: 3

authelia/mariadb.yml Symbolic link

@ -0,0 +1 @@
../shared/mariadb.yml

bind/.gitignore vendored

@ -1 +0,0 @@
*.key


@ -1,104 +0,0 @@
# Bind setup
The Bind primary resides outside Kubernetes at `193.40.103.2` and
is internally reachable via `172.20.0.2`.
Bind secondaries are hosted inside Kubernetes, load balanced behind `62.65.250.2`, and
under normal circumstances managed by [ArgoCD](https://argocd.k-space.ee/applications/argocd/bind).
Ingresses and DNSEndpoints referring to `k-space.ee`, `kspace.ee` and `k6.ee`
are picked up automatically by `external-dns` and updated on the primary.
The primary sends DNS NOTIFY events to `172.20.53.{1..3}`,
which are the internally exposed IPs of the secondaries.
# Secrets
To configure TSIG secrets:
```
kubectl create secret generic -n bind bind-readonly-secret \
--from-file=readonly.key
kubectl create secret generic -n bind bind-readwrite-secret \
--from-file=readwrite.key
kubectl create secret generic -n bind external-dns
kubectl -n bind delete secret tsig-secret
kubectl -n bind create secret generic tsig-secret \
--from-literal=TSIG_SECRET=$(cat readwrite.key | grep secret | cut -d '"' -f 2)
kubectl -n cert-manager delete secret tsig-secret
kubectl -n cert-manager create secret generic tsig-secret \
--from-literal=TSIG_SECRET=$(cat readwrite.key | grep secret | cut -d '"' -f 2)
```
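The `readonly.key` and `readwrite.key` files referenced above are expected to be TSIG keys in `named.conf` syntax. If they do not exist yet, one way to generate them on the primary is with `tsig-keygen`, which ships with Bind 9 (a sketch, not something this repository automates):
```
# Writes a key "<name>" { algorithm hmac-sha512; secret "..."; }; stanza to each file
tsig-keygen -a hmac-sha512 readonly > readonly.key
tsig-keygen -a hmac-sha512 readwrite > readwrite.key
```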
# Serving additional zones
## Bind primary configuration
To serve additional domains from this Bind setup, add the following
section to `named.conf.local` on the primary `ns1.k-space.ee`:
```
key "foobar" {
algorithm hmac-sha512;
secret "...";
};
zone "foobar.com" {
type master;
file "/var/lib/bind/db.foobar.com";
allow-update { !rejected; key foobar; };
allow-transfer { !rejected; key readonly; key foobar; };
notify explicit; also-notify { 172.20.53.1; 172.20.53.2; 172.20.53.3; };
};
```
Initialize an empty zonefile in `/var/lib/bind/db.foobar.com` on the primary `ns1.k-space.ee`:
```
foobar.com IN SOA ns1.foobar.com. hostmaster.foobar.com. (1 300 300 2592000 300)
NS ns1.foobar.com.
NS ns2.foobar.com.
ns1.foobar.com. A 193.40.103.2
ns2.foobar.com. A 62.65.250.2
```
Reload Bind config:
```
named-checkconf
systemctl reload bind9
```
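To sanity-check the new zone before touching the secondaries, a zone transfer can be requested from the primary using the `foobar` key; the secret below is a placeholder, not a value from this repository:
```
dig @193.40.103.2 foobar.com AXFR -y hmac-sha512:foobar:<base64-secret>
```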
## Bind secondary config
Add a section to the `bind-secondary-config-local` ConfigMap under the `named.conf.local` key:
```
zone "foobar.com" { type slave; masters { 172.20.0.2 key readonly; }; };
```
And restart secondaries:
```
kubectl rollout restart -n bind statefulset/bind-secondary
```
## Registrar config
At your DNS registrar, set the delegation and glue records to:
```
foobar.com. NS ns1.foobar.com.
foobar.com. NS ns2.foobar.com.
ns1.foobar.com. A 193.40.103.2
ns2.foobar.com. A 62.65.250.2
```
## Updating DNS records
With the configured TSIG key `foobar` you can now:
* Obtain Let's Encrypt certificates with the DNS-01 challenge.
  Inside Kubernetes use `cert-manager` with the RFC2136 provider; see the sketch after this list.
* Update DNS records.
  Inside Kubernetes use `external-dns` with the RFC2136 provider.
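As an illustration, a cert-manager `Issuer` using the RFC2136 solver with this key might look roughly as follows; the resource names, e-mail address and the referenced `tsig-secret` Secret are assumptions for the example, not manifests from this repository:
```
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: foobar-letsencrypt
spec:
  acme:
    email: hostmaster@foobar.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: foobar-letsencrypt-account-key
    solvers:
      - dns01:
          rfc2136:
            nameserver: 193.40.103.2
            tsigKeyName: foobar
            tsigAlgorithm: HMACSHA512
            tsigSecretSecretRef:
              name: tsig-secret
              key: TSIG_SECRET
```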


@ -1,178 +0,0 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: bind-secondary-config-local
data:
named.conf.local: |
zone "codemowers.ee" { type slave; masters { 172.20.0.2 key readonly; }; };
zone "codemowers.eu" { type slave; masters { 172.20.0.2 key readonly; }; };
zone "codemowers.cloud" { type slave; masters { 172.20.0.2 key readonly; }; };
---
apiVersion: v1
kind: ConfigMap
metadata:
name: bind-secondary-config
data:
named.conf: |
include "/etc/bind/named.conf.local";
include "/etc/bind/readonly.key";
options {
recursion no;
pid-file "/var/bind/named.pid";
allow-query { 0.0.0.0/0; };
allow-notify { 172.20.0.2; };
allow-transfer { none; };
check-names slave ignore;
notify no;
};
zone "k-space.ee" { type slave; masters { 172.20.0.2 key readonly; }; };
zone "k6.ee" { type slave; masters { 172.20.0.2 key readonly; }; };
zone "kspace.ee" { type slave; masters { 172.20.0.2 key readonly; }; };
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: bind-secondary
namespace: bind
spec:
replicas: 3
selector:
matchLabels:
app: bind-secondary
template:
metadata:
labels:
app: bind-secondary
spec:
      # Volumes and volumeMounts are each declared once; duplicate keys are not valid in a Kubernetes manifest
      volumes:
        - name: run
          emptyDir: {}
        - name: bind-secondary-config
          projected:
            sources:
              - configMap:
                  name: bind-secondary-config
              - configMap:
                  name: bind-secondary-config-local
                  optional: true
              - secret:
                  name: bind-readonly-secret
        - name: bind-data
          emptyDir: {}
      containers:
        - name: bind-secondary
          image: internetsystemsconsortium/bind9:9.19
          workingDir: /var/bind
          command:
            - named
            - -g
            - -c
            - /etc/bind/named.conf
          volumeMounts:
            - mountPath: /run/named
              name: run
            - name: bind-secondary-config
              mountPath: /etc/bind
              readOnly: true
            - name: bind-data
              mountPath: /var/bind
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- bind-secondary
topologyKey: "kubernetes.io/hostname"
---
apiVersion: v1
kind: Service
metadata:
name: bind-secondary
namespace: bind
spec:
type: LoadBalancer
externalTrafficPolicy: Local
loadBalancerIP: 62.65.250.2
selector:
app: bind-secondary
ports:
- protocol: TCP
port: 53
name: dns-tcp
targetPort: 53
- protocol: UDP
port: 53
name: dns-udp
targetPort: 53
---
apiVersion: v1
kind: Service
metadata:
name: bind-secondary-0
namespace: bind
spec:
type: LoadBalancer
externalTrafficPolicy: Local
loadBalancerIP: 172.20.53.1
selector:
app: bind-secondary
statefulset.kubernetes.io/pod-name: bind-secondary-0
ports:
- protocol: TCP
port: 53
name: dns-tcp
targetPort: 53
- protocol: UDP
port: 53
name: dns-udp
targetPort: 53
---
apiVersion: v1
kind: Service
metadata:
name: bind-secondary-1
namespace: bind
spec:
type: LoadBalancer
externalTrafficPolicy: Local
loadBalancerIP: 172.20.53.2
selector:
app: bind-secondary
statefulset.kubernetes.io/pod-name: bind-secondary-1
ports:
- protocol: TCP
port: 53
name: dns-tcp
targetPort: 53
- protocol: UDP
port: 53
name: dns-udp
targetPort: 53
---
apiVersion: v1
kind: Service
metadata:
name: bind-secondary-2
namespace: bind
spec:
type: LoadBalancer
externalTrafficPolicy: Local
loadBalancerIP: 172.20.53.3
selector:
app: bind-secondary
statefulset.kubernetes.io/pod-name: bind-secondary-2
ports:
- protocol: TCP
port: 53
name: dns-tcp
targetPort: 53
- protocol: UDP
port: 53
name: dns-udp
targetPort: 53


@ -1,40 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: external-dns-k-space
spec:
revisionHistoryLimit: 0
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: external-dns
domain: k-space.ee
template:
metadata:
labels: *selectorLabels
spec:
serviceAccountName: external-dns
containers:
- name: external-dns
image: registry.k8s.io/external-dns/external-dns:v0.13.5
envFrom:
- secretRef:
name: tsig-secret
args:
- --events
- --registry=txt
- --txt-prefix=external-dns-
- --txt-owner-id=k8s
- --provider=rfc2136
- --source=ingress
- --source=service
- --source=crd
- --domain-filter=k-space.ee
- --rfc2136-tsig-axfr
- --rfc2136-host=172.20.0.2
- --rfc2136-port=53
- --rfc2136-zone=k-space.ee
- --rfc2136-tsig-keyname=readwrite
- --rfc2136-tsig-secret-alg=hmac-sha512
- --rfc2136-tsig-secret=$(TSIG_SECRET)
# https://github.com/kubernetes-sigs/external-dns/issues/2446


@ -1,71 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: external-dns-k6
spec:
revisionHistoryLimit: 0
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: external-dns
domain: k6.ee
template:
metadata:
labels: *selectorLabels
spec:
serviceAccountName: external-dns
containers:
- name: external-dns
image: registry.k8s.io/external-dns/external-dns:v0.13.5
envFrom:
- secretRef:
name: tsig-secret
args:
- --log-level=debug
- --events
- --registry=noop
- --provider=rfc2136
- --source=service
- --source=crd
- --domain-filter=k6.ee
- --rfc2136-tsig-axfr
- --rfc2136-host=172.20.0.2
- --rfc2136-port=53
- --rfc2136-zone=k6.ee
- --rfc2136-tsig-keyname=readwrite
- --rfc2136-tsig-secret-alg=hmac-sha512
- --rfc2136-tsig-secret=$(TSIG_SECRET)
# https://github.com/kubernetes-sigs/external-dns/issues/2446
---
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
name: k6
spec:
endpoints:
- dnsName: k6.ee
recordTTL: 300
recordType: SOA
targets:
- "ns1.k-space.ee. hostmaster.k-space.ee. (1 300 300 300 300)"
- dnsName: k6.ee
recordTTL: 300
recordType: NS
targets:
- ns1.k-space.ee
- ns2.k-space.ee
- dnsName: ns1.k-space.ee
recordTTL: 300
recordType: A
targets:
- 193.40.103.2
- dnsName: ns2.k-space.ee
recordTTL: 300
recordType: A
targets:
- 62.65.250.2
- dnsName: k-space.ee
recordTTL: 300
recordType: MX
targets:
- 10 mail.k-space.ee


@ -1,66 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: external-dns-kspace
spec:
revisionHistoryLimit: 0
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: external-dns
domain: kspace.ee
template:
metadata:
labels: *selectorLabels
spec:
serviceAccountName: external-dns
containers:
- name: external-dns
image: registry.k8s.io/external-dns/external-dns:v0.13.5
envFrom:
- secretRef:
name: tsig-secret
args:
- --events
- --registry=noop
- --provider=rfc2136
- --source=ingress
- --source=service
- --source=crd
- --domain-filter=kspace.ee
- --rfc2136-tsig-axfr
- --rfc2136-host=172.20.0.2
- --rfc2136-port=53
- --rfc2136-zone=kspace.ee
- --rfc2136-tsig-keyname=readwrite
- --rfc2136-tsig-secret-alg=hmac-sha512
- --rfc2136-tsig-secret=$(TSIG_SECRET)
# https://github.com/kubernetes-sigs/external-dns/issues/2446
---
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
name: kspace
spec:
endpoints:
- dnsName: kspace.ee
recordTTL: 300
recordType: SOA
targets:
- "ns1.k-space.ee. hostmaster.k-space.ee. (1 300 300 300 300)"
- dnsName: kspace.ee
recordTTL: 300
recordType: NS
targets:
- ns1.k-space.ee
- ns2.k-space.ee
- dnsName: ns1.k-space.ee
recordTTL: 300
recordType: A
targets:
- 193.40.103.2
- dnsName: ns2.k-space.ee
recordTTL: 300
recordType: A
targets:
- 62.65.250.2


@ -1,58 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: external-dns
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- pods
- nodes
verbs:
- get
- watch
- list
- apiGroups:
- extensions
- networking.k8s.io
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- externaldns.k8s.io
resources:
- dnsendpoints
verbs:
- get
- watch
- list
- apiGroups:
- externaldns.k8s.io
resources:
- dnsendpoints/status
verbs:
- update
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: external-dns-viewer
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: external-dns
subjects:
- kind: ServiceAccount
name: external-dns
namespace: bind


@ -1,16 +1,7 @@
 To apply changes:
 ```
-kubectl apply -n camtiler \
-    -f application.yml \
-    -f minio.yml \
-    -f mongoexpress.yml \
-    -f mongodb-support.yml \
-    -f camera-tiler.yml \
-    -f logmower.yml \
-    -f ingress.yml \
-    -f network-policies.yml \
-    -f networkpolicy-base.yml
+kubectl apply -n camtiler -f application.yml -f persistence.yml -f mongoexpress.yml -f mongodb-support.yml -f networkpolicy-base.yml -f minio-support.yml
 ```
 To deploy changes:
@ -24,16 +15,15 @@ To initialize secrets:
 ```
 kubectl create secret generic -n camtiler mongodb-application-readwrite-password --from-literal="password=$(cat /dev/urandom | base64 | head -c 30)"
 kubectl create secret generic -n camtiler mongodb-application-readonly-password --from-literal="password=$(cat /dev/urandom | base64 | head -c 30)"
-kubectl create secret generic -n camtiler minio-secrets \
-    --from-literal=accesskey=application \
-    --from-literal=secretkey=$(cat /dev/urandom | base64 | head -c 30)
-kubectl create secret generic -n camtiler minio-env-configuration \
-    --from-literal="MINIO_BROWSER=off" \
+kubectl create secret generic -n camtiler minio-secret \
     --from-literal="MINIO_ROOT_USER=root" \
-    --from-literal="MINIO_ROOT_PASSWORD=$(cat /dev/urandom | base64 | head -c 30)"
+    --from-literal="MINIO_ROOT_PASSWORD=$(cat /dev/urandom | base64 | head -c 30)" \
+    --from-literal="MINIO_STORAGE_CLASS_STANDARD=EC:4"
 kubectl -n camtiler create secret generic camera-secrets \
     --from-literal=username=... \
     --from-literal=password=...
 ```
+To restart all deployments:
+```
+for j in $(kubectl get deployments -n camtiler -o name); do kubectl rollout restart -n camtiler $j; done
+```


@ -1,11 +1,387 @@
--- ---
apiVersion: codemowers.cloud/v1beta1 apiVersion: apps/v1
kind: MinioBucketClaim kind: Deployment
metadata: metadata:
name: camtiler name: camtiler
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
spec: spec:
capacity: 150Gi revisionHistoryLimit: 0
class: dedicated replicas: 2
selector:
matchLabels:
app.kubernetes.io/name: camtiler
template:
metadata:
labels:
app.kubernetes.io/name: camtiler
component: camtiler
spec:
serviceAccountName: camtiler
containers:
- name: camtiler
image: harbor.k-space.ee/k-space/camera-tiler:latest
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
ports:
- containerPort: 5001
name: "http"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: log-viewer-frontend
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
spec:
revisionHistoryLimit: 0
replicas: 2
selector:
matchLabels:
app.kubernetes.io/name: log-viewer-frontend
template:
metadata:
labels:
app.kubernetes.io/name: log-viewer-frontend
spec:
containers:
- name: log-viewer-frontend
image: harbor.k-space.ee/k-space/log-viewer-frontend:latest
# securityContext:
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: log-viewer-backend
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
spec:
revisionHistoryLimit: 0
replicas: 3
selector:
matchLabels:
app.kubernetes.io/name: log-viewer-backend
template:
metadata:
labels:
app.kubernetes.io/name: log-viewer-backend
spec:
containers:
- name: log-backend-backend
image: harbor.k-space.ee/k-space/log-viewer:latest
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
env:
- name: MONGO_URI
valueFrom:
secretKeyRef:
name: mongodb-application-readwrite
key: connectionString.standard
- name: MINIO_BUCKET
value: application
- name: MINIO_HOSTNAME
value: cams-s3.k-space.ee
- name: MINIO_PORT
value: "443"
- name: MINIO_SCHEME
value: "https"
- name: MINIO_SECRET_KEY
valueFrom:
secretKeyRef:
name: minio-secret
key: secretkey
- name: MINIO_ACCESS_KEY
valueFrom:
secretKeyRef:
name: minio-secret
key: accesskey
---
apiVersion: v1
kind: Service
metadata:
name: log-viewer-frontend
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: log-viewer-frontend
ports:
- protocol: TCP
port: 3003
---
apiVersion: v1
kind: Service
metadata:
name: log-viewer-backend
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: log-viewer-backend
ports:
- protocol: TCP
port: 3002
---
apiVersion: v1
kind: Service
metadata:
name: camtiler
labels:
component: camtiler
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: camtiler
component: camtiler
ports:
- protocol: TCP
port: 5001
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: camtiler
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: camtiler
rules:
- apiGroups:
- ""
resources:
- services
verbs:
- list
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: camtiler
subjects:
- kind: ServiceAccount
name: camtiler
apiGroup: ""
roleRef:
kind: Role
name: camtiler
apiGroup: ""
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: camtiler
annotations:
kubernetes.io/ingress.class: traefik
# This tells Traefik this Ingress object is associated with the
# https:// entrypoint
# Global http:// to https:// redirect is enabled in
# ../traefik/values.yml using `globalArguments`
traefik.ingress.kubernetes.io/router.entrypoints: websecure
# Following enables Authelia intercepting middleware
# which makes sure user is authenticated and then
# proceeds to inject Remote-User header for the application
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
# Following tells external-dns to add CNAME entry which makes
# cams.k-space.ee point to same IP address as traefik.k-space.ee
# The A record for traefik.k-space.ee is created via annotation
# added in ../traefik/ingress.yml
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
rules:
- host: cams.k-space.ee
http:
paths:
- pathType: Prefix
path: "/tiled"
backend:
service:
name: camtiler
port:
number: 5001
- pathType: Prefix
path: "/events"
backend:
service:
name: log-viewer-backend
port:
number: 3002
- pathType: Prefix
path: "/"
backend:
service:
name: log-viewer-frontend
port:
number: 3003
tls:
- hosts:
- "*.k-space.ee"
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: camera-motion-detect
spec:
podSelector:
matchLabels:
component: camdetect
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
component: camtiler
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: prometheus-operator
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
egress:
- to:
- ipBlock:
# Permit access to cameras outside the cluster
cidr: 100.102.0.0/16
- to:
- podSelector:
matchLabels:
app: mongodb-svc
ports:
- port: 27017
- to:
- podSelector:
matchLabels:
v1.min.io/tenant: minio
ports:
- port: 9000
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: camera-tiler
spec:
podSelector:
matchLabels:
component: camtiler
policyTypes:
- Ingress
- Egress
egress:
- to:
- podSelector:
matchLabels:
component: camdetect
ports:
- port: 5000
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: prometheus-operator
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: log-viewer-backend
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: log-viewer-backend
policyTypes:
- Ingress
- Egress
egress:
- to:
- podSelector:
matchLabels:
app: mongodb-svc
- to:
# Minio access via Traefik's public endpoint
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: log-viewer-frontend
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: log-viewer-frontend
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: minio
annotations:
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
rules:
- host: cams-s3.k-space.ee
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: minio
port:
number: 80
tls:
- hosts:
- "*.k-space.ee"
--- ---
apiVersion: apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition kind: CustomResourceDefinition
@ -97,13 +473,12 @@ spec:
metadata: metadata:
name: foobar name: foobar
labels: labels:
app.kubernetes.io/name: foobar component: camdetect
component: camera-motion-detect
spec: spec:
type: ClusterIP type: ClusterIP
selector: selector:
app.kubernetes.io/name: foobar app.kubernetes.io/name: foobar
component: camera-motion-detect component: camdetect
ports: ports:
- protocol: TCP - protocol: TCP
port: 80 port: 80
@ -113,16 +488,19 @@ spec:
kind: Deployment kind: Deployment
metadata: metadata:
name: camera-foobar name: camera-foobar
# Make sure keel.sh pulls updates for this deployment
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
spec: spec:
revisionHistoryLimit: 0
replicas: 1 replicas: 1
# Make sure we do not congest the network during rollout
strategy: strategy:
type: RollingUpdate type: RollingUpdate
rollingUpdate: rollingUpdate:
# Swap following two with replicas: 2 maxSurge: 0
maxSurge: 1 maxUnavailable: 1
maxUnavailable: 0
selector: selector:
matchLabels: matchLabels:
app.kubernetes.io/name: foobar app.kubernetes.io/name: foobar
@ -130,25 +508,18 @@ spec:
metadata: metadata:
labels: labels:
app.kubernetes.io/name: foobar app.kubernetes.io/name: foobar
component: camera-motion-detect component: camdetect
spec: spec:
containers: containers:
- name: camera-motion-detect - name: camdetect
image: harbor.k-space.ee/k-space/camera-motion-detect:latest image: harbor.k-space.ee/k-space/camera-motion-detect:latest
startupProbe:
httpGet:
path: /healthz
port: 5000
initialDelaySeconds: 2
periodSeconds: 180
timeoutSeconds: 60
readinessProbe: readinessProbe:
httpGet: httpGet:
path: /readyz path: /readyz
port: 5000 port: 5000
initialDelaySeconds: 60 initialDelaySeconds: 10
periodSeconds: 60 periodSeconds: 180
timeoutSeconds: 5 timeoutSeconds: 60
ports: ports:
- containerPort: 5000 - containerPort: 5000
name: "http" name: "http"
@ -158,7 +529,7 @@ spec:
cpu: "200m" cpu: "200m"
limits: limits:
memory: "256Mi" memory: "256Mi"
cpu: "4000m" cpu: "1"
securityContext: securityContext:
readOnlyRootFilesystem: true readOnlyRootFilesystem: true
runAsNonRoot: true runAsNonRoot: true
@ -170,25 +541,9 @@ spec:
- name: SOURCE_NAME - name: SOURCE_NAME
value: foobar value: foobar
- name: S3_BUCKET_NAME - name: S3_BUCKET_NAME
valueFrom: value: application
secretKeyRef:
name: miniobucket-camtiler-owner-secrets
key: BUCKET_NAME
- name: S3_ENDPOINT_URL - name: S3_ENDPOINT_URL
valueFrom: value: http://minio
secretKeyRef:
name: miniobucket-camtiler-owner-secrets
key: AWS_S3_ENDPOINT_URL
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: miniobucket-camtiler-owner-secrets
key: AWS_SECRET_ACCESS_KEY
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: miniobucket-camtiler-owner-secrets
key: AWS_ACCESS_KEY_ID
- name: BASIC_AUTH_PASSWORD - name: BASIC_AUTH_PASSWORD
valueFrom: valueFrom:
secretKeyRef: secretKeyRef:
@ -199,6 +554,16 @@ spec:
secretKeyRef: secretKeyRef:
name: mongodb-application-readwrite name: mongodb-application-readwrite
key: connectionString.standard key: connectionString.standard
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: minio-secret
key: secretkey
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: minio-secret
key: accesskey
# Make sure 2+ pods of same camera are scheduled on different hosts # Make sure 2+ pods of same camera are scheduled on different hosts
affinity: affinity:
@ -206,21 +571,32 @@ spec:
requiredDuringSchedulingIgnoredDuringExecution: requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector: - labelSelector:
matchExpressions: matchExpressions:
- key: app.kubernetes.io/name - key: app
operator: In operator: In
values: values:
- foobar - foobar
topologyKey: topology.kubernetes.io/zone topologyKey: kubernetes.io/hostname
# Make sure camera deployments are spread over workers # Make sure camera deployments are spread over workers
topologySpreadConstraints: topologySpreadConstraints:
- maxSkew: 1 - maxSkew: 1
topologyKey: topology.kubernetes.io/zone topologyKey: kubernetes.io/hostname
whenUnsatisfiable: DoNotSchedule whenUnsatisfiable: DoNotSchedule
labelSelector: labelSelector:
matchLabels: matchLabels:
app.kubernetes.io/name: foobar app.kubernetes.io/name: foobar
component: camera-motion-detect component: camdetect
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: camtiler
spec:
selector: {}
podMetricsEndpoints:
- port: http
podTargetLabels:
- app.kubernetes.io/name
--- ---
apiVersion: monitoring.coreos.com/v1 apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule kind: PrometheusRule
@ -231,21 +607,21 @@ spec:
- name: cameras - name: cameras
rules: rules:
- alert: CameraLost - alert: CameraLost
expr: rate(camtiler_frames_total{stage="downloaded"}[1m]) < 1 expr: rate(camdetect_rx_frames_total[2m]) < 1
for: 2m for: 2m
labels: labels:
severity: warning severity: warning
annotations: annotations:
summary: Camera feed stopped summary: Camera feed stopped
- alert: CameraServerRoomMotion - alert: CameraServerRoomMotion
expr: rate(camtiler_events_total{app_kubernetes_io_name="server-room"}[30m]) > 0 expr: camdetect_event_active {app="camdetect-server-room"} > 0
for: 1m for: 1m
labels: labels:
severity: warning severity: warning
annotations: annotations:
summary: Motion was detected in server room summary: Motion was detected in server room
- alert: CameraSlowUploads - alert: CameraSlowUploads
expr: camtiler_queue_frames{stage="upload"} > 10 expr: rate(camdetect_upload_dropped_frames_total[2m]) > 1
for: 5m for: 5m
labels: labels:
severity: warning severity: warning
@ -253,20 +629,13 @@ spec:
summary: Motion detect snapshots are piling up and summary: Motion detect snapshots are piling up and
not getting uploaded to S3 not getting uploaded to S3
- alert: CameraSlowProcessing - alert: CameraSlowProcessing
expr: camtiler_queue_frames{stage="download"} > 10 expr: rate(camdetect_download_dropped_frames_total[2m]) > 1
for: 5m for: 5m
labels: labels:
severity: warning severity: warning
annotations: annotations:
summary: Motion detection processing pipeline is not keeping up summary: Motion detection processing pipeline is not keeping up
with incoming frames with incoming frames
- alert: CameraResourcesThrottled
expr: sum by (pod) (rate(container_cpu_cfs_throttled_periods_total{namespace="camtiler"}[1m])) > 0
for: 5m
labels:
severity: warning
annotations:
summary: CPU limits are bottleneck
--- ---
apiVersion: k-space.ee/v1alpha1 apiVersion: k-space.ee/v1alpha1
kind: Camera kind: Camera
@ -275,7 +644,6 @@ metadata:
spec: spec:
target: http://user@workshop.cam.k-space.ee:8080/?action=stream target: http://user@workshop.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets secretRef: camera-secrets
replicas: 1
--- ---
apiVersion: k-space.ee/v1alpha1 apiVersion: k-space.ee/v1alpha1
kind: Camera kind: Camera
@ -284,7 +652,6 @@ metadata:
spec: spec:
target: http://user@server-room.cam.k-space.ee:8080/?action=stream target: http://user@server-room.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets secretRef: camera-secrets
replicas: 2
--- ---
apiVersion: k-space.ee/v1alpha1 apiVersion: k-space.ee/v1alpha1
kind: Camera kind: Camera
@ -293,7 +660,6 @@ metadata:
spec: spec:
target: http://user@printer.cam.k-space.ee:8080/?action=stream target: http://user@printer.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets secretRef: camera-secrets
replicas: 1
--- ---
apiVersion: k-space.ee/v1alpha1 apiVersion: k-space.ee/v1alpha1
kind: Camera kind: Camera
@ -302,7 +668,6 @@ metadata:
spec: spec:
target: http://user@chaos.cam.k-space.ee:8080/?action=stream target: http://user@chaos.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets secretRef: camera-secrets
replicas: 1
--- ---
apiVersion: k-space.ee/v1alpha1 apiVersion: k-space.ee/v1alpha1
kind: Camera kind: Camera
@ -311,7 +676,6 @@ metadata:
spec: spec:
target: http://user@cyber.cam.k-space.ee:8080/?action=stream target: http://user@cyber.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets secretRef: camera-secrets
replicas: 1
--- ---
apiVersion: k-space.ee/v1alpha1 apiVersion: k-space.ee/v1alpha1
kind: Camera kind: Camera
@ -320,36 +684,19 @@ metadata:
spec: spec:
target: http://user@kitchen.cam.k-space.ee:8080/?action=stream target: http://user@kitchen.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets secretRef: camera-secrets
replicas: 1
--- ---
apiVersion: k-space.ee/v1alpha1 apiVersion: k-space.ee/v1alpha1
kind: Camera kind: Camera
metadata: metadata:
name: back-door name: back-door
spec: spec:
target: http://user@100.102.3.3:8080/?action=stream target: http://user@back-door.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets secretRef: camera-secrets
replicas: 1
--- ---
apiVersion: k-space.ee/v1alpha1 apiVersion: k-space.ee/v1alpha1
kind: Camera kind: Camera
metadata: metadata:
name: ground-door name: ground-door
spec: spec:
target: http://user@100.102.3.1:8080/?action=stream target: http://user@ground-door.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets secretRef: camera-secrets
replicas: 1
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: camera-motion-detect
spec:
selector:
matchLabels:
component: camera-motion-detect
podMetricsEndpoints:
- port: http
podTargetLabels:
- app.kubernetes.io/name
- component


@ -1,98 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: camera-tiler
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
spec:
revisionHistoryLimit: 0
replicas: 2
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: camtiler
component: camera-tiler
template:
metadata:
labels: *selectorLabels
spec:
serviceAccountName: camera-tiler
containers:
- name: camera-tiler
image: harbor.k-space.ee/k-space/camera-tiler:latest
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
ports:
- containerPort: 5001
name: "http"
resources:
requests:
memory: "200Mi"
cpu: "100m"
limits:
memory: "500Mi"
cpu: "4000m"
---
apiVersion: v1
kind: Service
metadata:
name: camera-tiler
labels:
app.kubernetes.io/name: camtiler
component: camera-tiler
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: camtiler
component: camera-tiler
ports:
- protocol: TCP
port: 5001
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: camera-tiler
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: camera-tiler
rules:
- apiGroups:
- ""
resources:
- services
verbs:
- list
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: camera-tiler
subjects:
- kind: ServiceAccount
name: camera-tiler
apiGroup: ""
roleRef:
kind: Role
name: camera-tiler
apiGroup: ""
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: camtiler
spec:
selector:
matchLabels:
app.kubernetes.io/name: camtiler
component: camera-tiler
podMetricsEndpoints:
- port: http
podTargetLabels:
- app.kubernetes.io/name
- component


@ -1,78 +0,0 @@
---
apiVersion: codemowers.io/v1alpha1
kind: OIDCGWMiddlewareClient
metadata:
name: sso
spec:
displayName: Cameras
uri: 'https://cams.k-space.ee/tiled'
allowedGroups:
- k-space:floor
- k-space:friends
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: camtiler
annotations:
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: camtiler-sso@kubernetescrd,camtiler-redirect@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
rules:
- host: cams.k-space.ee
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: logmower-frontend
port:
number: 8080
- host: cam.k-space.ee
http:
paths:
- pathType: Prefix
path: "/tiled"
backend:
service:
name: camera-tiler
port:
number: 5001
- pathType: Prefix
path: "/m"
backend:
service:
name: camera-tiler
port:
number: 5001
- pathType: Prefix
path: "/events"
backend:
service:
name: logmower-eventsource
port:
number: 3002
- pathType: Prefix
path: "/"
backend:
service:
name: logmower-frontend
port:
number: 8080
tls:
- hosts:
- "*.k-space.ee"
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: redirect
spec:
redirectRegex:
regex: ^https://cams.k-space.ee/(.*)$
replacement: https://cam.k-space.ee/$1
permanent: false


@ -1,182 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: logmower-eventsource
spec:
revisionHistoryLimit: 0
replicas: 2
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: camtiler
component: logmower-eventsource
template:
metadata:
labels: *selectorLabels
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values:
- camtiler
- key: component
operator: In
values:
- logmower-eventsource
topologyKey: topology.kubernetes.io/zone
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
containers:
- name: logmower-eventsource
image: harbor.k-space.ee/k-space/logmower-eventsource
ports:
- containerPort: 3002
name: nodejs
env:
- name: MONGO_COLLECTION
value: eventlog
- name: MONGODB_HOST
valueFrom:
secretKeyRef:
name: mongodb-application-readonly
key: connectionString.standard
- name: BACKEND
value: 'camtiler'
- name: BACKEND_BROKER_URL
value: 'http://logmower-event-broker'
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: logmower-event-broker
spec:
revisionHistoryLimit: 0
replicas: 2
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: camtiler
component: logmower-event-broker
template:
metadata:
labels: *selectorLabels
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values:
- camtiler
- key: component
operator: In
values:
- logmower-event-broker
topologyKey: topology.kubernetes.io/zone
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
containers:
- name: logmower-event-broker
image: harbor.k-space.ee/k-space/camera-event-broker
ports:
- containerPort: 3000
env:
- name: MINIO_BUCKET
valueFrom:
secretKeyRef:
name: miniobucket-camtiler-owner-secrets
key: BUCKET_NAME
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: miniobucket-camtiler-owner-secrets
key: AWS_SECRET_ACCESS_KEY
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: miniobucket-camtiler-owner-secrets
key: AWS_ACCESS_KEY_ID
- name: MINIO_HOSTNAME
value: 'dedicated-5ee6428f-4cb5-4c2e-90b5-364668f515c2.minio-clusters.k-space.ee'
- name: MINIO_PORT
value: '443'
- name: MINIO_SCHEMA
value: 'https'
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: logmower-frontend
spec:
revisionHistoryLimit: 0
replicas: 2
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: camtiler
component: logmower-frontend
template:
metadata:
labels: *selectorLabels
spec:
containers:
- name: logmower-frontend
image: harbor.k-space.ee/k-space/logmower-frontend
ports:
- containerPort: 8080
name: http
---
apiVersion: v1
kind: Service
metadata:
name: logmower-frontend
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: camtiler
component: logmower-frontend
ports:
- protocol: TCP
port: 8080
---
apiVersion: v1
kind: Service
metadata:
name: logmower-eventsource
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: camtiler
component: logmower-eventsource
ports:
- protocol: TCP
port: 3002
---
apiVersion: v1
kind: Service
metadata:
name: logmower-event-broker
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: camtiler
component: logmower-event-broker
ports:
- protocol: TCP
port: 80
targetPort: 3000

camtiler/minio-support.yml Symbolic link

@ -0,0 +1 @@
../shared/minio-support.yml


@ -1,154 +0,0 @@
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: camera-motion-detect
spec:
podSelector:
matchLabels:
component: camera-motion-detect
policyTypes:
- Ingress
# - Egress # Something wrong with using minio-clusters as namespaceSelector.
ingress:
- from:
- podSelector:
matchLabels:
app.kubernetes.io/name: camtiler
component: camera-tiler
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
egress:
- to:
- ipBlock:
# Permit access to cameras outside the cluster
cidr: 100.102.0.0/16
- to:
- podSelector:
matchLabels:
app: mongodb-svc
ports:
- port: 27017
- to:
- podSelector:
matchLabels:
app.kubernetes.io/name: minio
ports:
- port: 9000
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: camera-tiler
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: camtiler
component: camera-tiler
policyTypes:
- Ingress
- Egress
egress:
- to:
- podSelector:
matchLabels:
component: camera-motion-detect
ports:
- port: 5000
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-eventsource
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: camtiler
component: logmower-eventsource
policyTypes:
- Ingress
# - Egress # Something wrong with using mongodb-svc as podSelector.
egress:
- to:
- podSelector:
matchLabels:
app: mongodb-svc
- podSelector:
matchLabels:
component: logmower-event-broker
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-event-broker
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: camtiler
component: logmower-event-broker
policyTypes:
- Ingress
- Egress
egress:
- to:
# Minio access via Traefik's public endpoint
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
ingress:
- from:
- podSelector:
matchLabels:
component: logmower-eventsource
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-frontend
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: camtiler
component: logmower-frontend
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik


@ -4,16 +4,12 @@ kind: MongoDBCommunity
metadata: metadata:
name: mongodb name: mongodb
spec: spec:
agent:
logLevel: ERROR
maxLogFileDurationHours: 1
additionalMongodConfig: additionalMongodConfig:
systemLog: systemLog:
quiet: true quiet: true
members: 2 members: 3
arbiters: 1
type: ReplicaSet type: ReplicaSet
version: "6.0.3" version: "5.0.9"
security: security:
authentication: authentication:
modes: ["SCRAM"] modes: ["SCRAM"]
@ -31,7 +27,7 @@ spec:
passwordSecretRef: passwordSecretRef:
name: mongodb-application-readonly-password name: mongodb-application-readonly-password
roles: roles:
- name: read - name: readOnly
db: application db: application
scramCredentialsSecretName: mongodb-application-readonly scramCredentialsSecretName: mongodb-application-readonly
statefulSet: statefulSet:
@ -39,24 +35,6 @@ spec:
logLevel: WARN logLevel: WARN
template: template:
spec: spec:
containers:
- name: mongod
resources:
requests:
cpu: 100m
memory: 512Mi
limits:
cpu: 500m
memory: 1Gi
volumeMounts:
- name: journal-volume
mountPath: /data/journal
- name: mongodb-agent
resources:
requests:
cpu: 1m
memory: 100Mi
limits: {}
affinity: affinity:
podAntiAffinity: podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution: requiredDuringSchedulingIgnoredDuringExecution:
@ -66,7 +44,7 @@ spec:
operator: In operator: In
values: values:
- mongodb-svc - mongodb-svc
topologyKey: topology.kubernetes.io/zone topologyKey: kubernetes.io/hostname
nodeSelector: nodeSelector:
dedicated: storage dedicated: storage
tolerations: tolerations:
@ -77,34 +55,76 @@ spec:
volumeClaimTemplates: volumeClaimTemplates:
- metadata: - metadata:
name: logs-volume name: logs-volume
labels:
usecase: logs
spec: spec:
storageClassName: mongo storageClassName: local-path
accessModes: accessModes:
- ReadWriteOnce - ReadWriteOnce
resources: resources:
requests: requests:
storage: 200Mi storage: 512Mi
- metadata:
name: journal-volume
labels:
usecase: journal
spec:
storageClassName: mongo
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
- metadata: - metadata:
name: data-volume name: data-volume
labels:
usecase: data
spec: spec:
storageClassName: mongo storageClassName: local-path
accessModes: accessModes:
- ReadWriteOnce - ReadWriteOnce
resources: resources:
requests: requests:
storage: 2Gi storage: 2Gi
---
apiVersion: minio.min.io/v2
kind: Tenant
metadata:
name: minio
annotations:
prometheus.io/path: /minio/prometheus/metrics
prometheus.io/port: "9000"
prometheus.io/scrape: "true"
spec:
credsSecret:
name: minio-secret
buckets:
- name: application
requestAutoCert: false
users:
- name: minio-user-0
pools:
- name: pool-0
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: v1.min.io/tenant
operator: In
values:
- minio
- key: v1.min.io/pool
operator: In
values:
- pool-0
topologyKey: kubernetes.io/hostname
resources:
requests:
cpu: '1'
memory: 512Mi
servers: 4
volumesPerServer: 1
volumeClaimTemplate:
metadata:
name: data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: '30Gi'
storageClassName: local-path
status: {}
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule


@ -1,11 +1,12 @@
 ---
+# AD/Samba group "Kubernetes Admins" members have full access
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
   name: kubernetes-admins
 subjects:
   - kind: Group
-    name: "k-space:kubernetes:admins"
+    name: "Kubernetes Admins"
     apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole


@ -1,8 +0,0 @@
# CloudNativePG
To deploy:
```
wget https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.20/releases/cnpg-1.20.2.yaml -O application.yml
kubectl apply -f application.yml
```
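Once the operator is running, databases are declared as `Cluster` resources. A minimal sketch (the name and storage size are illustrative only, not a manifest deployed from this directory):
```
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: example-postgres
spec:
  instances: 3
  storage:
    size: 1Gi
```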

File diff suppressed because it is too large.


@ -1,3 +1,14 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: application-config
+data:
+  DRONE_GITEA_SERVER: "https://git.k-space.ee"
+  DRONE_GIT_ALWAYS_AUTH: "false"
+  DRONE_PROMETHEUS_ANONYMOUS_ACCESS: "true"
+  DRONE_SERVER_HOST: "drone.k-space.ee"
+  DRONE_SERVER_PROTO: "https"
+  DRONE_USER_CREATE: "username:lauri,admin:true"
 ---
 apiVersion: v1
 kind: Service
@ -48,24 +59,11 @@ spec:
           httpGet:
             path: /
             port: http
-        env:
-        - name: DRONE_GITEA_SERVER
-          value: https://git.k-space.ee
-        - name: DRONE_GIT_ALWAYS_AUTH
-          value: "false"
-        - name: DRONE_SERVER_HOST
-          value: drone.k-space.ee
-        - name: DRONE_SERVER_PROTO
-          value: https
-        - name: DRONE_USER_CREATE
-          value: username:lauri,admin:true
-        - name: DRONE_DEBUG
-          value: "true"
-        - name: DRONE_TRACE
-          value: "true"
         envFrom:
         - secretRef:
             name: application-secrets
+        - configMapRef:
+            name: application-config
         volumeMounts:
         - name: drone-data
           mountPath: /data
@ -80,16 +78,6 @@ spec:
       requests:
         storage: 8Gi
 ---
-apiVersion: traefik.io/v1alpha1
-kind: Middleware
-metadata:
-  name: redirect
-spec:
-  redirectRegex:
-    regex: ^https://(.*)/register$
-    replacement: https://${1}/
-    permanent: false
----
 apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
@ -99,7 +87,6 @@ metadata:
     kubernetes.io/ingress.class: traefik
     traefik.ingress.kubernetes.io/router.entrypoints: websecure
     traefik.ingress.kubernetes.io/router.tls: "true"
-    traefik.ingress.kubernetes.io/router.middlewares: drone-redirect@kubernetescrd
 spec:
   tls:
   - hosts:


@ -1,5 +1,12 @@
 To apply changes:
 ```
-kubectl apply -n etherpad -f application.yml
+kubectl apply -n etherpad -f application.yml -f networkpolicy-base.yml
 ```
+Initialize MySQL secrets:
+```
+kubectl create secret generic -n etherpad mariadb-secrets \
+    --from-literal=MYSQL_ROOT_PASSWORD=$(cat /dev/urandom | base64 | head -c 30) \
+    --from-literal=MYSQL_PASSWORD=$(cat /dev/urandom | base64 | head -c 30)


@ -1,12 +1,4 @@
--- ---
apiVersion: codemowers.io/v1alpha1
kind: OIDCGWMiddlewareClient
metadata:
name: sso
spec:
displayName: Etherpad
uri: 'https://pad.k-space.ee/'
---
apiVersion: apps/v1 apiVersion: apps/v1
kind: StatefulSet kind: StatefulSet
metadata: metadata:
@ -40,8 +32,6 @@ spec:
ports: ports:
- containerPort: 9001 - containerPort: 9001
env: env:
- name: MINIFY
value: 'false'
- name: DB_TYPE - name: DB_TYPE
value: mysql value: mysql
- name: DB_HOST - name: DB_HOST
@ -126,12 +116,89 @@ spec:
matchLabels: matchLabels:
kubernetes.io/metadata.name: traefik kubernetes.io/metadata.name: traefik
ports: ports:
- port: 9001 - protocol: TCP
protocol: TCP port: 9001
egress: egress:
- ports: - to:
- port: 3306
protocol: TCP
to:
- ipBlock: - ipBlock:
cidr: 172.20.36.1/32 cidr: 172.20.36.1/32
ports:
- protocol: TCP
port: 3306
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: mysql-operator
spec:
podSelector:
matchLabels:
app: etherpad
policyTypes:
- Ingress
- Egress
ingress:
- # TODO: Not sure why mysql-operator needs to be able to connect
from:
- namespaceSelector:
matchExpressions:
- key: kubernetes.io/metadata.name
operator: In
values:
- mysql-operator
ports:
- protocol: TCP
port: 3306
- # Allow connecting from other MySQL pods in same namespace
from:
- podSelector:
matchLabels:
app.kubernetes.io/managed-by: mysql-operator
ports:
- protocol: TCP
port: 3306
egress:
- # Allow connecting to other MySQL pods in same namespace
to:
- podSelector:
matchLabels:
app.kubernetes.io/managed-by: mysql-operator
ports:
- protocol: TCP
port: 3306
---
apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
name: mysql-cluster
spec:
secretName: mysql-secrets
instances: 3
router:
instances: 1
tlsUseSelfSigned: true
datadirVolumeClaimTemplate:
storageClassName: local-path
accessModes:
- ReadWriteOnce
resources:
requests:
storage: "10Gi"
podSpec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/managed-by
operator: In
values:
- mysql-operator
topologyKey: kubernetes.io/hostname
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule

external-dns/README.md Normal file

@ -0,0 +1,15 @@
Before applying, replace the secret with the actual one.
For debugging, add `- --log-level=debug`:
```
kubectl apply -n external-dns -f external-dns.yml
```
Insert TSIG secret:
```
kubectl -n external-dns create secret generic tsig-secret \
--from-literal=TSIG_SECRET=<secret>
```
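If the TSIG key was generated on the Bind primary as a `named.conf`-style key file, the secret value is the quoted base64 string inside it; assuming the file is called `acme.key`, it can be extracted the same way as in the bind README:
```
kubectl -n external-dns create secret generic tsig-secret \
    --from-literal=TSIG_SECRET=$(grep secret acme.key | cut -d '"' -f 2)
```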

View File

@ -0,0 +1,84 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: external-dns
namespace: external-dns
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- pods
- nodes
verbs:
- get
- watch
- list
- apiGroups:
- extensions
- networking.k8s.io
resources:
- ingresses
verbs:
- get
- list
- watch
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: external-dns
namespace: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: external-dns-viewer
namespace: external-dns
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: external-dns
subjects:
- kind: ServiceAccount
name: external-dns
namespace: external-dns
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: external-dns
namespace: external-dns
spec:
revisionHistoryLimit: 0
selector:
matchLabels:
app: external-dns
template:
metadata:
labels:
app: external-dns
spec:
serviceAccountName: external-dns
containers:
- name: external-dns
image: k8s.gcr.io/external-dns/external-dns:v0.10.2
envFrom:
- secretRef:
name: tsig-secret
args:
- --registry=txt
- --txt-prefix=external-dns-
- --txt-owner-id=k8s
- --provider=rfc2136
- --source=ingress
- --source=service
- --domain-filter=k-space.ee
- --rfc2136-host=193.40.103.2
- --rfc2136-port=53
- --rfc2136-zone=k-space.ee
- --rfc2136-tsig-keyname=acme
- --rfc2136-tsig-secret-alg=hmac-sha512
- --rfc2136-tsig-secret=$(TSIG_SECRET)
# https://github.com/kubernetes-sigs/external-dns/issues/2446


@ -1,9 +0,0 @@
# Freescout
This application is managed by [ArgoCD](https://argocd.k-space.ee/applications/argocd/freescout).
Should ArgoCD be down, the manifests here can be applied with:
```
kubectl apply -n freescout -f application.yaml
```


@ -1,217 +1,4 @@
---
apiVersion: codemowers.io/v1alpha1
kind: OIDCGWMiddlewareClient
metadata:
name: freescout
spec:
displayName: Freescout Middleware
uri: 'https://freescout.k-space.ee'
allowedGroups:
- k-space:floor
headerMapping:
email: Remote-Email
groups: Remote-Groups
name: Remote-Name
user: Remote-User
---
apiVersion: codemowers.io/v1alpha1
kind: OIDCGWClient
metadata:
name: freescout
spec:
displayName: Freescout
uri: https://freescout.k-space.ee
redirectUris:
- https://freescout.k-space.ee/oauth_callback
allowedGroups:
- k-space:floor
grantTypes:
- authorization_code
- refresh_token
responseTypes:
- code
availableScopes:
- openid
- profile
pkce: false
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: oidc-gateway
annotations:
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
traefik.ingress.kubernetes.io/router.middlewares: freescout-freescout@kubernetescrd
spec:
rules:
- host: freescout.k-space.ee
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: freescout
port:
number: 80
tls:
- hosts:
- "*.k-space.ee"
---
apiVersion: v1
kind: Service
metadata:
name: freescout
spec:
type: ClusterIP
selector:
app: freescout
ports:
- protocol: TCP
port: 80
targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: freescout
labels:
app: freescout
spec:
selector:
matchLabels:
app: freescout
replicas: 1
template:
metadata:
labels:
app: freescout
spec:
containers:
- name: freescout
image: harbor.k-space.ee/k-space/freescout@sha256:de1a6c8bd1f285f6f6c61aa48921a884fe7a1496655b31c9536805397c01ee58
ports:
- containerPort: 8080
env:
- name: DISPLAY_ERRORS
value: 'true'
- name: SITE_URL
value: 'https://freescout.k-space.ee'
- name: APP_URL
value: 'https://freescout.k-space.ee'
- name: DB_HOST
value: mariadb.infra.k-space.ee
- name: DB_PORT
value: "3306"
- name: DB_DATABASE
value: kspace_freescout
- name: DB_USERNAME
value: kspace_freescout
- name: ADMIN_EMAIL
value: lauri@k-space.ee
- name: ADMIN_PASS
value: Salakala1!
- name: TIMEZONE
value: Europe/Tallinn
- name: FREESCOUT_ATTACHMENTS_DRIVER
value: s3
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: freescout-secrets
key: DB_PASS
- name: AWS_USE_PATH_STYLE_ENDPOINT
value: "true"
- name: AWS_BUCKET
valueFrom:
secretKeyRef:
name: miniobucket-attachments-owner-secrets
key: BUCKET_NAME
- name: APP_KEY
valueFrom:
secretKeyRef:
name: freescout-app
key: APP_KEY
envFrom:
- secretRef:
name: miniobucket-attachments-owner-secrets
---
apiVersion: batch/v1
kind: CronJob
metadata:
name: freescout-cron
spec:
schedule: "0,30 * * * *" # Should be every minute in theory, keeps hanging
jobTemplate:
spec:
activeDeadlineSeconds: 1800 # this is unholy https://github.com/freescout-helpdesk/freescout/blob/dist/app/Console/Kernel.php
ttlSecondsAfterFinished: 100
template:
spec:
containers:
- name: freescout-cron
image: harbor.k-space.ee/k-space/freescout@sha256:de1a6c8bd1f285f6f6c61aa48921a884fe7a1496655b31c9536805397c01ee58
imagePullPolicy: Always
command:
- php
- artisan
- schedule:run
env:
- name: DISPLAY_ERRORS
value: 'true'
- name: SITE_URL
value: 'https://freescout.k-space.ee'
- name: APP_URL
value: 'https://freescout.k-space.ee'
- name: DB_HOST
value: mariadb.infra.k-space.ee
- name: DB_PORT
value: "3306"
- name: DB_DATABASE
value: kspace_freescout
- name: DB_USERNAME
value: kspace_freescout
- name: ADMIN_EMAIL
value: lauri@k-space.ee
- name: ADMIN_PASS
value: Salakala1!
- name: TIMEZONE
value: Europe/Tallinn
- name: FREESCOUT_ATTACHMENTS_DRIVER
value: s3
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: freescout-secrets
key: DB_PASS
- name: AWS_USE_PATH_STYLE_ENDPOINT
value: "true"
- name: AWS_BUCKET
valueFrom:
secretKeyRef:
name: miniobucket-attachments-owner-secrets
key: BUCKET_NAME
- name: APP_KEY
valueFrom:
secretKeyRef:
name: freescout-app
key: APP_KEY
envFrom:
- secretRef:
name: miniobucket-attachments-owner-secrets
restartPolicy: Never
---
apiVersion: codemowers.cloud/v1beta1
kind: MinioBucketClaim
metadata:
name: attachments
spec:
capacity: 10Gi
class: external
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:


@ -1,50 +0,0 @@
---
apiVersion: batch/v1
kind: Job
metadata:
name: reset-oidc-config
spec:
template:
spec:
volumes:
- name: tmp
emptyDir: {}
initContainers:
- name: jq
image: alpine/k8s:1.24.16@sha256:06f8942d87fa17b40795bb9a8eff029a9be3fc3c9bcc13d62071de4cc3324153
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /tmp
name: tmp
envFrom:
- secretRef:
name: oidc-client-freescout-owner-secrets
command:
- /bin/bash
- -c
- rm -fv /tmp/update.sql;
jq '{"name":"oauth.client_id","value":$ENV.OIDC_CLIENT_ID} | "UPDATE options SET value=\(.value|tostring|@sh) WHERE name=\(.name|tostring|@sh) LIMIT 1;"' -n -r >> /tmp/update.sql;
jq '{"name":"oauth.client_secret","value":$ENV.OIDC_CLIENT_SECRET} | "UPDATE options SET value=\(.value|tostring|@sh) WHERE name=\(.name|tostring|@sh) LIMIT 1;"' -n -r >> /tmp/update.sql;
jq '{"name":"oauth.auth_url","value":$ENV.OIDC_GATEWAY_AUTH_URI} | "UPDATE options SET value=\(.value + "?scope=openid+profile" |tostring|@sh) WHERE name=\(.name|tostring|@sh) LIMIT 1;"' -n -r >> /tmp/update.sql;
jq '{"name":"oauth.token_url","value":$ENV.OIDC_GATEWAY_TOKEN_URI} | "UPDATE options SET value=\(.value|tostring|@sh) WHERE name=\(.name|tostring|@sh) LIMIT 1;"' -n -r >> /tmp/update.sql;
jq '{"name":"oauth.user_url","value":$ENV.OIDC_GATEWAY_USERINFO_URI} | "UPDATE options SET value=\(.value|tostring|@sh) WHERE name=\(.name|tostring|@sh) LIMIT 1;"' -n -r >> /tmp/update.sql;
cat /tmp/update.sql
containers:
- name: mysql
image: mysql
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /tmp
name: tmp
env:
- name: MYSQL_PWD
valueFrom:
secretKeyRef:
name: freescout-secrets
key: DB_PASS
command:
- /bin/bash
- -c
- mysql -u kspace_freescout kspace_freescout -h 172.20.36.1 -p${MYSQL_PWD} < /tmp/update.sql
restartPolicy: OnFailure
backoffLimit: 4


@ -1,9 +0,0 @@
# Gitea
This application is managed by [ArgoCD](https://argocd.k-space.ee/applications/argocd/gitea).
Should ArgoCD be down, the manifests here can be applied with:
```
kubectl apply -n gitea -f application.yaml
```


@ -1,240 +0,0 @@
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: gitea
namespace: gitea
spec:
dnsNames:
- git.k-space.ee
issuerRef:
kind: ClusterIssuer
name: default
secretName: git-tls
---
apiVersion: codemowers.cloud/v1beta1
kind: SecretClaim
metadata:
name: gitea-security-secret-key
spec:
size: 32
mapping:
- key: secret
value: "%(plaintext)s"
---
apiVersion: codemowers.cloud/v1beta1
kind: SecretClaim
metadata:
name: gitea-security-internal-token
spec:
size: 32
mapping:
- key: secret
value: "%(plaintext)s"
---
apiVersion: codemowers.io/v1alpha1
kind: OIDCGWClient
metadata:
name: gitea
spec:
displayName: Gitea
uri: https://git.k-space.ee/user/oauth2/OpenID
redirectUris:
- https://git.k-space.ee/user/oauth2/OpenID/callback
allowedGroups:
- k-space:floor
- k-space:friends
grantTypes:
- authorization_code
- refresh_token
responseTypes:
- code
availableScopes:
- openid
- profile
pkce: false
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: gitea
labels:
app.kubernetes.io/name: gitea
spec:
revisionHistoryLimit: 0
serviceName: gitea
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: gitea
template:
metadata:
labels:
app.kubernetes.io/name: gitea
spec:
enableServiceLinks: false
securityContext:
fsGroup: 1000
runAsUser: 1000
runAsGroup: 1000
runAsNonRoot: true
containers:
- name: gitea
image: gitea/gitea:1.21.5-rootless
imagePullPolicy: IfNotPresent
securityContext:
readOnlyRootFilesystem: true
env:
- name: GITEA__REPOSITORY__DISABLED_REPO_UNITS
value: repo.releases,repo.wiki
- name: GITEA__ADMIN__DISABLE_REGULAR_ORG_CREATION
value: "true"
- name: GITEA__SERVER__SSH_SERVER_HOST_KEYS
value: ssh/gitea.rsa,ssh/gitea.ecdsa,ssh/gitea.ed25519
- name: GITEA__SERVER__START_SSH_SERVER
value: "true"
- name: GITEA__SERVER__CERT_FILE
value: "/cert/tls.crt"
- name: GITEA__SERVER__KEY_FILE
value: "/cert/tls.key"
- name: GITEA__SERVER__SSH_PORT
value: "22"
- name: GITEA__SERVER__PROTOCOL
value: https
- name: GITEA__SERVER__REDIRECT_OTHER_PORT
value: "true"
- name: GITEA__SERVER__PORT_TO_REDIRECT
value: "8080"
- name: GITEA__SERVER__DOMAIN
value: git.k-space.ee
- name: GITEA__SERVER__SSH_DOMAIN
value: git.k-space.ee
- name: GITEA__SERVER__HTTP_ADDR
value: 0.0.0.0
- name: GITEA__SERVER__ROOT_URL
value: https://git.k-space.ee
- name: GITEA__SSH.MINIMUM_KEY_SIZES__DSA
value: "-1"
- name: GITEA__DATABASE__DB_TYPE
value: mysql
- name: GITEA__DATABASE__HOST
value: mariadb.infra.k-space.ee:3306
- name: GITEA__DATABASE__NAME
value: kspace_git
- name: GITEA__DATABASE__USER
value: kspace_git
- name: GITEA__DATABASE__SSL_MODE
value: disable
- name: GITEA__DATABASE__LOG_SQL
value: "false"
- name: GITEA__SECURITY__INSTALL_LOCK
value: "true"
- name: GITEA__SERVICE__REGISTER_EMAIL_CONFIRM
value: "true"
- name: GITEA__SERVICE__DISABLE_REGISTRATION
value: "true"
- name: GITEA__SERVICE__ENABLE_NOTIFY_MAIL
value: "true"
- name: GITEA__MAILER__ENABLED
value: "true"
- name: GITEA__MAILER__SMTP_ADDR
value: mail.k-space.ee
- name: GITEA__MAILER__SMTP_PORT
value: "465"
- name: GITEA__MAILER__FROM
value: Gitea <git@k-space.ee>
- name: GITEA__MAILER__USER
value: git
- name: GITEA__MAILER__USE_PLAIN_TEXT
value: "false"
- name: GITEA__SESSION__PROVIDER
value: file
- name: GITEA__SESSION__COOKIE_SECURE
value: "true"
- name: GITEA__CRON__ENABLED
value: "true"
- name: GITEA__OAUTH2_CLIENT__ENABLE_AUTO_REGISTRATION
value: "true"
- name: GITEA__DATABASE__PASSWD
valueFrom:
secretKeyRef:
name: gitea-secrets
key: GITEA__DATABASE__PASSWD
- name: GITEA__MAILER__PASSWD
valueFrom:
secretKeyRef:
name: gitea-secrets
key: GITEA__MAILER__PASSWD
- name: GITEA__OAUTH2__JWT_SECRET
valueFrom:
secretKeyRef:
name: gitea-secrets
key: GITEA__OAUTH2__JWT_SECRET
- name: GITEA__SECURITY__INTERNAL_TOKEN
valueFrom:
secretKeyRef:
name: gitea-security-internal-token
key: secret
- name: GITEA__SECURITY__SECRET_KEY
valueFrom:
secretKeyRef:
name: gitea-security-secret-key
key: secret
ports:
- containerPort: 8080
name: http
- containerPort: 3000
name: https
- containerPort: 2222
name: ssh
volumeMounts:
- mountPath: /tmp
name: tmp
- mountPath: /etc/gitea
name: etc
- mountPath: /cert
name: cert
- mountPath: /var/lib/gitea
name: data
volumes:
- name: tmp
emptyDir: {}
- name: etc
emptyDir: {}
- name: cert
secret:
secretName: git-tls
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes:
- ReadWriteOnce
storageClassName: gitea
resources:
requests:
storage: 10Gi
---
apiVersion: v1
kind: Service
metadata:
name: gitea
namespace: gitea
annotations:
external-dns.alpha.kubernetes.io/hostname: git.k-space.ee
spec:
type: LoadBalancer
externalTrafficPolicy: Local
selector:
app.kubernetes.io/name: gitea
ports:
- port: 22
name: ssh
targetPort: 2222
- port: 80
name: http
targetPort: 8080
- port: 443
name: https
targetPort: 3000
sessionAffinity: ClientIP

@ -1,15 +1,19 @@
# Grafana
- This application is managed by [ArgoCD](https://argocd.k-space.ee/applications/argocd/grafana)
- Should ArgoCD be down manifests here can be applied with:
```
kubectl create namespace grafana
kubectl apply -n grafana -f application.yml
```
+ ## OIDC secret
+ See Authelia README on provisioning and updating OIDC secrets for Grafana
## Grafana post deployment steps
* Configure Prometheus datasource with URL set to
-   `http://prometheus-operated.monitoring.svc.cluster.local:9090`
+   `http://prometheus-operated.prometheus-operator.svc.cluster.local:9090`
+ * Configure Elasticsearch datasource with URL set to
+   `http://elasticsearch.elastic-system.svc.cluster.local`,
+   Time field name set to `timestamp` and
+   ElasticSearch version set to `7.10+`
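These datasources could also be provisioned declaratively instead of through the UI. A minimal sketch, assuming Grafana's standard provisioning directory (`/etc/grafana/provisioning/datasources/`) is mounted from a ConfigMap; the ConfigMap name and mount are hypothetical, the URL is the Prometheus one listed above:
```
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-datasources
data:
  datasources.yaml: |
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        url: http://prometheus-operated.prometheus-operator.svc.cluster.local:9090
        isDefault: true
```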

@ -1,25 +1,4 @@
---
- apiVersion: codemowers.io/v1alpha1
- kind: OIDCGWClient
- metadata:
-   name: grafana
- spec:
-   displayName: Grafana
-   uri: https://grafana.k-space.ee/login/generic_oauth
-   redirectUris:
-     - https://grafana.k-space.ee/login/generic_oauth
-   allowedGroups:
-     - k-space:floor
-   grantTypes:
-     - authorization_code
-     - refresh_token
-   responseTypes:
-     - code
-   availableScopes:
-     - openid
-     - profile
-   tokenEndpointAuthMethod: none
- ---
apiVersion: v1
kind: ConfigMap
metadata:
@ -35,12 +14,14 @@ data:
    name = OAuth
    icon = signin
    enabled = true
+   client_id = grafana
+   scopes = openid profile email groups
    empty_scopes = false
+   auth_url = https://auth.k-space.ee/api/oidc/authorize
+   token_url = https://auth.k-space.ee/api/oidc/token
+   api_url = https://auth.k-space.ee/api/oidc/userinfo
    allow_sign_up = true
-   use_pkce = true
-   role_attribute_path = contains(groups[*], 'github.com:codemowers') && 'Admin' || 'Viewer'
-   [security]
-   disable_initial_admin_creation = true
+   role_attribute_path = contains(groups[*], 'Grafana Admins') && 'Admin' || 'Viewer'
---
apiVersion: apps/v1
kind: StatefulSet
@ -63,47 +44,14 @@ spec:
        fsGroup: 472
      containers:
        - name: grafana
-         image: grafana/grafana:8.5.24
+         image: grafana/grafana:8.5.0
          securityContext:
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 472
-         env:
-           - name: GF_AUTH_GENERIC_OAUTH_SIGNOUT_REDIRECT_URL
-             valueFrom:
-               secretKeyRef:
-                 name: oidc-client-grafana-owner-secrets
-                 key: OIDC_GATEWAY_URI
-           - name: GF_AUTH_GENERIC_OAUTH_CLIENT_ID
-             valueFrom:
-               secretKeyRef:
-                 name: oidc-client-grafana-owner-secrets
-                 key: OIDC_CLIENT_ID
-           - name: GF_AUTH_GENERIC_OAUTH_SECRET
-             valueFrom:
-               secretKeyRef:
-                 name: oidc-client-grafana-owner-secrets
-                 key: OIDC_CLIENT_SECRET
-           - name: GF_AUTH_GENERIC_OAUTH_SCOPES
-             valueFrom:
-               secretKeyRef:
-                 name: oidc-client-grafana-owner-secrets
-                 key: OIDC_AVAILABLE_SCOPES
-           - name: GF_AUTH_GENERIC_OAUTH_AUTH_URL
-             valueFrom:
-               secretKeyRef:
-                 name: oidc-client-grafana-owner-secrets
-                 key: OIDC_GATEWAY_AUTH_URI
-           - name: GF_AUTH_GENERIC_OAUTH_TOKEN_URL
-             valueFrom:
-               secretKeyRef:
-                 name: oidc-client-grafana-owner-secrets
-                 key: OIDC_GATEWAY_TOKEN_URI
-           - name: GF_AUTH_GENERIC_OAUTH_API_URL
-             valueFrom:
-               secretKeyRef:
-                 name: oidc-client-grafana-owner-secrets
-                 key: OIDC_GATEWAY_USERINFO_URI
+         envFrom:
+           - secretRef:
+               name: oidc-secret
          ports:
            - containerPort: 3000
              name: http-grafana
@ -185,11 +133,3 @@ spec:
  tls:
    - hosts:
        - "*.k-space.ee"
- ---
- apiVersion: codemowers.cloud/v1beta1
- kind: MysqlDatabaseClaim
- metadata:
-   name: grafana
- spec:
-   capacity: 1Gi
-   class: shared

@ -1,57 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: goredirect
namespace: hackerspace
spec:
replicas: 2
revisionHistoryLimit: 0
selector:
matchLabels:
app.kubernetes.io/name: goredirect
template:
metadata:
labels:
app.kubernetes.io/name: goredirect
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values:
- goredirect
topologyKey: topology.kubernetes.io/zone
weight: 100
containers:
- image: harbor.k-space.ee/k-space/goredirect:latest
imagePullPolicy: Always
env:
- name: GOREDIRECT_NOT_FOUND
value: https://inventory.k-space.ee/m/inventory/add-slug/%s
- name: GOREDIRECT_FOUND
value: https://inventory.k-space.ee/m/inventory/%s/view
- name: MONGO_URI
valueFrom:
secretKeyRef:
key: connectionString.standard
name: inventory-mongodb-application-readwrite
name: goredirect
ports:
- containerPort: 8080
name: http
protocol: TCP
resources:
limits:
cpu: "1"
memory: 500Mi
requests:
cpu: 100m
memory: 200Mi
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000

@ -1,200 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: inventory
namespace: hackerspace
spec:
replicas: 1
revisionHistoryLimit: 0
selector:
matchLabels:
app.kubernetes.io/name: inventory
template:
metadata:
labels:
app.kubernetes.io/name: inventory
spec:
containers:
- image: harbor.k-space.ee/k-space/inventory-app:latest
imagePullPolicy: Always
env:
- name: ENVIRONMENT_TYPE
value: PROD
- name: PYTHONUNBUFFERED
value: "1"
- name: MEMBERS_HOST
value: https://members.k-space.ee
- name: INVENTORY_ASSETS_BASE_URL
value: https://minio-cluster-shared.k-space.ee/inventory-5b342be1-60a1-4290-8061-e0b8fc17d40d/
- name: OIDC_USERS_NAMESPACE
value: oidc-gateway
- name: MONGO_URI
valueFrom:
secretKeyRef:
key: connectionString.standard
name: inventory-mongodb-application-readwrite
- name: SECRET_KEY
valueFrom:
secretKeyRef:
key: SECRET_KEY
name: inventory-secrets
- name: INVENTORY_API_KEY
valueFrom:
secretKeyRef:
key: INVENTORY_API_KEY
name: inventory-api-key
- name: SLACK_DOORLOG_CALLBACK
valueFrom:
secretKeyRef:
key: SLACK_DOORLOG_CALLBACK
name: slack-secrets
- name: SLACK_VERIFICATION_TOKEN
valueFrom:
secretKeyRef:
key: SLACK_VERIFICATION_TOKEN
name: slack-secrets
envFrom:
- secretRef:
name: miniobucket-inventory-owner-secrets
- secretRef:
name: oidc-client-inventory-app-owner-secrets
name: inventory
ports:
- containerPort: 5000
name: http
protocol: TCP
resources:
limits:
cpu: "1"
memory: 500Mi
requests:
cpu: 100m
memory: 200Mi
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /tmp
name: tmp
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
serviceAccount: inventory
serviceAccountName: inventory
terminationGracePeriodSeconds: 30
volumes:
        - name: tmp
          emptyDir: {}
---
apiVersion: codemowers.cloud/v1beta1
kind: SecretClaim
metadata:
name: inventory-mongodb-readwrite-password
spec:
size: 32
mapping:
- key: password
value: "%(plaintext)s"
---
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
name: inventory-mongodb
spec:
agent:
logLevel: ERROR
maxLogFileDurationHours: 1
additionalMongodConfig:
systemLog:
quiet: true
members: 3
type: ReplicaSet
version: "6.0.3"
security:
authentication:
modes: ["SCRAM"]
users:
- name: readwrite
db: application
passwordSecretRef:
name: inventory-mongodb-readwrite-password
roles:
- name: readWrite
db: application
scramCredentialsSecretName: inventory-mongodb-readwrite
statefulSet:
spec:
logLevel: WARN
template:
spec:
containers:
- name: mongod
resources:
requests:
cpu: 100m
memory: 1Gi
limits:
cpu: 4000m
memory: 1Gi
volumeMounts:
- name: journal-volume
mountPath: /data/journal
- name: mongodb-agent
resources:
requests:
cpu: 1m
memory: 100Mi
limits: {}
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- inventory-mongodb-svc
topologyKey: kubernetes.io/hostname
nodeSelector:
dedicated: monitoring
tolerations:
- key: dedicated
operator: Equal
value: monitoring
effect: NoSchedule
volumeClaimTemplates:
- metadata:
name: logs-volume
labels:
usecase: logs
spec:
storageClassName: mongo
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
- metadata:
name: journal-volume
labels:
usecase: journal
spec:
storageClassName: mongo
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 512Mi
- metadata:
name: data-volume
labels:
usecase: data
spec:
storageClassName: mongo
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi

@ -1 +0,0 @@
../mongodb-operator/mongodb-support.yml

@ -35,7 +35,7 @@ data:
  TRIVY_ADAPTER_URL: "http://harbor-trivy:8080"
  REGISTRY_STORAGE_PROVIDER_NAME: "filesystem"
  WITH_CHARTMUSEUM: "false"
- LOG_LEVEL: "warning"
+ LOG_LEVEL: "info"
  CONFIG_PATH: "/etc/core/app.conf"
  CHART_CACHE_DRIVER: "redis"
  _REDIS_URL_CORE: "redis://harbor-redis:6379/0?idle_timeout_seconds=30"
@ -396,7 +396,7 @@ spec:
      terminationGracePeriodSeconds: 120
      containers:
        - name: core
-         image: mirror.gcr.io/goharbor/harbor-core:v2.4.2
+         image: goharbor/harbor-core:v2.4.2
          startupProbe:
            httpGet:
              path: /api/v2.0/ping
@ -500,7 +500,7 @@ spec:
      terminationGracePeriodSeconds: 120
      containers:
        - name: jobservice
-         image: mirror.gcr.io/goharbor/harbor-jobservice:v2.4.2
+         image: goharbor/harbor-jobservice:v2.4.2
          readinessProbe:
            httpGet:
              path: /api/v1/stats
@ -571,7 +571,7 @@ spec:
      automountServiceAccountToken: false
      containers:
        - name: portal
-         image: mirror.gcr.io/goharbor/harbor-portal:v2.4.2
+         image: goharbor/harbor-portal:v2.4.2
          readinessProbe:
            httpGet:
              path: /
@ -625,7 +625,7 @@ spec:
      terminationGracePeriodSeconds: 120
      containers:
        - name: registry
-         image: mirror.gcr.io/goharbor/registry-photon:v2.4.2
+         image: goharbor/registry-photon:v2.4.2
          readinessProbe:
            httpGet:
              path: /
@ -652,7 +652,7 @@ spec:
              mountPath: /etc/registry/config.yml
              subPath: config.yml
        - name: registryctl
-         image: mirror.gcr.io/goharbor/harbor-registryctl:v2.4.2
+         image: goharbor/harbor-registryctl:v2.4.2
          readinessProbe:
            httpGet:
              path: /api/health
@ -743,7 +743,7 @@ spec:
        # for more detail.
        # we may remove it after several releases
        - name: "data-migrator"
-         image: mirror.gcr.io/goharbor/harbor-db:v2.4.2
+         image: goharbor/harbor-db:v2.4.2
          command: ["/bin/sh"]
          args: ["-c", "[ -e /var/lib/postgresql/data/postgresql.conf ] && [ ! -d /var/lib/postgresql/data/pgdata ] && mkdir -m 0700 /var/lib/postgresql/data/pgdata && mv /var/lib/postgresql/data/* /var/lib/postgresql/data/pgdata/ || true"]
          volumeMounts:
@ -755,7 +755,7 @@ spec:
        # use this init container to correct the permission
        # as "fsGroup" applied before the init container running, the container has enough permission to execute the command
        - name: "data-permissions-ensurer"
-         image: mirror.gcr.io/goharbor/harbor-db:v2.4.2
+         image: goharbor/harbor-db:v2.4.2
          command: ["/bin/sh"]
          args: ["-c", "chmod -R 700 /var/lib/postgresql/data/pgdata || true"]
          volumeMounts:
@ -764,7 +764,7 @@ spec:
            subPath:
      containers:
        - name: database
-         image: mirror.gcr.io/goharbor/harbor-db:v2.4.2
+         image: goharbor/harbor-db:v2.4.2
          readinessProbe:
            exec:
              command:
@ -838,7 +838,7 @@ spec:
      terminationGracePeriodSeconds: 120
      containers:
        - name: redis
-         image: mirror.gcr.io/goharbor/redis-photon:v2.4.2
+         image: goharbor/redis-photon:v2.4.2
          readinessProbe:
            tcpSocket:
              port: 6379
@ -895,7 +895,7 @@ spec:
      automountServiceAccountToken: false
      containers:
        - name: trivy
-         image: mirror.gcr.io/goharbor/trivy-adapter-photon:v2.4.2
+         image: goharbor/trivy-adapter-photon:v2.4.2
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: false

@ -1,38 +0,0 @@
all:
children:
bind:
hosts:
ns1.k-space.ee:
kubernetes:
children:
masters:
hosts:
master1.kube.k-space.ee:
master2.kube.k-space.ee:
master3.kube.k-space.ee:
kubelets:
children:
mon:
hosts:
mon1.kube.k-space.ee:
mon2.kube.k-space.ee:
mon3.kube.k-space.ee:
storage:
hosts:
storage1.kube.k-space.ee:
storage2.kube.k-space.ee:
storage3.kube.k-space.ee:
storage4.kube.k-space.ee:
workers:
hosts:
worker1.kube.k-space.ee:
worker2.kube.k-space.ee:
worker3.kube.k-space.ee:
worker4.kube.k-space.ee:
worker9.kube.k-space.ee:
doors:
hosts:
100.102.3.1:
100.102.3.2:
100.102.3.3:
100.102.3.4:

keel/README.md Normal file
@ -0,0 +1,10 @@
To generate secrets and to deploy:
```
kubectl create secret generic -n $(basename $(pwd)) application-secrets \
--from-literal=BASIC_AUTH_PASSWORD=$(cat /dev/urandom | base64 | head -c 30) \
--from-literal=MAIL_SMTP_PASS=... \
--from-literal=SLACK_TOKEN=...
kubectl apply -n keel -f application.yml
kubectl -n keel rollout restart deployment.apps/keel
```
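Keel then watches annotated workloads and re-pulls their images. A minimal hypothetical Deployment sketch using the same annotations that appear on the Keel StatefulSet below (`keel.sh/policy: force`, `keel.sh/trigger: poll`); the workload name and image are placeholders, not part of this repo:
```
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app              # hypothetical workload
  annotations:
    keel.sh/policy: force        # force-update even when the tag (e.g. :latest) is unchanged
    keel.sh/trigger: poll        # poll the registry instead of relying on webhooks
    keel.sh/pollSchedule: "@midnight"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: harbor.k-space.ee/k-space/example-app:latest   # hypothetical image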

keel/application.yml Normal file
@ -0,0 +1,176 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: keel
namespace: keel
labels:
app: keel
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: keel
rules:
- apiGroups:
- ""
resources:
- namespaces
verbs:
- watch
- list
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- watch
- list
- apiGroups:
- ""
- extensions
- apps
- batch
resources:
- pods
- replicasets
- replicationcontrollers
- statefulsets
- deployments
- daemonsets
- jobs
- cronjobs
verbs:
- get
- delete # required to delete pods during force upgrade of the same tag
- watch
- list
- update
- apiGroups:
- ""
resources:
- configmaps
- pods/portforward
verbs:
- get
- create
- update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: keel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: keel
subjects:
- kind: ServiceAccount
name: keel
namespace: keel
---
apiVersion: v1
kind: Service
metadata:
name: keel
namespace: keel
labels:
app: keel
spec:
type: ClusterIP
ports:
- port: 9300
targetPort: 9300
protocol: TCP
name: keel
selector:
app: keel
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: keel
labels:
app: keel
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
keel.sh/pollSchedule: "@midnight"
spec:
replicas: 1
serviceName: keel
selector:
matchLabels:
app: keel
template:
metadata:
labels:
app: keel
spec:
serviceAccountName: keel
containers:
- name: keel
image: keelhq/keel:latest
imagePullPolicy: Always
command: ["/bin/keel"]
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POLL
value: "true"
- name: HELM_PROVIDER
value: "false"
- name: TILLER_NAMESPACE
value: "kube-system"
- name: TILLER_ADDRESS
value: "tiller-deploy:44134"
- name: NOTIFICATION_LEVEL
value: "info"
- name: BASIC_AUTH_USER
value: admin
- name: SLACK_CHANNELS
value: kube-prod
- name: SLACK_BOT_NAME
value: keel.k-space.ee
envFrom:
- secretRef:
name: application-secrets
ports:
- containerPort: 9300
livenessProbe:
httpGet:
path: /healthz
port: 9300
initialDelaySeconds: 30
timeoutSeconds: 10
readinessProbe:
httpGet:
path: /healthz
port: 9300
initialDelaySeconds: 30
timeoutSeconds: 10
resources:
limits:
cpu: 100m
memory: 128Mi
requests:
cpu: 50m
memory: 64Mi
volumeMounts:
- name: keel-data
mountPath: /data
volumeClaimTemplates:
- metadata:
name: keel-data
spec:
storageClassName: longhorn
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi

@ -1,165 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: descheduler
namespace: kube-system
labels:
app.kubernetes.io/name: descheduler
---
apiVersion: v1
kind: ConfigMap
metadata:
name: descheduler
namespace: kube-system
labels:
app.kubernetes.io/name: descheduler
data:
policy.yaml: |
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
LowNodeUtilization:
enabled: true
params:
nodeResourceUtilizationThresholds:
targetThresholds:
cpu: 50
memory: 50
pods: 50
thresholds:
cpu: 20
memory: 20
pods: 20
RemoveDuplicates:
enabled: true
RemovePodsHavingTooManyRestarts:
enabled: true
params:
podsHavingTooManyRestarts:
includingInitContainers: true
podRestartThreshold: 100
RemovePodsViolatingInterPodAntiAffinity:
enabled: true
RemovePodsViolatingNodeAffinity:
enabled: true
params:
nodeAffinityType:
- requiredDuringSchedulingIgnoredDuringExecution
RemovePodsViolatingNodeTaints:
enabled: true
RemovePodsViolatingTopologySpreadConstraint:
enabled: true
params:
includeSoftConstraints: false
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: descheduler
labels:
app.kubernetes.io/name: descheduler
rules:
- apiGroups: ["events.k8s.io"]
resources: ["events"]
verbs: ["create", "update"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "watch", "list"]
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get", "watch", "list"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list", "delete"]
- apiGroups: [""]
resources: ["pods/eviction"]
verbs: ["create"]
- apiGroups: ["scheduling.k8s.io"]
resources: ["priorityclasses"]
verbs: ["get", "watch", "list"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["create", "update"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
resourceNames: ["descheduler"]
verbs: ["get", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: descheduler
labels:
app.kubernetes.io/name: descheduler
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: descheduler
subjects:
- kind: ServiceAccount
name: descheduler
namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: descheduler
namespace: kube-system
labels:
app.kubernetes.io/name: descheduler
spec:
replicas: 2
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: descheduler
template:
metadata:
labels: *selectorLabels
spec:
priorityClassName: system-cluster-critical
serviceAccountName: descheduler
containers:
- name: descheduler
image: "k8s.gcr.io/descheduler/descheduler:v0.25.1"
imagePullPolicy: IfNotPresent
command:
- "/bin/descheduler"
args:
- "--policy-config-file"
- "/policy-dir/policy.yaml"
- "--descheduling-interval"
- 5m
- "--v"
- "3"
- --leader-elect=true
ports:
- containerPort: 10258
protocol: TCP
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10258
scheme: HTTPS
initialDelaySeconds: 3
periodSeconds: 10
resources:
requests:
cpu: 500m
memory: 256Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
readOnlyRootFilesystem: true
runAsNonRoot: true
volumeMounts:
- mountPath: /policy-dir
name: policy-volume
volumes:
- name: policy-volume
configMap:
name: descheduler

@ -159,9 +159,7 @@ spec:
    spec:
      automountServiceAccountToken: true
      containers:
-       - image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.9.2
-         args:
-           - --metric-labels-allowlist=pods=[*]
+       - image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.6.0
          livenessProbe:
            httpGet:
              path: /healthz
@ -310,6 +308,14 @@ spec:
          annotations:
            summary: Kubernetes Volume out of disk space (instance {{ $labels.instance }})
            description: "Volume is almost full (< 10% left)\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
+       - alert: KubernetesVolumeFullInFourDays
+         expr: predict_linear(kubelet_volume_stats_available_bytes[6h], 4 * 24 * 3600) < 0
+         for: 0m
+         labels:
+           severity: critical
+         annotations:
+           summary: Kubernetes Volume full in four days (instance {{ $labels.instance }})
+           description: "{{ $labels.namespace }}/{{ $labels.persistentvolumeclaim }} is expected to fill up within four days. Currently {{ $value | humanize }}% is available.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesPersistentvolumeError
          expr: kube_persistentvolume_status_phase{phase=~"Failed|Pending", job="kube-state-metrics"} > 0
          for: 0m
@ -423,13 +429,29 @@ spec:
            summary: Kubernetes DaemonSet rollout stuck (instance {{ $labels.instance }})
            description: "Some Pods of DaemonSet are not scheduled or not ready\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesDaemonsetMisscheduled
-         expr: sum by (namespace, daemonset) (kube_daemonset_status_number_misscheduled) > 0
+         expr: kube_daemonset_status_number_misscheduled > 0
          for: 1m
          labels:
            severity: critical
          annotations:
            summary: Kubernetes DaemonSet misscheduled (instance {{ $labels.instance }})
            description: "Some DaemonSet Pods are running where they are not supposed to run\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
+       - alert: KubernetesCronjobTooLong
+         expr: time() - kube_cronjob_next_schedule_time > 3600
+         for: 0m
+         labels:
+           severity: warning
+         annotations:
+           summary: Kubernetes CronJob too long (instance {{ $labels.instance }})
+           description: "CronJob {{ $labels.namespace }}/{{ $labels.cronjob }} is taking more than 1h to complete.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
+       - alert: KubernetesJobSlowCompletion
+         expr: kube_job_spec_completions - kube_job_status_succeeded > 0
+         for: 12h
+         labels:
+           severity: critical
+         annotations:
+           summary: Kubernetes job slow completion (instance {{ $labels.instance }})
+           description: "Kubernetes Job {{ $labels.namespace }}/{{ $labels.job_name }} did not complete in time.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesApiServerErrors
          expr: sum(rate(apiserver_request_total{job="apiserver",code=~"^(?:5..)$"}[1m])) / sum(rate(apiserver_request_total{job="apiserver"}[1m])) * 100 > 3
          for: 2m

@ -1,7 +1,5 @@
# Logging infrastructure
- Note: This is deprecated since we moved to [Logmower stack](https://github.com/logmower)
## Background
Fluent Bit picks up the logs from Kubernetes workers and sends them to Graylog

@ -1,512 +0,0 @@
---
apiVersion: codemowers.io/v1alpha1
kind: OIDCGWMiddlewareClient
metadata:
name: frontend
spec:
displayName: Kubernetes pod log aggregator
uri: 'https://log.k-space.ee'
allowedGroups:
- k-space:kubernetes:developers
- k-space:kubernetes:admins
headerMapping:
email: Remote-Email
groups: Remote-Groups
name: Remote-Name
user: Remote-Username
---
apiVersion: codemowers.cloud/v1beta1
kind: SecretClaim
metadata:
name: logmower-readwrite-password
spec:
size: 32
mapping:
- key: password
value: "%(plaintext)s"
---
apiVersion: codemowers.cloud/v1beta1
kind: SecretClaim
metadata:
name: logmower-readonly-password
spec:
size: 32
mapping:
- key: password
value: "%(plaintext)s"
---
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
name: logmower-mongodb
spec:
agent:
logLevel: ERROR
maxLogFileDurationHours: 1
additionalMongodConfig:
systemLog:
quiet: true
members: 2
arbiters: 1
type: ReplicaSet
version: "6.0.3"
security:
authentication:
modes: ["SCRAM"]
users:
- name: readwrite
db: application
passwordSecretRef:
name: logmower-readwrite-password
roles:
- name: readWrite
db: application
scramCredentialsSecretName: logmower-readwrite
- name: readonly
db: application
passwordSecretRef:
name: logmower-readonly-password
roles:
- name: read
db: application
scramCredentialsSecretName: logmower-readonly
statefulSet:
spec:
logLevel: WARN
template:
spec:
containers:
- name: mongod
resources:
requests:
cpu: 100m
memory: 1Gi
limits:
cpu: 4000m
memory: 1Gi
volumeMounts:
- name: journal-volume
mountPath: /data/journal
- name: mongodb-agent
resources:
requests:
cpu: 1m
memory: 100Mi
limits: {}
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- logmower-mongodb-svc
topologyKey: kubernetes.io/hostname
nodeSelector:
dedicated: monitoring
tolerations:
- key: dedicated
operator: Equal
value: monitoring
effect: NoSchedule
volumeClaimTemplates:
- metadata:
name: logs-volume
labels:
usecase: logs
spec:
storageClassName: mongo
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
- metadata:
name: journal-volume
labels:
usecase: journal
spec:
storageClassName: mongo
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 512Mi
- metadata:
name: data-volume
labels:
usecase: data
spec:
storageClassName: mongo
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: logmower-shipper
spec:
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 50%
selector:
matchLabels:
app: logmower-shipper
template:
metadata:
labels:
app: logmower-shipper
spec:
serviceAccountName: logmower-shipper
containers:
- name: logmower-shipper
image: logmower/shipper:latest
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: MONGO_URI
valueFrom:
secretKeyRef:
name: logmower-mongodb-application-readwrite
key: connectionString.standard
ports:
- containerPort: 8000
name: metrics
securityContext:
readOnlyRootFilesystem: true
command:
- /app/log_shipper.py
- --parse-json
- --normalize-log-level
- --stream-to-log-level
- --merge-top-level
- --max-collection-size
- "10000000000"
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: etcmachineid
mountPath: /etc/machine-id
readOnly: true
- name: varlog
mountPath: /var/log
readOnly: true
volumes:
- name: etcmachineid
hostPath:
path: /etc/machine-id
- name: varlog
hostPath:
path: /var/log
tolerations:
- operator: "Exists"
effect: "NoSchedule"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: logging-logmower-shipper
subjects:
- kind: ServiceAccount
name: logmower-shipper
namespace: logmower
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: logmower-shipper
labels:
app: logmower-shipper
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-shipper
spec:
podSelector:
matchLabels:
app: logmower-shipper
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
egress:
- to:
- podSelector:
matchLabels:
app: logmower-mongodb-svc
ports:
- port: 27017
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-eventsource
spec:
podSelector:
matchLabels:
app: logmower-eventsource
policyTypes:
- Ingress
- Egress
egress:
- to:
- podSelector:
matchLabels:
app: logmower-mongodb-svc
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-frontend
spec:
podSelector:
matchLabels:
app: logmower-frontend
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: logmower-shipper
spec:
selector:
matchLabels:
app: logmower-shipper
podMetricsEndpoints:
- port: metrics
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: logmower-shipper
spec:
groups:
- name: logmower-shipper
rules:
- alert: LogmowerSingleInsertionErrors
annotations:
summary: Logmower shipper is having issues submitting log records
to database
expr: rate(logmower_insertion_error_count_total[30m]) > 0
for: 0m
labels:
severity: warning
- alert: LogmowerBulkInsertionErrors
annotations:
summary: Logmower shipper is having issues submitting log records
to database
expr: rate(logmower_bulk_insertion_error_count_total[30m]) > 0
for: 0m
labels:
severity: warning
- alert: LogmowerHighDatabaseLatency
annotations:
summary: Database operations are slow
expr: histogram_quantile(0.95, logmower_database_operation_latency_bucket) > 10
for: 1m
labels:
severity: warning
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: logmower
annotations:
kubernetes.io/ingress.class: traefik
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: logmower-frontend@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
rules:
- host: log.k-space.ee
http:
paths:
- pathType: Prefix
path: "/events"
backend:
service:
name: logmower-eventsource
port:
number: 3002
- pathType: Prefix
path: "/"
backend:
service:
name: logmower-frontend
port:
number: 8080
tls:
- hosts:
- "*.k-space.ee"
---
apiVersion: v1
kind: Service
metadata:
name: logmower-eventsource
spec:
type: ClusterIP
selector:
app: logmower-eventsource
ports:
- protocol: TCP
port: 3002
---
apiVersion: v1
kind: Service
metadata:
name: logmower-frontend
spec:
type: ClusterIP
selector:
app: logmower-frontend
ports:
- protocol: TCP
port: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: logmower-frontend
spec:
selector:
matchLabels:
app: logmower-frontend
template:
metadata:
labels:
app: logmower-frontend
spec:
containers:
- name: logmower-frontend
image: logmower/frontend:latest
ports:
- containerPort: 8080
name: http
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
resources:
limits:
memory: 50Mi
requests:
cpu: 1m
memory: 20Mi
volumeMounts:
            - name: nginx-cache
              mountPath: /var/cache/nginx/
            - name: nginx-config
              mountPath: /var/config/nginx/
- name: var-run
mountPath: /var/run/
volumes:
- emptyDir: {}
name: nginx-cache
- emptyDir: {}
name: nginx-config
- emptyDir: {}
name: var-run
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: logmower-eventsource
spec:
selector:
matchLabels:
app: logmower-eventsource
template:
metadata:
labels:
app: logmower-eventsource
spec:
containers:
- name: logmower-eventsource
image: logmower/eventsource:latest
ports:
- containerPort: 3002
name: nodejs
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
resources:
limits:
cpu: 500m
memory: 200Mi
requests:
cpu: 10m
memory: 100Mi
env:
- name: MONGODB_HOST
valueFrom:
secretKeyRef:
name: logmower-mongodb-application-readonly
key: connectionString.standard
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-mongodb
spec:
podSelector:
matchLabels:
app: logmower-mongodb-svc
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector: {}
ports:
- port: 27017
egress:
- to:
- podSelector:
matchLabels:
app: logmower-mongodb-svc
ports:
- port: 27017

@ -1 +0,0 @@
../mongodb-operator/mongodb-support.yml

@ -1,47 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: logmower-mongoexpress
spec:
revisionHistoryLimit: 0
replicas: 1
selector:
matchLabels:
app: logmower-mongoexpress
template:
metadata:
labels:
app: logmower-mongoexpress
spec:
containers:
- name: mongoexpress
image: mongo-express
ports:
- name: mongoexpress
containerPort: 8081
env:
- name: ME_CONFIG_MONGODB_URL
valueFrom:
secretKeyRef:
name: logmower-mongodb-application-readonly
key: connectionString.standard
- name: ME_CONFIG_MONGODB_ENABLE_ADMIN
value: "true"
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-mongoexpress
spec:
podSelector:
matchLabels:
app: logmower-mongoexpress
policyTypes:
- Egress
egress:
- to:
- podSelector:
matchLabels:
app: logmower-mongodb-svc
ports:
- port: 27017

@ -1,13 +1,10 @@
# Longhorn distributed block storage system
- Pull the manifest and apply changes
+ The manifest was fetched from
+ https://raw.githubusercontent.com/longhorn/longhorn/v1.2.4/deploy/longhorn.yaml
+ and then heavily modified.
- ```
+ To deploy Longhorn use following:
- wget https://raw.githubusercontent.com/longhorn/longhorn/v1.5.1/deploy/longhorn.yaml -O application.yml
- patch -p0 < changes.diff
- ```
- To upgrade use following:
```
kubectl -n longhorn-system apply -f application.yml -f application-extras.yml
@ -17,3 +14,7 @@ After deploying specify `dedicated=storage:NoSchedule`
for `Kubernetes Taint Toleration` under `Setting -> General` on
[Longhorn Dashboard](https://longhorn.k-space.ee/).
Proceed to tag suitable nodes with `storage` and disable Longhorn scheduling on others.
+ # Known issues
+ * Longhorn does not support [trim](https://github.com/longhorn/longhorn/issues/836)
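The `dedicated=storage:NoSchedule` taint referenced above can be applied from the command line; a sketch using `storage1.kube.k-space.ee` (one of the storage hosts from the Ansible inventory) as an example node:
```
kubectl taint nodes storage1.kube.k-space.ee dedicated=storage:NoSchedule
```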

@ -1,19 +1,3 @@
- ---
- apiVersion: codemowers.io/v1alpha1
- kind: OIDCGWMiddlewareClient
- metadata:
-   name: ui
- spec:
-   displayName: Longhorn
-   uri: 'https://longhorn.k-space.ee'
-   allowedGroups:
-     - k-space:kubernetes:admins
-   headerMapping:
-     email: Remote-Email
-     groups: Remote-Groups
-     name: Remote-Name
-     user: Remote-Username
- ---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
@ -23,7 +7,7 @@ metadata:
    kubernetes.io/ingress.class: traefik
    external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
-   traefik.ingress.kubernetes.io/router.middlewares: longhorn-system-ui@kubernetescrd
+   traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
    traefik.ingress.kubernetes.io/router.tls: "true"
spec:
  rules:

File diff suppressed because it is too large
@ -1,46 +0,0 @@
---
apiVersion: codemowers.cloud/v1beta1
kind: MinioBucketClaim
metadata:
name: backup
spec:
capacity: 1Ti
class: external
---
apiVersion: longhorn.io/v1beta2
kind: Setting
metadata:
name: backup-target
namespace: longhorn-system
value: 's3://longhorn-system-a4b235c5-7919-4cb0-9949-259e60c579f1@us-east1/'
---
apiVersion: longhorn.io/v1beta2
kind: Setting
metadata:
name: backup-target-credential-secret
namespace: longhorn-system
value: 'miniobucket-backup-owner-secrets'
---
apiVersion: longhorn.io/v1beta1
kind: RecurringJob
metadata:
name: backup
namespace: longhorn-system
spec:
cron: "0 2 * * *"
task: backup
groups:
- default
retain: 1
concurrency: 4
---
apiVersion: longhorn.io/v1beta1
kind: RecurringJob
metadata:
name: trim
namespace: longhorn-system
spec:
cron: "0 * * * *"
task: trim
groups:
- default

@ -1,53 +0,0 @@
--- application.yml 2023-07-25 22:20:02.300421340 +0300
+++ application.modded 2023-07-25 22:19:47.040360210 +0300
@@ -60,14 +60,14 @@
storageclass.kubernetes.io/is-default-class: "true"
provisioner: driver.longhorn.io
allowVolumeExpansion: true
- reclaimPolicy: "Delete"
+ reclaimPolicy: "Retain"
volumeBindingMode: Immediate
parameters:
- numberOfReplicas: "3"
+ numberOfReplicas: "2"
staleReplicaTimeout: "30"
fromBackup: ""
- fsType: "ext4"
- dataLocality: "disabled"
+ fsType: "xfs"
+ dataLocality: "best-effort"
---
# Source: longhorn/templates/crds.yaml
apiVersion: apiextensions.k8s.io/v1
@@ -4085,6 +4085,15 @@
app.kubernetes.io/version: v1.5.1
app: longhorn-manager
spec:
+ tolerations:
+ - key: dedicated
+ operator: Equal
+ value: storage
+ effect: NoSchedule
+ - key: arch
+ operator: Equal
+ value: arm64
+ effect: NoSchedule
containers:
- name: longhorn-manager
image: longhornio/longhorn-manager:v1.5.1
@@ -4188,6 +4197,15 @@
app.kubernetes.io/version: v1.5.1
app: longhorn-driver-deployer
spec:
+ tolerations:
+ - key: dedicated
+ operator: Equal
+ value: storage
+ effect: NoSchedule
+ - key: arch
+ operator: Equal
+ value: arm64
+ effect: NoSchedule
initContainers:
- name: wait-longhorn-manager
image: longhornio/longhorn-manager:v1.5.1

@ -1,158 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: doorboy-proxy
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
spec:
revisionHistoryLimit: 0
replicas: 3
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: doorboy-proxy
template:
metadata:
labels: *selectorLabels
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values:
- doorboy-proxy
topologyKey: kubernetes.io/hostname
weight: 100
containers:
- name: doorboy-proxy
image: harbor.k-space.ee/k-space/doorboy-proxy:latest
envFrom:
- secretRef:
name: doorboy-api
env:
- name: MONGO_URI
valueFrom:
secretKeyRef:
name: mongo-application-readwrite
key: connectionString.standard
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
ports:
- containerPort: 5000
name: "http"
resources:
requests:
memory: "200Mi"
cpu: "100m"
limits:
memory: "500Mi"
cpu: "1"
---
apiVersion: v1
kind: Service
metadata:
name: doorboy-proxy
spec:
selector:
app.kubernetes.io/name: doorboy-proxy
ports:
- protocol: TCP
name: http
port: 5000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: doorboy-proxy
annotations:
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
rules:
- host: doorboy-proxy.k-space.ee
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: doorboy-proxy
port:
name: http
tls:
- hosts:
- "*.k-space.ee"
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: doorboy-proxy
spec:
selector:
matchLabels:
app.kubernetes.io/name: doorboy-proxy
podMetricsEndpoints:
- port: http
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kdoorpi
spec:
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: kdoorpi
template:
metadata:
labels: *selectorLabels
spec:
containers:
- name: kdoorpi
image: harbor.k-space.ee/k-space/kdoorpi:latest
env:
- name: KDOORPI_API_ALLOWED
value: https://doorboy-proxy.k-space.ee/allowed
- name: KDOORPI_API_LONGPOLL
value: https://doorboy-proxy.k-space.ee/longpoll
- name: KDOORPI_API_SWIPE
value: http://172.21.99.98/swipe
- name: KDOORPI_DOOR
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: KDOORPI_API_KEY
valueFrom:
secretKeyRef:
name: doorboy-api
key: DOORBOY_SECRET
- name: KDOORPI_UID_SALT
valueFrom:
secretKeyRef:
name: doorboy-uid-hash-salt
key: KDOORPI_UID_SALT
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
nodeSelector:
dedicated: door
tolerations:
- key: dedicated
operator: Equal
value: door
effect: NoSchedule
- key: arch
operator: Equal
value: arm64
effect: NoSchedule

meta-operator/README.md Normal file
@ -0,0 +1,11 @@
# meta-operator
Meta operator enables creating operators without building any binaries or
Docker images.
For an example operator declaration, see `keydb.yml`.
```
kubectl create namespace meta-operator
kubectl apply -f application.yml -f keydb.yml
```
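The core of an operator declaration is a `ClusterOperator` resource that names the watched custom resource and lists the Kubernetes objects to stamp out for each instance. A bare-bones hypothetical sketch, matching the CRD schema in `application.yml`; the full working example is `keydb.yml` below:
```
---
apiVersion: codemowers.io/v1alpha1
kind: ClusterOperator
metadata:
  name: example                  # hypothetical operator
spec:
  resource:
    group: codemowers.io
    version: v1alpha1
    plural: examples             # the custom resources this operator reconciles
  secret:
    enabled: false
  services: []
  deployments: []                # object templates created per custom resource
  statefulsets: []
  configmaps: []
```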

@ -0,0 +1,220 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: clusteroperators.codemowers.io
spec:
group: codemowers.io
names:
plural: clusteroperators
singular: clusteroperator
kind: ClusterOperator
shortNames:
- clusteroperator
scope: Cluster
versions:
- name: v1alpha1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
properties:
spec:
type: object
properties:
resource:
type: object
properties:
group:
type: string
version:
type: string
plural:
type: string
secret:
type: object
properties:
name:
type: string
enabled:
type: boolean
structure:
type: array
items:
type: object
properties:
key:
type: string
value:
type: string
services:
type: array
items:
type: object
x-kubernetes-preserve-unknown-fields: true
deployments:
type: array
items:
type: object
x-kubernetes-preserve-unknown-fields: true
statefulsets:
type: array
items:
type: object
x-kubernetes-preserve-unknown-fields: true
configmaps:
type: array
items:
type: object
x-kubernetes-preserve-unknown-fields: true
customresources:
type: array
items:
type: object
x-kubernetes-preserve-unknown-fields: true
required: ["spec"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: meta-operator
namespace: meta-operator
labels:
app.kubernetes.io/name: meta-operator
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: meta-operator
template:
metadata:
labels:
app.kubernetes.io/name: meta-operator
spec:
serviceAccountName: meta-operator
containers:
- name: meta-operator
image: harbor.k-space.ee/k-space/meta-operator
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
env:
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
---
apiVersion: codemowers.io/v1alpha1
kind: ClusterOperator
metadata:
name: meta
spec:
resource:
group: codemowers.io
version: v1alpha1
plural: clusteroperators
secret:
enabled: false
deployments:
- apiVersion: apps/v1
kind: Deployment
metadata:
name: foobar-operator
labels:
app.kubernetes.io/name: foobar-operator
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: foobar-operator
template:
metadata:
labels:
app.kubernetes.io/name: foobar-operator
spec:
serviceAccountName: meta-operator
containers:
- name: meta-operator
image: harbor.k-space.ee/k-space/meta-operator
command:
- /meta-operator.py
- --target
- foobar
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
env:
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: meta-operator
rules:
- apiGroups:
- ""
resources:
- secrets
- configmaps
- services
verbs:
- create
- get
- patch
- update
- delete
- list
- apiGroups:
- apps
resources:
- deployments
- statefulsets
verbs:
- create
- delete
- list
- update
- patch
- apiGroups:
- codemowers.io
resources:
- bindzones
- clusteroperators
- keydbs
verbs:
- get
- list
- watch
- apiGroups:
- k-space.ee
resources:
- cams
verbs:
- get
- list
- watch
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: meta-operator
namespace: meta-operator
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: meta-operator
subjects:
- kind: ServiceAccount
name: meta-operator
namespace: meta-operator
roleRef:
kind: ClusterRole
name: meta-operator
apiGroup: rbac.authorization.k8s.io

meta-operator/keydb.yml Normal file
@ -0,0 +1,253 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: keydbs.codemowers.io
spec:
group: codemowers.io
names:
plural: keydbs
singular: keydb
kind: KeyDBCluster
shortNames:
- keydb
scope: Namespaced
versions:
- name: v1alpha1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
properties:
spec:
type: object
properties:
replicas:
type: integer
description: Replica count
required: ["spec"]
---
apiVersion: codemowers.io/v1alpha1
kind: ClusterOperator
metadata:
name: keydb
spec:
resource:
group: codemowers.io
version: v1alpha1
plural: keydbs
secret:
enabled: true
name: foobar-secrets
structure:
- key: REDIS_PASSWORD
value: "%s"
- key: REDIS_URI
value: "redis://:%s@foobar"
configmaps:
- apiVersion: v1
kind: ConfigMap
metadata:
name: foobar-scripts
labels:
app.kubernetes.io/name: foobar
data:
entrypoint.sh: |
#!/bin/bash
set -euxo pipefail
host="$(hostname)"
port="6379"
replicas=()
for node in {0..2}; do
if [ "${host}" != "redis-${node}" ]; then
replicas+=("--replicaof redis-${node}.redis-headless ${port}")
fi
done
exec keydb-server /etc/keydb/redis.conf \
--active-replica "yes" \
--multi-master "yes" \
--appendonly "no" \
--bind "0.0.0.0" \
--port "${port}" \
--protected-mode "no" \
--server-threads "2" \
--masterauth "${REDIS_PASSWORD}" \
--requirepass "${REDIS_PASSWORD}" \
"${replicas[@]}"
ping_readiness_local.sh: |-
#!/bin/bash
set -e
[[ -n "${REDIS_PASSWORD}" ]] && export REDISCLI_AUTH="${REDIS_PASSWORD}"
response="$(
timeout -s 3 "${1}" \
keydb-cli \
-h localhost \
-p 6379 \
ping
)"
if [ "${response}" != "PONG" ]; then
echo "${response}"
exit 1
fi
ping_liveness_local.sh: |-
#!/bin/bash
set -e
[[ -n "${REDIS_PASSWORD}" ]] && export REDISCLI_AUTH="${REDIS_PASSWORD}"
response="$(
timeout -s 3 "${1}" \
keydb-cli \
-h localhost \
-p 6379 \
ping
)"
if [ "${response}" != "PONG" ] && [[ ! "${response}" =~ ^.*LOADING.*$ ]]; then
echo "${response}"
exit 1
fi
cleanup_tempfiles.sh: |-
#!/bin/bash
set -e
find /data/ -type f \( -name "temp-*.aof" -o -name "temp-*.rdb" \) -mmin +60 -delete
services:
- apiVersion: v1
kind: Service
metadata:
name: foobar-headless
labels:
app.kubernetes.io/name: foobar
spec:
type: ClusterIP
clusterIP: None
ports:
- name: redis
port: 6379
protocol: TCP
targetPort: redis
selector:
app.kubernetes.io/name: foobar
- apiVersion: v1
kind: Service
metadata:
name: foobar
labels:
app.kubernetes.io/name: foobar
annotations:
{}
spec:
type: ClusterIP
ports:
- name: redis
port: 6379
protocol: TCP
targetPort: redis
- name: exporter
port: 9121
protocol: TCP
targetPort: exporter
selector:
app.kubernetes.io/name: foobar
sessionAffinity: ClientIP
statefulsets:
- apiVersion: apps/v1
kind: StatefulSet
metadata:
name: foobar
labels:
app.kubernetes.io/name: foobar
spec:
replicas: 3
serviceName: foobar-headless
selector:
matchLabels:
app.kubernetes.io/name: foobar
template:
metadata:
labels:
app.kubernetes.io/name: foobar
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values:
- 'foobar'
topologyKey: kubernetes.io/hostname
weight: 100
containers:
- name: redis
image: eqalpha/keydb:x86_64_v6.3.1
imagePullPolicy: Always
command:
- /scripts/entrypoint.sh
ports:
- name: redis
containerPort: 6379
protocol: TCP
livenessProbe:
initialDelaySeconds: 20
periodSeconds: 5
# One second longer than command timeout should prevent generation of zombie processes.
timeoutSeconds: 6
successThreshold: 1
failureThreshold: 5
exec:
command:
- sh
- -c
- /scripts/ping_liveness_local.sh 5
readinessProbe:
initialDelaySeconds: 20
periodSeconds: 5
# One second longer than command timeout should prevent generation of zombie processes.
timeoutSeconds: 2
successThreshold: 1
failureThreshold: 5
exec:
command:
- sh
- -c
- /scripts/ping_readiness_local.sh 1
startupProbe:
periodSeconds: 5
# One second longer than command timeout should prevent generation of zombie processes.
timeoutSeconds: 2
failureThreshold: 24
exec:
command:
- sh
- -c
- /scripts/ping_readiness_local.sh 1
resources:
{}
securityContext:
{}
volumeMounts:
- name: foobar-scripts
mountPath: /scripts
- name: foobar-data
mountPath: /data
envFrom:
- secretRef:
name: foobar-secrets
- name: exporter
image: quay.io/oliver006/redis_exporter
ports:
- name: exporter
containerPort: 9121
envFrom:
- secretRef:
name: foobar-secrets
securityContext:
{}
volumes:
- name: foobar-scripts
configMap:
name: foobar-scripts
defaultMode: 0755
- name: foobar-data
emptyDir: {}

@ -10,7 +10,7 @@ MetalLB exposes services to the outside world.
To update manifests:
```
- curl -O https://raw.githubusercontent.com/metallb/metallb-operator/v0.13.11/bin/metallb-operator.yaml
+ curl -O https://raw.githubusercontent.com/metallb/metallb-operator/v0.13.4/bin/metallb-operator.yaml
kubectl apply -f metallb-operator.yaml
kubectl apply -f application.yml
```

@ -36,9 +36,6 @@ metadata:
spec:
  ipAddressPools:
    - zoo
-   - bind-secondary-external
-   - bind-secondary-internal
-   - wildduck
---
# Slice of public EEnet subnet using MetalLB L3 method
apiVersion: metallb.io/v1beta1
@ -60,33 +57,6 @@ spec:
  addresses:
    - 62.65.250.36/30
---
- apiVersion: metallb.io/v1beta1
- kind: IPAddressPool
- metadata:
-   name: bind-secondary-internal
-   namespace: metallb-system
- spec:
-   addresses:
-     - 172.20.53.0/24
- ---
- apiVersion: metallb.io/v1beta1
- kind: IPAddressPool
- metadata:
-   name: bind-secondary-external
-   namespace: metallb-system
- spec:
-   addresses:
-     - 62.65.250.2/32
- ---
- apiVersion: metallb.io/v1beta1
- kind: IPAddressPool
- metadata:
-   name: wildduck
-   namespace: metallb-system
- spec:
-   addresses:
-     - 193.40.103.25/32
- ---
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
@ -101,7 +71,7 @@ spec:
  namespace: metallb-system
---
apiVersion: metallb.io/v1beta1
- kind: L2Advertisement
+ kind: BGPAdvertisement
metadata:
  name: public
  namespace: metallb-system

File diff suppressed because it is too large
Some files were not shown because too many files have changed in this diff