Compare commits


54 Commits

SHA1 Message Date
a51b041621 Upgrade to Kubernetes 1.24 and Longhorn 1.4.0 2023-02-20 11:16:12 +02:00
1d6cf0a521 camtiler: Restore cams on members site 2023-01-25 09:55:04 +02:00
19d66801df prometheus-operator: Update node-exporter and add pve2 2023-01-07 10:27:05 +02:00
d2a719af43 README: Improve cluster formation docs
- Begin code block with sudo to remind that the following should be run as root.
- Remove hardcoded key; copy it from the ubuntu user instead.
2023-01-03 16:09:17 +00:00
34369d211b Add nyancat server 2023-01-03 10:25:08 +02:00
cadb38126b prometheus-operator: Prevent scrape timeouts 2022-12-26 14:15:05 +02:00
414d044909 prometheus-operator: Less noisy alerting from node-exporter 2022-12-24 21:11:00 +02:00
ea23a52d6b prometheus-operator: Remove bundle.yml 2022-12-24 21:07:07 +02:00
3458cbd694 Update README 2022-12-24 21:01:49 +02:00
0a40686c16 logmower: Remove explicit command for event source 2022-12-24 00:02:01 +02:00
222fca8b8f camtiler: Fix scheduling issues 2022-12-23 23:32:18 +02:00
75df3e2a41 logmower: Fix Mongo affinity rules 2022-12-23 23:31:10 +02:00
5516ad195c Add descheduler 2022-12-23 23:30:39 +02:00
d0ac3b0361 prometheus-operator: Remove noisy kube-state-metrics alerts 2022-12-23 23:30:13 +02:00
c7daada4f4 Bump kube-state-metrics to v2.7.0 2022-12-22 20:04:05 +02:00
3a11207783 prometheus-operator: Remove useless KubernetesCronjobTooLong alert 2022-12-21 14:59:16 +02:00
3586309c4e prometheus-operator: Post only critical alerts to Slack 2022-12-21 14:13:57 +02:00
960103eb40 prometheus-operator: Bump bundle version 2022-12-21 14:08:23 +02:00
34b48308ff camtiler: Split up manifests 2022-12-18 16:28:45 +02:00
d8471da75f Migrate doorboy to Kubernetes 2022-12-17 17:49:57 +02:00
3dfa8e3203 camtiler: Clean ups 2022-12-14 19:50:55 +02:00
2a8c685345 camtiler: Scale down motion detectors 2022-12-14 18:58:32 +02:00
bccd2c6458 logmower: Updates 2022-12-14 18:56:08 +02:00
c65835c6a4 Update external-dns 2022-12-14 18:46:00 +02:00
76cfcd083b camtiler: Specify Mongo collection for event source 2022-12-13 13:10:11 +02:00
98ae369b41 camtiler: Fix event broker image name 2022-12-13 12:51:52 +02:00
4ccfd3d21a Replace old log viewer with Logmower + camera-event-broker 2022-12-13 12:43:38 +02:00
ea9b63b7cc camtiler: Dozen updates 2022-12-12 20:37:03 +02:00
b5ee891c97 Introduce separated storage classes per workload type 2022-12-06 09:06:07 +02:00
eccfb43aa1 Add rawfile-localpv 2022-12-02 00:10:04 +02:00
8f99b1b03d Source meta-operator from separate repo 2022-11-13 07:19:56 +02:00
024897a083 kube-system: Record pod labels with kube-state-metrics 2022-11-12 17:52:59 +02:00
18c4764687 prometheus-exporter: Fix antiaffinity rule for Mikrotik exporter 2022-11-12 16:50:31 +02:00
7b9cb6184b prometheus-operator: Reduce retention size 2022-11-12 16:07:42 +02:00
9dd32af3cb logmower: Update shipper arguments 2022-11-10 21:07:54 +02:00
a1cc066927 README: Bump sysctl limits 2022-11-10 07:56:13 +02:00
029572872e logmower: Update env vars 2022-11-09 11:49:13 +02:00
30f1c32815 harbor: Reduce logging verbosity 2022-11-05 22:43:00 +02:00
0c14283136 Add logmower 2022-11-05 20:55:52 +02:00
587748343d traefik: Namespace filtering breaks allowExternalNameServices 2022-11-04 12:20:30 +02:00
1bcfbed130 traefik: Bump version 2022-10-21 08:30:04 +03:00
3b1cda8a58 traefik: Pull resources only from trusted namespaces 2022-10-21 08:27:53 +03:00
2fd0112c28 elastic-system: Exclude logging ECK stack itself 2022-10-21 00:57:11 +03:00
9275f745ce elastic-system: Remove Filebeat's dependency on Kibana 2022-10-21 00:56:54 +03:00
3d86b6acde elastic-system: Bump to 8.4.3 2022-10-14 20:18:28 +03:00
4a94cd4af0 longhorn-system: Remove Prometheus annotation as we use PodMonitor already 2022-10-14 15:03:48 +03:00
a27f273c0b Add Grafana 2022-10-14 14:38:23 +03:00
4686108f42 Switch to wildcard *.k-space.ee certificate 2022-10-14 14:32:36 +03:00
30b7e50afb kube-system: Add metrics-server 2022-10-14 14:23:21 +03:00
e4c9675b99 tigera-operator: Remove unrelated files 2022-10-14 14:05:40 +03:00
017bdd9fd8 tigera-operator: Upgrade Calico 2022-10-14 14:03:34 +03:00
0fd0094ba0 playground: Initial commit 2022-10-14 00:14:35 +03:00
d20fdf350d drone: Switch templates to drone-kaniko plugin 2022-10-12 14:24:57 +03:00
bac5040d2a README: access/auth: collapse bootstrapping
For 'how to connect to cluster', server-side setup
is not needed by connecting clients.
Hiding the section makes the steps more concise.
2022-10-11 10:47:41 +03:00
68 changed files with 19318 additions and 31103 deletions

README.md (106 lines changed)

@@ -23,6 +23,7 @@ Most endpoints are protected by OIDC authentication or Authelia SSO middleware.
 General discussion is happening in the `#kube` Slack channel.
+<details><summary>Bootstrapping access</summary>
 For bootstrap access obtain `/etc/kubernetes/admin.conf` from one of the master
 nodes and place it under `~/.kube/config` on your machine.
@@ -46,9 +47,9 @@ EOF
 sudo systemctl daemon-reload
 systemctl restart kubelet
 ```
+</details>
-Afterwards following can be used to talk to the Kubernetes cluster using
-OIDC credentials:
+The following can be used to talk to the Kubernetes cluster using OIDC credentials:
 ```bash
 kubectl krew install oidc-login
@@ -89,6 +90,16 @@ EOF
 For access control mapping see [cluster-role-bindings.yml](cluster-role-bindings.yml)
+### systemd-resolved issues on access
+```sh
+Unable to connect to the server: dial tcp: lookup master.kube.k-space.ee on 127.0.0.53:53: no such host
+```
+```
+Network → VPN → `IPv4` → Other nameservers (Muud nimeserverid): `172.21.0.1`
+Network → VPN → `IPv6` → Other nameservers (Muud nimeserverid): `2001:bb8:4008:21::1`
+Network → VPN → `IPv4` → Search domains (Otsingudomeenid): `kube.k-space.ee`
+Network → VPN → `IPv6` → Search domains (Otsingudomeenid): `kube.k-space.ee`
+```
 # Technology mapping
@@ -144,7 +155,8 @@ these should be handled by `tls:` section in Ingress.
 ## Cluster formation
-Create Ubuntu 20.04 VM-s on Proxmox with local storage.
+Created Ubuntu 22.04 VM-s on Proxmox with local storage.
+Added some ARM64 workers by using Ubuntu 22.04 server on Raspberry Pi.
 After machines have booted up and you can reach them via SSH:
@@ -162,6 +174,13 @@ net.ipv4.conf.all.accept_redirects = 0
 net.bridge.bridge-nf-call-iptables = 1
 net.ipv4.ip_forward = 1
 net.bridge.bridge-nf-call-ip6tables = 1
+# Elasticsearch needs this
+vm.max_map_count = 524288
+# Bump inotify limits to make sure
+fs.inotify.max_user_instances=1280
+fs.inotify.max_user_watches=655360
 EOF
 sysctl --system
@@ -175,32 +194,23 @@ nameserver 8.8.8.8
 EOF
 # Disable multipathd as Longhorn handles that itself
-systemctl mask multipathd
-systemctl disable multipathd
-systemctl stop multipathd
-# Disable Snapcraft
-systemctl mask snapd
-systemctl disable snapd
-systemctl stop snapd
+systemctl mask multipathd snapd
+systemctl disable --now multipathd snapd bluetooth ModemManager hciuart wpa_supplicant packagekit
 # Permit root login
 sed -i -e 's/PermitRootLogin no/PermitRootLogin without-password/' /etc/ssh/sshd_config
 systemctl reload ssh
-cat << EOF > /root/.ssh/authorized_keys
-sk-ecdsa-sha2-nistp256@openssh.com AAAAInNrLWVjZHNhLXNoYTItbmlzdHAyNTZAb3BlbnNzaC5jb20AAAAIbmlzdHAyNTYAAABBBD4/e9SWYWYoNZMkkF+NirhbmHuUgjoCap42kAq0pLIXFwIqgVTCre03VPoChIwBClc8RspLKqr5W3j0fG8QwnQAAAAEc3NoOg== lauri@lauri-x13
-EOF
+cat ~ubuntu/.ssh/authorized_keys > /root/.ssh/authorized_keys
 userdel -f ubuntu
-apt-get remove -yq cloud-init
+apt-get install -yqq linux-image-generic
+apt-get remove -yq cloud-init linux-image-*-kvm
 ```
-Install packages, for Raspbian set `OS=Debian_11`
+Install packages:
 ```bash
-OS=xUbuntu_20.04
-VERSION=1.23
+OS=xUbuntu_22.04
+VERSION=1.24
 cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
 deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /
 EOF
@@ -208,17 +218,26 @@ cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cr
 deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /
 EOF
-curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
-curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers-cri-o.gpg add -
-curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
+rm -fv /etc/apt/trusted.gpg
+curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | gpg --dearmor > /etc/apt/trusted.gpg.d/libcontainers.gpg
+curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | gpg --dearmor > /etc/apt/trusted.gpg.d/libcontainers-cri-o.gpg
+curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg > /etc/apt/trusted.gpg.d/packages-cloud-google.gpg
 echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
 apt-get update
-apt-get install -yqq apt-transport-https curl cri-o cri-o-runc kubelet=1.23.5-00 kubectl=1.23.5-00 kubeadm=1.23.5-00
+apt-get install -yqq --allow-change-held-packages apt-transport-https curl cri-o cri-o-runc kubelet=1.24.10-00 kubectl=1.24.10-00 kubeadm=1.24.10-00
+cat << \EOF > /etc/containers/registries.conf
+unqualified-search-registries = ["docker.io"]
+# To pull Docker images from a mirror uncomment following
+#[[registry]]
+#prefix = "docker.io"
+#location = "mirror.gcr.io"
+EOF
+sudo systemctl restart crio
 sudo systemctl daemon-reload
 sudo systemctl enable crio --now
 apt-mark hold kubelet kubeadm kubectl
-sed -i -e 's/unqualified-search-registries = .*/unqualified-search-registries = ["docker.io"]/' /etc/containers/registries.conf
 ```
 On master:
@@ -229,6 +248,16 @@ kubeadm init --token-ttl=120m --pod-network-cidr=10.244.0.0/16 --control-plane-e
 For the `kubeadm join` command specify FQDN via `--node-name $(hostname -f)`.
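(For illustration, a full join command would look roughly like this; token and CA hash are placeholders taken from the `kubeadm init` output, and the endpoint name is an assumption:)
```bash
kubeadm join master.kube.k-space.ee:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --node-name $(hostname -f)
```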
+Set AZ labels:
+```
+for j in $(seq 1 9); do
+for t in master mon worker storage; do
+kubectl label nodes ${t}${j}.kube.k-space.ee topology.kubernetes.io/zone=node${j}
+done
+done
+```
 After forming the cluster add taints:
 ```bash
@@ -236,7 +265,7 @@ for j in $(seq 1 9); do
 kubectl label nodes worker${j}.kube.k-space.ee node-role.kubernetes.io/worker=''
 done
-for j in $(seq 1 3); do
+for j in $(seq 1 4); do
 kubectl taint nodes mon${j}.kube.k-space.ee dedicated=monitoring:NoSchedule
 kubectl label nodes mon${j}.kube.k-space.ee dedicated=monitoring
 done
@@ -247,15 +276,26 @@ for j in $(seq 1 4); do
 done
 ```
-On Raspberry Pi you need to take additional steps:
-* Manually enable cgroups by appending
-  `cgroup_memory=1 cgroup_enable=memory` to `/boot/cmdline.txt`,
-* Disable swap with `swapoff -a; apt-get purge -y dphys-swapfile`
-* For mounting Longhorn volumes on Raspbian install `open-iscsi`
 For `arm64` nodes add suitable taint to prevent scheduling non-multiarch images on them:
 ```bash
 kubectl taint nodes worker9.kube.k-space.ee arch=arm64:NoSchedule
 ```
+For door controllers:
+```
+for j in ground front back; do
+kubectl taint nodes door-${j}.kube.k-space.ee dedicated=door:NoSchedule
+kubectl label nodes door-${j}.kube.k-space.ee dedicated=door
+kubectl taint nodes door-${j}.kube.k-space.ee arch=arm64:NoSchedule
+done
+```
+To reduce wear on storage:
+```
+echo StandardOutput=null >> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+systemctl daemon-reload
+systemctl restart kubelet
+```


@@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: grafana
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: grafana
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: grafana
syncPolicy:
syncOptions:
- CreateNamespace=true


@@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: logmower
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: logmower
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: logmower
syncPolicy:
syncOptions:
- CreateNamespace=true


@@ -16,7 +16,6 @@ server:
 ingress:
 enabled: true
 annotations:
-cert-manager.io/cluster-issuer: default
 external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
 traefik.ingress.kubernetes.io/router.entrypoints: websecure
 traefik.ingress.kubernetes.io/router.tls: "true"
@@ -24,8 +23,7 @@ server:
 - argocd.k-space.ee
 tls:
 - hosts:
-- argocd.k-space.ee
-secretName: argocd-server-tls
+- "*.k-space.ee"
 configEnabled: true
 config:
 admin.enabled: "false"


@@ -162,8 +162,8 @@ kubectl -n argocd create secret generic argocd-secret \
 kubectl get secret -n authelia oidc-secrets -o json \
 | jq '.data."oidc-secrets.yml"' -r | base64 -d | yq -o json \
 | jq '.identity_providers.oidc.clients[] | select(.id == "argocd") | .secret' -r)
-kubectl -n monitoring delete secret oidc-secret
-kubectl -n monitoring create secret generic oidc-secret \
+kubectl -n grafana delete secret oidc-secret
+kubectl -n grafana create secret generic oidc-secret \
 --from-literal=GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET=$( \
 kubectl get secret -n authelia oidc-secrets -o json \
 | jq '.data."oidc-secrets.yml"' -r | base64 -d | yq -o json \


@@ -295,7 +295,6 @@ metadata:
 labels:
 app.kubernetes.io/name: authelia
 annotations:
-cert-manager.io/cluster-issuer: default
 external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
 kubernetes.io/tls-acme: "true"
 traefik.ingress.kubernetes.io/router.entryPoints: websecure
@@ -315,8 +314,7 @@ spec:
 number: 80
 tls:
 - hosts:
-- auth.k-space.ee
-secretName: authelia-tls
+- "*.k-space.ee"
 ---
 apiVersion: traefik.containo.us/v1alpha1
 kind: Middleware


@@ -1,7 +1,16 @@
 To apply changes:
 ```
-kubectl apply -n camtiler -f application.yml -f persistence.yml -f mongoexpress.yml -f mongodb-support.yml -f networkpolicy-base.yml -f minio-support.yml
+kubectl apply -n camtiler \
+  -f application.yml \
+  -f persistence.yml \
+  -f mongoexpress.yml \
+  -f mongodb-support.yml \
+  -f camera-tiler.yml \
+  -f logmower.yml \
+  -f ingress.yml \
+  -f network-policies.yml \
+  -f networkpolicy-base.yml
 ```
 To deploy changes:
@@ -15,15 +24,16 @@ To initialize secrets:
 ```
 kubectl create secret generic -n camtiler mongodb-application-readwrite-password --from-literal="password=$(cat /dev/urandom | base64 | head -c 30)"
 kubectl create secret generic -n camtiler mongodb-application-readonly-password --from-literal="password=$(cat /dev/urandom | base64 | head -c 30)"
-kubectl create secret generic -n camtiler minio-secret \
-  --from-literal=accesskey=application \
-  --from-literal=secretkey=$(cat /dev/urandom | base64 | head -c 30)
-kubectl create secret generic -n camtiler minio-env-configuration \
-  --from-literal="MINIO_BROWSER=off" \
+kubectl create secret generic -n camtiler minio-secrets \
   --from-literal="MINIO_ROOT_USER=root" \
-  --from-literal="MINIO_ROOT_PASSWORD=$(cat /dev/urandom | base64 | head -c 30)" \
-  --from-literal="MINIO_STORAGE_CLASS_STANDARD=EC:4"
+  --from-literal="MINIO_ROOT_PASSWORD=$(cat /dev/urandom | base64 | head -c 30)"
 kubectl -n camtiler create secret generic camera-secrets \
   --from-literal=username=... \
   --from-literal=password=...
 ```
+To restart all deployments:
+```
+for j in $(kubectl get deployments -n camtiler -o name); do kubectl rollout restart -n camtiler $j; done
+```


@@ -1,397 +1,4 @@
 ---
apiVersion: apps/v1
kind: Deployment
metadata:
name: camtiler
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
spec:
revisionHistoryLimit: 0
replicas: 2
selector:
matchLabels:
app.kubernetes.io/name: camtiler
template:
metadata:
labels:
app.kubernetes.io/name: camtiler
component: camtiler
spec:
serviceAccountName: camtiler
containers:
- name: camtiler
image: harbor.k-space.ee/k-space/camera-tiler:latest
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
ports:
- containerPort: 5001
name: "http"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: log-viewer-frontend
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
spec:
revisionHistoryLimit: 0
replicas: 2
selector:
matchLabels:
app.kubernetes.io/name: log-viewer-frontend
template:
metadata:
labels:
app.kubernetes.io/name: log-viewer-frontend
spec:
containers:
- name: log-viewer-frontend
image: harbor.k-space.ee/k-space/log-viewer-frontend:latest
# securityContext:
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: log-viewer-backend
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
spec:
revisionHistoryLimit: 0
replicas: 3
selector:
matchLabels:
app.kubernetes.io/name: log-viewer-backend
template:
metadata:
labels:
app.kubernetes.io/name: log-viewer-backend
spec:
containers:
- name: log-backend-backend
image: harbor.k-space.ee/k-space/log-viewer:latest
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
env:
- name: MONGO_URI
valueFrom:
secretKeyRef:
name: mongodb-application-readwrite
key: connectionString.standard
- name: MINIO_BUCKET
value: application
- name: MINIO_HOSTNAME
value: cams-s3.k-space.ee
- name: MINIO_PORT
value: "443"
- name: MINIO_SCHEME
value: "https"
- name: MINIO_SECRET_KEY
valueFrom:
secretKeyRef:
name: minio-secret
key: secretkey
- name: MINIO_ACCESS_KEY
valueFrom:
secretKeyRef:
name: minio-secret
key: accesskey
---
apiVersion: v1
kind: Service
metadata:
name: log-viewer-frontend
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: log-viewer-frontend
ports:
- protocol: TCP
port: 3003
---
apiVersion: v1
kind: Service
metadata:
name: log-viewer-backend
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: log-viewer-backend
ports:
- protocol: TCP
port: 3002
---
apiVersion: v1
kind: Service
metadata:
name: camtiler
labels:
component: camtiler
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: camtiler
component: camtiler
ports:
- protocol: TCP
port: 5001
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: camtiler
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: camtiler
rules:
- apiGroups:
- ""
resources:
- services
verbs:
- list
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: camtiler
subjects:
- kind: ServiceAccount
name: camtiler
apiGroup: ""
roleRef:
kind: Role
name: camtiler
apiGroup: ""
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: camtiler
annotations:
kubernetes.io/ingress.class: traefik
# Following specifies the certificate issuer defined in
# ../cert-manager/issuer.yml
# This is where the HTTPS certificates for the
# `tls:` section below are obtained from
cert-manager.io/cluster-issuer: default
# This tells Traefik this Ingress object is associated with the
# https:// entrypoint
# Global http:// to https:// redirect is enabled in
# ../traefik/values.yml using `globalArguments`
traefik.ingress.kubernetes.io/router.entrypoints: websecure
# Following enables Authelia intercepting middleware
# which makes sure user is authenticated and then
# proceeds to inject Remote-User header for the application
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
# Following tells external-dns to add CNAME entry which makes
# cams.k-space.ee point to same IP address as traefik.k-space.ee
# The A record for traefik.k-space.ee is created via annotation
# added in ../traefik/ingress.yml
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
rules:
- host: cams.k-space.ee
http:
paths:
- pathType: Prefix
path: "/tiled"
backend:
service:
name: camtiler
port:
number: 5001
- pathType: Prefix
path: "/events"
backend:
service:
name: log-viewer-backend
port:
number: 3002
- pathType: Prefix
path: "/"
backend:
service:
name: log-viewer-frontend
port:
number: 3003
tls:
- hosts:
- cams.k-space.ee
secretName: camtiler-tls
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: camera-motion-detect
spec:
podSelector:
matchLabels:
component: camdetect
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
component: camtiler
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: prometheus-operator
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
egress:
- to:
- ipBlock:
# Permit access to cameras outside the cluster
cidr: 100.102.0.0/16
- to:
- podSelector:
matchLabels:
app: mongodb-svc
ports:
- port: 27017
- to:
- podSelector:
matchLabels:
v1.min.io/tenant: minio
ports:
- port: 9000
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: camera-tiler
spec:
podSelector:
matchLabels:
component: camtiler
policyTypes:
- Ingress
- Egress
egress:
- to:
- podSelector:
matchLabels:
component: camdetect
ports:
- port: 5000
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: prometheus-operator
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: log-viewer-backend
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: log-viewer-backend
policyTypes:
- Ingress
- Egress
egress:
- to:
- podSelector:
matchLabels:
app: mongodb-svc
- to:
# Minio access via Traefik's public endpoint
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: log-viewer-frontend
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: log-viewer-frontend
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: minio
annotations:
kubernetes.io/ingress.class: traefik
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
rules:
- host: cams-s3.k-space.ee
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: minio
port:
number: 80
tls:
- hosts:
- cams-s3.k-space.ee
secretName: cams-s3-tls
---
 apiVersion: apiextensions.k8s.io/v1
 kind: CustomResourceDefinition
 metadata:
@@ -482,12 +89,13 @@ spec:
 metadata:
 name: foobar
 labels:
-component: camdetect
+app.kubernetes.io/name: foobar
+component: camera-motion-detect
 spec:
 type: ClusterIP
 selector:
 app.kubernetes.io/name: foobar
-component: camdetect
+component: camera-motion-detect
 ports:
 - protocol: TCP
 port: 80
@@ -502,14 +110,15 @@ spec:
 keel.sh/policy: force
 keel.sh/trigger: poll
 spec:
+revisionHistoryLimit: 0
 replicas: 1
-# Make sure we do not congest the network during rollout
 strategy:
 type: RollingUpdate
 rollingUpdate:
-maxSurge: 0
-maxUnavailable: 1
+# Swap following two with replicas: 2
+maxSurge: 1
+maxUnavailable: 0
 selector:
 matchLabels:
 app.kubernetes.io/name: foobar
@@ -517,18 +126,25 @@ spec:
 metadata:
 labels:
 app.kubernetes.io/name: foobar
-component: camdetect
+component: camera-motion-detect
 spec:
 containers:
-- name: camdetect
+- name: camera-motion-detect
 image: harbor.k-space.ee/k-space/camera-motion-detect:latest
+startupProbe:
+httpGet:
+path: /healthz
+port: 5000
+initialDelaySeconds: 2
+periodSeconds: 180
+timeoutSeconds: 60
 readinessProbe:
 httpGet:
 path: /readyz
 port: 5000
-initialDelaySeconds: 10
-periodSeconds: 180
-timeoutSeconds: 60
+initialDelaySeconds: 60
+periodSeconds: 60
+timeoutSeconds: 5
 ports:
 - containerPort: 5000
 name: "http"
@@ -538,7 +154,7 @@ spec:
 cpu: "200m"
 limits:
 memory: "256Mi"
-cpu: "1"
+cpu: "4000m"
 securityContext:
 readOnlyRootFilesystem: true
 runAsNonRoot: true
@@ -566,13 +182,13 @@ spec:
 - name: AWS_SECRET_ACCESS_KEY
 valueFrom:
 secretKeyRef:
-name: minio-secret
-key: secretkey
+name: minio-secrets
+key: MINIO_ROOT_PASSWORD
 - name: AWS_ACCESS_KEY_ID
 valueFrom:
 secretKeyRef:
-name: minio-secret
-key: accesskey
+name: minio-secrets
+key: MINIO_ROOT_USER
 # Make sure 2+ pods of same camera are scheduled on different hosts
 affinity:
@@ -580,7 +196,7 @@ spec:
 requiredDuringSchedulingIgnoredDuringExecution:
 - labelSelector:
 matchExpressions:
-- key: app
+- key: app.kubernetes.io/name
 operator: In
 values:
 - foobar
@@ -594,18 +210,7 @@ spec:
 labelSelector:
 matchLabels:
 app.kubernetes.io/name: foobar
-component: camdetect
+component: camera-motion-detect
----
-apiVersion: monitoring.coreos.com/v1
-kind: PodMonitor
-metadata:
-name: camtiler
-spec:
-selector: {}
-podMetricsEndpoints:
-- port: http
-podTargetLabels:
-- app.kubernetes.io/name
 ---
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
@@ -616,21 +221,21 @@ spec:
 - name: cameras
 rules:
 - alert: CameraLost
-expr: rate(camdetect_rx_frames_total[2m]) < 1
+expr: rate(camtiler_frames_total{stage="downloaded"}[1m]) < 1
 for: 2m
 labels:
 severity: warning
 annotations:
 summary: Camera feed stopped
 - alert: CameraServerRoomMotion
-expr: camdetect_event_active {app="camdetect-server-room"} > 0
+expr: rate(camtiler_events_total{app_kubernetes_io_name="server-room"}[30m]) > 0
 for: 1m
 labels:
 severity: warning
 annotations:
 summary: Motion was detected in server room
 - alert: CameraSlowUploads
-expr: rate(camdetect_upload_dropped_frames_total[2m]) > 1
+expr: camtiler_queue_frames{stage="upload"} > 10
 for: 5m
 labels:
 severity: warning
@@ -638,13 +243,20 @@ spec:
 summary: Motion detect snapshots are piling up and
 not getting uploaded to S3
 - alert: CameraSlowProcessing
-expr: rate(camdetect_download_dropped_frames_total[2m]) > 1
+expr: camtiler_queue_frames{stage="download"} > 10
 for: 5m
 labels:
 severity: warning
 annotations:
 summary: Motion detection processing pipeline is not keeping up
 with incoming frames
+- alert: CameraResourcesThrottled
+expr: sum by (pod) (rate(container_cpu_cfs_throttled_periods_total{namespace="camtiler"}[1m])) > 0
+for: 5m
+labels:
+severity: warning
+annotations:
+summary: CPU limits are bottleneck
 ---
 apiVersion: k-space.ee/v1alpha1
 kind: Camera
@@ -653,6 +265,7 @@ metadata:
 spec:
 target: http://user@workshop.cam.k-space.ee:8080/?action=stream
 secretRef: camera-secrets
+replicas: 1
 ---
 apiVersion: k-space.ee/v1alpha1
 kind: Camera
@@ -661,6 +274,7 @@ metadata:
 spec:
 target: http://user@server-room.cam.k-space.ee:8080/?action=stream
 secretRef: camera-secrets
+replicas: 1
 ---
 apiVersion: k-space.ee/v1alpha1
 kind: Camera
@@ -669,6 +283,7 @@ metadata:
 spec:
 target: http://user@printer.cam.k-space.ee:8080/?action=stream
 secretRef: camera-secrets
+replicas: 1
 ---
 apiVersion: k-space.ee/v1alpha1
 kind: Camera
@@ -677,6 +292,7 @@ metadata:
 spec:
 target: http://user@chaos.cam.k-space.ee:8080/?action=stream
 secretRef: camera-secrets
+replicas: 1
 ---
 apiVersion: k-space.ee/v1alpha1
 kind: Camera
@@ -685,6 +301,7 @@ metadata:
 spec:
 target: http://user@cyber.cam.k-space.ee:8080/?action=stream
 secretRef: camera-secrets
+replicas: 1
 ---
 apiVersion: k-space.ee/v1alpha1
 kind: Camera
@@ -693,6 +310,7 @@ metadata:
 spec:
 target: http://user@kitchen.cam.k-space.ee:8080/?action=stream
 secretRef: camera-secrets
+replicas: 1
 ---
 apiVersion: k-space.ee/v1alpha1
 kind: Camera
@@ -701,6 +319,7 @@ metadata:
 spec:
 target: http://user@back-door.cam.k-space.ee:8080/?action=stream
 secretRef: camera-secrets
+replicas: 1
 ---
 apiVersion: k-space.ee/v1alpha1
 kind: Camera
@@ -709,3 +328,4 @@ metadata:
 spec:
 target: http://user@ground-door.cam.k-space.ee:8080/?action=stream
 secretRef: camera-secrets
+replicas: 1

camtiler/camera-tiler.yml (new file, 98 lines)

@@ -0,0 +1,98 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: camera-tiler
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
spec:
revisionHistoryLimit: 0
replicas: 2
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: camtiler
component: camera-tiler
template:
metadata:
labels: *selectorLabels
spec:
serviceAccountName: camera-tiler
containers:
- name: camera-tiler
image: harbor.k-space.ee/k-space/camera-tiler:latest
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
ports:
- containerPort: 5001
name: "http"
resources:
requests:
memory: "200Mi"
cpu: "100m"
limits:
memory: "500Mi"
cpu: "4000m"
---
apiVersion: v1
kind: Service
metadata:
name: camera-tiler
labels:
app.kubernetes.io/name: camtiler
component: camera-tiler
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: camtiler
component: camera-tiler
ports:
- protocol: TCP
port: 5001
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: camera-tiler
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: camera-tiler
rules:
- apiGroups:
- ""
resources:
- services
verbs:
- list
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: camera-tiler
subjects:
- kind: ServiceAccount
name: camera-tiler
apiGroup: ""
roleRef:
kind: Role
name: camera-tiler
apiGroup: ""
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: camtiler
spec:
selector:
matchLabels:
app.kubernetes.io/name: camtiler
component: camera-tiler
podMetricsEndpoints:
- port: http
podTargetLabels:
- app.kubernetes.io/name
- component

camtiler/ingress.yml (new file, 67 lines)

@@ -0,0 +1,67 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: camtiler
annotations:
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd,camtiler-redirect@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
rules:
- host: cams.k-space.ee
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: logmower-frontend
port:
number: 8080
- host: cam.k-space.ee
http:
paths:
- pathType: Prefix
path: "/tiled"
backend:
service:
name: camera-tiler
port:
number: 5001
- pathType: Prefix
path: "/m"
backend:
service:
name: camera-tiler
port:
number: 5001
- pathType: Prefix
path: "/events"
backend:
service:
name: logmower-eventsource
port:
number: 3002
- pathType: Prefix
path: "/"
backend:
service:
name: logmower-frontend
port:
number: 8080
tls:
- hosts:
- "*.k-space.ee"
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: redirect
spec:
redirectRegex:
regex: ^https://cams.k-space.ee/(.*)$
replacement: https://cam.k-space.ee/$1
permanent: false

camtiler/logmower.yml (new file, 137 lines)

@@ -0,0 +1,137 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: logmower-eventsource
spec:
revisionHistoryLimit: 0
replicas: 2
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: camtiler
component: logmower-eventsource
template:
metadata:
labels: *selectorLabels
spec:
containers:
- name: logmower-eventsource
image: harbor.k-space.ee/k-space/logmower-eventsource
ports:
- containerPort: 3002
name: nodejs
env:
- name: MONGO_COLLECTION
value: eventlog
- name: MONGODB_HOST
valueFrom:
secretKeyRef:
name: mongodb-application-readonly
key: connectionString.standard
- name: BACKEND
value: 'camtiler'
- name: BACKEND_BROKER_URL
value: 'http://logmower-event-broker'
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: logmower-event-broker
spec:
revisionHistoryLimit: 0
replicas: 5
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: camtiler
component: logmower-event-broker
template:
metadata:
labels: *selectorLabels
spec:
containers:
- name: logmower-event-broker
image: harbor.k-space.ee/k-space/camera-event-broker
ports:
- containerPort: 3000
env:
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: minio-secrets
key: MINIO_ROOT_PASSWORD
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: minio-secrets
key: MINIO_ROOT_USER
- name: MINIO_BUCKET
value: 'application'
- name: MINIO_HOSTNAME
value: 'cams-s3.k-space.ee'
- name: MINIO_PORT
value: '443'
- name: MINIO_SCHEMA
value: 'https'
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: logmower-frontend
spec:
revisionHistoryLimit: 0
replicas: 2
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: camtiler
component: logmower-frontend
template:
metadata:
labels: *selectorLabels
spec:
containers:
- name: logmower-frontend
image: harbor.k-space.ee/k-space/logmower-frontend
ports:
- containerPort: 8080
name: http
---
apiVersion: v1
kind: Service
metadata:
name: logmower-frontend
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: camtiler
component: logmower-frontend
ports:
- protocol: TCP
port: 8080
---
apiVersion: v1
kind: Service
metadata:
name: logmower-eventsource
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: camtiler
component: logmower-eventsource
ports:
- protocol: TCP
port: 3002
---
apiVersion: v1
kind: Service
metadata:
name: logmower-event-broker
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: camtiler
component: logmower-event-broker
ports:
- protocol: TCP
port: 80
targetPort: 3000


@@ -1 +0,0 @@
-../shared/minio-support.yml

camtiler/minio.yml (new file, 199 lines)

@@ -0,0 +1,199 @@
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: minio
labels:
app.kubernetes.io/name: minio
spec:
selector:
matchLabels:
app.kubernetes.io/name: minio
serviceName: minio-svc
replicas: 4
podManagementPolicy: Parallel
template:
metadata:
labels:
app.kubernetes.io/name: minio
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- minio
topologyKey: kubernetes.io/hostname
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
containers:
- name: minio
env:
- name: MINIO_PROMETHEUS_AUTH_TYPE
value: public
envFrom:
- secretRef:
name: minio-secrets
image: minio/minio:RELEASE.2022-12-12T19-27-27Z
args:
- server
- http://minio-{0...3}.minio-svc.camtiler.svc.cluster.local/data
- --address
- 0.0.0.0:9000
- --console-address
- 0.0.0.0:9001
ports:
- containerPort: 9000
name: http
- containerPort: 9001
name: console
readinessProbe:
httpGet:
path: /minio/health/ready
port: 9000
initialDelaySeconds: 2
periodSeconds: 5
resources:
requests:
cpu: 300m
memory: 1Gi
limits:
cpu: 4000m
memory: 2Gi
volumeMounts:
- name: minio-data
mountPath: /data
volumeClaimTemplates:
- metadata:
name: minio-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: '30Gi'
storageClassName: minio
---
apiVersion: v1
kind: Service
metadata:
name: minio
spec:
sessionAffinity: ClientIP
type: ClusterIP
ports:
- port: 80
targetPort: 9000
protocol: TCP
name: http
selector:
app.kubernetes.io/name: minio
---
kind: Service
apiVersion: v1
metadata:
name: minio-svc
spec:
selector:
app.kubernetes.io/name: minio
clusterIP: None
publishNotReadyAddresses: true
ports:
- name: http
port: 9000
- name: console
port: 9001
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: minio
spec:
selector:
matchLabels:
app.kubernetes.io/name: minio
podMetricsEndpoints:
- port: http
path: /minio/v2/metrics/node
podTargetLabels:
- app.kubernetes.io/name
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: minio
spec:
endpoints:
- bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
honorLabels: true
port: minio
path: /minio/v2/metrics/cluster
selector:
matchLabels:
app.kubernetes.io/name: minio
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: minio
annotations:
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
rules:
- host: cams-s3.k-space.ee
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: minio-svc
port:
name: http
tls:
- hosts:
- "*.k-space.ee"
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: minio
spec:
groups:
- name: minio
rules:
- alert: MinioClusterDiskOffline
expr: minio_cluster_disk_offline_total > 0
for: 0m
labels:
severity: critical
annotations:
summary: Minio cluster disk offline (instance {{ $labels.instance }})
description: "Minio cluster disk is offline"
- alert: MinioNodeDiskOffline
expr: minio_cluster_nodes_offline_total > 0
for: 0m
labels:
severity: critical
annotations:
summary: Minio node disk offline (instance {{ $labels.instance }})
description: "Minio cluster node disk is offline"
- alert: MinioDiskSpaceUsage
expr: disk_storage_available / disk_storage_total * 100 < 10
for: 0m
labels:
severity: warning
annotations:
summary: Minio disk space usage (instance {{ $labels.instance }})
description: "Minio available free space is low (< 10%)"


@@ -7,9 +7,10 @@ spec:
 additionalMongodConfig:
 systemLog:
 quiet: true
-members: 3
+members: 2
+arbiters: 1
 type: ReplicaSet
-version: "5.0.9"
+version: "6.0.3"
 security:
 authentication:
 modes: ["SCRAM"]
@@ -27,7 +28,7 @@ spec:
 passwordSecretRef:
 name: mongodb-application-readonly-password
 roles:
-- name: readOnly
+- name: read
 db: application
 scramCredentialsSecretName: mongodb-application-readonly
 statefulSet:
@@ -35,6 +36,24 @@ spec:
 logLevel: WARN
 template:
 spec:
+containers:
+- name: mongod
+resources:
+requests:
+cpu: 100m
+memory: 512Mi
+limits:
+cpu: 500m
+memory: 1Gi
+volumeMounts:
+- name: journal-volume
+mountPath: /data/journal
+- name: mongodb-agent
+resources:
+requests:
+cpu: 1m
+memory: 100Mi
+limits: {}
 affinity:
 podAntiAffinity:
 requiredDuringSchedulingIgnoredDuringExecution:
@@ -55,8 +74,21 @@ spec:
 volumeClaimTemplates:
 - metadata:
 name: logs-volume
+labels:
+usecase: logs
 spec:
-storageClassName: local-path
+storageClassName: mongo
+accessModes:
+- ReadWriteOnce
+resources:
+requests:
+storage: 100Mi
+- metadata:
+name: journal-volume
+labels:
+usecase: journal
+spec:
+storageClassName: mongo
 accessModes:
 - ReadWriteOnce
 resources:
@@ -64,67 +96,12 @@ spec:
 storage: 512Mi
 - metadata:
 name: data-volume
+labels:
+usecase: data
 spec:
-storageClassName: local-path
+storageClassName: mongo
 accessModes:
 - ReadWriteOnce
 resources:
 requests:
 storage: 2Gi
---
apiVersion: minio.min.io/v2
kind: Tenant
metadata:
name: minio
annotations:
prometheus.io/path: /minio/prometheus/metrics
prometheus.io/port: "9000"
prometheus.io/scrape: "true"
spec:
credsSecret:
name: minio-secret
buckets:
- name: application
requestAutoCert: false
users:
- name: minio-user-0
pools:
- name: pool-0
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: v1.min.io/tenant
operator: In
values:
- minio
- key: v1.min.io/pool
operator: In
values:
- pool-0
topologyKey: kubernetes.io/hostname
resources:
requests:
cpu: '1'
memory: 512Mi
servers: 4
volumesPerServer: 1
volumeClaimTemplate:
metadata:
name: data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: '30Gi'
storageClassName: local-path
status: {}
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule


@@ -0,0 +1,192 @@
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: camera-motion-detect
spec:
podSelector:
matchLabels:
component: camera-motion-detect
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
app.kubernetes.io/name: camtiler
component: camera-tiler
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: prometheus-operator
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
egress:
- to:
- ipBlock:
# Permit access to cameras outside the cluster
cidr: 100.102.0.0/16
- to:
- podSelector:
matchLabels:
app: mongodb-svc
ports:
- port: 27017
- to:
- podSelector:
matchLabels:
app.kubernetes.io/name: minio
ports:
- port: 9000
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: camera-tiler
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: camtiler
component: camera-tiler
policyTypes:
- Ingress
- Egress
egress:
- to:
- podSelector:
matchLabels:
component: camera-motion-detect
ports:
- port: 5000
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: prometheus-operator
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-eventsource
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: camtiler
component: logmower-eventsource
policyTypes:
- Ingress
- Egress
egress:
- to:
- podSelector:
matchLabels:
app: mongodb-svc
- podSelector:
matchLabels:
component: logmower-event-broker
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-event-broker
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: camtiler
component: logmower-event-broker
policyTypes:
- Ingress
- Egress
egress:
- to:
# Minio access via Traefik's public endpoint
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
ingress:
- from:
- podSelector:
matchLabels:
component: logmower-eventsource
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-frontend
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: camtiler
component: logmower-frontend
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: minio
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: minio
policyTypes:
- Ingress
- Egress
egress:
- ports:
- port: http
to:
- podSelector:
matchLabels:
app.kubernetes.io/name: minio
ingress:
- ports:
- port: http
from:
- podSelector: {}
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: prometheus-operator
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus


@@ -77,14 +77,11 @@ steps:
 - echo "ENV GIT_COMMIT_TIMESTAMP=$(git log -1 --format=%cd --date=iso-strict)" >> Dockerfile
 - cat Dockerfile
 - name: docker
-image: plugins/docker
+image: harbor.k-space.ee/k-space/drone-kaniko
 settings:
-repo: harbor.k-space.ee/${DRONE_REPO}
+repo: ${DRONE_REPO}
 tags: latest-arm64
 registry: harbor.k-space.ee
-squash: true
-experimental: true
-mtu: 1300
 username:
 from_secret: docker_username
 password:
@@ -109,14 +106,11 @@ steps:
 - echo "ENV GIT_COMMIT_TIMESTAMP=$(git log -1 --format=%cd --date=iso-strict)" >> Dockerfile
 - cat Dockerfile
 - name: docker
-image: plugins/docker
+image: harbor.k-space.ee/k-space/drone-kaniko
 settings:
-repo: harbor.k-space.ee/${DRONE_REPO}
+repo: ${DRONE_REPO}
 tags: latest-amd64
 registry: harbor.k-space.ee
-squash: true
-experimental: true
-mtu: 1300
 storage_driver: vfs
 username:
 from_secret: docker_username
@@ -130,8 +124,8 @@ steps:
 - name: manifest
 image: plugins/manifest
 settings:
-target: harbor.k-space.ee/${DRONE_REPO}:latest
-template: harbor.k-space.ee/${DRONE_REPO}:latest-ARCH
+target: ${DRONE_REPO}:latest
+template: ${DRONE_REPO}:latest-ARCH
 platforms:
 - linux/amd64
 - linux/arm64


@@ -83,7 +83,6 @@ kind: Ingress
 metadata:
 name: drone
 annotations:
-cert-manager.io/cluster-issuer: default
 external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
 kubernetes.io/ingress.class: traefik
 traefik.ingress.kubernetes.io/router.entrypoints: websecure
@@ -91,8 +90,7 @@ metadata:
 spec:
 tls:
 - hosts:
-- "drone.k-space.ee"
-secretName: drone-tls
+- "*.k-space.ee"
 rules:
 - host: "drone.k-space.ee"
 http:


@@ -5,11 +5,9 @@ metadata:
 name: filebeat
 spec:
 type: filebeat
-version: 8.4.1
+version: 8.4.3
 elasticsearchRef:
 name: elasticsearch
-kibanaRef:
-name: kibana
 config:
 logging:
 level: warning
@@ -29,6 +27,9 @@ spec:
 - /var/log/containers/*${data.kubernetes.container.id}.log
 daemonSet:
 podTemplate:
+metadata:
+annotations:
+co.elastic.logs/enabled: 'false'
 spec:
 serviceAccountName: filebeat
 automountServiceAccountToken: true
@@ -85,11 +86,9 @@ metadata:
 name: filebeat-syslog
 spec:
 type: filebeat
-version: 8.4.1
+version: 8.4.3
 elasticsearchRef:
 name: elasticsearch
-kibanaRef:
-name: kibana
 config:
 logging:
 level: warning
@@ -109,6 +108,9 @@ spec:
 deployment:
 replicas: 2
 podTemplate:
+metadata:
+annotations:
+co.elastic.logs/enabled: 'false'
 spec:
 terminationGracePeriodSeconds: 30
 containers:
@@ -216,7 +218,7 @@ kind: Elasticsearch
 metadata:
 name: elasticsearch
 spec:
-version: 8.4.1
+version: 8.4.3
 nodeSets:
 - name: default
 count: 1
@@ -240,7 +242,7 @@ kind: Kibana
 metadata:
 name: kibana
 spec:
-version: 8.4.1
+version: 8.4.3
 count: 1
 elasticsearchRef:
 name: elasticsearch
@@ -263,6 +265,9 @@ spec:
 - key: elastic
 path: xpack.security.authc.providers.anonymous.anonymous1.credentials.password
 podTemplate:
+metadata:
+annotations:
+co.elastic.logs/enabled: 'false'
 spec:
 containers:
 - name: kibana
@@ -283,7 +288,6 @@ metadata:
 name: kibana
 annotations:
 kubernetes.io/ingress.class: traefik
-cert-manager.io/cluster-issuer: default
 traefik.ingress.kubernetes.io/router.entrypoints: websecure
 traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
 traefik.ingress.kubernetes.io/router.tls: "true"
@@ -302,8 +306,7 @@ spec:
 number: 5601
 tls:
 - hosts:
-- kibana.k-space.ee
-secretName: kibana-tls
+- "*.k-space.ee"
 ---
 apiVersion: monitoring.coreos.com/v1
 kind: PodMonitor

View File

@@ -79,7 +79,6 @@ metadata:
 namespace: etherpad
 annotations:
 kubernetes.io/ingress.class: traefik
-cert-manager.io/cluster-issuer: default
 traefik.ingress.kubernetes.io/router.entrypoints: websecure
 traefik.ingress.kubernetes.io/router.tls: "true"
 external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
@@ -97,8 +96,7 @@ spec:
 number: 9001
 tls:
 - hosts:
-- pad.k-space.ee
-secretName: pad-tls
+- "*.k-space.ee"
 ---
 apiVersion: networking.k8s.io/v1
 kind: NetworkPolicy


@@ -2,9 +2,9 @@ Before applying replace the secret with the actual one.
 For debugging add `- --log-level=debug`:
 ```
-kubectl apply -n external-dns -f external-dns.yml
+wget https://raw.githubusercontent.com/kubernetes-sigs/external-dns/master/docs/contributing/crd-source/crd-manifest.yaml -O crd.yml
+kubectl apply -n external-dns -f application.yml -f crd.yml
 ```
 Insert TSIG secret:
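(The command itself falls outside this hunk; a plausible shape, assuming external-dns reads the key via `envFrom` as in application.yml and that `EXTERNAL_DNS_RFC2136_TSIG_SECRET` is the expected variable — both assumptions:)
```bash
kubectl -n external-dns create secret generic tsig-secret \
  --from-literal=EXTERNAL_DNS_RFC2136_TSIG_SECRET=<tsig-key-from-named.conf>
```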


@@ -24,6 +24,20 @@ rules:
 - get
 - list
 - watch
+- apiGroups:
+- externaldns.k8s.io
+resources:
+- dnsendpoints
+verbs:
+- get
+- watch
+- list
+- apiGroups:
+- externaldns.k8s.io
+resources:
+- dnsendpoints/status
+verbs:
+- update
 ---
 apiVersion: v1
 kind: ServiceAccount
@@ -63,7 +77,7 @@ spec:
 serviceAccountName: external-dns
 containers:
 - name: external-dns
-image: k8s.gcr.io/external-dns/external-dns:v0.10.2
+image: k8s.gcr.io/external-dns/external-dns:v0.13.1
 envFrom:
 - secretRef:
 name: tsig-secret

grafana/README.md (new file, 19 lines)

@@ -0,0 +1,19 @@
# Grafana
```
kubectl create namespace grafana
kubectl apply -n grafana -f application.yml
```
## OIDC secret
See Authelia README on provisioning and updating OIDC secrets for Grafana
## Grafana post deployment steps
* Configure Prometheus datasource with URL set to
`http://prometheus-operated.prometheus-operator.svc.cluster.local:9090`
* Configure Elasticsearch datasource with URL set to
`http://elasticsearch.elastic-system.svc.cluster.local`,
Time field name set to `timestamp` and
ElasticSearch version set to `7.10+`
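(These datasources can likely also be provisioned from a file instead of the UI; a sketch, assuming a `datasources.yml` mounted under `/etc/grafana/provisioning/datasources/` — a mount the current ConfigMap does not yet provide:)
```bash
# Hypothetical provisioning file for the two datasources above
cat << 'EOF' > datasources.yml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus-operated.prometheus-operator.svc.cluster.local:9090
  - name: Elasticsearch
    type: elasticsearch
    access: proxy
    url: http://elasticsearch.elastic-system.svc.cluster.local
    database: "filebeat-*"  # index pattern is an assumption
    jsonData:
      timeField: timestamp
      esVersion: "7.10.0"
EOF
```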

grafana/application.yml Normal file

@@ -0,0 +1,135 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: grafana-config
data:
grafana.ini: |
[log]
level = warn
[server]
domain = grafana.k-space.ee
root_url = https://%(domain)s/
[auth.generic_oauth]
name = OAuth
icon = signin
enabled = true
client_id = grafana
scopes = openid profile email groups
empty_scopes = false
auth_url = https://auth.k-space.ee/api/oidc/authorize
token_url = https://auth.k-space.ee/api/oidc/token
api_url = https://auth.k-space.ee/api/oidc/userinfo
allow_sign_up = true
role_attribute_path = contains(groups[*], 'Grafana Admins') && 'Admin' || 'Viewer'
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
labels:
app: grafana
name: grafana
spec:
revisionHistoryLimit: 0
serviceName: grafana
selector:
matchLabels:
app: grafana
template:
metadata:
labels:
app: grafana
spec:
securityContext:
fsGroup: 472
containers:
- name: grafana
image: grafana/grafana:8.5.0
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 472
envFrom:
- secretRef:
name: oidc-secret
ports:
- containerPort: 3000
name: http-grafana
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /robots.txt
port: 3000
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 30
successThreshold: 1
timeoutSeconds: 2
livenessProbe:
failureThreshold: 3
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
tcpSocket:
port: 3000
timeoutSeconds: 1
resources:
requests:
cpu: 250m
memory: 750Mi
volumeMounts:
- mountPath: /var/lib/grafana
name: grafana-data
- mountPath: /etc/grafana
name: grafana-config
volumes:
- name: grafana-config
configMap:
name: grafana-config
volumeClaimTemplates:
- metadata:
name: grafana-data
spec:
storageClassName: longhorn
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
name: grafana
spec:
ports:
- port: 80
protocol: TCP
targetPort: http-grafana
selector:
app: grafana
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: grafana
annotations:
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
rules:
- host: grafana.k-space.ee
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: grafana
port:
number: 80
tls:
- hosts:
- "*.k-space.ee"


@@ -35,7 +35,7 @@ data:
   TRIVY_ADAPTER_URL: "http://harbor-trivy:8080"
   REGISTRY_STORAGE_PROVIDER_NAME: "filesystem"
   WITH_CHARTMUSEUM: "false"
-  LOG_LEVEL: "info"
+  LOG_LEVEL: "warning"
   CONFIG_PATH: "/etc/core/app.conf"
   CHART_CACHE_DRIVER: "redis"
   _REDIS_URL_CORE: "redis://harbor-redis:6379/0?idle_timeout_seconds=30"
@@ -1001,7 +1001,6 @@ metadata:
   labels:
     app: harbor
   annotations:
-    cert-manager.io/cluster-issuer: default
     external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
     ingress.kubernetes.io/proxy-body-size: "0"
     ingress.kubernetes.io/ssl-redirect: "true"
@@ -1012,9 +1011,8 @@ metadata:
     traefik.ingress.kubernetes.io/router.tls: "true"
 spec:
   tls:
-    - secretName: harbor-tls
-      hosts:
-        - harbor.k-space.ee
+    - hosts:
+        - "*.k-space.ee"
   rules:
     - http:
         paths:


@@ -0,0 +1,165 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: descheduler
namespace: kube-system
labels:
app.kubernetes.io/name: descheduler
---
apiVersion: v1
kind: ConfigMap
metadata:
name: descheduler
namespace: kube-system
labels:
app.kubernetes.io/name: descheduler
data:
policy.yaml: |
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
LowNodeUtilization:
enabled: true
params:
nodeResourceUtilizationThresholds:
targetThresholds:
cpu: 50
memory: 50
pods: 50
thresholds:
cpu: 20
memory: 20
pods: 20
RemoveDuplicates:
enabled: true
RemovePodsHavingTooManyRestarts:
enabled: true
params:
podsHavingTooManyRestarts:
includingInitContainers: true
podRestartThreshold: 100
RemovePodsViolatingInterPodAntiAffinity:
enabled: true
RemovePodsViolatingNodeAffinity:
enabled: true
params:
nodeAffinityType:
- requiredDuringSchedulingIgnoredDuringExecution
RemovePodsViolatingNodeTaints:
enabled: true
RemovePodsViolatingTopologySpreadConstraint:
enabled: true
params:
includeSoftConstraints: false
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: descheduler
labels:
app.kubernetes.io/name: descheduler
rules:
- apiGroups: ["events.k8s.io"]
resources: ["events"]
verbs: ["create", "update"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "watch", "list"]
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get", "watch", "list"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list", "delete"]
- apiGroups: [""]
resources: ["pods/eviction"]
verbs: ["create"]
- apiGroups: ["scheduling.k8s.io"]
resources: ["priorityclasses"]
verbs: ["get", "watch", "list"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["create", "update"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
resourceNames: ["descheduler"]
verbs: ["get", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: descheduler
labels:
app.kubernetes.io/name: descheduler
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: descheduler
subjects:
- kind: ServiceAccount
name: descheduler
namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: descheduler
namespace: kube-system
labels:
app.kubernetes.io/name: descheduler
spec:
replicas: 2
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: descheduler
template:
metadata:
labels: *selectorLabels
spec:
priorityClassName: system-cluster-critical
serviceAccountName: descheduler
containers:
- name: descheduler
image: "k8s.gcr.io/descheduler/descheduler:v0.25.1"
imagePullPolicy: IfNotPresent
command:
- "/bin/descheduler"
args:
- "--policy-config-file"
- "/policy-dir/policy.yaml"
- "--descheduling-interval"
- 5m
- "--v"
- "3"
- --leader-elect=true
ports:
- containerPort: 10258
protocol: TCP
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10258
scheme: HTTPS
initialDelaySeconds: 3
periodSeconds: 10
resources:
requests:
cpu: 500m
memory: 256Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
readOnlyRootFilesystem: true
runAsNonRoot: true
volumeMounts:
- mountPath: /policy-dir
name: policy-volume
volumes:
- name: policy-volume
configMap:
name: descheduler


@@ -159,7 +159,9 @@ spec:
     spec:
       automountServiceAccountToken: true
       containers:
-        - image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.6.0
+        - image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.7.0
+          args:
+            - --metric-labels-allowlist=pods=[*]
           livenessProbe:
             httpGet:
               path: /healthz
@@ -308,14 +310,6 @@ spec:
         annotations:
           summary: Kubernetes Volume out of disk space (instance {{ $labels.instance }})
           description: "Volume is almost full (< 10% left)\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
-      - alert: KubernetesVolumeFullInFourDays
-        expr: predict_linear(kubelet_volume_stats_available_bytes[6h], 4 * 24 * 3600) < 0
-        for: 0m
-        labels:
-          severity: critical
-        annotations:
-          summary: Kubernetes Volume full in four days (instance {{ $labels.instance }})
-          description: "{{ $labels.namespace }}/{{ $labels.persistentvolumeclaim }} is expected to fill up within four days. Currently {{ $value | humanize }}% is available.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
       - alert: KubernetesPersistentvolumeError
         expr: kube_persistentvolume_status_phase{phase=~"Failed|Pending", job="kube-state-metrics"} > 0
         for: 0m
@@ -429,21 +423,13 @@ spec:
           summary: Kubernetes DaemonSet rollout stuck (instance {{ $labels.instance }})
           description: "Some Pods of DaemonSet are not scheduled or not ready\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
       - alert: KubernetesDaemonsetMisscheduled
-        expr: kube_daemonset_status_number_misscheduled > 0
+        expr: sum by (namespace, daemonset) (kube_daemonset_status_number_misscheduled) > 0
         for: 1m
         labels:
           severity: critical
         annotations:
           summary: Kubernetes DaemonSet misscheduled (instance {{ $labels.instance }})
           description: "Some DaemonSet Pods are running where they are not supposed to run\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
-      - alert: KubernetesCronjobTooLong
-        expr: time() - kube_cronjob_next_schedule_time > 3600
-        for: 0m
-        labels:
-          severity: warning
-        annotations:
-          summary: Kubernetes CronJob too long (instance {{ $labels.instance }})
-          description: "CronJob {{ $labels.namespace }}/{{ $labels.cronjob }} is taking more than 1h to complete.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
       - alert: KubernetesJobSlowCompletion
         expr: kube_job_spec_completions - kube_job_status_succeeded > 0
         for: 12h


@@ -0,0 +1,197 @@
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-view: "true"
name: system:aggregated-metrics-reader
rules:
- apiGroups:
- metrics.k8s.io
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
rules:
- apiGroups:
- ""
resources:
- nodes/metrics
verbs:
- get
- apiGroups:
- ""
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server-auth-reader
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:metrics-server
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
spec:
ports:
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
spec:
selector:
matchLabels:
k8s-app: metrics-server
strategy:
rollingUpdate:
maxUnavailable: 0
template:
metadata:
labels:
k8s-app: metrics-server
spec:
containers:
- args:
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
- --kubelet-insecure-tls
- --metric-resolution=15s
image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /livez
port: https
scheme: HTTPS
periodSeconds: 10
name: metrics-server
ports:
- containerPort: 4443
name: https
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /readyz
port: https
scheme: HTTPS
initialDelaySeconds: 20
periodSeconds: 10
resources:
requests:
cpu: 100m
memory: 200Mi
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
volumeMounts:
- mountPath: /tmp
name: tmp-dir
nodeSelector:
kubernetes.io/os: linux
priorityClassName: system-cluster-critical
serviceAccountName: metrics-server
volumes:
- emptyDir: {}
name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
labels:
k8s-app: metrics-server
name: v1beta1.metrics.k8s.io
spec:
group: metrics.k8s.io
groupPriorityMinimum: 100
insecureSkipTLSVerify: true
service:
name: metrics-server
namespace: kube-system
version: v1beta1
versionPriority: 100


@@ -269,7 +269,6 @@ metadata:
     certManager: "true"
     rewriteTarget: "true"
   annotations:
-    cert-manager.io/cluster-issuer: default
     external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
     kubernetes.io/ingress.class: traefik
     traefik.ingress.kubernetes.io/router.entrypoints: websecure
@@ -289,5 +288,4 @@ spec:
             number: 80
   tls:
     - hosts:
-        - dashboard.k-space.ee
-      secretName: dashboard-tls
+        - "*.k-space.ee"


@@ -1,5 +0,0 @@
---
apiVersion: v1
kind: Namespace
metadata:
name: deployment

logmower/application.yml Normal file

@@ -0,0 +1,491 @@
---
apiVersion: codemowers.io/v1alpha1
kind: GeneratedSecret
metadata:
name: logmower-readwrite-password
spec:
mapping:
- key: password
value: "%(password)s"
---
apiVersion: codemowers.io/v1alpha1
kind: GeneratedSecret
metadata:
name: logmower-readonly-password
spec:
mapping:
- key: password
value: "%(password)s"
---
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
name: logmower-mongodb
spec:
additionalMongodConfig:
systemLog:
quiet: true
members: 2
arbiters: 1
type: ReplicaSet
version: "6.0.3"
security:
authentication:
modes: ["SCRAM"]
users:
- name: readwrite
db: application
passwordSecretRef:
name: logmower-readwrite-password
roles:
- name: readWrite
db: application
scramCredentialsSecretName: logmower-readwrite
- name: readonly
db: application
passwordSecretRef:
name: logmower-readonly-password
roles:
- name: read
db: application
scramCredentialsSecretName: logmower-readonly
statefulSet:
spec:
logLevel: WARN
template:
spec:
containers:
- name: mongod
resources:
requests:
cpu: 100m
memory: 1Gi
limits:
cpu: 4000m
memory: 1Gi
volumeMounts:
- name: journal-volume
mountPath: /data/journal
- name: mongodb-agent
resources:
requests:
cpu: 1m
memory: 100Mi
limits: {}
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- logmower-mongodb-svc
topologyKey: kubernetes.io/hostname
nodeSelector:
dedicated: monitoring
tolerations:
- key: dedicated
operator: Equal
value: monitoring
effect: NoSchedule
volumeClaimTemplates:
- metadata:
name: logs-volume
labels:
usecase: logs
spec:
storageClassName: mongo
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
- metadata:
name: journal-volume
labels:
usecase: journal
spec:
storageClassName: mongo
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 512Mi
- metadata:
name: data-volume
labels:
usecase: data
spec:
storageClassName: mongo
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: logmower-shipper
spec:
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 50%
selector:
matchLabels:
app: logmower-shipper
template:
metadata:
labels:
app: logmower-shipper
spec:
serviceAccountName: logmower-shipper
containers:
- name: logmower-shipper
image: harbor.k-space.ee/k-space/logmower-shipper-prototype:latest
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: MONGO_URI
valueFrom:
secretKeyRef:
name: logmower-mongodb-application-readwrite
key: connectionString.standard
ports:
- containerPort: 8000
name: metrics
securityContext:
readOnlyRootFilesystem: true
command:
- /app/log_shipper.py
- --parse-json
- --normalize-log-level
- --stream-to-log-level
- --merge-top-level
- --max-collection-size
- "10000000000"
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: etcmachineid
mountPath: /etc/machine-id
readOnly: true
- name: varlog
mountPath: /var/log
readOnly: true
volumes:
- name: etcmachineid
hostPath:
path: /etc/machine-id
- name: varlog
hostPath:
path: /var/log
tolerations:
- operator: "Exists"
effect: "NoSchedule"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: logging-logmower-shipper
subjects:
- kind: ServiceAccount
name: logmower-shipper
namespace: logmower
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: logmower-shipper
labels:
app: logmower-shipper
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-shipper
spec:
podSelector:
matchLabels:
app: logmower-shipper
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: prometheus-operator
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
egress:
- to:
- podSelector:
matchLabels:
app: logmower-mongodb-svc
ports:
- port: 27017
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-eventsource
spec:
podSelector:
matchLabels:
app: logmower-eventsource
policyTypes:
- Ingress
- Egress
egress:
- to:
- podSelector:
matchLabels:
app: logmower-mongodb-svc
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-frontend
spec:
podSelector:
matchLabels:
app: logmower-frontend
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: logmower-shipper
spec:
selector:
matchLabels:
app: logmower-shipper
podMetricsEndpoints:
- port: metrics
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: logmower-shipper
spec:
groups:
- name: logmower-shipper
rules:
- alert: LogmowerSingleInsertionErrors
annotations:
summary: Logmower shipper is having issues submitting log records
to database
expr: rate(logmower_insertion_error_count_total[30m]) > 0
for: 0m
labels:
severity: warning
- alert: LogmowerBulkInsertionErrors
annotations:
summary: Logmower shipper is having issues submitting log records
to database
expr: rate(logmower_bulk_insertion_error_count_total[30m]) > 0
for: 0m
labels:
severity: warning
- alert: LogmowerHighDatabaseLatency
annotations:
summary: Database operations are slow
expr: histogram_quantile(0.95, logmower_database_operation_latency_bucket) > 10
for: 1m
labels:
severity: warning
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: logmower
annotations:
kubernetes.io/ingress.class: traefik
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
rules:
- host: log.k-space.ee
http:
paths:
- pathType: Prefix
path: "/events"
backend:
service:
name: logmower-eventsource
port:
number: 3002
- pathType: Prefix
path: "/"
backend:
service:
name: logmower-frontend
port:
number: 8080
tls:
- hosts:
- "*.k-space.ee"
---
apiVersion: v1
kind: Service
metadata:
name: logmower-eventsource
spec:
type: ClusterIP
selector:
app: logmower-eventsource
ports:
- protocol: TCP
port: 3002
---
apiVersion: v1
kind: Service
metadata:
name: logmower-frontend
spec:
type: ClusterIP
selector:
app: logmower-frontend
ports:
- protocol: TCP
port: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: logmower-frontend
spec:
selector:
matchLabels:
app: logmower-frontend
template:
metadata:
labels:
app: logmower-frontend
spec:
containers:
- name: logmower-frontend
image: harbor.k-space.ee/k-space/logmower-frontend
ports:
- containerPort: 8080
name: http
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
resources:
limits:
memory: 50Mi
requests:
cpu: 1m
memory: 20Mi
volumeMounts:
- name: nginx-cache
mountPath: /var/cache/nginx/
- name: nginx-config
mountPath: /var/config/nginx/
- name: var-run
mountPath: /var/run/
volumes:
- emptyDir: {}
name: nginx-cache
- emptyDir: {}
name: nginx-config
- emptyDir: {}
name: var-run
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: logmower-eventsource
spec:
selector:
matchLabels:
app: logmower-eventsource
template:
metadata:
labels:
app: logmower-eventsource
spec:
containers:
- name: logmower-eventsource
image: harbor.k-space.ee/k-space/logmower-eventsource
ports:
- containerPort: 3002
name: nodejs
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
resources:
limits:
cpu: 500m
memory: 200Mi
requests:
cpu: 10m
memory: 100Mi
env:
- name: MONGODB_HOST
valueFrom:
secretKeyRef:
name: logmower-mongodb-application-readonly
key: connectionString.standard
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-mongodb
spec:
podSelector:
matchLabels:
app: logmower-mongodb-svc
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector: {}
ports:
- port: 27017
egress:
- to:
- podSelector:
matchLabels:
app: logmower-mongodb-svc
ports:
- port: 27017


@@ -0,0 +1 @@
../mongodb-operator/mongodb-support.yml

logmower/mongoexpress.yml Normal file

@@ -0,0 +1,47 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: logmower-mongoexpress
spec:
revisionHistoryLimit: 0
replicas: 1
selector:
matchLabels:
app: logmower-mongoexpress
template:
metadata:
labels:
app: logmower-mongoexpress
spec:
containers:
- name: mongoexpress
image: mongo-express
ports:
- name: mongoexpress
containerPort: 8081
env:
- name: ME_CONFIG_MONGODB_URL
valueFrom:
secretKeyRef:
name: logmower-mongodb-application-readonly
key: connectionString.standard
- name: ME_CONFIG_MONGODB_ENABLE_ADMIN
value: "true"
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-mongoexpress
spec:
podSelector:
matchLabels:
app: logmower-mongoexpress
policyTypes:
- Egress
egress:
- to:
- podSelector:
matchLabels:
app: logmower-mongodb-svc
ports:
- port: 27017


@@ -0,0 +1 @@
../shared/networkpolicy-base.yml


@@ -1,8 +1,8 @@
 # Longhorn distributed block storage system
 The manifest was fetched from
-https://raw.githubusercontent.com/longhorn/longhorn/v1.2.4/deploy/longhorn.yaml
-and then heavily modified.
+https://raw.githubusercontent.com/longhorn/longhorn/v1.4.0/deploy/longhorn.yaml
+and then heavily modified as per `changes.diff`
 To deploy Longhorn use following:


@@ -5,7 +5,6 @@ metadata:
   namespace: longhorn-system
   annotations:
     kubernetes.io/ingress.class: traefik
-    cert-manager.io/cluster-issuer: default
     external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
     traefik.ingress.kubernetes.io/router.entrypoints: websecure
     traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
@@ -24,9 +23,7 @@ spec:
             number: 80
   tls:
     - hosts:
-        - longhorn.k-space.ee
-      secretName: longhorn-tls
+        - "*.k-space.ee"
 ---
 apiVersion: monitoring.coreos.com/v1
 kind: PodMonitor

File diff suppressed because it is too large


@@ -0,0 +1,92 @@
--- ref 2023-02-20 11:15:07.340650467 +0200
+++ application.yml 2023-02-19 18:38:05.059234209 +0200
@@ -60,14 +60,14 @@
storageclass.kubernetes.io/is-default-class: "true"
provisioner: driver.longhorn.io
allowVolumeExpansion: true
- reclaimPolicy: "Delete"
+ reclaimPolicy: "Retain"
volumeBindingMode: Immediate
parameters:
- numberOfReplicas: "3"
+ numberOfReplicas: "2"
staleReplicaTimeout: "30"
fromBackup: ""
- fsType: "ext4"
- dataLocality: "disabled"
+ fsType: "xfs"
+ dataLocality: "best-effort"
---
# Source: longhorn/templates/crds.yaml
apiVersion: apiextensions.k8s.io/v1
@@ -3869,6 +3869,11 @@
app.kubernetes.io/version: v1.4.0
app: longhorn-manager
spec:
+ tolerations:
+ - key: dedicated
+ operator: Equal
+ value: storage
+ effect: NoSchedule
initContainers:
- name: wait-longhorn-admission-webhook
image: longhornio/longhorn-manager:v1.4.0
@@ -3968,6 +3973,10 @@
app.kubernetes.io/version: v1.4.0
app: longhorn-driver-deployer
spec:
+ tolerations:
+ - key: dedicated
+ operator: Equal
+ value: storage
initContainers:
- name: wait-longhorn-manager
image: longhornio/longhorn-manager:v1.4.0
@@ -4037,6 +4046,11 @@
app.kubernetes.io/version: v1.4.0
app: longhorn-recovery-backend
spec:
+ tolerations:
+ - key: dedicated
+ operator: Equal
+ value: storage
+ effect: NoSchedule
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
@@ -4103,6 +4117,11 @@
app.kubernetes.io/version: v1.4.0
app: longhorn-ui
spec:
+ tolerations:
+ - key: dedicated
+ operator: Equal
+ value: storage
+ effect: NoSchedule
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
@@ -4166,6 +4185,11 @@
app.kubernetes.io/version: v1.4.0
app: longhorn-conversion-webhook
spec:
+ tolerations:
+ - key: dedicated
+ operator: Equal
+ value: storage
+ effect: NoSchedule
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
@@ -4226,6 +4250,11 @@
app.kubernetes.io/version: v1.4.0
app: longhorn-admission-webhook
spec:
+ tolerations:
+ - key: dedicated
+ operator: Equal
+ value: storage
+ effect: NoSchedule
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:

member-site/doorboy.yml Normal file

@@ -0,0 +1,158 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: doorboy-proxy
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
spec:
revisionHistoryLimit: 0
replicas: 3
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: doorboy-proxy
template:
metadata:
labels: *selectorLabels
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values:
- doorboy-proxy
topologyKey: kubernetes.io/hostname
weight: 100
containers:
- name: doorboy-proxy
image: harbor.k-space.ee/k-space/doorboy-proxy:latest
envFrom:
- secretRef:
name: doorboy-api
env:
- name: MONGO_URI
valueFrom:
secretKeyRef:
name: mongo-application-readwrite
key: connectionString.standard
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
ports:
- containerPort: 5000
name: "http"
resources:
requests:
memory: "200Mi"
cpu: "100m"
limits:
memory: "500Mi"
cpu: "1"
---
apiVersion: v1
kind: Service
metadata:
name: doorboy-proxy
spec:
selector:
app.kubernetes.io/name: doorboy-proxy
ports:
- protocol: TCP
name: http
port: 5000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: doorboy-proxy
annotations:
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
rules:
- host: doorboy-proxy.k-space.ee
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: doorboy-proxy
port:
name: http
tls:
- hosts:
- "*.k-space.ee"
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: doorboy-proxy
spec:
selector:
matchLabels:
app.kubernetes.io/name: doorboy-proxy
podMetricsEndpoints:
- port: http
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kdoorpi
spec:
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: kdoorpi
template:
metadata:
labels: *selectorLabels
spec:
containers:
- name: kdoorpi
image: harbor.k-space.ee/k-space/kdoorpi:latest
env:
- name: KDOORPI_API_ALLOWED
value: https://doorboy-proxy.k-space.ee/allowed
- name: KDOORPI_API_LONGPOLL
value: https://doorboy-proxy.k-space.ee/longpoll
- name: KDOORPI_API_SWIPE
value: http://172.21.99.98/swipe
- name: KDOORPI_DOOR
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: KDOORPI_API_KEY
valueFrom:
secretKeyRef:
name: doorboy-api
key: DOORBOY_SECRET
- name: KDOORPI_UID_SALT
valueFrom:
secretKeyRef:
name: doorboy-uid-hash-salt
key: KDOORPI_UID_SALT
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
nodeSelector:
dedicated: door
tolerations:
- key: dedicated
operator: Equal
value: door
effect: NoSchedule
- key: arch
operator: Equal
value: arm64
effect: NoSchedule


@@ -1,11 +0,0 @@
# meta-operator
Meta operator enables creating operators without building any binaries or
Docker images.
For example operator declaration see `keydb.yml`
```
kubectl create namespace meta-operator
kubectl apply -f application.yml -f keydb.yml
```


@@ -1,220 +0,0 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: clusteroperators.codemowers.io
spec:
group: codemowers.io
names:
plural: clusteroperators
singular: clusteroperator
kind: ClusterOperator
shortNames:
- clusteroperator
scope: Cluster
versions:
- name: v1alpha1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
properties:
spec:
type: object
properties:
resource:
type: object
properties:
group:
type: string
version:
type: string
plural:
type: string
secret:
type: object
properties:
name:
type: string
enabled:
type: boolean
structure:
type: array
items:
type: object
properties:
key:
type: string
value:
type: string
services:
type: array
items:
type: object
x-kubernetes-preserve-unknown-fields: true
deployments:
type: array
items:
type: object
x-kubernetes-preserve-unknown-fields: true
statefulsets:
type: array
items:
type: object
x-kubernetes-preserve-unknown-fields: true
configmaps:
type: array
items:
type: object
x-kubernetes-preserve-unknown-fields: true
customresources:
type: array
items:
type: object
x-kubernetes-preserve-unknown-fields: true
required: ["spec"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: meta-operator
namespace: meta-operator
labels:
app.kubernetes.io/name: meta-operator
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: meta-operator
template:
metadata:
labels:
app.kubernetes.io/name: meta-operator
spec:
serviceAccountName: meta-operator
containers:
- name: meta-operator
image: harbor.k-space.ee/k-space/meta-operator
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
env:
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
---
apiVersion: codemowers.io/v1alpha1
kind: ClusterOperator
metadata:
name: meta
spec:
resource:
group: codemowers.io
version: v1alpha1
plural: clusteroperators
secret:
enabled: false
deployments:
- apiVersion: apps/v1
kind: Deployment
metadata:
name: foobar-operator
labels:
app.kubernetes.io/name: foobar-operator
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: foobar-operator
template:
metadata:
labels:
app.kubernetes.io/name: foobar-operator
spec:
serviceAccountName: meta-operator
containers:
- name: meta-operator
image: harbor.k-space.ee/k-space/meta-operator
command:
- /meta-operator.py
- --target
- foobar
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
env:
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: meta-operator
rules:
- apiGroups:
- ""
resources:
- secrets
- configmaps
- services
verbs:
- create
- get
- patch
- update
- delete
- list
- apiGroups:
- apps
resources:
- deployments
- statefulsets
verbs:
- create
- delete
- list
- update
- patch
- apiGroups:
- codemowers.io
resources:
- bindzones
- clusteroperators
- keydbs
verbs:
- get
- list
- watch
- apiGroups:
- k-space.ee
resources:
- cams
verbs:
- get
- list
- watch
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: meta-operator
namespace: meta-operator
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: meta-operator
subjects:
- kind: ServiceAccount
name: meta-operator
namespace: meta-operator
roleRef:
kind: ClusterRole
name: meta-operator
apiGroup: rbac.authorization.k8s.io


@@ -1,253 +0,0 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: keydbs.codemowers.io
spec:
group: codemowers.io
names:
plural: keydbs
singular: keydb
kind: KeyDBCluster
shortNames:
- keydb
scope: Namespaced
versions:
- name: v1alpha1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
properties:
spec:
type: object
properties:
replicas:
type: integer
description: Replica count
required: ["spec"]
---
apiVersion: codemowers.io/v1alpha1
kind: ClusterOperator
metadata:
name: keydb
spec:
resource:
group: codemowers.io
version: v1alpha1
plural: keydbs
secret:
enabled: true
name: foobar-secrets
structure:
- key: REDIS_PASSWORD
value: "%s"
- key: REDIS_URI
value: "redis://:%s@foobar"
configmaps:
- apiVersion: v1
kind: ConfigMap
metadata:
name: foobar-scripts
labels:
app.kubernetes.io/name: foobar
data:
entrypoint.sh: |
#!/bin/bash
set -euxo pipefail
host="$(hostname)"
port="6379"
replicas=()
for node in {0..2}; do
if [ "${host}" != "redis-${node}" ]; then
replicas+=("--replicaof redis-${node}.redis-headless ${port}")
fi
done
exec keydb-server /etc/keydb/redis.conf \
--active-replica "yes" \
--multi-master "yes" \
--appendonly "no" \
--bind "0.0.0.0" \
--port "${port}" \
--protected-mode "no" \
--server-threads "2" \
--masterauth "${REDIS_PASSWORD}" \
--requirepass "${REDIS_PASSWORD}" \
"${replicas[@]}"
ping_readiness_local.sh: |-
#!/bin/bash
set -e
[[ -n "${REDIS_PASSWORD}" ]] && export REDISCLI_AUTH="${REDIS_PASSWORD}"
response="$(
timeout -s 3 "${1}" \
keydb-cli \
-h localhost \
-p 6379 \
ping
)"
if [ "${response}" != "PONG" ]; then
echo "${response}"
exit 1
fi
ping_liveness_local.sh: |-
#!/bin/bash
set -e
[[ -n "${REDIS_PASSWORD}" ]] && export REDISCLI_AUTH="${REDIS_PASSWORD}"
response="$(
timeout -s 3 "${1}" \
keydb-cli \
-h localhost \
-p 6379 \
ping
)"
if [ "${response}" != "PONG" ] && [[ ! "${response}" =~ ^.*LOADING.*$ ]]; then
echo "${response}"
exit 1
fi
cleanup_tempfiles.sh: |-
#!/bin/bash
set -e
find /data/ -type f \( -name "temp-*.aof" -o -name "temp-*.rdb" \) -mmin +60 -delete
services:
- apiVersion: v1
kind: Service
metadata:
name: foobar-headless
labels:
app.kubernetes.io/name: foobar
spec:
type: ClusterIP
clusterIP: None
ports:
- name: redis
port: 6379
protocol: TCP
targetPort: redis
selector:
app.kubernetes.io/name: foobar
- apiVersion: v1
kind: Service
metadata:
name: foobar
labels:
app.kubernetes.io/name: foobar
annotations:
{}
spec:
type: ClusterIP
ports:
- name: redis
port: 6379
protocol: TCP
targetPort: redis
- name: exporter
port: 9121
protocol: TCP
targetPort: exporter
selector:
app.kubernetes.io/name: foobar
sessionAffinity: ClientIP
statefulsets:
- apiVersion: apps/v1
kind: StatefulSet
metadata:
name: foobar
labels:
app.kubernetes.io/name: foobar
spec:
replicas: 3
serviceName: foobar-headless
selector:
matchLabels:
app.kubernetes.io/name: foobar
template:
metadata:
labels:
app.kubernetes.io/name: foobar
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values:
- 'foobar'
topologyKey: kubernetes.io/hostname
weight: 100
containers:
- name: redis
image: eqalpha/keydb:x86_64_v6.3.1
imagePullPolicy: Always
command:
- /scripts/entrypoint.sh
ports:
- name: redis
containerPort: 6379
protocol: TCP
livenessProbe:
initialDelaySeconds: 20
periodSeconds: 5
# One second longer than command timeout should prevent generation of zombie processes.
timeoutSeconds: 6
successThreshold: 1
failureThreshold: 5
exec:
command:
- sh
- -c
- /scripts/ping_liveness_local.sh 5
readinessProbe:
initialDelaySeconds: 20
periodSeconds: 5
# One second longer than command timeout should prevent generation of zombie processes.
timeoutSeconds: 2
successThreshold: 1
failureThreshold: 5
exec:
command:
- sh
- -c
- /scripts/ping_readiness_local.sh 1
startupProbe:
periodSeconds: 5
# One second longer than command timeout should prevent generation of zombie processes.
timeoutSeconds: 2
failureThreshold: 24
exec:
command:
- sh
- -c
- /scripts/ping_readiness_local.sh 1
resources:
{}
securityContext:
{}
volumeMounts:
- name: foobar-scripts
mountPath: /scripts
- name: foobar-data
mountPath: /data
envFrom:
- secretRef:
name: foobar-secrets
- name: exporter
image: quay.io/oliver006/redis_exporter
ports:
- name: exporter
containerPort: 9121
envFrom:
- secretRef:
name: foobar-secrets
securityContext:
{}
volumes:
- name: foobar-scripts
configMap:
name: foobar-scripts
defaultMode: 0755
- name: foobar-data
emptyDir: {}

nyancat/README.md Normal file

@@ -0,0 +1,9 @@
# Nyancat server deployment
Something silly for a change.
To connect, use:
```
telnet nyancat.k-space.ee
```
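The Service maps the standard telnet port 23 to container port 2323, so no port argument is needed. To see which address the `eenet` MetalLB pool handed out, one can check:
```
kubectl -n nyancat get svc nyancat
```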

nyancat/application.yaml Normal file

@@ -0,0 +1,49 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nyancat
namespace: nyancat
spec:
replicas: 2
selector:
matchLabels:
app.kubernetes.io/name: nyancat
template:
metadata:
labels:
app.kubernetes.io/name: nyancat
spec:
containers:
- name: nyancat
image: harbor.k-space.ee/k-space/nyancat-server:latest
command:
- onenetd
- -v1
- "0"
- "2323"
- nyancat
- -I
- --telnet
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 65534
---
apiVersion: v1
kind: Service
metadata:
name: nyancat
namespace: nyancat
annotations:
metallb.universe.tf/address-pool: eenet
external-dns.alpha.kubernetes.io/hostname: nyancat.k-space.ee
spec:
type: LoadBalancer
externalTrafficPolicy: Local
selector:
app.kubernetes.io/name: nyancat
ports:
- protocol: TCP
port: 23
targetPort: 2323

openebs/README.md Normal file

@@ -0,0 +1,11 @@
# Raw file based local PVs
We currently use only the `rawfile-localpv` portion of OpenEBS.
The manifests were rendered using the Helm template from https://github.com/openebs/rawfile-localpv
and subsequently modified:
```
kubectl create namespace openebs
kubectl apply -n openebs -f rawfile.yaml
```
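For reference, a claim against one of the storage classes defined at the bottom of `rawfile.yaml` could look like the sketch below; the name `example-data` and the size are hypothetical:
```
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data
spec:
  storageClassName: rawfile-ext4
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```
Since both classes use `volumeBindingMode: WaitForFirstConsumer`, the backing file is only allocated once a pod consuming the claim gets scheduled.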

openebs/rawfile.yaml Normal file

@@ -0,0 +1,404 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: rawfile-csi-driver
namespace: openebs
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: rawfile-csi-provisioner
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["list", "watch", "create", "update", "patch"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshots"]
verbs: ["get", "list"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshotcontents"]
verbs: ["get", "list"]
- apiGroups: ["storage.k8s.io"]
resources: ["csinodes"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments"]
verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["csistoragecapacities"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["get"]
- apiGroups: ["apps"]
resources: ["daemonsets"]
verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: rawfile-csi-broker
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: rawfile-csi-resizer
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "patch"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["persistentvolumeclaims/status"]
verbs: ["patch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: rawfile-csi-provisioner
subjects:
- kind: ServiceAccount
name: rawfile-csi-driver
namespace: openebs
roleRef:
kind: ClusterRole
name: rawfile-csi-provisioner
apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: rawfile-csi-broker
subjects:
- kind: ServiceAccount
name: rawfile-csi-driver
namespace: openebs
roleRef:
kind: ClusterRole
name: rawfile-csi-broker
apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: rawfile-csi-resizer
subjects:
- kind: ServiceAccount
name: rawfile-csi-driver
namespace: openebs
roleRef:
kind: ClusterRole
name: rawfile-csi-resizer
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Service
metadata:
name: rawfile-csi-controller
namespace: openebs
labels:
app.kubernetes.io/name: rawfile-csi
component: controller
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: rawfile-csi
component: controller
clusterIP: None
---
apiVersion: v1
kind: Service
metadata:
name: rawfile-csi-node
namespace: openebs
labels:
app.kubernetes.io/name: rawfile-csi
component: node
spec:
type: ClusterIP
ports:
- name: metrics
port: 9100
targetPort: metrics
protocol: TCP
selector:
app.kubernetes.io/name: rawfile-csi
component: node
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: rawfile-csi-node
namespace: openebs
spec:
updateStrategy:
rollingUpdate:
maxUnavailable: "100%"
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: rawfile-csi
component: node
template:
metadata:
labels: *selectorLabels
spec:
serviceAccount: rawfile-csi-driver
priorityClassName: system-node-critical
tolerations:
- operator: "Exists"
volumes:
- name: registration-dir
hostPath:
path: /var/lib/kubelet/plugins_registry
type: Directory
- name: socket-dir
hostPath:
path: /var/lib/kubelet/plugins/rawfile-csi
type: DirectoryOrCreate
- name: mountpoint-dir
hostPath:
path: /var/lib/kubelet
type: DirectoryOrCreate
- name: data-dir
hostPath:
path: /var/csi/rawfile
type: DirectoryOrCreate
containers:
- name: csi-driver
image: "harbor.k-space.ee/k-space/rawfile-localpv:latest"
imagePullPolicy: Always
securityContext:
privileged: true
env:
- name: PROVISIONER_NAME
value: "rawfile.csi.openebs.io"
- name: CSI_ENDPOINT
value: unix:///csi/csi.sock
- name: IMAGE_REPOSITORY
value: "harbor.k-space.ee/k-space/rawfile-localpv"
- name: IMAGE_TAG
value: "latest"
- name: NODE_ID
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
ports:
- name: metrics
containerPort: 9100
- name: csi-probe
containerPort: 9808
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: mountpoint-dir
mountPath: /var/lib/kubelet
mountPropagation: "Bidirectional"
- name: data-dir
mountPath: /data
resources:
limits:
cpu: 1
memory: 100Mi
requests:
cpu: 10m
memory: 100Mi
- name: node-driver-registrar
image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0
imagePullPolicy: IfNotPresent
args:
- --csi-address=$(ADDRESS)
- --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)
- --health-port=9809
env:
- name: ADDRESS
value: /csi/csi.sock
- name: DRIVER_REG_SOCK_PATH
value: /var/lib/kubelet/plugins/rawfile-csi/csi.sock
ports:
- containerPort: 9809
name: healthz
livenessProbe:
httpGet:
path: /healthz
port: healthz
initialDelaySeconds: 5
timeoutSeconds: 5
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: registration-dir
mountPath: /registration
resources:
limits:
cpu: 500m
memory: 100Mi
requests:
cpu: 10m
memory: 100Mi
- name: external-provisioner
image: k8s.gcr.io/sig-storage/csi-provisioner:v2.2.2
imagePullPolicy: IfNotPresent
args:
- "--csi-address=$(ADDRESS)"
- "--feature-gates=Topology=true"
- "--strict-topology"
- "--immediate-topology=false"
- "--timeout=120s"
- "--enable-capacity=true"
- "--capacity-ownerref-level=1" # DaemonSet
- "--node-deployment=true"
env:
- name: ADDRESS
value: /csi/csi.sock
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
volumeMounts:
- name: socket-dir
mountPath: /csi
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: rawfile-csi-controller
namespace: openebs
spec:
replicas: 1
serviceName: rawfile-csi
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: rawfile-csi
component: controller
template:
metadata:
labels: *selectorLabels
spec:
serviceAccount: rawfile-csi-driver
priorityClassName: system-cluster-critical
tolerations:
- key: "node-role.kubernetes.io/master"
operator: Equal
value: "true"
effect: NoSchedule
volumes:
- name: socket-dir
emptyDir: {}
containers:
- name: csi-driver
image: "harbor.k-space.ee/k-space/rawfile-localpv"
imagePullPolicy: Always
args:
- csi-driver
- --disable-metrics
env:
- name: PROVISIONER_NAME
value: "rawfile.csi.openebs.io"
- name: CSI_ENDPOINT
value: unix:///csi/csi.sock
- name: IMAGE_REPOSITORY
value: "harbor.k-space.ee/k-space/rawfile-localpv"
- name: IMAGE_TAG
value: "latest"
volumeMounts:
- name: socket-dir
mountPath: /csi
ports:
- name: csi-probe
containerPort: 9808
resources:
limits:
cpu: 1
memory: 100Mi
requests:
cpu: 10m
memory: 100Mi
- name: external-resizer
image: k8s.gcr.io/sig-storage/csi-resizer:v1.4.0
imagePullPolicy: IfNotPresent
args:
- "--csi-address=$(ADDRESS)"
- "--handle-volume-inuse-error=false"
env:
- name: ADDRESS
value: /csi/csi.sock
volumeMounts:
- name: socket-dir
mountPath: /csi
---
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
name: rawfile.csi.openebs.io
spec:
attachRequired: false
podInfoOnMount: true
fsGroupPolicy: File
storageCapacity: true
volumeLifecycleModes:
- Persistent
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: rawfile-ext4
provisioner: rawfile.csi.openebs.io
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
fsType: "ext4"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: rawfile-xfs
provisioner: rawfile.csi.openebs.io
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
fsType: "xfs"


@@ -40,7 +40,6 @@ metadata:
   name: phpmyadmin
   annotations:
     kubernetes.io/ingress.class: traefik
-    cert-manager.io/cluster-issuer: default
     traefik.ingress.kubernetes.io/router.entrypoints: websecure
     traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
     traefik.ingress.kubernetes.io/router.tls: "true"
@@ -59,8 +58,7 @@ spec:
             number: 80
   tls:
     - hosts:
-        - phpmyadmin.k-space.ee
-      secretName: phpmyadmin-tls
+        - "*.k-space.ee"
 ---
 apiVersion: v1
 kind: Service

playground/README.md Normal file

@@ -0,0 +1,10 @@
# Playground
The playground namespace is accessible to the `Developers` AD group.
A novel log aggregator is being developed in this namespace:
```
kubectl create secret generic -n playground mongodb-application-readwrite-password --from-literal="password=$(cat /dev/urandom | base64 | head -c 30)"
kubectl create secret generic -n playground mongodb-application-readonly-password --from-literal="password=$(cat /dev/urandom | base64 | head -c 30)"
kubectl apply -n playground -f logging.yml -f mongodb-support.yml -f mongoexpress.yml -f networkpolicy-base.yml
```
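Assuming the MongoDB community operator labels the replica set pods with `app=mongodb-svc`, as the network policies in `logging.yml` expect, a quick sanity check would be:
```
kubectl -n playground get pods -l app=mongodb-svc
```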

playground/logging.yml Normal file

@@ -0,0 +1,263 @@
---
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
name: mongodb
spec:
additionalMongodConfig:
systemLog:
quiet: true
members: 3
type: ReplicaSet
version: "5.0.13"
security:
authentication:
modes: ["SCRAM"]
users:
- name: readwrite
db: application
passwordSecretRef:
name: mongodb-application-readwrite-password
roles:
- name: readWrite
db: application
scramCredentialsSecretName: mongodb-application-readwrite
- name: readonly
db: application
passwordSecretRef:
name: mongodb-application-readonly-password
roles:
- name: readOnly
db: application
scramCredentialsSecretName: mongodb-application-readonly
statefulSet:
spec:
logLevel: WARN
template:
spec:
containers:
- name: mongod
resources:
requests:
cpu: 100m
memory: 2Gi
limits:
cpu: 2000m
memory: 2Gi
- name: mongodb-agent
resources:
requests:
cpu: 1m
memory: 100Mi
limits: {}
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- mongodb-svc
topologyKey: kubernetes.io/hostname
nodeSelector:
dedicated: monitoring
tolerations:
- key: dedicated
operator: Equal
value: monitoring
effect: NoSchedule
volumeClaimTemplates:
- metadata:
name: logs-volume
spec:
storageClassName: local-path
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 512Mi
- metadata:
name: data-volume
spec:
storageClassName: local-path
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: log-shipper
spec:
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 50%
selector:
matchLabels:
app: log-shipper
template:
metadata:
labels:
app: log-shipper
spec:
serviceAccountName: log-shipper
containers:
- name: log-shipper
image: harbor.k-space.ee/k-space/log-shipper
securityContext:
runAsUser: 0
env:
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: MONGODB_HOST
valueFrom:
secretKeyRef:
name: mongodb-application-readwrite
key: connectionString.standard
ports:
- containerPort: 8000
name: metrics
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: etcmachineid
mountPath: /etc/machine-id
readOnly: true
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: varlog
mountPath: /var/log
readOnly: true
volumes:
- name: etcmachineid
hostPath:
path: /etc/machine-id
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: varlog
hostPath:
path: /var/log
tolerations:
- operator: "Exists"
effect: "NoExecute"
- operator: "Exists"
effect: "NoSchedule"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: logging-log-shipper
subjects:
- kind: ServiceAccount
name: log-shipper
namespace: playground
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: log-shipper
labels:
app: log-shipper
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: log-shipper
spec:
podSelector:
matchLabels:
app: log-shipper
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: prometheus-operator
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
egress:
- to:
- podSelector:
matchLabels:
app: mongodb-svc
ports:
- port: 27017
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: log-viewer-backend
spec:
podSelector:
matchLabels:
app: log-viewer-backend
policyTypes:
- Ingress
- Egress
egress:
- to:
- podSelector:
matchLabels:
app: mongodb-svc
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: log-viewer-frontend
spec:
podSelector:
matchLabels:
app: log-viewer-frontend
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: log-shipper
spec:
selector:
matchLabels:
app: log-shipper
podMetricsEndpoints:
- port: metrics


@@ -0,0 +1 @@
../mongodb-operator/mongodb-support.yml

playground/mongoexpress.yml Symbolic link

@@ -0,0 +1 @@
../shared/mongoexpress.yml


@@ -0,0 +1 @@
../shared/networkpolicy-base.yml

prometheus-operator/.gitignore vendored Normal file

@@ -0,0 +1 @@
bundle.yml


@@ -1,7 +1,7 @@
 # Prometheus operator
 ```
-curl -L https://github.com/prometheus-operator/prometheus-operator/releases/download/v0.59.0/bundle.yaml | sed -e 's/namespace: default/namespace: prometheus-operator/g' > bundle.yml
+curl -L https://github.com/prometheus-operator/prometheus-operator/releases/download/v0.61.1/bundle.yaml | sed -e 's/namespace: default/namespace: prometheus-operator/g' > bundle.yml
 kubectl create namespace prometheus-operator
 kubectl apply --server-side -n prometheus-operator -f bundle.yml
 kubectl delete -n prometheus-operator configmap snmp-exporter


@ -7,7 +7,14 @@ metadata:
app.kubernetes.io/name: alertmanager app.kubernetes.io/name: alertmanager
spec: spec:
route: route:
receiver: 'slack-notifications' routes:
- continue: false
receiver: slack-notifications
matchers:
- matchType: "="
name: severity
value: critical
receiver: 'null'
receivers: receivers:
- name: 'slack-notifications' - name: 'slack-notifications'
slackConfigs: slackConfigs:
@ -33,9 +40,12 @@ kind: Alertmanager
metadata: metadata:
name: alertmanager name: alertmanager
spec: spec:
alertmanagerConfigSelector: alertmanagerConfigMatcherStrategy:
matchLabels: type: None
app.kubernetes.io/name: alertmanager alertmanagerConfigNamespaceSelector: {}
alertmanagerConfigSelector: {}
alertmanagerConfiguration:
name: alertmanager
secrets: secrets:
- slack-secrets - slack-secrets
nodeSelector: nodeSelector:
@@ -94,7 +104,7 @@ spec:
   probeSelector: {}
   ruleNamespaceSelector: {}
   ruleSelector: {}
-  retentionSize: 80GB
+  retentionSize: 8GB
   storage:
     volumeClaimTemplate:
       spec:
@@ -102,7 +112,7 @@ spec:
           - ReadWriteOnce
         resources:
           requests:
-            storage: 100Gi
+            storage: 10Gi
         storageClassName: local-path
 ---
 apiVersion: v1
@@ -399,7 +409,6 @@ kind: Ingress
 metadata:
   name: prometheus
   annotations:
-    cert-manager.io/cluster-issuer: default
     traefik.ingress.kubernetes.io/router.entrypoints: websecure
     traefik.ingress.kubernetes.io/router.tls: "true"
     external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
@@ -418,15 +427,13 @@ spec:
             number: 9090
   tls:
     - hosts:
-        - prom.k-space.ee
-      secretName: prom-tls
+        - "*.k-space.ee"
 ---
 apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   name: alertmanager
   annotations:
-    cert-manager.io/cluster-issuer: default
     traefik.ingress.kubernetes.io/router.entrypoints: websecure
     traefik.ingress.kubernetes.io/router.tls: "true"
     external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
@@ -445,8 +452,7 @@ spec:
             number: 9093
   tls:
     - hosts:
-        - am.k-space.ee
-      secretName: alertmanager-tls
+        - "*.k-space.ee"
 ---
 apiVersion: monitoring.coreos.com/v1
 kind: PodMonitor
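Note that retentionSize (8GB) is deliberately kept below the 10Gi claim, since Prometheus needs headroom on the same volume for the WAL and compaction, and the Ingress objects now lean on a shared *.k-space.ee wildcard certificate instead of per-host cert-manager secrets. Disk headroom can be eyeballed in place; the namespace and the operator-style pod name prometheus-<name>-0 are assumptions here:

```
# Sketch: TSDB usage should stay comfortably under the 10Gi claim
kubectl -n prometheus-operator exec prometheus-prometheus-0 -- df -h /prometheus
```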

File diff suppressed because it is too large

@@ -87,7 +87,13 @@ spec:
       affinity:
         podAntiAffinity:
           requiredDuringSchedulingIgnoredDuringExecution:
-            - topologyKey: "kubernetes.io/hostname"
+            - labelSelector:
+                matchExpressions:
+                  - key: app
+                    operator: In
+                    values:
+                      - mikrotik-exporter
+              topologyKey: "kubernetes.io/hostname"
 ---
 kind: Service
 apiVersion: v1
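A required pod anti-affinity term is only meaningful with a labelSelector; the added selector makes the scheduler refuse to co-locate two mikrotik-exporter replicas on the same node. After a rollout this is easy to eyeball:

```
# Each replica should report a distinct NODE
kubectl get pods -A -l app=mikrotik-exporter -o wide
```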

@@ -4,11 +4,13 @@ kind: Probe
 metadata:
   name: nodes-proxmox
 spec:
+  scrapeTimeout: 30s
   targets:
     staticConfig:
       static:
         - nas.mgmt.k-space.ee:9100
         - pve1.proxmox.infra.k-space.ee:9100
+        - pve2.proxmox.infra.k-space.ee:9100
         - pve8.proxmox.infra.k-space.ee:9100
         - pve9.proxmox.infra.k-space.ee:9100
   relabelingConfigs:
@@ -86,37 +88,37 @@ spec:
           summary: Host memory under memory pressure (instance {{ $labels.instance }})
           description: The node is under heavy memory pressure. High rate of major page faults
       - alert: HostUnusualNetworkThroughputIn
-        expr: sum by (instance) (rate(node_network_receive_bytes_total[2m])) > 160e+06
+        expr: sum by (instance) (rate(node_network_receive_bytes_total[2m])) > 800e+06
         for: 1h
         labels:
           severity: warning
         annotations:
           summary: Host unusual network throughput in (instance {{ $labels.instance }})
-          description: Host network interfaces are probably receiving too much data (> 160 MB/s)
+          description: Host network interfaces are probably receiving too much data (> 800 MB/s)
       - alert: HostUnusualNetworkThroughputOut
-        expr: sum by (instance) (rate(node_network_transmit_bytes_total[2m])) > 160e+06
+        expr: sum by (instance) (rate(node_network_transmit_bytes_total[2m])) > 800e+06
         for: 1h
         labels:
           severity: warning
         annotations:
           summary: Host unusual network throughput out (instance {{ $labels.instance }})
-          description: Host network interfaces are probably sending too much data (> 160 MB/s)
+          description: Host network interfaces are probably sending too much data (> 800 MB/s)
       - alert: HostUnusualDiskReadRate
-        expr: sum by (instance) (rate(node_disk_read_bytes_total[2m])) > 50000000
+        expr: sum by (instance) (rate(node_disk_read_bytes_total[2m])) > 500e+06
         for: 1h
         labels:
           severity: warning
         annotations:
           summary: Host unusual disk read rate (instance {{ $labels.instance }})
-          description: Disk is probably reading too much data (> 50 MB/s)
+          description: Disk is probably reading too much data (> 500 MB/s)
       - alert: HostUnusualDiskWriteRate
-        expr: sum by (instance) (rate(node_disk_written_bytes_total[2m])) > 50000000
+        expr: sum by (instance) (rate(node_disk_written_bytes_total[2m])) > 500e+06
         for: 1h
         labels:
           severity: warning
         annotations:
           summary: Host unusual disk write rate (instance {{ $labels.instance }})
-          description: Disk is probably writing too much data (> 50 MB/s)
+          description: Disk is probably writing too much data (> 500 MB/s)
       # Please add ignored mountpoints in node_exporter parameters like
       # "--collector.filesystem.ignored-mount-points=^/(sys|proc|dev|run)($|/)".
       # Same rule using "node_filesystem_free_bytes" will fire when disk fills for non-root users.
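The thresholds switch to scientific notation, so 800e+06 bytes/s reads as 800 MB/s and 500e+06 as 500 MB/s, keeping the expressions and their descriptions in sync. Rule edits like these can be linted statically before applying; rules.yml below stands in for a hypothetical extract of the PrometheusRule's spec.groups:

```
# Sketch: static validation of the edited alerting rules
promtool check rules rules.yml
```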
@@ -361,11 +363,13 @@ kind: PodMonitor
 metadata:
   name: node-exporter
 spec:
   selector:
     matchLabels:
       app: node-exporter
   podMetricsEndpoints:
     - port: web
+      scrapeTimeout: 30s
       relabelings:
         - sourceLabels: [__meta_kubernetes_pod_node_name]
           targetLabel: node
@@ -402,9 +406,10 @@ spec:
         - --path.rootfs=/host/root
         - --no-collector.wifi
         - --collector.filesystem.mount-points-exclude=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/pods/.+)($|/)
-        - --collector.netclass.ignored-devices=^(veth.*|[a-f0-9]{15})$
-        - --collector.netdev.device-exclude=^(veth.*|[a-f0-9]{15})$
-        image: prom/node-exporter:v1.3.1
+        - --collector.netclass.ignored-devices=^(veth|cali|vxlan|cni|vnet|tap|lo|wg)
+        - --collector.netdev.device-exclude=^(veth|cali|vxlan|cni|vnet|tap|lo|wg)
+        - --collector.diskstats.ignored-devices=^(sr[0-9][0-9]*)$
+        image: prom/node-exporter:v1.5.0
         resources:
           limits:
             cpu: 50m
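The widened exclude regexes drop virtual interfaces (veth pairs, Calico's cali* devices, VXLAN/CNI/tap devices and so on) from per-interface metrics, trimming scrape size and series cardinality on busy Kubernetes nodes. One way to confirm after the v1.5.0 rollout, with the namespace an assumption:

```
# Forward a node-exporter pod and check that only physical NICs remain
kubectl -n prometheus-operator port-forward ds/node-exporter 9100:9100 &
curl -s localhost:9100/metrics | grep '^node_network_receive_bytes_total'
```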

storage-class.yaml Normal file
@@ -0,0 +1,55 @@
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: mongo
provisioner: rawfile.csi.openebs.io
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
fsType: "xfs"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: minio
provisioner: rawfile.csi.openebs.io
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
fsType: "xfs"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: prometheus
provisioner: rawfile.csi.openebs.io
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
fsType: "xfs"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: postgres
provisioner: rawfile.csi.openebs.io
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
fsType: "xfs"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: mysql
provisioner: rawfile.csi.openebs.io
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
fsType: "xfs"
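All five classes share the same rawfile CSI provisioner and XFS filesystem; the split per workload type mainly buys separate Retain policies and room to tune parameters per database later. WaitForFirstConsumer delays binding until a pod is scheduled, so the volume is carved out on the node the pod actually lands on. A hypothetical claim against one of these classes:

```
# Illustrative PVC consuming the "mongo" class; name and size are made up
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-data-example
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: mongo
  resources:
    requests:
      storage: 1Gi
EOF
```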

@@ -5,5 +5,6 @@ Calico implements the inter-pod overlay network
 ```
 curl https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml -O
 curl https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml -O
-kubectl apply -f tigera-operator.yaml -f custom-resources.yaml
+kubectl apply -f custom-resources.yaml
+kubectl replace -f tigera-operator.yaml
 ```
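Switching the operator manifest to kubectl replace likely works around client-side apply choking on the very large Tigera CRDs, whose last-applied-configuration annotation exceeds the annotation size limit. Server-side apply is another way around the same problem:

```
# Alternative that avoids the oversized last-applied-configuration annotation
kubectl apply --server-side -f tigera-operator.yaml
```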

@@ -1,64 +0,0 @@
#!/bin/bash
NAMESPACE=${NAMESPACE:-longhorn-system}
remove_and_wait() {
local crd=$1
out=`kubectl -n ${NAMESPACE} delete $crd --all 2>&1`
if [ $? -ne 0 ]; then
echo $out
return
fi
while true; do
out=`kubectl -n ${NAMESPACE} get $crd -o yaml | grep 'items: \[\]'`
if [ $? -eq 0 ]; then
break
fi
sleep 1
done
echo all $crd instances deleted
}
remove_crd_instances() {
remove_and_wait volumes.longhorn.rancher.io
# TODO: remove engines and replicas once we fix https://github.com/rancher/longhorn/issues/273
remove_and_wait engines.longhorn.rancher.io
remove_and_wait replicas.longhorn.rancher.io
remove_and_wait engineimages.longhorn.rancher.io
remove_and_wait settings.longhorn.rancher.io
# do this one last; manager crashes
remove_and_wait nodes.longhorn.rancher.io
}
# Delete driver related workloads in specific order
remove_driver() {
kubectl -n ${NAMESPACE} delete deployment.apps/longhorn-driver-deployer
kubectl -n ${NAMESPACE} delete daemonset.apps/longhorn-csi-plugin
kubectl -n ${NAMESPACE} delete statefulset.apps/csi-attacher
kubectl -n ${NAMESPACE} delete service/csi-attacher
kubectl -n ${NAMESPACE} delete statefulset.apps/csi-provisioner
kubectl -n ${NAMESPACE} delete service/csi-provisioner
kubectl -n ${NAMESPACE} delete daemonset.apps/longhorn-flexvolume-driver
}
# Delete all workloads in the namespace
remove_workloads() {
kubectl -n ${NAMESPACE} get daemonset.apps -o yaml | kubectl delete -f -
kubectl -n ${NAMESPACE} get deployment.apps -o yaml | kubectl delete -f -
kubectl -n ${NAMESPACE} get replicaset.apps -o yaml | kubectl delete -f -
kubectl -n ${NAMESPACE} get statefulset.apps -o yaml | kubectl delete -f -
kubectl -n ${NAMESPACE} get pods -o yaml | kubectl delete -f -
kubectl -n ${NAMESPACE} get service -o yaml | kubectl delete -f -
}
# Delete CRD definitions with longhorn.rancher.io in the name
remove_crds() {
for crd in $(kubectl get crd -o jsonpath={.items[*].metadata.name} | tr ' ' '\n' | grep longhorn.rancher.io); do
kubectl delete crd/$crd
done
}
remove_crd_instances
remove_driver
remove_workloads
remove_crds

@@ -1,5 +1,5 @@
 # This section includes base Calico installation configuration.
-# For more information, see: https://projectcalico.docs.tigera.io/v3.23/reference/installation/api#operator.tigera.io/v1.Installation
+# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
 apiVersion: operator.tigera.io/v1
 kind: Installation
 metadata:
@@ -10,7 +10,7 @@ spec:
   # Note: The ipPools section cannot be modified post-install.
   ipPools:
     - blockSize: 26
-      cidr: 192.168.0.0/16
+      cidr: 10.244.0.0/16
       encapsulation: VXLANCrossSubnet
       natOutgoing: Enabled
       nodeSelector: all()
@@ -18,7 +18,7 @@ spec:
 ---
 # This section configures the Calico API server.
-# For more information, see: https://projectcalico.docs.tigera.io/v3.23/reference/installation/api#operator.tigera.io/v1.APIServer
+# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
 apiVersion: operator.tigera.io/v1
 kind: APIServer
 metadata:
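Since the ipPools section cannot be changed after installation, moving to 10.244.0.0/16 only takes effect on a fresh install or after recreating the pool, so on a live cluster the active pool is worth double-checking. calicoctl availability and the default pool name are assumptions here:

```
# Confirm the active pool really carries the new CIDR
calicoctl get ippool default-ipv4-ippool -o wide
```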

File diff suppressed because it is too large

@@ -64,8 +64,16 @@ spec:
             number: 9000
   tls:
     - hosts:
-        - traefik.k-space.ee
-      secretName: traefik-tls
+        - "*.k-space.ee"
+      secretName: wildcard-tls
+---
+apiVersion: traefik.containo.us/v1alpha1
+kind: TLSStore
+metadata:
+  name: default
+spec:
+  defaultCertificate:
+    secretName: wildcard-tls
 ---
 apiVersion: traefik.containo.us/v1alpha1
 kind: Middleware
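The TLSStore named default tells Traefik to serve the wildcard-tls certificate for any TLS router that does not name its own, which is what lets the Ingress objects below drop their per-host secretName fields. The certificate actually presented for an arbitrary vhost can be checked from outside:

```
# Sketch: inspect the cert Traefik serves via SNI for one of the hosts
openssl s_client -connect traefik.k-space.ee:443 -servername whoami.k-space.ee </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -ext subjectAltName
```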

@@ -104,7 +104,6 @@ metadata:
   name: pve
   annotations:
     kubernetes.io/ingress.class: traefik
-    cert-manager.io/cluster-issuer: default
     external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
     traefik.ingress.kubernetes.io/router.entrypoints: websecure
     traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd,traefik-proxmox-redirect@kubernetescrd
@@ -147,9 +146,7 @@ spec:
             number: 8006
   tls:
     - hosts:
-        - pve.k-space.ee
-        - proxmox.k-space.ee
-      secretName: pve-tls
+        - "*.k-space.ee"
 ---
 apiVersion: traefik.containo.us/v1alpha1
 kind: Middleware

@@ -1,12 +1,16 @@
 image:
-  tag: "2.8"
+  tag: "2.9"
 websecure:
   tls:
     enabled: true
 providers:
+  kubernetesCRD:
+    enabled: true
   kubernetesIngress:
+    allowEmptyServices: true
     allowExternalNameServices: true
 deployment:
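Besides the 2.9 image bump, the values now enable the kubernetesCRD provider explicitly (needed for the TLSStore and Middleware objects above) and set allowEmptyServices, which keeps a router alive when its backend Service momentarily has no ready endpoints instead of dropping the route. Rolling it out would look roughly like this, with release and namespace names assumed:

```
helm repo add traefik https://traefik.github.io/charts
helm upgrade --install traefik traefik/traefik -n traefik -f values.yaml
```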

@@ -17,7 +17,6 @@ metadata:
   name: voron
   annotations:
     kubernetes.io/ingress.class: traefik
-    cert-manager.io/cluster-issuer: default
     traefik.ingress.kubernetes.io/router.entrypoints: websecure
     traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
     traefik.ingress.kubernetes.io/router.tls: "true"
@@ -36,5 +35,4 @@ spec:
             name: http
   tls:
     - hosts:
-        - voron.k-space.ee
-      secretName: voron-tls
+        - "*.k-space.ee"

@@ -41,7 +41,6 @@ kind: Ingress
 metadata:
   name: whoami
   annotations:
-    cert-manager.io/cluster-issuer: default
     external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
     kubernetes.io/ingress.class: traefik
     traefik.ingress.kubernetes.io/router.entrypoints: websecure
@@ -50,8 +49,7 @@ metadata:
 spec:
   tls:
     - hosts:
-        - "whoami.k-space.ee"
-      secretName: whoami-tls
+        - "*.k-space.ee"
   rules:
     - host: "whoami.k-space.ee"
       http:

@@ -104,7 +104,6 @@ metadata:
   namespace: wildduck
   annotations:
     kubernetes.io/ingress.class: traefik
-    cert-manager.io/cluster-issuer: default
     traefik.ingress.kubernetes.io/router.entrypoints: websecure
     traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
     traefik.ingress.kubernetes.io/router.tls: "true"
@@ -123,8 +122,7 @@ spec:
             number: 80
   tls:
     - hosts:
-        - webmail.k-space.ee
-      secretName: webmail-tls
+        - "*.k-space.ee"
 ---
 apiVersion: codemowers.io/v1alpha1
 kind: KeyDBCluster