Compare commits

..

1 commit

Author: Danyliuk
SHA1: 73e5a9d7c1
Message: logging: added deployment namespace
Date: 2022-10-08 21:03:56 +03:00
261 changed files with 77727 additions and 19822 deletions

.drone.yml (new file, 10 lines)

@ -0,0 +1,10 @@
---
kind: pipeline
type: kubernetes
name: gitleaks
steps:
- name: gitleaks
image: zricethezav/gitleaks
commands:
- gitleaks detect --source=/drone/src

.gitignore (vendored, 1 change)

@ -1,4 +1,3 @@
*.keys
*secrets.yml
*secret.yml
*.swp


@ -1,4 +0,0 @@
extends: default
ignore-from-file: .gitignore
rules:
line-length: disable


@ -1,170 +0,0 @@
# Kubernetes cluster
Kubernetes hosts run on [PVE Cluster](https://wiki.k-space.ee/en/hosting/proxmox). Hosts are listed in Ansible [inventory](ansible/inventory.yml).
## `kubectl`
- Authorization [ACLs](cluster-role-bindings.yml)
- [Troubleshooting `no such host`](#systemd-resolved-issues)
Authenticate to auth.k-space.ee:
```bash
kubectl krew install oidc-login
mkdir -p ~/.kube
cat << EOF > ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1EVXdNakEzTXpVMU1Wb1hEVE15TURReU9UQTNNelUxTVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS2J2CjY3UFlXVHJMc3ZCQTZuWHUvcm55SlVhNnppTnNWTVN6N2w4ekhxM2JuQnhqWVNPUDJhN1RXTnpUTmZDanZBWngKTmlNbXJya1hpb2dYQWpVVkhSUWZlYm81TFIrb0JBOTdLWlcrN01UMFVJRXBuWVVaaTdBRHlaS01vcEJFUXlMNwp1SlU5UDhnNUR1T29FRHZieGJSMXFuV1JZRXpteFNmSFpocllpMVA3bFd4emkxR243eGRETFZaMjZjNm0xR3Y1CnViRjZyaFBXK1JSVkhiQzFKakJGeTBwRXdhYlUvUTd0Z2dic0JQUjk5NVZvMktCeElBelRmbHhVanlYVkJ3MjEKU2d3ZGI1amlpemxEM0NSbVdZZ0ZrRzd0NTVZeGF3ZmpaQjh5bW4xYjhUVjkwN3dRcG8veU8zM3RaaEE3L3BFUwpBSDJYeDk5bkpMbFVGVUtSY1A4Q0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZKNnZKeVk1UlJ1aklQWGxIK2ZvU3g2QzFRT2RNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQ04zcGtCTVM3ekkrbUhvOWdTZQp6SzdXdjl3bXlCTVE5Q3crQXBSNnRBQXg2T1VIN0d1enc5TTV2bXNkYjkrYXBKMHBlZFB4SUg3YXZ1aG9SUXNMCkxqTzRSVm9BMG9aNDBZV3J3UStBR0dvdkZuaWNleXRNcFVSNEZjRXc0ZDRmcGl6V3d0TVNlRlRIUXR6WG84V2MKNFJGWC9xUXNVR1NWa01PaUcvcVVrSFpXQVgyckdhWXZ1Tkw2eHdSRnh5ZHpsRTFSUk56TkNvQzVpTXhjaVRNagpackEvK0pqVEFWU2FuNXZnODFOSmthZEphbmNPWmEwS3JEdkZzd1JJSG5CMGpMLzh3VmZXSTV6czZURU1VZUk1ClF6dU01QXUxUFZ4VXZJUGhlMHl6UXZjWDV5RlhnMkJGU3MzKzJBajlNcENWVTZNY2dSSTl5TTRicitFTUlHL0kKY0pjPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://master.kube.k-space.ee:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: oidc
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: oidc
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://auth.k-space.ee/
      - --oidc-client-id=passmower.kubelogin
      - --oidc-use-pkce
      - --oidc-extra-scope=profile,email,groups
      - --listen-address=127.0.0.1:27890
      command: kubectl
      env: null
      provideClusterInfo: false
EOF
# Test it:
kubectl get nodes # opens browser for authentication
```
### systemd-resolved issues
```
Unable to connect to the server: dial tcp: lookup master.kube.k-space.ee on 127.0.0.53:53: no such host
```
Fix by overriding DNS for the VPN connection in NetworkManager:
```
Network → VPN → `IPv4` → Other nameservers (Muud nimeserverid): `172.21.0.1`
Network → VPN → `IPv6` → Other nameservers (Muud nimeserverid): `2001:bb8:4008:21::1`
Network → VPN → `IPv4` → Search domains (Otsingudomeenid): `kube.k-space.ee`
Network → VPN → `IPv6` → Search domains (Otsingudomeenid): `kube.k-space.ee`
```
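On headless machines the same can be done with `resolvectl` (a sketch; the VPN interface name `tun0` is an assumption, check `resolvectl status`):
```bash
resolvectl dns tun0 172.21.0.1 2001:bb8:4008:21::1
resolvectl domain tun0 kube.k-space.ee
```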
## Cluster formation
Created Ubuntu 22.04 VMs on Proxmox with local storage.
Added some ARM64 workers using Ubuntu 22.04 Server on Raspberry Pi.
After machines have booted up and you can reach them via SSH:
```
# Disable Ubuntu caching DNS resolver
systemctl disable systemd-resolved.service
systemctl stop systemd-resolved
rm -fv /etc/resolv.conf
cat > /etc/resolv.conf << EOF
nameserver 1.1.1.1
nameserver 8.8.8.8
EOF
# Disable multipathd as Longhorn handles that itself
systemctl mask multipathd snapd
systemctl disable --now multipathd snapd bluetooth ModemManager hciuart wpa_supplicant packagekit
# Permit root login
sed -i -e 's/PermitRootLogin no/PermitRootLogin without-password/' /etc/ssh/sshd_config
systemctl reload ssh
cat ~ubuntu/.ssh/authorized_keys > /root/.ssh/authorized_keys
userdel -f ubuntu
apt-get install -yqq linux-image-generic
apt-get remove -yq cloud-init linux-image-*-kvm
```
On master:
```
kubeadm init --token-ttl=120m --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint "master.kube.k-space.ee:6443" --upload-certs --apiserver-cert-extra-sans master.kube.k-space.ee --node-name master1.kube.k-space.ee
```
For the `kubeadm join` command specify FQDN via `--node-name $(hostname -f)`.
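A sketch of the resulting worker join (the token and CA hash placeholders come from the `kubeadm init` output):
```bash
kubeadm join master.kube.k-space.ee:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --node-name $(hostname -f)
```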
Set AZ labels:
```
for j in $(seq 1 9); do
for t in master mon worker storage; do
kubectl label nodes ${t}${j}.kube.k-space.ee topology.kubernetes.io/zone=node${j}
done
done
```
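To verify the labels:
```bash
kubectl get nodes -L topology.kubernetes.io/zone
```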
After forming the cluster add taints:
```bash
for j in $(seq 1 9); do
kubectl label nodes worker${j}.kube.k-space.ee node-role.kubernetes.io/worker=''
done
for j in $(seq 1 4); do
kubectl taint nodes mon${j}.kube.k-space.ee dedicated=monitoring:NoSchedule
kubectl label nodes mon${j}.kube.k-space.ee dedicated=monitoring
done
for j in $(seq 1 4); do
kubectl taint nodes storage${j}.kube.k-space.ee dedicated=storage:NoSchedule
kubectl label nodes storage${j}.kube.k-space.ee dedicated=storage
done
```
For `arm64` nodes add a suitable taint to prevent scheduling non-multiarch images on them:
```bash
kubectl taint nodes worker9.kube.k-space.ee arch=arm64:NoSchedule
```
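A multi-arch workload can then opt in with a matching toleration; a sketch patching a hypothetical `myapp` Deployment:
```bash
kubectl patch deployment myapp --type merge -p \
  '{"spec":{"template":{"spec":{"tolerations":[{"key":"arch","operator":"Equal","value":"arm64","effect":"NoSchedule"}]}}}}'
```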
For door controllers:
```
for j in ground front back; do
kubectl taint nodes door-${j}.kube.k-space.ee dedicated=door:NoSchedule
kubectl label nodes door-${j}.kube.k-space.ee dedicated=door
kubectl taint nodes door-${j}.kube.k-space.ee arch=arm64:NoSchedule
done
```
To reduce wear on storage:
```
echo StandardOutput=null >> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
systemctl restart kubelet
```
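To confirm the override took effect:
```bash
systemctl show kubelet -p StandardOutput  # should print StandardOutput=null
```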
## Technology mapping
Our self-hosted Kubernetes stack compared to AWS based deployments:
| Hipster startup | Self-hosted hackerspace | Purpose |
|-------------------|-------------------------------------|---------------------------------------------------------------------|
| AWS ALB | Traefik | Reverse proxy also known as ingress controller in Kubernetes jargon |
| AWS AMP | Prometheus Operator | Monitoring and alerting |
| AWS CloudTrail | ECK Operator | Log aggregation |
| AWS DocumentDB | MongoDB Community Operator | Highly available NoSQL database |
| AWS EBS | Longhorn | Block storage for arbitrary applications needing persistent storage |
| AWS EC2 | Proxmox | Virtualization layer |
| AWS ECR | Harbor | Docker registry |
| AWS EKS | kubeadm | Provision Kubernetes master nodes |
| AWS NLB | MetalLB | L2/L3 level load balancing |
| AWS RDS for MySQL | MySQL Operator | Provision highly available relational databases |
| AWS Route53 | Bind and RFC2136 | DNS records and Let's Encrypt DNS validation |
| AWS S3 | Minio Operator | Highly available object storage |
| AWS VPC | Calico | Overlay network |
| Dex | Passmower | ACL mapping and OIDC provider which integrates with GitHub/Samba |
| GitHub Actions | Woodpecker | Build Docker images |
| GitHub | Gitea | Source code management, issue tracking |
| GitHub OAuth2 | Samba (Active Directory compatible) | Source of truth for authentication and authorization |
| Gmail | Wildduck | E-mail |


@ -10,4 +10,3 @@ this Git repository happen:
* Song Meo <songmeo@k-space.ee>
* Rasmus Kallas <rasmus@k-space.ee>
* Kristjan Kuusk <kkuusk@k-space.ee>
* Erki Aas <eaas@k-space.ee>

README.md (285 changes)

@ -1,52 +1,261 @@
# k-space.ee infrastructure
Kubernetes manifests, Ansible [playbooks](ansible/README.md), and documentation for K-SPACE services.
# Kubernetes cluster manifests
<!-- TODO: Docs for adding to ArgoCD (auto-)sync -->
- Repo is deployed with [ArgoCD](https://argocd.k-space.ee). For `kubectl` access, see [CLUSTER.md](CLUSTER.md#kubectl).
- Debugging Kubernetes [on Wiki](https://wiki.k-space.ee/en/hosting/debugging-kubernetes)
- Need help? → [`#kube`](https://k-space-ee.slack.com/archives/C02EYV1NTM2)
## Introduction
Jump to docs: [inventory-app](hackerspace/README.md) / [cameras](camtiler/README.md) / [doors](https://wiki.k-space.ee/en/hosting/doors) / [list of apps](https://auth.k-space.ee) // [all infra](ansible/inventory.yml) / [network](https://wiki.k-space.ee/en/hosting/network/sensitive) / [retro](https://wiki.k-space.ee/en/hosting/retro) / [non-infra](https://wiki.k-space.ee)
These are the Kubernetes manifests of services running on k-space.ee domains:
Tip: Search the repo for `kind: xyz` for examples.
- [Authelia](https://auth.k-space.ee) for authentication
- [Drone.io](https://drone.k-space.ee) for building Docker images
- [Harbor](https://harbor.k-space.ee) for hosting Docker images
- [ArgoCD](https://argocd.k-space.ee) for deploying Kubernetes manifests and
Helm charts into the cluster
- [camtiler](https://cams.k-space.ee) for cameras
- [Longhorn Dashboard](https://longhorn.k-space.ee) for administering
Longhorn storage
- [Kubernetes Dashboard](https://kubernetes-dashboard.k-space.ee/) for a read-only overview
of the Kubernetes cluster
- [Wildduck Webmail](https://webmail.k-space.ee/)
## Supporting services
- Build [Git](https://git.k-space.ee) repositories with [Woodpecker](https://woodpecker.k-space.ee).
- Passmower: Authz with `kind: OIDCClient` (or `kind: OIDCMiddlewareClient`[^authz]).
- Traefik[^nonginx]: Expose services with `kind: Service` + `kind: Ingress` (TLS and DNS **included**).
Most endpoints are protected by OIDC authentication or Authelia SSO middleware.
### Additional
- bind: Manage _additional_ DNS records with `kind: DNSEndpoint`.
- [Prometheus](https://wiki.k-space.ee/en/hosting/monitoring): Collect metrics with `kind: PodMonitor` (alerts with `kind: PrometheusRule`).
- [Slack bots](SLACK.md) and Kubernetes [CLUSTER.md](CLUSTER.md) itself.
<!-- TODO: Redirects: external-dns.alpha.kubernetes.io/hostname + in -extras.yaml: IngressRoute and Middleware -->
[^nonginx]: No nginx annotations! Use `kind: Ingress` instead. `IngressRoute` is not used as it doesn't support [`external-dns`](bind/README.md) out of the box.
[^authz]: Applications should use OpenID Connect (`kind: OIDCClient`) for authentication wherever possible. If not possible, use `kind: OIDCMiddlewareClient`, which provides authentication via a Traefik middleware (`traefik.ingress.kubernetes.io/router.middlewares: passmower-proxmox@kubernetescrd`). Sometimes you might use both for extra security.
## Cluster access
### Network
General discussion is happening in the `#kube` Slack channel.
All nodes are in Infra VLAN 21. Routing is implemented with BGP; all nodes and the router form a full mesh. Both Service LB IPs and Pod IPs are advertised to the router. The router does NAT for outbound pod traffic.
See the [Calico installation](tigera-operator/application.yml) for Kube side and Routing / BGP in the router.
Static routes for 193.40.103.36/30 have been added on the PVE nodes to make their communication with Passmower via Traefik more stable; otherwise packets coming back to PVE are routed by the worker nodes directly via VLAN 21 internal IPs, breaking TCP.
For bootstrap access obtain `/etc/kubernetes/admin.conf` from one of the master
nodes and place it under `~/.kube/config` on your machine.
<!-- Linked to by https://wiki.k-space.ee/e/en/hosting/storage -->
### Databases / datastores:
- KeyDB: `kind: KeydbClaim` (replaces Redis[^redisdead])
- Dragonfly: `kind: Dragonfly` (replaces Redis[^redisdead])
- Longhorn: `storageClassName: longhorn` (filesystem storage)
- Mongo[^mongoproblems]: `kind: MongoDBCommunity` (NAS* `inventory-mongodb`)
- Minio S3: `kind: MinioBucketClaim` with `class: dedicated` (NAS*: `class: external`)
- MariaDB*: search for `mysql`, `mariadb`[^mariadb] (replaces MySQL)
- Postgres*: hardcoded to [harbor/application.yml](harbor/application.yml)
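For example, a minimal `MinioBucketClaim` from the list above (a sketch; the `capacity` and `class` field names assume the codemowers.cloud CRD schema, and the namespace/name are illustrative):
```bash
kubectl apply -n myapp -f - << EOF
apiVersion: codemowers.cloud/v1beta1
kind: MinioBucketClaim
metadata:
  name: myapp
spec:
  capacity: 1Gi
  class: dedicated
EOF
```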
Once Authelia is working, OIDC access for others can be enabled by
running the following on Kubernetes masters:
\* External, hosted directly on [nas.k-space.ee](https://wiki.k-space.ee/en/hosting/storage)
```bash
patch /etc/kubernetes/manifests/kube-apiserver.yaml - << EOF
@@ -23,6 +23,10 @@
- --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
- --etcd-servers=https://127.0.0.1:2379
+ - --oidc-issuer-url=https://auth.k-space.ee
+ - --oidc-client-id=kubelogin
+ - --oidc-username-claim=preferred_username
+ - --oidc-groups-claim=groups
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
- --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
EOF
sudo systemctl daemon-reload
systemctl restart kubelet
```
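To verify the flags were applied to the static pod manifest:
```bash
grep oidc /etc/kubernetes/manifests/kube-apiserver.yaml
```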
[^mariadb]: As of 2024-07-30 used by auth, authelia, bitwarden, etherpad, freescout, git, grafana, nextcloud, wiki, woodpecker
Afterwards the following can be used to talk to the Kubernetes cluster using
OIDC credentials:
[^redisdead]: Redis has been replaced because redis-operator couldn't manage itself: it didn't reconcile after reboots, the master URI was empty, and clients complained about missing masters. ArgoCD still hosts its own Redis.
```bash
kubectl krew install oidc-login
mkdir -p ~/.kube
cat << EOF > ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1EVXdNakEzTXpVMU1Wb1hEVE15TURReU9UQTNNelUxTVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS2J2CjY3UFlXVHJMc3ZCQTZuWHUvcm55SlVhNnppTnNWTVN6N2w4ekhxM2JuQnhqWVNPUDJhN1RXTnpUTmZDanZBWngKTmlNbXJya1hpb2dYQWpVVkhSUWZlYm81TFIrb0JBOTdLWlcrN01UMFVJRXBuWVVaaTdBRHlaS01vcEJFUXlMNwp1SlU5UDhnNUR1T29FRHZieGJSMXFuV1JZRXpteFNmSFpocllpMVA3bFd4emkxR243eGRETFZaMjZjNm0xR3Y1CnViRjZyaFBXK1JSVkhiQzFKakJGeTBwRXdhYlUvUTd0Z2dic0JQUjk5NVZvMktCeElBelRmbHhVanlYVkJ3MjEKU2d3ZGI1amlpemxEM0NSbVdZZ0ZrRzd0NTVZeGF3ZmpaQjh5bW4xYjhUVjkwN3dRcG8veU8zM3RaaEE3L3BFUwpBSDJYeDk5bkpMbFVGVUtSY1A4Q0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZKNnZKeVk1UlJ1aklQWGxIK2ZvU3g2QzFRT2RNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQ04zcGtCTVM3ekkrbUhvOWdTZQp6SzdXdjl3bXlCTVE5Q3crQXBSNnRBQXg2T1VIN0d1enc5TTV2bXNkYjkrYXBKMHBlZFB4SUg3YXZ1aG9SUXNMCkxqTzRSVm9BMG9aNDBZV3J3UStBR0dvdkZuaWNleXRNcFVSNEZjRXc0ZDRmcGl6V3d0TVNlRlRIUXR6WG84V2MKNFJGWC9xUXNVR1NWa01PaUcvcVVrSFpXQVgyckdhWXZ1Tkw2eHdSRnh5ZHpsRTFSUk56TkNvQzVpTXhjaVRNagpackEvK0pqVEFWU2FuNXZnODFOSmthZEphbmNPWmEwS3JEdkZzd1JJSG5CMGpMLzh3VmZXSTV6czZURU1VZUk1ClF6dU01QXUxUFZ4VXZJUGhlMHl6UXZjWDV5RlhnMkJGU3MzKzJBajlNcENWVTZNY2dSSTl5TTRicitFTUlHL0kKY0pjPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://master.kube.k-space.ee:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: oidc
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: oidc
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://auth.k-space.ee
      - --oidc-client-id=kubelogin
      - --oidc-use-pkce
      - --oidc-extra-scope=profile,email,groups
      - --listen-address=127.0.0.1:27890
      command: kubectl
      env: null
      provideClusterInfo: false
EOF
```
[^mongoproblems]: Mongo problems: incompatible with rawfile CSI (wiredtiger.wt corrupts); resizing is complicated (PVCs come from the StatefulSet PVC template).
For access control mapping see [cluster-role-bindings.yml](cluster-role-bindings.yml)
***
_This page is referenced by wiki [front page](https://wiki.k-space.ee) as **the** technical documentation for infra._
# Technology mapping
Our self-hosted Kubernetes stack compared to AWS based deployments:
| Hipster startup | Self-hosted hackerspace | Purpose |
|-------------------|-------------------------------------|---------------------------------------------------------------------|
| AWS ALB | Traefik | Reverse proxy also known as ingress controller in Kubernetes jargon |
| AWS AMP | Prometheus Operator | Monitoring and alerting |
| AWS CloudTrail | ECK Operator | Log aggregation |
| AWS DocumentDB | MongoDB Community Operator | Highly available NoSQL database |
| AWS EBS | Longhorn | Block storage for arbitrary applications needing persistent storage |
| AWS EC2 | Proxmox | Virtualization layer |
| AWS ECR | Harbor | Docker registry |
| AWS EKS | kubeadm | Provision Kubernetes master nodes |
| AWS NLB | MetalLB | L2/L3 level load balancing |
| AWS RDS for MySQL | MySQL Operator | Provision highly available relational databases |
| AWS Route53 | Bind and RFC2136 | DNS records and Let's Encrypt DNS validation |
| AWS S3 | Minio Operator | Highly available object storage |
| AWS VPC | Calico | Overlay network |
| Dex | Authelia | ACL mapping and OIDC provider which integrates with GitHub/Samba |
| GitHub Actions | Drone | Build Docker images |
| GitHub | Gitea | Source code management, issue tracking |
| GitHub OAuth2 | Samba (Active Directory compatible) | Source of truth for authentication and authorization |
| Gmail | Wildduck | E-mail |
External dependencies running as classic virtual machines:
- Samba as Authelia's source of truth
- Bind as DNS server
## Adding applications
Deploy applications via [ArgoCD](https://argocd.k-space.ee)
We use Traefik with Authelia for Ingress.
Where possible and applicable, applications should use `Remote-User`
authentication. This prevents exposing the application on the public Internet.
Otherwise use OpenID Connect for authentication;
see Argo itself as an example of how that is done.
See `kspace-camtiler/ingress.yml` for commented Ingress example.
Note that we do not use IngressRoute objects because they don't
support `external-dns` out of the box.
Do NOT add nginx annotations, we use Traefik.
Do NOT manually add DNS records, they are added by `external-dns`.
Do NOT manually create Certificate objects,
these should be handled by `tls:` section in Ingress.
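A minimal sketch following these conventions (the app name `myapp` and its backend are illustrative; the annotations mirror the ArgoCD Ingress values elsewhere in this repo):
```bash
# Sketch: a conventional Ingress; the TLS certificate and DNS record
# are created automatically by cert-manager and external-dns.
kubectl apply -f - << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    cert-manager.io/cluster-issuer: default
    external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
spec:
  rules:
  - host: myapp.k-space.ee
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80
  tls:
  - hosts:
    - myapp.k-space.ee
    secretName: myapp-tls
EOF
```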
## Cluster formation
Create Ubuntu 20.04 VMs on Proxmox with local storage.
After machines have booted up and you can reach them via SSH:
```bash
# Enable required kernel modules
cat > /etc/modules << EOF
overlay
br_netfilter
EOF
cat /etc/modules | xargs -L 1 -t modprobe
# Finetune sysctl:
cat > /etc/sysctl.d/99-k8s.conf << EOF
net.ipv4.conf.all.accept_redirects = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system
# Disable Ubuntu caching DNS resolver
systemctl disable systemd-resolved.service
systemctl stop systemd-resolved
rm -fv /etc/resolv.conf
cat > /etc/resolv.conf << EOF
nameserver 1.1.1.1
nameserver 8.8.8.8
EOF
# Disable multipathd as Longhorn handles that itself
systemctl mask multipathd
systemctl disable multipathd
systemctl stop multipathd
# Disable Snapcraft
systemctl mask snapd
systemctl disable snapd
systemctl stop snapd
# Permit root login
sed -i -e 's/PermitRootLogin no/PermitRootLogin without-password/' /etc/ssh/sshd_config
systemctl reload ssh
cat << EOF > /root/.ssh/authorized_keys
sk-ecdsa-sha2-nistp256@openssh.com AAAAInNrLWVjZHNhLXNoYTItbmlzdHAyNTZAb3BlbnNzaC5jb20AAAAIbmlzdHAyNTYAAABBBD4/e9SWYWYoNZMkkF+NirhbmHuUgjoCap42kAq0pLIXFwIqgVTCre03VPoChIwBClc8RspLKqr5W3j0fG8QwnQAAAAEc3NoOg== lauri@lauri-x13
EOF
userdel -f ubuntu
apt-get remove -yq cloud-init
```
Install packages; for Raspbian set `OS=Debian_11`:
```bash
OS=xUbuntu_20.04
VERSION=1.23
cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /
EOF
cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /
EOF
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers-cri-o.gpg add -
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -yqq apt-transport-https curl cri-o cri-o-runc kubelet=1.23.5-00 kubectl=1.23.5-00 kubeadm=1.23.5-00
sudo systemctl daemon-reload
sudo systemctl enable crio --now
apt-mark hold kubelet kubeadm kubectl
sed -i -e 's/unqualified-search-registries = .*/unqualified-search-registries = ["docker.io"]/' /etc/containers/registries.conf
```
On master:
```
kubeadm init --token-ttl=120m --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint "master.kube.k-space.ee:6443" --upload-certs --apiserver-cert-extra-sans master.kube.k-space.ee --node-name master1.kube.k-space.ee
```
For the `kubeadm join` command specify FQDN via `--node-name $(hostname -f)`.
After forming the cluster add taints:
```bash
for j in $(seq 1 9); do
kubectl label nodes worker${j}.kube.k-space.ee node-role.kubernetes.io/worker=''
done
for j in $(seq 1 3); do
kubectl taint nodes mon${j}.kube.k-space.ee dedicated=monitoring:NoSchedule
kubectl label nodes mon${j}.kube.k-space.ee dedicated=monitoring
done
for j in $(seq 1 4); do
kubectl taint nodes storage${j}.kube.k-space.ee dedicated=storage:NoSchedule
kubectl label nodes storage${j}.kube.k-space.ee dedicated=storage
done
```
On Raspberry Pi you need to take additional steps:
* Manually enable cgroups by appending
`cgroup_memory=1 cgroup_enable=memory` to `/boot/cmdline.txt`
* Disable swap with `swapoff -a; apt-get purge -y dphys-swapfile`
* For mounting Longhorn volumes on Raspbian, install `open-iscsi`
For `arm64` nodes add a suitable taint to prevent scheduling non-multiarch images on them:
```bash
kubectl taint nodes worker9.kube.k-space.ee arch=arm64:NoSchedule
```


@ -1,28 +0,0 @@
## Slack bots
### Doorboy3
https://api.slack.com/apps/A05NDB6FVJQ
Slack app author: rasmus
Managed by inventory-app:
- Incoming (open-commands) to `/api/slack/doorboy`; inventory-app authorizes based on the command originating from #members or #work-shop and the user's OIDC access group (floor, workshop).
- Posts logs to a private channel. Restricted to 193.40.103.0/24.
Secrets as `SLACK_DOORLOG_CALLBACK` and `SLACK_VERIFICATION_TOKEN`.
### oidc-gateway
https://api.slack.com/apps/A05DART9PP1
Slack app author: eaas
Managed by passmower:
- Links e-mail to slackId.
- Login via Slack (not enabled).
Secrets as `slackId` and `slack-client`.
### podi-podi uuenduste spämmikoobas
https://api.slack.com/apps/A033RE9TUFK
Slack app author: rasmus
Posts Prometheus alerts to a private channel.
Secret as `slack-secrets`.


@ -1,9 +1,7 @@
# Workflow
Most applications in our Kubernetes cluster are managed by ArgoCD.
Most notably, operators are NOT managed by ArgoCD.
Adding to `applications/`: `kubectl apply -f newapp.yaml`
# Deployment
@ -13,15 +11,16 @@ To deploy ArgoCD:
helm repo add argo-cd https://argoproj.github.io/argo-helm
kubectl create secret -n argocd generic argocd-secret # Initialize empty secret for sessions
helm template -n argocd --release-name k6 argo-cd/argo-cd --include-crds -f values.yaml > argocd.yml
kubectl apply -f argocd.yml -f application-extras.yml -n argocd
kubectl apply -f argocd.yml -n argocd
kubectl -n argocd rollout restart deployment/k6-argocd-redis
kubectl -n argocd rollout restart deployment/k6-argocd-repo-server
kubectl -n argocd rollout restart deployment/k6-argocd-server
kubectl -n argocd rollout restart deployment/k6-argocd-notifications-controller
kubectl -n argocd rollout restart statefulset/k6-argocd-application-controller
kubectl label -n argocd secret oidc-client-argocd-owner-secrets app.kubernetes.io/part-of=argocd
```
Note: Refer to Authelia README for OIDC secret setup
# Setting up Git secrets
@ -41,50 +40,12 @@ kubectl -n argocd create secret generic gitea-kube-members \
--from-literal=type=git \
--from-literal=url=git@git.k-space.ee:k-space/kube-members \
--from-file=sshPrivateKey=id_ecdsa
kubectl -n argocd create secret generic gitea-members \
--from-literal=type=git \
--from-literal=url=git@git.k-space.ee:k-space/kube-members \
--from-file=sshPrivateKey=id_ecdsa
kubectl label -n argocd secret gitea-kube argocd.argoproj.io/secret-type=repository
kubectl label -n argocd secret gitea-kube-staging argocd.argoproj.io/secret-type=repository
kubectl label -n argocd secret gitea-kube-members argocd.argoproj.io/secret-type=repository
kubectl label -n argocd secret gitea-members argocd.argoproj.io/secret-type=repository
rm -fv id_ecdsa
```
Have Gitea admin reset password for user `argocd` and log in with that account.
Add the SSH key for user `argocd` from file `id_ecdsa.pub`.
Delete any other SSH keys associated with Gitea user `argocd`.
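The `id_ecdsa` key pair used above can be generated with, e.g. (the comment is illustrative):
```bash
ssh-keygen -t ecdsa -f id_ecdsa -N '' -C argocd@k-space.ee
```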
# Managing applications
To update apps:
```
for j in asterisk bind camtiler etherpad freescout gitea grafana hackerspace nextcloud nyancat rosdump traefik wiki wildduck woodpecker; do
cat << EOF >> applications/$j.yaml
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: $j
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: $j
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: $j
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true
EOF
done
find applications -name "*.yaml" -exec kubectl apply -n argocd -f {} \;
```
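To verify the resulting `Application` objects:
```bash
kubectl get applications -n argocd
```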


@ -1,37 +0,0 @@
---
apiVersion: codemowers.cloud/v1beta1
kind: OIDCClient
metadata:
name: argocd
namespace: argocd
spec:
displayName: Argo CD
uri: https://argocd.k-space.ee
redirectUris:
- https://argocd.k-space.ee/auth/callback
allowedGroups:
- k-space:kubernetes:admins
grantTypes:
- authorization_code
- refresh_token
responseTypes:
- code
availableScopes:
- openid
- profile
pkce: false
---
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
namespace: argocd
name: k-space.ee
spec:
clusterResourceWhitelist:
- group: '*'
kind: '*'
destinations:
- namespace: '*'
server: '*'
sourceRepos:
- '*'


@ -1,20 +0,0 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: asterisk
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: asterisk
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: asterisk
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@ -1,20 +1,17 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: whoami
name: authelia
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: whoami
path: authelia
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: whoami
namespace: authelia
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@ -1,20 +0,0 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: bind
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: bind
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: bind
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@ -1,15 +0,0 @@
# ---
# apiVersion: argoproj.io/v1alpha1
# kind: Application
# metadata:
# name: camtiler
# namespace: argocd
# spec:
# project: k-space.ee
# source:
# repoURL: 'git@git.k-space.ee:k-space/kube.git'
# path: camtiler
# targetRevision: HEAD
# destination:
# server: 'https://kubernetes.default.svc'
# namespace: camtiler


@ -1,20 +1,17 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: wiki
name: camtiler
namespace: argocd
spec:
project: k-space.ee
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: wiki
path: camtiler
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: wiki
namespace: camtiler
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: drone-execution
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: drone-execution
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: drone-execution
syncPolicy:
syncOptions:
- CreateNamespace=true


@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: drone
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: drone
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: drone
syncPolicy:
syncOptions:
- CreateNamespace=true


@ -0,0 +1,22 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: elastic-system
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: elastic-system
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: elastic-system
syncPolicy:
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- group: admissionregistration.k8s.io
kind: ValidatingWebhookConfiguration
jqPathExpressions:
- '.webhooks[]?.clientConfig.caBundle'


@ -1,11 +1,10 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: etherpad
namespace: argocd
spec:
project: k-space.ee
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: etherpad
@ -14,7 +13,5 @@ spec:
server: 'https://kubernetes.default.svc'
namespace: etherpad
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true
- CreateNamespace=true


@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: external-dns
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: external-dns
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: external-dns
syncPolicy:
syncOptions:
- CreateNamespace=true


@ -1,20 +0,0 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: freescout
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: freescout
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: freescout
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@ -1,20 +0,0 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: gitea
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: gitea
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: gitea
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@ -1,20 +0,0 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: grafana
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: grafana
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: grafana
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@ -1,20 +0,0 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: hackerspace
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: hackerspace
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: hackerspace
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: harbor
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: harbor
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: harbor
syncPolicy:
syncOptions:
- CreateNamespace=true


@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: keel
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: keel
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: keel
syncPolicy:
syncOptions:
- CreateNamespace=true


@ -1,4 +1,3 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
@ -14,7 +13,5 @@ spec:
server: 'https://kubernetes.default.svc'
namespace: kubernetes-dashboard
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: logging
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: logging
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: logging
syncPolicy:
syncOptions:
- CreateNamespace=true


@ -1,20 +0,0 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: members
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/members.git'
path: members
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: passmower
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: members
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube-members.git'
path: .
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: members
syncPolicy:
syncOptions:
- CreateNamespace=true


@ -0,0 +1,22 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: metallb-system
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: metallb-system
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: metallb-system
syncPolicy:
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- group: apiextensions.k8s.io
kind: CustomResourceDefinition
jqPathExpressions:
- '.spec.conversion.webhook.clientConfig.caBundle'


@ -1,20 +0,0 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: minio-clusters
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: minio-clusters
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: minio-clusters
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@ -1,20 +0,0 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: monitoring
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: monitoring
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: monitoring
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@ -1,20 +0,0 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: mysql-clusters
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: mysql-clusters
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: mysql-clusters
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: mysql-operator
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: mysql-operator
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: mysql-operator
syncPolicy:
syncOptions:
- CreateNamespace=true


@ -1,20 +0,0 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: nextcloud
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: nextcloud
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: nextcloud
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@ -1,20 +0,0 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: nyancat
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: nyancat
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: nyancat
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: phpmyadmin
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: phpmyadmin
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: phpmyadmin
syncPolicy:
syncOptions:
- CreateNamespace=true


@ -1,20 +0,0 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: postgres-clusters
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: postgres-clusters
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: postgres-clusters
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@ -1,18 +1,14 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: argocd-applications
name: prometheus-operator
namespace: argocd
spec:
project: k-space.ee
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: argocd/applications
path: prometheus-operator
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: argocd
syncPolicy:
automated:
prune: false
namespace: prometheus-operator


@ -1,20 +0,0 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: redis-clusters
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: redis-clusters
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: redis-clusters
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@ -1,11 +1,10 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: reloader
namespace: argocd
spec:
project: k-space.ee
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: reloader
@ -14,7 +13,5 @@ spec:
server: 'https://kubernetes.default.svc'
namespace: reloader
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true
- CreateNamespace=true


@ -1,11 +1,10 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: rosdump
namespace: argocd
spec:
project: k-space.ee
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: rosdump
@ -14,7 +13,5 @@ spec:
server: 'https://kubernetes.default.svc'
namespace: rosdump
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true
- CreateNamespace=true


@ -1,20 +0,0 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: signs
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: signs
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: signs
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@ -1,20 +0,0 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: traefik
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: traefik
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: traefik
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@ -1,11 +1,10 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: wildduck
namespace: argocd
spec:
project: k-space.ee
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: wildduck
@ -14,7 +13,5 @@ spec:
server: 'https://kubernetes.default.svc'
namespace: wildduck
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@ -1,20 +0,0 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: woodpecker
namespace: argocd
spec:
project: k-space.ee
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: woodpecker
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: woodpecker
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@ -1,50 +0,0 @@
---
apiVersion: codemowers.cloud/v1beta1
kind: SecretClaim
metadata:
name: argocd-redis
namespace: argocd
spec:
size: 32
mapping:
- key: redis-password
value: "%(plaintext)s"
- key: REDIS_URI
value: "redis://:%(plaintext)s@argocd-redis"
---
apiVersion: dragonflydb.io/v1alpha1
kind: Dragonfly
metadata:
name: argocd-redis
namespace: argocd
spec:
authentication:
passwordFromSecret:
key: redis-password
name: argocd-redis
replicas: 3
resources:
limits:
cpu: 1000m
memory: 1Gi
topologySpreadConstraints:
- maxSkew: 1
topologyKey: topology.kubernetes.io/zone
whenUnsatisfiable: DoNotSchedule
labelSelector:
matchLabels:
app: argocd-redis
app.kubernetes.io/part-of: dragonfly
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: argocd-redis
namespace: argocd
spec:
selector:
matchLabels:
app: argocd-redis
app.kubernetes.io/part-of: dragonfly
podMetricsEndpoints:
- port: admin


@ -1,23 +1,22 @@
global:
logLevel: warn
domain: argocd.k-space.ee
# We use Authelia OIDC instead of Dex
dex:
enabled: false
redis:
enabled: false
# Maybe one day switch to Redis HA?
redis-ha:
enabled: false
externalRedis:
host: argocd-redis
existingSecret: argocd-redis
server:
# HTTPS is implemented by Traefik
extraArgs:
- --insecure
ingress:
enabled: true
annotations:
cert-manager.io/cluster-issuer: default
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
@ -25,7 +24,56 @@ server:
- argocd.k-space.ee
tls:
- hosts:
- "*.k-space.ee"
- argocd.k-space.ee
secretName: argocd-server-tls
configEnabled: true
config:
admin.enabled: "false"
url: https://argocd.k-space.ee
application.instanceLabelKey: argocd.argoproj.io/instance
oidc.config: |
name: Authelia
issuer: https://auth.k-space.ee
clientID: argocd
cliClientID: argocd
clientSecret: $oidc.config.clientSecret
requestedIDTokenClaims:
groups:
essential: true
requestedScopes:
- openid
- profile
- email
- groups
resource.customizations: |
# https://github.com/argoproj/argo-cd/issues/1704
networking.k8s.io/Ingress:
health.lua: |
hs = {}
hs.status = "Healthy"
return hs
# Members of ArgoCD Admins group in AD/Samba are allowed to administer Argo
rbacConfig:
policy.default: role:readonly
policy.csv: |
# Map AD groups to ArgoCD roles
g, Developers, role:developers
g, ArgoCD Admins, role:admin
# Allow developers to read objects
p, role:developers, applications, get, */*, allow
p, role:developers, certificates, get, *, allow
p, role:developers, clusters, get, *, allow
p, role:developers, repositories, get, *, allow
p, role:developers, projects, get, *, allow
p, role:developers, accounts, get, *, allow
p, role:developers, gpgkeys, get, *, allow
p, role:developers, logs, get, */*, allow
p, role:developers, applications, restart, default/camtiler, allow
p, role:developers, applications, override, default/camtiler, allow
p, role:developers, applications, action/apps/Deployment/restart, default/camtiler, allow
p, role:developers, applications, sync, default/camtiler, allow
p, role:developers, applications, update, default/camtiler, allow
metrics:
enabled: true
@ -47,59 +95,11 @@ controller:
enabled: true
configs:
params:
server.insecure: true
rbac:
policy.default: role:admin
policy.csv: |
# Map AD groups to ArgoCD roles
g, Developers, role:developers
g, ArgoCD Admins, role:admin
# Allow developers to read objects
p, role:developers, applications, get, */*, allow
p, role:developers, certificates, get, *, allow
p, role:developers, clusters, get, *, allow
p, role:developers, repositories, get, *, allow
p, role:developers, projects, get, *, allow
p, role:developers, accounts, get, *, allow
p, role:developers, gpgkeys, get, *, allow
p, role:developers, logs, get, */*, allow
p, role:developers, applications, restart, default/camtiler, allow
p, role:developers, applications, override, default/camtiler, allow
p, role:developers, applications, action/apps/Deployment/restart, default/camtiler, allow
p, role:developers, applications, sync, default/camtiler, allow
p, role:developers, applications, update, default/camtiler, allow
cm:
admin.enabled: "false"
resource.customizations: |
# https://github.com/argoproj/argo-cd/issues/1704
networking.k8s.io/Ingress:
health.lua: |
hs = {}
hs.status = "Healthy"
return hs
apiextensions.k8s.io/CustomResourceDefinition:
ignoreDifferences: |
jsonPointers:
- "x-kubernetes-validations"
oidc.config: |
name: OpenID Connect
issuer: https://auth.k-space.ee/
clientID: $oidc-client-argocd-owner-secrets:OIDC_CLIENT_ID
cliClientID: $oidc-client-argocd-owner-secrets:OIDC_CLIENT_ID
clientSecret: $oidc-client-argocd-owner-secrets:OIDC_CLIENT_SECRET
requestedIDTokenClaims:
groups:
essential: true
requestedScopes:
- openid
- profile
- email
- groups
secret:
createSecret: false
ssh:
knownHosts: |
knownHosts:
data:
ssh_known_hosts: |
# Copy-pasted from `ssh-keyscan git.k-space.ee`
git.k-space.ee ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCF1+/TDRXuGwsu4SZQQwQuJusb7W1OciGAQp/ZbTTvKD+0p7fV6dXyUlWjdFmITrFNYDreDnMiOS+FvE62d2Z0=
git.k-space.ee ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDsLyRuubdIUnTKEqOipu+9x+FforrC8+oxulVrl0ECgdIRBQnLQXIspTNwuC3MKJ4z+DPbndSt8zdN33xWys8UNEs3V5/W6zsaW20tKiaX75WK5eOL4lIDJi/+E97+c0aZBXamhxTrgkRVJ5fcAkY6C5cKEmVM5tlke3v3ihLq78/LpJYv+P947NdnthYE2oc+XGp/elZ0LNfWRPnd///+ykbwWirvQm+iiDz7PMVKkb+Q7l3vw4+zneKJWAyFNrm+aewyJV9lFZZJuHliwlHGTriSf6zhMAWyJzvYqDAN6iT5yi9KGKw60J6vj2GLuK4ULVblTyP9k9+3iELKSWW5

asterisk/.gitignore (vendored, 1 change)

@ -1 +0,0 @@
conf


@ -1,11 +0,0 @@
# Asterisk
Asterisk is used as the SIP/VoIP server, exposed at voip.k-space.ee.
This application is managed by [ArgoCD](https://argocd.k-space.ee/applications/argocd/asterisk)
Should ArgoCD be down, manifests here can be applied with:
```
kubectl apply -n asterisk -f application.yaml
```


@ -1,124 +0,0 @@
---
apiVersion: v1
kind: Service
metadata:
name: asterisk
annotations:
external-dns.alpha.kubernetes.io/hostname: voip.k-space.ee
spec:
type: LoadBalancer
externalTrafficPolicy: Local
selector:
app: asterisk
ports:
- name: asterisk
protocol: UDP
port: 5060
- name: sip-data-10000
protocol: UDP
port: 10000
- name: sip-data-10001
protocol: UDP
port: 10001
- name: sip-data-10002
protocol: UDP
port: 10002
- name: sip-data-10003
protocol: UDP
port: 10003
- name: sip-data-10004
protocol: UDP
port: 10004
- name: sip-data-10005
protocol: UDP
port: 10005
- name: sip-data-10006
protocol: UDP
port: 10006
- name: sip-data-10007
protocol: UDP
port: 10007
- name: sip-data-10008
protocol: UDP
port: 10008
- name: sip-data-10009
protocol: UDP
port: 10009
- name: sip-data-10010
protocol: UDP
port: 10010
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: asterisk
labels:
app: asterisk
spec:
selector:
matchLabels:
app: asterisk
replicas: 1
template:
metadata:
labels:
app: asterisk
spec:
containers:
- name: asterisk
image: harbor.k-space.ee/k-space/asterisk
command:
- /usr/sbin/asterisk
args:
- -TWBpvvvdddf
volumeMounts:
- name: config
mountPath: /etc/asterisk
ports:
- containerPort: 8088
name: metrics
volumes:
- name: config
secret:
secretName: asterisk-secrets
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: asterisk
spec:
selector:
matchLabels:
app: asterisk
podMetricsEndpoints:
- port: metrics
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: asterisk
spec:
groups:
- name: asterisk
rules:
- alert: AsteriskPhoneNotRegistered
expr: asterisk_endpoints_state{resource=~"1.*"} < 2
for: 5m
labels:
severity: critical
annotations:
summary: "{{ $labels.resource }} is not registered."
- alert: AsteriskOutboundNumberNotRegistered
expr: asterisk_pjsip_outbound_registration_status == 0
for: 5m
labels:
severity: critical
annotations:
summary: "{{ $labels.username }} is not registered with provider."
- alert: AsteriskCallsPerMinuteLimitExceed
expr: asterisk_channels_duration_seconds > 10*60
for: 20m
labels:
severity: warning
annotations:
summary: "Call at channel {{ $labels.name }} is taking longer than 10m."


@ -1,39 +0,0 @@
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: asterisk
spec:
podSelector:
matchLabels:
app: asterisk
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
- from:
- ipBlock:
cidr: 100.101.0.0/16
- from:
- ipBlock:
cidr: 100.102.0.0/16
- from:
- ipBlock:
cidr: 81.90.125.224/32 # Lauri home
- from:
- ipBlock:
cidr: 172.20.8.241/32 # Erki A
- from:
- ipBlock:
cidr: 212.47.211.10/32 # Elisa SIP
egress:
- to:
- ipBlock:
cidr: 212.47.211.10/32 # Elisa SIP

authelia/.gitignore (new file, vendored, 2 lines)

@ -0,0 +1,2 @@
application-secrets.y*ml
oidc-secrets.y*ml

authelia/README.md (new file, 171 lines)

@ -0,0 +1,171 @@
# Authelia
## Background
Authelia works in conjunction with Traefik to provide SSO with
credentials stored in a Samba (Active Directory compatible) directory tree.
Samba resides outside the Kubernetes cluster, as it's difficult to containerize
while keeping it usable from outside the cluster due to Samba's networking.
The MariaDB instance is used to store MFA tokens.
KeyDB is used to store session info.
## Deployment
Inspect changes with `git diff` and proceed to deploy:
```
kubectl apply -n authelia -f application.yml
kubectl create secret generic -n authelia mysql-secrets \
--from-literal=rootPassword=$(cat /dev/urandom | base64 | head -c 30)
kubectl create secret generic -n authelia mariadb-secrets \
--from-literal=MYSQL_ROOT_PASSWORD=$(cat /dev/urandom | base64 | head -c 30) \
--from-literal=MYSQL_PASSWORD=$(cat /dev/urandom | base64 | head -c 30)
kubectl -n authelia rollout restart deployment/authelia
```
To change secrets, create `secret.yml`:
```
---
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: application-secrets
data:
  JWT_TOKEN: ...
  SESSION_ENCRYPTION_KEY: ...
  STORAGE_PASSWORD: ...
  STORAGE_ENCRYPTION_KEY: ...
  LDAP_PASSWORD: ...
  SMTP_PASSWORD: ...
```
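Note that values under `data:` must be base64-encoded; use `stringData:` instead for plain-text values.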
Apply with:
```
kubectl apply -n authelia -f application-secrets.yml
kubectl annotate -n authelia secret application-secrets reloader.stakater.com/match=true
```
## OIDC secrets
OIDC secrets are separated from the main configuration until
Authelia adds CRDs for these.
Generally speaking, untrusted applications, that is anything running
outside the Kubernetes cluster, e.g. web browser based (JS) and
local command line clients,
should use `public: true` and omit `secret: ...`.
Populate `oidc-secrets.yml` with approximately the following:
```
identity_providers:
  oidc:
    clients:
      - id: kubelogin
        description: Kubernetes cluster
        secret: ...
        authorization_policy: two_factor
        redirect_uris:
          - http://localhost:27890
        scopes:
          - openid
          - groups
          - email
          - profile
      - id: proxmox
        description: Proxmox Virtual Environment
        secret: ...
        authorization_policy: two_factor
        redirect_uris:
          - https://pve.k-space.ee
        scopes:
          - openid
          - groups
          - email
          - profile
      - id: argocd
        description: ArgoCD
        secret: ...
        authorization_policy: two_factor
        redirect_uris:
          - https://argocd.k-space.ee/auth/callback
        scopes:
          - openid
          - groups
          - email
          - profile
      - id: harbor
        description: Harbor
        secret: ...
        authorization_policy: two_factor
        redirect_uris:
          - https://harbor.k-space.ee/c/oidc/callback
        scopes:
          - openid
          - groups
          - email
          - profile
      - id: gitea
        description: Gitea
        secret: ...
        authorization_policy: one_factor
        redirect_uris:
          - https://git.k-space.ee/user/oauth2/authelia/callback
        scopes:
          - openid
          - profile
          - email
          - groups
        grant_types:
          - refresh_token
          - authorization_code
        response_types:
          - code
        userinfo_signing_algorithm: none
      - id: grafana
        description: Grafana
        secret: ...
        authorization_policy: one_factor
        redirect_uris:
          - https://grafana.k-space.ee/login/generic_oauth
        scopes:
          - openid
          - groups
          - email
          - profile
```
To upload the file to Kubernetes secrets:
```
kubectl -n authelia delete secret oidc-secrets
kubectl -n authelia create secret generic oidc-secrets \
--from-file=oidc-secrets.yml=oidc-secrets.yml
kubectl annotate -n authelia secret oidc-secrets reloader.stakater.com/match=true
kubectl -n authelia rollout restart deployment/authelia
```
Synchronize OIDC secrets:
```
kubectl -n argocd delete secret argocd-secret
kubectl -n argocd create secret generic argocd-secret \
--from-literal=server.secretkey=$(cat /dev/urandom | base64 | head -c 30) \
--from-literal=oidc.config.clientSecret=$( \
kubectl get secret -n authelia oidc-secrets -o json \
| jq '.data."oidc-secrets.yml"' -r | base64 -d | yq -o json \
| jq '.identity_providers.oidc.clients[] | select(.id == "argocd") | .secret' -r)
kubectl -n monitoring delete secret oidc-secret
kubectl -n monitoring create secret generic oidc-secret \
--from-literal=GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET=$( \
kubectl get secret -n authelia oidc-secrets -o json \
| jq '.data."oidc-secrets.yml"' -r | base64 -d | yq -o json \
| jq '.identity_providers.oidc.clients[] | select(.id == "grafana") | .secret' -r)
```

authelia/application.yml (new file, 416 lines)

@ -0,0 +1,416 @@
---
apiVersion: v1
kind: Secret
type: Opaque
metadata:
name: authelia-certificates
labels:
app.kubernetes.io/name: authelia
data:
ldaps.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZJekNDQXd1Z0F3SUJBZ0lVRzNaYnI0MGVVMlRHak1Lek5XaDhOTDJkRDRZd0RRWUpLb1pJaHZjTkFRRUwKQlFBd0lURWZNQjBHQTFVRUF3d1dVMkZ0WW1FZ1lYUWdZV1F1YXkxemNHRmpaUzVsWlRBZUZ3MHlNVEV5TVRRdwpOekk0TlRGYUZ3MHlOakV5TVRNd056STROVEZhTUNFeEh6QWRCZ05WQkFNTUZsTmhiV0poSUdGMElHRmtMbXN0CmMzQmhZMlV1WldVd2dnSWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUNEd0F3Z2dJS0FvSUNBUURub3hEZFlicjAKVFJHOEErdk0xT0I1Rzg4Z05YL1pXeFNLb0VaM2p0ekF0NEc3blV0aUhoVzI1cUhEeXZGVGEzTzJiMUFVSEhzbwpVQXpVTWNVV1FRb3J2RjF4L1VsYitpcnk0QkxFTklaVTdYMVpxb2ZKYXgwZTcrbit1YVM3R015dnB4VXliNGlYCkd3djdZZEh5SmM4WjZROHd2MTdNV2F2ejNaOE5CWFdoeG1xc3ljTlphVkl2S1lNRVpGazNUTnA3T20vSTFpdkYKWDJuNVNtb2d2NmdBVmpVODhSeWc2NlRFVStiaGY5QWdiU0VxWjhMaVd6c20xdHc0WnJXMDVVK25JVjRzTHdlaQp2SXppblFMYmFMTkc2ZUl0cUtQZGVsWWhRNHlCeHM3QXpTOCtieVVBZk9jRktzUTI5alFVdUxNbE1pUmt6MjV5Cnc5UUZxSGVuRjNIYXJqU1JTL3ZZV3J3K0RNbmo2Tit3QVdtd21SR3NzVmxPMjFSLzAzNThBK0h5VzhyLzlsTm8KV1FoMmt3VGRPdjdxMzFwRmZQQUhHUkFITnZUN0dRKzVCeFFjdG83cG1GQ2t2OTdpbmhiZG50d2ViSmM1VWI3NQpBeHNWVC9uNk9aTjJSU09NV0RKY1pjVkpXYjQxdTNTL2lBVHlvbDBuOEZMRlRRZm9xdXdvVkQ1UnpwU0NsVm50Cjd1eENyaGNsYXhTYnhUUDhqa29ERXQzc1NycWoySm5PNlhtQ3R2VlZkMmQvWVZQQ21qQm54TWc1bld1WEwwZTgKNkh3MTd5TGtYeFgzVERkdjF2VThvYTdpTmZyNmc3Vlcrd2ZsUkJoVW5WRUluNXZEdm80STVSdWRXaEJxcHN6VQo3bGQrUDVjZE5GWEdjUlRQdFFlbXkxUllKMG5ZejkybGtRSURBUUFCbzFNd1VUQWRCZ05WSFE0RUZnUVVjZ1JrCnZ4U3V1QnNFaktzbXQvN3dpRHIxbHVRd0h3WURWUjBqQkJnd0ZvQVVjZ1JrdnhTdXVCc0VqS3NtdC83d2lEcjEKbHVRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBTkJna3Foa2lHOXcwQkFRc0ZBQU9DQWdFQVNlNXM1aU04QjQ2agp6bXZMOUQ4dUJrQ0VIOW9mMnc1VFluL1NPZkFRVnhBOGxBYndORitlWmgyakdGSUN6citNYmlTMlhZdkxJNnVrClZ5cFJrN28vdExmdmY0alpqZnRpeEliWEM1MjYrUk1xOEcvV2xGbzJnWFZ0eW5BcXp5bXJVYjV1MVZJcG53QWYKNTBzNHlDOURFUXF1aGErYzJCWTBRQ3ZySnUvYy9KTUs3QTdYOFdRSzVDUy8wZkNPdzBPY2xkZzA0c3VWVlU2eQp0MEZmV0kvTlhURFFrU2JWVXN5OElmaXd4a0o5NmNsTjFNWVArQ015Mkh1eWF0aTZySnhVZFBEbS9tYzdRWXNPClNTSzQyNXJQOFFZMmduNlNXUXJXdUJic2dLSEpoVzRBYjdTTldkb0Q0QytwVDA2V1MzVXphMnhZd09TV1IvTWMKR1V5YXRwLzlxR05tOWM1d2RFQ3FtdkVQc2twQkp5ZWR6MUk2V2lxdjRuK0UvRk9qRGl0VVpFd3BFZXRUQktXZgoyRnZRa1pGRmpRU3VIdG5KT040cVRvWmlaNW4vbis4Z1k2Z1Y5Wnd0NHM5OGlpdnUwTFc4UlZGSTNkS0tiYm5lCkY1KzltNE9vMjF0SlU2QThXVGpqVXpLUnFKdEZSa1JpWGtOTGRoY2MrdTdMOFFlZTFOUjIyalg5N2NaVDNGUGoKYmpOUlpId3k5K1dhMG1zcC9EYUR5RnlaOStPUUhReUJJazdCSS9LdU0rT2dta3dlSHBNSE5CMUs1NHZQenZKawpHaFN1QUNIeTRybmdvQTBvMzNhZzJ6a3lEY3NocVRtK2Q3UXFWOWUzU2pONFpUUXlTeWNpa0I1bFJKVHAydVFkCk5jVjBtcG5nREl1aFVlSFRKWkJ0SVZCZnp4bHdHd2c9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
---
apiVersion: v1
kind: ConfigMap
metadata:
name: authelia-config
labels:
app.kubernetes.io/name: authelia
annotations:
reloader.stakater.com/match: "true"
data:
authelia-config.yml: |
---
log:
level: warn
certificates_directory: /certificates
theme: light
default_redirection_url: https://members.k-space.ee
totp:
issuer: K-SPACE
authentication_backend:
ldap:
implementation: activedirectory
url: ldaps://ad.k-space.ee
base_dn: dc=ad,dc=k-space,dc=ee
username_attribute: sAMAccountName
additional_users_dn: ou=Membership
users_filter: (&({username_attribute}={input})(objectCategory=person)(objectClass=user))
additional_groups_dn: cn=Users
groups_filter: (&(member={dn})(objectclass=group))
group_name_attribute: cn
mail_attribute: mail
display_name_attribute: displayName
user: cn=authelia,cn=Users,dc=ad,dc=k-space,dc=ee
session:
domain: k-space.ee
same_site: lax
expiration: 1M
inactivity: 120h
remember_me_duration: "0"
redis:
host: redis
port: 6379
regulation:
ban_time: 5m
find_time: 2m
max_retries: 3
storage:
mysql:
host: mariadb
database: authelia
username: authelia
notifier:
disable_startup_check: true
smtp:
host: mail.k-space.ee
port: 465
username: authelia
sender: authelia@k-space.ee
subject: "[Authelia] {title}"
startup_check_address: lauri@k-space.ee
access_control:
default_policy: deny
rules:
# Longhorn dashboard
- domain: longhorn.k-space.ee
policy: two_factor
subject: group:Longhorn Admins
- domain: longhorn.k-space.ee
policy: deny
# Members site
- domain: members.k-space.ee
policy: bypass
resources:
- ^/?$
- domain: members.k-space.ee
policy: two_factor
resources:
- ^/login/authelia/?$
- domain: members.k-space.ee
policy: bypass
# Webmail
- domain: webmail.k-space.ee
policy: two_factor
# Etherpad
- domain: pad.k-space.ee
policy: two_factor
resources:
- ^/p/board-
subject: group:Board Members
- domain: pad.k-space.ee
policy: deny
resources:
- ^/p/board-
- domain: pad.k-space.ee
policy: two_factor
resources:
- ^/p/members-
- domain: pad.k-space.ee
policy: deny
resources:
- ^/p/members-
- domain: pad.k-space.ee
policy: bypass
# phpMyAdmin
- domain: phpmyadmin.k-space.ee
policy: two_factor
# Require login for everything else protected by traefik-sso middleware
- domain: '*.k-space.ee'
policy: one_factor
...
---
apiVersion: v1
kind: Service
metadata:
name: authelia
labels:
app.kubernetes.io/name: authelia
spec:
type: ClusterIP
sessionAffinity: None
selector:
app.kubernetes.io/name: authelia
ports:
- name: http
protocol: TCP
port: 80
targetPort: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: authelia
labels:
app.kubernetes.io/name: authelia
annotations:
reloader.stakater.com/search: "true"
spec:
selector:
matchLabels:
app.kubernetes.io/name: authelia
replicas: 2
revisionHistoryLimit: 0
template:
metadata:
labels:
app.kubernetes.io/name: authelia
spec:
enableServiceLinks: false
containers:
- name: authelia
image: authelia/authelia:4
command:
- authelia
- --config=/config/authelia-config.yml
- --config=/config/oidc-secrets.yml
resources:
limits:
cpu: "4.00"
memory: 125Mi
requests:
cpu: "0.25"
memory: 50Mi
env:
- name: AUTHELIA_SERVER_DISABLE_HEALTHCHECK
value: "true"
- name: AUTHELIA_JWT_SECRET_FILE
value: /secrets/JWT_TOKEN
- name: AUTHELIA_SESSION_SECRET_FILE
value: /secrets/SESSION_ENCRYPTION_KEY
- name: AUTHELIA_AUTHENTICATION_BACKEND_LDAP_PASSWORD_FILE
value: /secrets/LDAP_PASSWORD
- name: AUTHELIA_SESSION_REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: redis-secrets
key: REDIS_PASSWORD
- name: AUTHELIA_STORAGE_ENCRYPTION_KEY_FILE
value: /secrets/STORAGE_ENCRYPTION_KEY
- name: AUTHELIA_STORAGE_MYSQL_PASSWORD_FILE
value: /mariadb-secrets/MYSQL_PASSWORD
- name: AUTHELIA_IDENTITY_PROVIDERS_OIDC_HMAC_SECRET_FILE
value: /secrets/OIDC_HMAC_SECRET
- name: AUTHELIA_IDENTITY_PROVIDERS_OIDC_ISSUER_PRIVATE_KEY_FILE
value: /secrets/OIDC_PRIVATE_KEY
- name: AUTHELIA_NOTIFIER_SMTP_PASSWORD_FILE
value: /secrets/SMTP_PASSWORD
- name: TZ
value: Europe/Tallinn
startupProbe:
failureThreshold: 6
httpGet:
path: /api/health
port: http
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 5
livenessProbe:
failureThreshold: 5
httpGet:
path: /api/health
port: http
scheme: HTTP
initialDelaySeconds: 0
periodSeconds: 30
successThreshold: 1
timeoutSeconds: 5
readinessProbe:
failureThreshold: 5
httpGet:
path: /api/health
port: http
scheme: HTTP
initialDelaySeconds: 0
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 5
ports:
- name: http
containerPort: 9091
protocol: TCP
volumeMounts:
- mountPath: /config/authelia-config.yml
name: authelia-config
readOnly: true
subPath: authelia-config.yml
- mountPath: /config/oidc-secrets.yml
name: oidc-secrets
readOnly: true
subPath: oidc-secrets.yml
- mountPath: /secrets
name: secrets
readOnly: true
- mountPath: /certificates
name: certificates
readOnly: true
- mountPath: /mariadb-secrets
name: mariadb-secrets
readOnly: true
volumes:
- name: authelia-config
configMap:
name: authelia-config
- name: secrets
secret:
secretName: application-secrets
items:
- key: JWT_TOKEN
path: JWT_TOKEN
- key: SESSION_ENCRYPTION_KEY
path: SESSION_ENCRYPTION_KEY
- key: STORAGE_ENCRYPTION_KEY
path: STORAGE_ENCRYPTION_KEY
- key: STORAGE_PASSWORD
path: STORAGE_PASSWORD
- key: LDAP_PASSWORD
path: LDAP_PASSWORD
- key: OIDC_PRIVATE_KEY
path: OIDC_PRIVATE_KEY
- key: OIDC_HMAC_SECRET
path: OIDC_HMAC_SECRET
- key: SMTP_PASSWORD
path: SMTP_PASSWORD
- name: certificates
secret:
secretName: authelia-certificates
- name: mariadb-secrets
secret:
secretName: mariadb-secrets
- name: redis-secrets
secret:
secretName: redis-secrets
- name: oidc-secrets
secret:
secretName: oidc-secrets
items:
- key: oidc-secrets.yml
path: oidc-secrets.yml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: authelia
labels:
app.kubernetes.io/name: authelia
annotations:
cert-manager.io/cluster-issuer: default
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
kubernetes.io/tls-acme: "true"
traefik.ingress.kubernetes.io/router.entryPoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: authelia-chain-k6-authelia@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
spec:
rules:
- host: auth.k-space.ee
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: authelia
port:
number: 80
tls:
- hosts:
- auth.k-space.ee
secretName: authelia-tls
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: forwardauth-k6-authelia
labels:
app.kubernetes.io/name: authelia
spec:
forwardAuth:
address: http://authelia.authelia.svc.cluster.local/api/verify?rd=https://auth.k-space.ee/
trustForwardHeader: true
authResponseHeaders:
- Remote-User
- Remote-Name
- Remote-Email
- Remote-Groups
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: headers-k6-authelia
labels:
app.kubernetes.io/name: authelia
spec:
headers:
browserXssFilter: true
customFrameOptionsValue: "SAMEORIGIN"
customResponseHeaders:
Cache-Control: "no-store"
Pragma: "no-cache"
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: chain-k6-authelia-auth
labels:
app.kubernetes.io/name: authelia
spec:
chain:
middlewares:
- name: forwardauth-k6-authelia
namespace: authelia
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: chain-k6-authelia
labels:
app.kubernetes.io/name: authelia
spec:
chain:
middlewares:
- name: headers-k6-authelia
namespace: authelia
---
apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
name: mysql-cluster
spec:
secretName: mysql-secrets
instances: 3
router:
instances: 2
tlsUseSelfSigned: true
datadirVolumeClaimTemplate:
storageClassName: local-path
accessModes:
- ReadWriteOnce
resources:
requests:
storage: "1Gi"
podSpec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/managed-by
operator: In
values:
- mysql-operator
topologyKey: kubernetes.io/hostname
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
---
apiVersion: codemowers.io/v1alpha1
kind: KeyDBCluster
metadata:
name: redis
spec:
replicas: 3

1
authelia/mariadb.yml Symbolic link
View File

@ -0,0 +1 @@
../shared/mariadb.yml

1
bind/.gitignore vendored
View File

@ -1 +0,0 @@
*.key

View File

@ -1,124 +0,0 @@
# Bind namespace
The Bind secondary servers and `external-dns` service pods are running in this namespace.
The `external-dns` pods are used to declaratively update DNS records on the
[Bind primary](https://git.k-space.ee/k-space/ansible/src/branch/main/authoritative-nameserver.yaml).
The Bind primary `ns1.k-space.ee` resides outside Kubernetes at `193.40.103.2` and
is internally reachable via `172.20.0.2`.
Bind secondaries perform AXFR (zone transfer) from `ns1.k-space.ee` using
shared-secret (TSIG) authentication.
The primary sends NOTIFY events to `172.21.53.{1..3}`,
which are the internally exposed IPs of the secondaries.
Bind secondaries are hosted inside Kubernetes, load balanced behind `62.65.250.2` and
under normal circumstances managed by [ArgoCD](https://argocd.k-space.ee/applications/argocd/bind).
Note that [cert-manager](https://git.k-space.ee/k-space/kube/src/branch/master/cert-manager/) also performs DNS updates on the Bind primary.
# For user
`Ingress` and `DNSEndpoint` resources under the `k-space.ee`, `kspace.ee` and `k6.ee`
domains are picked up automatically by `external-dns` and updated on the Bind primary.
To find usage examples in this repository use
`grep -r -A25 "^kind: Ingress" .` and
`grep -r -A100 "^kind: DNSEndpoint" .`
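For instance, a minimal `DNSEndpoint` along these lines (hypothetical name and target) publishes a single A record:
```
---
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: example
  namespace: bind
spec:
  endpoints:
    - dnsName: example.k-space.ee
      recordTTL: 300
      recordType: A
      targets:
        - 193.40.103.2
```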
# For administrator
Ingresses and DNSEndpoints referring to `k-space.ee`, `kspace.ee`, `k6.ee`
are picked up automatically by `external-dns` and updated on the primary.
The primary sends NOTIFY events to `172.21.53.{1..3}`,
which are the internally exposed IPs of the secondaries.
# Secrets
To configure TSIG secrets:
```
kubectl create secret generic -n bind bind-readonly-secret \
--from-file=readonly.key
kubectl create secret generic -n bind bind-readwrite-secret \
--from-file=readwrite.key
kubectl create secret generic -n bind external-dns
kubectl -n bind delete secret tsig-secret
kubectl -n bind create secret generic tsig-secret \
--from-literal=TSIG_SECRET=$(cat readwrite.key | grep secret | cut -d '"' -f 2)
kubectl -n cert-manager delete secret tsig-secret
kubectl -n cert-manager create secret generic tsig-secret \
--from-literal=TSIG_SECRET=$(cat readwrite.key | grep secret | cut -d '"' -f 2)
```
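If the key files do not exist yet, they can be generated first; a sketch, assuming BIND's `tsig-keygen` utility is available:
```
tsig-keygen -a hmac-sha512 readonly > readonly.key
tsig-keygen -a hmac-sha512 readwrite > readwrite.key
```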
# Serving additional zones
## Bind primary configuration
To serve additional domains from this Bind setup, add the following
section to `named.conf.local` on the primary `ns1.k-space.ee`:
```
key "foobar" {
algorithm hmac-sha512;
secret "...";
};
zone "foobar.com" {
type master;
file "/var/lib/bind/db.foobar.com";
allow-update { !rejected; key foobar; };
allow-transfer { !rejected; key readonly; key foobar; };
notify explicit; also-notify { 172.21.53.1; 172.21.53.2; 172.21.53.3; };
};
```
Initialize an empty zonefile at `/var/lib/bind/db.foobar.com` on the primary `ns1.k-space.ee`:
```
foobar.com IN SOA ns1.foobar.com. hostmaster.foobar.com. (1 300 300 2592000 300)
NS ns1.foobar.com.
NS ns2.foobar.com.
ns1.foobar.com. A 193.40.103.2
ns2.foobar.com. A 62.65.250.2
```
Reload Bind config:
```
named-checkconf
systemctl reload bind9
```
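To sanity-check that the primary serves the zone and accepts TSIG-authenticated transfers, something along these lines should work (substitute the actual base64 secret):
```
dig @193.40.103.2 foobar.com SOA +short
dig @193.40.103.2 -y hmac-sha512:readonly:<base64-secret> foobar.com AXFR
```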
## Bind secondary config
Add a section to the `bind-secondary-config-local` ConfigMap under the `named.conf.local` key:
```
zone "foobar.com" { type slave; masters { 172.20.0.2 key readonly; }; };
```
And restart secondaries:
```
kubectl rollout restart -n bind statefulset/bind-secondary
```
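Once the pods are back up, the secondaries should answer authoritatively; a quick check:
```
dig @62.65.250.2 foobar.com SOA +short
kubectl -n bind logs statefulset/bind-secondary | grep foobar.com
```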
## Registrar config
At your DNS registrar, point the delegation (NS) and glue (A) records to:
```
foobar.com. NS ns1.foobar.com.
foobar.com. NS ns2.foobar.com.
ns1.foobar.com. A 193.40.103.2
ns2.foobar.com. A 62.65.250.2
```
## Updating DNS records
With the configured TSIG key `foobar` you can now:
* Obtain Let's Encrypt certificates with the DNS-01 challenge.
Inside Kubernetes use `cert-manager` with the RFC2136 provider.
* Update DNS records.
Inside Kubernetes use `external-dns` with the RFC2136 provider; outside the cluster plain `nsupdate` works as well, as sketched below.
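A minimal `nsupdate` sketch using the `foobar` key from above (hypothetical record; substitute the actual base64 secret):
```
nsupdate -y hmac-sha512:foobar:<base64-secret> << EOF
server 193.40.103.2
zone foobar.com
update add test.foobar.com. 300 A 192.0.2.1
send
EOF
```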

View File

@ -1,179 +0,0 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: bind-secondary-config-local
namespace: bind
data:
named.conf.local: |
zone "codemowers.ee" { type slave; masters { 172.20.0.2 key readonly; }; };
zone "codemowers.eu" { type slave; masters { 172.20.0.2 key readonly; }; };
zone "codemowers.cloud" { type slave; masters { 172.20.0.2 key readonly; }; };
---
apiVersion: v1
kind: ConfigMap
metadata:
name: bind-secondary-config
namespace: bind
data:
named.conf: |
include "/etc/bind/named.conf.local";
include "/etc/bind/readonly.key";
options {
recursion no;
pid-file "/var/bind/named.pid";
allow-query { 0.0.0.0/0; };
allow-notify { 172.20.0.2; };
allow-transfer { none; };
check-names slave ignore;
notify no;
};
zone "k-space.ee" { type slave; masters { 172.20.0.2 key readonly; }; };
zone "k6.ee" { type slave; masters { 172.20.0.2 key readonly; }; };
zone "kspace.ee" { type slave; masters { 172.20.0.2 key readonly; }; };
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: bind-secondary
namespace: bind
spec:
revisionHistoryLimit: 0
replicas: 3
selector:
matchLabels:
app: bind-secondary
template:
metadata:
labels:
app: bind-secondary
spec:
containers:
- name: bind-secondary
image: internetsystemsconsortium/bind9:9.20
resources:
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 1m
memory: 35Mi
workingDir: /var/bind
command:
- named
- -g
- -c
- /etc/bind/named.conf
volumeMounts:
- name: bind-secondary-config
mountPath: /etc/bind
readOnly: true
- name: bind-data
mountPath: /var/bind
volumes:
- name: bind-secondary-config
projected:
sources:
- configMap:
name: bind-secondary-config
- configMap:
name: bind-secondary-config-local
optional: true
- secret:
name: bind-readonly-secret
- name: bind-data
emptyDir: {}
topologySpreadConstraints:
- maxSkew: 1
topologyKey: topology.kubernetes.io/zone
whenUnsatisfiable: DoNotSchedule
labelSelector:
matchLabels:
app: bind-secondary
---
apiVersion: v1
kind: Service
metadata:
name: bind-secondary
namespace: bind
spec:
type: LoadBalancer
externalTrafficPolicy: Local
loadBalancerIP: 62.65.250.2
selector:
app: bind-secondary
ports:
- protocol: TCP
port: 53
name: dns-tcp
targetPort: 53
- protocol: UDP
port: 53
name: dns-udp
targetPort: 53
---
apiVersion: v1
kind: Service
metadata:
name: bind-secondary-0
namespace: bind
spec:
type: LoadBalancer
externalTrafficPolicy: Local
loadBalancerIP: 172.21.53.1
selector:
app: bind-secondary
statefulset.kubernetes.io/pod-name: bind-secondary-0
ports:
- protocol: TCP
port: 53
name: dns-tcp
targetPort: 53
- protocol: UDP
port: 53
name: dns-udp
targetPort: 53
---
apiVersion: v1
kind: Service
metadata:
name: bind-secondary-1
namespace: bind
spec:
type: LoadBalancer
externalTrafficPolicy: Local
loadBalancerIP: 172.21.53.2
selector:
app: bind-secondary
statefulset.kubernetes.io/pod-name: bind-secondary-1
ports:
- protocol: TCP
port: 53
name: dns-tcp
targetPort: 53
- protocol: UDP
port: 53
name: dns-udp
targetPort: 53
---
apiVersion: v1
kind: Service
metadata:
name: bind-secondary-2
namespace: bind
spec:
type: LoadBalancer
externalTrafficPolicy: Local
loadBalancerIP: 172.21.53.3
selector:
app: bind-secondary
statefulset.kubernetes.io/pod-name: bind-secondary-2
ports:
- protocol: TCP
port: 53
name: dns-tcp
targetPort: 53
- protocol: UDP
port: 53
name: dns-udp
targetPort: 53

View File

@ -1,48 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: external-dns-k-space
namespace: bind
spec:
revisionHistoryLimit: 0
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: external-dns
domain: k-space.ee
template:
metadata:
labels: *selectorLabels
spec:
serviceAccountName: external-dns
containers:
- name: external-dns
image: registry.k8s.io/external-dns/external-dns:v0.14.2
resources:
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 2m
memory: 35Mi
envFrom:
- secretRef:
name: tsig-secret
args:
- --events
- --registry=txt
- --txt-prefix=external-dns-
- --txt-owner-id=k8s
- --provider=rfc2136
- --source=ingress
- --source=service
- --source=crd
- --domain-filter=k-space.ee
- --rfc2136-tsig-axfr
- --rfc2136-host=172.20.0.2
- --rfc2136-port=53
- --rfc2136-zone=k-space.ee
- --rfc2136-tsig-keyname=readwrite
- --rfc2136-tsig-secret-alg=hmac-sha512
- --rfc2136-tsig-secret=$(TSIG_SECRET)
# https://github.com/kubernetes-sigs/external-dns/issues/2446

View File

@ -1,80 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: external-dns-k6
namespace: bind
spec:
revisionHistoryLimit: 0
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: external-dns
domain: k6.ee
template:
metadata:
labels: *selectorLabels
spec:
serviceAccountName: external-dns
containers:
- name: external-dns
image: registry.k8s.io/external-dns/external-dns:v0.14.2
resources:
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 2m
memory: 35Mi
envFrom:
- secretRef:
name: tsig-secret
args:
- --log-level=debug
- --events
- --registry=noop
- --provider=rfc2136
- --source=service
- --source=crd
- --domain-filter=k6.ee
- --rfc2136-tsig-axfr
- --rfc2136-host=172.20.0.2
- --rfc2136-port=53
- --rfc2136-zone=k6.ee
- --rfc2136-tsig-keyname=readwrite
- --rfc2136-tsig-secret-alg=hmac-sha512
- --rfc2136-tsig-secret=$(TSIG_SECRET)
# https://github.com/kubernetes-sigs/external-dns/issues/2446
---
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
name: k6
namespace: bind
spec:
endpoints:
- dnsName: k6.ee
recordTTL: 300
recordType: SOA
targets:
- "ns1.k-space.ee. hostmaster.k-space.ee. (1 300 300 300 300)"
- dnsName: k6.ee
recordTTL: 300
recordType: NS
targets:
- ns1.k-space.ee
- ns2.k-space.ee
- dnsName: ns1.k-space.ee
recordTTL: 300
recordType: A
targets:
- 193.40.103.2
- dnsName: ns2.k-space.ee
recordTTL: 300
recordType: A
targets:
- 62.65.250.2
- dnsName: k-space.ee
recordTTL: 300
recordType: MX
targets:
- 10 mail.k-space.ee

View File

@ -1,75 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: external-dns-kspace
namespace: bind
spec:
revisionHistoryLimit: 0
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: external-dns
domain: kspace.ee
template:
metadata:
labels: *selectorLabels
spec:
serviceAccountName: external-dns
containers:
- name: external-dns
image: registry.k8s.io/external-dns/external-dns:v0.14.2
resources:
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 2m
memory: 35Mi
envFrom:
- secretRef:
name: tsig-secret
args:
- --events
- --registry=noop
- --provider=rfc2136
- --source=ingress
- --source=service
- --source=crd
- --domain-filter=kspace.ee
- --rfc2136-tsig-axfr
- --rfc2136-host=172.20.0.2
- --rfc2136-port=53
- --rfc2136-zone=kspace.ee
- --rfc2136-tsig-keyname=readwrite
- --rfc2136-tsig-secret-alg=hmac-sha512
- --rfc2136-tsig-secret=$(TSIG_SECRET)
# https://github.com/kubernetes-sigs/external-dns/issues/2446
---
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
name: kspace
namespace: bind
spec:
endpoints:
- dnsName: kspace.ee
recordTTL: 300
recordType: SOA
targets:
- "ns1.k-space.ee. hostmaster.k-space.ee. (1 300 300 300 300)"
- dnsName: kspace.ee
recordTTL: 300
recordType: NS
targets:
- ns1.k-space.ee
- ns2.k-space.ee
- dnsName: ns1.k-space.ee
recordTTL: 300
recordType: A
targets:
- 193.40.103.2
- dnsName: ns2.k-space.ee
recordTTL: 300
recordType: A
targets:
- 62.65.250.2

View File

@ -1,60 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: external-dns
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- pods
- nodes
verbs:
- get
- watch
- list
- apiGroups:
- extensions
- networking.k8s.io
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- externaldns.k8s.io
resources:
- dnsendpoints
verbs:
- get
- watch
- list
- apiGroups:
- externaldns.k8s.io
resources:
- dnsendpoints/status
verbs:
- update
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: external-dns
namespace: bind
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: external-dns-viewer
namespace: bind
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: external-dns
subjects:
- kind: ServiceAccount
name: external-dns
namespace: bind

View File

@ -1,87 +1,29 @@
# Cameras
Camtiler is the umbrella name for our homegrown camera surveillance system.
To apply changes:
Everything besides [Camera](#camera)s is deployed with Kubernetes.
## Components
![cameras.graphviz.svg](cameras.graphviz.svg)
<!-- Manually rendered with https://dreampuf.github.io/GraphvizOnline
digraph G {
"camera-operator" -> "camera-motion-detect" [label="deploys"]
"camera-tiler" -> "cam.k-space.ee/tiled"
camera -> "camera-tiler"
camera -> "camera-motion-detect" -> mongo
"camera-motion-detect" -> "Minio S3"
"cam.k-space.ee" -> mongo [label="queries events", decorate=true]
mongo -> "camtiler-event-broker" [label="transforms object to add (signed) URL to S3", ]
"camtiler-event-broker" -> "cam.k-space.ee"
"Minio S3" -> "cam.k-space.ee" [label="using signed URL from camtiler-event-broker", decorate=true]
camera [label="📸 camera"]
}
-->
### 📸 Camera
Cameras are listed in [application.yml](application.yml) as `kind: Camera`; a minimal example follows the host list below.
Two types of camera hosts:
- GL-AR150 with [openwrt-camera-images](https://git.k-space.ee/k-space/openwrt-camera-image).
- [Doors](https://wiki.k-space.ee/e/en/hosting/doors) (Raspberry Pi) with mjpg-streamer.
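A sketch of such a resource, mirroring the manifests later in this file (hypothetical camera name and host):
```
---
apiVersion: k-space.ee/v1alpha1
kind: Camera
metadata:
  name: foobar
spec:
  target: http://user@foobar.cam.k-space.ee:8080/?action=stream
  secretRef: camera-secrets
```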
### camera-tiler (cam.k-space.ee/tiled)
Out-of-band; connects to cameras and streams to the web browser.
One instance per camera.
#### camera-operator
Functionally the same as a Kubernetes deployment for camera-tiler.
Operator/deployer for camera-tiler.
### camera-motion-detect
Connects to cameras, on motion writes events to Mongo and frames to S3.
### cam.k-space.ee (logmower)
Fetches motion-detect events from mongo. Fetches referenced images from S3 (minio).
#### camtiler-event-broker
MitM between motion-detect -> mongo. Appends S3 URLs to the response.
## Kubernetes commands
Apply changes:
```
kubectl apply -n camtiler \
-f application.yml \
-f minio.yml \
-f mongoexpress.yml \
-f mongodb-support.yml \
-f camera-tiler.yml \
-f logmower.yml \
-f ingress.yml \
-f network-policies.yml \
-f networkpolicy-base.yml
kubectl apply -n camtiler -f application.yml -f persistence.yml -f mongoexpress.yml -f mongodb-support.yml -f networkpolicy-base.yml -f minio-support.yml
```
Deploy changes:
To deploy changes:
```
kubectl -n camtiler rollout restart deployment.apps/camtiler
```
Initialize secrets:
To initialize secrets:
```
kubectl create secret generic -n camtiler mongodb-application-readwrite-password --from-literal="password=$(cat /dev/urandom | base64 | head -c 30)"
kubectl create secret generic -n camtiler mongodb-application-readonly-password --from-literal="password=$(cat /dev/urandom | base64 | head -c 30)"
kubectl create secret generic -n camtiler minio-secrets \
kubectl create secret generic -n camtiler minio-secret \
--from-literal=accesskey=application \
--from-literal=secretkey=$(cat /dev/urandom | base64 | head -c 30)
kubectl create secret generic -n camtiler minio-env-configuration \
--from-literal="MINIO_BROWSER=off" \
--from-literal="MINIO_ROOT_USER=root" \
--from-literal="MINIO_ROOT_PASSWORD=$(cat /dev/urandom | base64 | head -c 30)"
--from-literal="MINIO_ROOT_PASSWORD=$(cat /dev/urandom | base64 | head -c 30)" \
--from-literal="MINIO_STORAGE_CLASS_STANDARD=EC:4"
kubectl -n camtiler create secret generic camera-secrets \
--from-literal=username=... \
--from-literal=password=...
```
Restart all deployments:
```
for j in $(kubectl get deployments -n camtiler -o name); do kubectl rollout restart -n camtiler $j; done
```

View File

@ -1,11 +1,396 @@
---
apiVersion: codemowers.cloud/v1beta1
kind: MinioBucketClaim
apiVersion: apps/v1
kind: Deployment
metadata:
name: camtiler
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
spec:
capacity: 150Gi
class: dedicated
revisionHistoryLimit: 0
replicas: 2
selector:
matchLabels:
app.kubernetes.io/name: camtiler
template:
metadata:
labels:
app.kubernetes.io/name: camtiler
component: camtiler
spec:
serviceAccountName: camtiler
containers:
- name: camtiler
image: harbor.k-space.ee/k-space/camera-tiler:latest
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
ports:
- containerPort: 5001
name: "http"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: log-viewer-frontend
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
spec:
revisionHistoryLimit: 0
replicas: 2
selector:
matchLabels:
app.kubernetes.io/name: log-viewer-frontend
template:
metadata:
labels:
app.kubernetes.io/name: log-viewer-frontend
spec:
containers:
- name: log-viewer-frontend
image: harbor.k-space.ee/k-space/log-viewer-frontend:latest
# securityContext:
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: log-viewer-backend
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
spec:
revisionHistoryLimit: 0
replicas: 3
selector:
matchLabels:
app.kubernetes.io/name: log-viewer-backend
template:
metadata:
labels:
app.kubernetes.io/name: log-viewer-backend
spec:
containers:
- name: log-viewer-backend
image: harbor.k-space.ee/k-space/log-viewer:latest
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
env:
- name: MONGO_URI
valueFrom:
secretKeyRef:
name: mongodb-application-readwrite
key: connectionString.standard
- name: MINIO_BUCKET
value: application
- name: MINIO_HOSTNAME
value: cams-s3.k-space.ee
- name: MINIO_PORT
value: "443"
- name: MINIO_SCHEME
value: "https"
- name: MINIO_SECRET_KEY
valueFrom:
secretKeyRef:
name: minio-secret
key: secretkey
- name: MINIO_ACCESS_KEY
valueFrom:
secretKeyRef:
name: minio-secret
key: accesskey
---
apiVersion: v1
kind: Service
metadata:
name: log-viewer-frontend
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: log-viewer-frontend
ports:
- protocol: TCP
port: 3003
---
apiVersion: v1
kind: Service
metadata:
name: log-viewer-backend
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: log-viewer-backend
ports:
- protocol: TCP
port: 3002
---
apiVersion: v1
kind: Service
metadata:
name: camtiler
labels:
component: camtiler
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: camtiler
component: camtiler
ports:
- protocol: TCP
port: 5001
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: camtiler
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: camtiler
rules:
- apiGroups:
- ""
resources:
- services
verbs:
- list
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: camtiler
subjects:
- kind: ServiceAccount
name: camtiler
apiGroup: ""
roleRef:
kind: Role
name: camtiler
apiGroup: ""
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: camtiler
annotations:
kubernetes.io/ingress.class: traefik
# Following specifies the certificate issuer defined in
# ../cert-manager/issuer.yml
# This is where the HTTPS certificates for the
# `tls:` section below are obtained from
cert-manager.io/cluster-issuer: default
# This tells Traefik this Ingress object is associated with the
# https:// entrypoint
# Global http:// to https:// redirect is enabled in
# ../traefik/values.yml using `globalArguments`
traefik.ingress.kubernetes.io/router.entrypoints: websecure
# Following enables Authelia intercepting middleware
# which makes sure user is authenticated and then
# proceeds to inject Remote-User header for the application
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
# Following tells external-dns to add CNAME entry which makes
# cams.k-space.ee point to same IP address as traefik.k-space.ee
# The A record for traefik.k-space.ee is created via annotation
# added in ../traefik/ingress.yml
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
rules:
- host: cams.k-space.ee
http:
paths:
- pathType: Prefix
path: "/tiled"
backend:
service:
name: camtiler
port:
number: 5001
- pathType: Prefix
path: "/events"
backend:
service:
name: log-viewer-backend
port:
number: 3002
- pathType: Prefix
path: "/"
backend:
service:
name: log-viewer-frontend
port:
number: 3003
tls:
- hosts:
- cams.k-space.ee
secretName: camtiler-tls
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: camera-motion-detect
spec:
podSelector:
matchLabels:
component: camdetect
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
component: camtiler
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: prometheus-operator
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
egress:
- to:
- ipBlock:
# Permit access to cameras outside the cluster
cidr: 100.102.0.0/16
- to:
- podSelector:
matchLabels:
app: mongodb-svc
ports:
- port: 27017
- to:
- podSelector:
matchLabels:
v1.min.io/tenant: minio
ports:
- port: 9000
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: camera-tiler
spec:
podSelector:
matchLabels:
component: camtiler
policyTypes:
- Ingress
- Egress
egress:
- to:
- podSelector:
matchLabels:
component: camdetect
ports:
- port: 5000
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: prometheus-operator
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: log-viewer-backend
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: log-viewer-backend
policyTypes:
- Ingress
- Egress
egress:
- to:
- podSelector:
matchLabels:
app: mongodb-svc
- to:
# Minio access via Traefik's public endpoint
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: log-viewer-frontend
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: log-viewer-frontend
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: minio
annotations:
kubernetes.io/ingress.class: traefik
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
rules:
- host: cams-s3.k-space.ee
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: minio
port:
number: 80
tls:
- hosts:
- cams-s3.k-space.ee
secretName: cams-s3-tls
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
@ -97,13 +482,12 @@ spec:
metadata:
name: foobar
labels:
app.kubernetes.io/name: foobar
component: camera-motion-detect
component: camdetect
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: foobar
component: camera-motion-detect
component: camdetect
ports:
- protocol: TCP
port: 80
@ -113,16 +497,19 @@ spec:
kind: Deployment
metadata:
name: camera-foobar
# Make sure keel.sh pulls updates for this deployment
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
spec:
revisionHistoryLimit: 0
replicas: 1
# Make sure we do not congest the network during rollout
strategy:
type: RollingUpdate
rollingUpdate:
# Swap following two with replicas: 2
maxSurge: 1
maxUnavailable: 0
maxSurge: 0
maxUnavailable: 1
selector:
matchLabels:
app.kubernetes.io/name: foobar
@ -130,25 +517,18 @@ spec:
metadata:
labels:
app.kubernetes.io/name: foobar
component: camera-motion-detect
component: camdetect
spec:
containers:
- name: camera-motion-detect
- name: camdetect
image: harbor.k-space.ee/k-space/camera-motion-detect:latest
startupProbe:
httpGet:
path: /healthz
port: 5000
initialDelaySeconds: 2
periodSeconds: 180
timeoutSeconds: 60
readinessProbe:
httpGet:
path: /readyz
port: 5000
initialDelaySeconds: 60
periodSeconds: 60
timeoutSeconds: 5
initialDelaySeconds: 10
periodSeconds: 180
timeoutSeconds: 60
ports:
- containerPort: 5000
name: "http"
@ -158,7 +538,7 @@ spec:
cpu: "200m"
limits:
memory: "256Mi"
cpu: "4000m"
cpu: "1"
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
@ -170,25 +550,9 @@ spec:
- name: SOURCE_NAME
value: foobar
- name: S3_BUCKET_NAME
valueFrom:
secretKeyRef:
name: miniobucket-camtiler-owner-secrets
key: BUCKET_NAME
value: application
- name: S3_ENDPOINT_URL
valueFrom:
secretKeyRef:
name: miniobucket-camtiler-owner-secrets
key: AWS_S3_ENDPOINT_URL
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: miniobucket-camtiler-owner-secrets
key: AWS_SECRET_ACCESS_KEY
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: miniobucket-camtiler-owner-secrets
key: AWS_ACCESS_KEY_ID
value: http://minio
- name: BASIC_AUTH_PASSWORD
valueFrom:
secretKeyRef:
@ -199,6 +563,16 @@ spec:
secretKeyRef:
name: mongodb-application-readwrite
key: connectionString.standard
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: minio-secret
key: secretkey
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: minio-secret
key: accesskey
# Make sure 2+ pods of same camera are scheduled on different hosts
affinity:
@ -206,21 +580,32 @@ spec:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
- key: app
operator: In
values:
- foobar
topologyKey: topology.kubernetes.io/zone
topologyKey: kubernetes.io/hostname
# Make sure camera deployments are spread over workers
topologySpreadConstraints:
- maxSkew: 1
topologyKey: topology.kubernetes.io/zone
topologyKey: kubernetes.io/hostname
whenUnsatisfiable: DoNotSchedule
labelSelector:
matchLabels:
app.kubernetes.io/name: foobar
component: camera-motion-detect
component: camdetect
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: camtiler
spec:
selector: {}
podMetricsEndpoints:
- port: http
podTargetLabels:
- app.kubernetes.io/name
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
@ -231,21 +616,21 @@ spec:
- name: cameras
rules:
- alert: CameraLost
expr: rate(camtiler_frames_total{stage="downloaded"}[1m]) < 1
expr: rate(camdetect_rx_frames_total[2m]) < 1
for: 2m
labels:
severity: warning
annotations:
summary: Camera feed stopped
- alert: CameraServerRoomMotion
expr: rate(camtiler_events_total{app_kubernetes_io_name="server-room"}[30m]) > 0
expr: camdetect_event_active {app="camdetect-server-room"} > 0
for: 1m
labels:
severity: warning
annotations:
summary: Motion was detected in server room
- alert: CameraSlowUploads
expr: camtiler_queue_frames{stage="upload"} > 10
expr: rate(camdetect_upload_dropped_frames_total[2m]) > 1
for: 5m
labels:
severity: warning
@ -253,22 +638,14 @@ spec:
summary: Motion detect snapshots are piling up and
not getting uploaded to S3
- alert: CameraSlowProcessing
expr: camtiler_queue_frames{stage="download"} > 10
expr: rate(camdetect_download_dropped_frames_total[2m]) > 1
for: 5m
labels:
severity: warning
annotations:
summary: Motion detection processing pipeline is not keeping up
with incoming frames
- alert: CameraResourcesThrottled
expr: sum by (pod) (rate(container_cpu_cfs_throttled_periods_total{namespace="camtiler"}[1m])) > 0
for: 5m
labels:
severity: warning
annotations:
summary: CPU limits are bottleneck
---
# Referenced/linked by README.md
apiVersion: k-space.ee/v1alpha1
kind: Camera
metadata:
@ -276,7 +653,6 @@ metadata:
spec:
target: http://user@workshop.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets
replicas: 1
---
apiVersion: k-space.ee/v1alpha1
kind: Camera
@ -285,7 +661,6 @@ metadata:
spec:
target: http://user@server-room.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets
replicas: 2
---
apiVersion: k-space.ee/v1alpha1
kind: Camera
@ -294,7 +669,6 @@ metadata:
spec:
target: http://user@printer.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets
replicas: 1
---
apiVersion: k-space.ee/v1alpha1
kind: Camera
@ -303,7 +677,6 @@ metadata:
spec:
target: http://user@chaos.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets
replicas: 1
---
apiVersion: k-space.ee/v1alpha1
kind: Camera
@ -312,7 +685,6 @@ metadata:
spec:
target: http://user@cyber.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets
replicas: 1
---
apiVersion: k-space.ee/v1alpha1
kind: Camera
@ -321,36 +693,19 @@ metadata:
spec:
target: http://user@kitchen.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets
replicas: 1
---
apiVersion: k-space.ee/v1alpha1
kind: Camera
metadata:
name: back-door
spec:
target: http://user@100.102.3.3:8080/?action=stream
target: http://user@back-door.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets
replicas: 1
---
apiVersion: k-space.ee/v1alpha1
kind: Camera
metadata:
name: ground-door
spec:
target: http://user@100.102.3.1:8080/?action=stream
target: http://user@ground-door.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets
replicas: 1
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: camera-motion-detect
spec:
selector:
matchLabels:
component: camera-motion-detect
podMetricsEndpoints:
- port: http
podTargetLabels:
- app.kubernetes.io/name
- component

View File

@ -1,98 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: camera-tiler
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
spec:
revisionHistoryLimit: 0
replicas: 2
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: camtiler
component: camera-tiler
template:
metadata:
labels: *selectorLabels
spec:
serviceAccountName: camera-tiler
containers:
- name: camera-tiler
image: harbor.k-space.ee/k-space/camera-tiler:latest
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
ports:
- containerPort: 5001
name: "http"
resources:
requests:
memory: "200Mi"
cpu: "100m"
limits:
memory: "500Mi"
cpu: "4000m"
---
apiVersion: v1
kind: Service
metadata:
name: camera-tiler
labels:
app.kubernetes.io/name: camtiler
component: camera-tiler
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: camtiler
component: camera-tiler
ports:
- protocol: TCP
port: 5001
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: camera-tiler
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: camera-tiler
rules:
- apiGroups:
- ""
resources:
- services
verbs:
- list
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: camera-tiler
subjects:
- kind: ServiceAccount
name: camera-tiler
apiGroup: ""
roleRef:
kind: Role
name: camera-tiler
apiGroup: ""
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: camtiler
spec:
selector:
matchLabels:
app.kubernetes.io/name: camtiler
component: camera-tiler
podMetricsEndpoints:
- port: http
podTargetLabels:
- app.kubernetes.io/name
- component

View File

@ -1,131 +0,0 @@
(cameras.graphviz.svg — 131 lines of generated SVG markup removed; it is the rendered form of the Graphviz component diagram included as a comment in the README above.)


View File

@ -1,85 +0,0 @@
---
apiVersion: codemowers.cloud/v1beta1
kind: OIDCMiddlewareClient
metadata:
name: sso
spec:
displayName: Cameras
uri: 'https://cam.k-space.ee/tiled'
allowedGroups:
- k-space:floor
- k-space:friends
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: camtiler
annotations:
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: camtiler-sso@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
external-dns.alpha.kubernetes.io/hostname: cams.k-space.ee,cam.k-space.ee
spec:
rules:
- host: cam.k-space.ee
http:
paths:
- pathType: Prefix
path: "/tiled"
backend:
service:
name: camera-tiler
port:
number: 5001
- pathType: Prefix
path: "/m"
backend:
service:
name: camera-tiler
port:
number: 5001
- pathType: Prefix
path: "/events"
backend:
service:
name: logmower-eventsource
port:
number: 3002
- pathType: Prefix
path: "/"
backend:
service:
name: logmower-frontend
port:
number: 8080
tls:
- hosts:
- "*.k-space.ee"
---
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: cams-redirect
spec:
redirectRegex:
regex: ^https://cams.k-space.ee/(.*)$
replacement: https://cam.k-space.ee/$1
permanent: true
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: cams
spec:
entryPoints:
- websecure
routes:
- match: Host(`cams.k-space.ee`)
kind: Rule
middlewares:
- name: cams-redirect
services:
- kind: TraefikService
name: api@internal

View File

@ -1,182 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: logmower-eventsource
spec:
revisionHistoryLimit: 0
replicas: 2
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: camtiler
component: logmower-eventsource
template:
metadata:
labels: *selectorLabels
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values:
- camtiler
- key: component
operator: In
values:
- logmower-eventsource
topologyKey: topology.kubernetes.io/zone
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
containers:
- name: logmower-eventsource
image: harbor.k-space.ee/k-space/logmower-eventsource
ports:
- containerPort: 3002
name: nodejs
env:
- name: MONGO_COLLECTION
value: eventlog
- name: MONGODB_HOST
valueFrom:
secretKeyRef:
name: mongodb-application-readonly
key: connectionString.standard
- name: BACKEND
value: 'camtiler'
- name: BACKEND_BROKER_URL
value: 'http://logmower-event-broker'
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: logmower-event-broker
spec:
revisionHistoryLimit: 0
replicas: 2
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: camtiler
component: logmower-event-broker
template:
metadata:
labels: *selectorLabels
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values:
- camtiler
- key: component
operator: In
values:
- logmower-event-broker
topologyKey: topology.kubernetes.io/zone
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
containers:
- name: logmower-event-broker
image: harbor.k-space.ee/k-space/camera-event-broker
ports:
- containerPort: 3000
env:
- name: MINIO_BUCKET
valueFrom:
secretKeyRef:
name: miniobucket-camtiler-owner-secrets
key: BUCKET_NAME
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: miniobucket-camtiler-owner-secrets
key: AWS_SECRET_ACCESS_KEY
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: miniobucket-camtiler-owner-secrets
key: AWS_ACCESS_KEY_ID
- name: MINIO_HOSTNAME
value: 'dedicated-5ee6428f-4cb5-4c2e-90b5-364668f515c2.minio-clusters.k-space.ee'
- name: MINIO_PORT
value: '443'
- name: MINIO_SCHEMA
value: 'https'
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: logmower-frontend
spec:
revisionHistoryLimit: 0
replicas: 2
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: camtiler
component: logmower-frontend
template:
metadata:
labels: *selectorLabels
spec:
containers:
- name: logmower-frontend
image: harbor.k-space.ee/k-space/logmower-frontend
ports:
- containerPort: 8080
name: http
---
apiVersion: v1
kind: Service
metadata:
name: logmower-frontend
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: camtiler
component: logmower-frontend
ports:
- protocol: TCP
port: 8080
---
apiVersion: v1
kind: Service
metadata:
name: logmower-eventsource
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: camtiler
component: logmower-eventsource
ports:
- protocol: TCP
port: 3002
---
apiVersion: v1
kind: Service
metadata:
name: logmower-event-broker
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: camtiler
component: logmower-event-broker
ports:
- protocol: TCP
port: 80
targetPort: 3000

1
camtiler/minio-support.yml Symbolic link
View File

@ -0,0 +1 @@
../shared/minio-support.yml

View File

@ -1,195 +0,0 @@
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: camera-motion-detect
spec:
podSelector:
matchLabels:
component: camera-motion-detect
policyTypes:
- Ingress
# - Egress # Something wrong with using minio-clusters as namespaceSelector.
ingress:
- from:
- podSelector:
matchLabels:
app.kubernetes.io/name: camtiler
component: camera-tiler
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
egress:
- to:
- ipBlock:
# Permit access to cameras outside the cluster
cidr: 100.102.0.0/16
- to:
- podSelector:
matchLabels:
app: mongodb-svc
ports:
- port: 27017
- to:
- podSelector:
matchLabels:
app.kubernetes.io/name: minio
ports:
- port: 9000
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: camera-tiler
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: camtiler
component: camera-tiler
policyTypes:
- Ingress
- Egress
egress:
- to:
- podSelector:
matchLabels:
component: camera-motion-detect
ports:
- port: 5000
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-eventsource
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: camtiler
component: logmower-eventsource
policyTypes:
- Ingress
# - Egress # Something wrong with using mongodb-svc as podSelector.
egress:
- to:
- podSelector:
matchLabels:
app: mongodb-svc
- podSelector:
matchLabels:
component: logmower-event-broker
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-event-broker
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: camtiler
component: logmower-event-broker
policyTypes:
- Ingress
- Egress
egress:
- to:
# Minio access via Traefik's public endpoint
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
ingress:
- from:
- podSelector:
matchLabels:
component: logmower-eventsource
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-frontend
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: camtiler
component: logmower-frontend
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
# Config drift: Added by ArgoCD
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: minio
spec:
egress:
- ports:
- port: http
protocol: TCP
to:
- podSelector:
matchLabels:
app.kubernetes.io/name: minio
ingress:
- from:
- podSelector: {}
ports:
- port: http
protocol: TCP
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
podSelector:
matchLabels:
app.kubernetes.io/name: minio
policyTypes:
- Ingress
- Egress

View File

@ -4,16 +4,12 @@ kind: MongoDBCommunity
metadata:
name: mongodb
spec:
agent:
logLevel: ERROR
maxLogFileDurationHours: 1
additionalMongodConfig:
systemLog:
quiet: true
members: 2
arbiters: 1
members: 3
type: ReplicaSet
version: "6.0.3"
version: "5.0.9"
security:
authentication:
modes: ["SCRAM"]
@ -31,7 +27,7 @@ spec:
passwordSecretRef:
name: mongodb-application-readonly-password
roles:
- name: read
- name: readOnly
db: application
scramCredentialsSecretName: mongodb-application-readonly
statefulSet:
@ -39,24 +35,6 @@ spec:
logLevel: WARN
template:
spec:
containers:
- name: mongod
resources:
requests:
cpu: 100m
memory: 512Mi
limits:
cpu: 500m
memory: 1Gi
volumeMounts:
- name: journal-volume
mountPath: /data/journal
- name: mongodb-agent
resources:
requests:
cpu: 1m
memory: 100Mi
limits: {}
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
@ -66,7 +44,7 @@ spec:
operator: In
values:
- mongodb-svc
topologyKey: topology.kubernetes.io/zone
topologyKey: kubernetes.io/hostname
nodeSelector:
dedicated: storage
tolerations:
@ -77,34 +55,76 @@ spec:
volumeClaimTemplates:
- metadata:
name: logs-volume
labels:
usecase: logs
spec:
storageClassName: mongo
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
- metadata:
name: journal-volume
labels:
usecase: journal
spec:
storageClassName: mongo
storageClassName: local-path
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storage: 512Mi
- metadata:
name: data-volume
labels:
usecase: data
spec:
storageClassName: mongo
storageClassName: local-path
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
---
apiVersion: minio.min.io/v2
kind: Tenant
metadata:
name: minio
annotations:
prometheus.io/path: /minio/prometheus/metrics
prometheus.io/port: "9000"
prometheus.io/scrape: "true"
spec:
credsSecret:
name: minio-secret
buckets:
- name: application
requestAutoCert: false
users:
- name: minio-user-0
pools:
- name: pool-0
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: v1.min.io/tenant
operator: In
values:
- minio
- key: v1.min.io/pool
operator: In
values:
- pool-0
topologyKey: kubernetes.io/hostname
resources:
requests:
cpu: '1'
memory: 512Mi
servers: 4
volumesPerServer: 1
volumeClaimTemplate:
metadata:
name: data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: '30Gi'
storageClassName: local-path
status: {}
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule

View File

@ -1 +0,0 @@
cert-manager.yaml

View File

@ -1,33 +1,18 @@
# cert-manager
`cert-manager` is used to obtain TLS certificates from Let's Encrypt.
It uses the DNS-01 challenge in conjunction with the Bind primary
at `ns1.k-space.ee`.
Refer to the [Bind primary Ansible playbook](https://git.k-space.ee/k-space/ansible/src/branch/main/authoritative-nameserver.yaml) and
[Bind namespace on Kubernetes cluster](https://git.k-space.ee/k-space/kube/src/branch/master/bind)
for more details.
# For user
Use the `Certificate` CRD of cert-manager; refer to the
[official documentation](https://cert-manager.io/docs/usage/certificate/).
To find usage examples in this repository use
`grep -r -A10 "^kind: Certificate" .`
# For administrator
Deployed with:
Added manifest with:
```
curl -L https://github.com/jetstack/cert-manager/releases/download/v1.15.1/cert-manager.yaml -O
kubectl apply -f cert-manager.yaml
curl -L https://github.com/jetstack/cert-manager/releases/download/v1.6.1/cert-manager.yaml -O
```
To update the issuer configuration or TSIG secret:
To update certificate issuer
```
kubectl apply -f default-issuer.yml
kubectl apply -f namespace.yml -f cert-manager.yaml
kubectl apply -f issuer.yml
kubectl -n cert-manager create secret generic tsig-secret \
--from-literal=TSIG_SECRET=<secret>
```
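
For quick reference, a minimal `Certificate` sketch; the name, namespace and hostname below are placeholders, while `default` refers to the ClusterIssuer defined in this repository:
```
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example        # placeholder
  namespace: example   # placeholder
spec:
  secretName: example-tls
  issuerRef:
    kind: ClusterIssuer
    name: default
  dnsNames:
    - example.k-space.ee  # placeholder
```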

File diff suppressed because it is too large

17329
cert-manager/cert-manager.yaml Normal file

File diff suppressed because it is too large

View File

@ -1,21 +0,0 @@
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: default
namespace: cert-manager
spec:
acme:
email: info@k-space.ee
server: https://acme-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: example-issuer-account-key
solvers:
- dns01:
rfc2136:
nameserver: 193.40.103.2
tsigKeyName: readwrite.
tsigAlgorithm: HMACSHA512
tsigSecretSecretRef:
name: tsig-secret
key: TSIG_SECRET

19
cert-manager/issuer.yml Normal file
View File

@ -0,0 +1,19 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: default
spec:
acme:
email: info@k-space.ee
server: https://acme-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: example-issuer-account-key
solvers:
- dns01:
rfc2136:
nameserver: 193.40.103.2
tsigKeyName: acme.
tsigAlgorithm: HMACSHA512
tsigSecretSecretRef:
name: tsig-secret
key: TSIG_SECRET

View File

@ -1,11 +1,12 @@
---
# AD/Samba group "Kubernetes Admins" members have full access
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-admins
subjects:
- kind: Group
name: "k-space:kubernetes:admins"
name: "Kubernetes Admins"
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole

View File

@ -1,8 +0,0 @@
# CloudNativePG
To deploy:
```
kubectl apply --server-side -f \
https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.24/releases/cnpg-1.24.1.yaml
```
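
Once the operator is running, databases are declared with the `Cluster` CRD. A minimal sketch with placeholder name and size; see the Discourse manifests in this repository for a real instance:
```
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: example-postgres  # placeholder
spec:
  instances: 1
  storage:
    size: 1Gi
```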

View File

@ -1,44 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: netshoot
spec:
replicas: 1
selector:
matchLabels:
app: netshoot
template:
metadata:
creationTimestamp: null
labels:
app: netshoot
spec:
containers:
- name: netshoot
image: nicolaka/netshoot
command:
- /bin/bash
args:
- '-c'
- while true; do ping localhost; sleep 60;done
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: Always
securityContext:
capabilities:
add:
- NET_ADMIN
- NET_RAW
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
securityContext: {}
schedulerName: default-scheduler
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 25%
maxSurge: 25%
revisionHistoryLimit: 10
progressDeadlineSeconds: 600

View File

@ -1,382 +0,0 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: discourse
annotations:
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
spec:
tls:
- hosts:
- "*.k-space.ee"
secretName:
rules:
- host: "discourse.k-space.ee"
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: discourse
port:
name: http
---
apiVersion: v1
kind: Service
metadata:
name: discourse
spec:
type: ClusterIP
ipFamilyPolicy: SingleStack
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
selector:
app.kubernetes.io/instance: discourse
app.kubernetes.io/name: discourse
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: discourse
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: discourse
annotations:
reloader.stakater.com/auto: "true"
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/instance: discourse
app.kubernetes.io/name: discourse
strategy:
type: Recreate
template:
metadata:
labels:
app.kubernetes.io/instance: discourse
app.kubernetes.io/name: discourse
spec:
serviceAccountName: discourse
securityContext:
fsGroup: 0
fsGroupChangePolicy: Always
initContainers:
containers:
- name: discourse
image: docker.io/bitnami/discourse:3.3.2-debian-12-r0
imagePullPolicy: "IfNotPresent"
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- CHOWN
- SYS_CHROOT
- FOWNER
- SETGID
- SETUID
- DAC_OVERRIDE
drop:
- ALL
privileged: false
readOnlyRootFilesystem: false
runAsGroup: 0
runAsNonRoot: false
runAsUser: 0
seLinuxOptions: {}
seccompProfile:
type: RuntimeDefault
env:
- name: BITNAMI_DEBUG
value: "true"
- name: DISCOURSE_USERNAME
valueFrom:
secretKeyRef:
name: discourse-password
key: username
- name: DISCOURSE_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-password
key: password
- name: DISCOURSE_PORT_NUMBER
value: "8080"
- name: DISCOURSE_EXTERNAL_HTTP_PORT_NUMBER
value: "80"
- name: DISCOURSE_DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-postgresql
key: password
- name: POSTGRESQL_CLIENT_CREATE_DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-postgres-superuser
key: password
- name: POSTGRESQL_CLIENT_POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-postgres-superuser
key: password
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-redis
key: redis-password
envFrom:
- configMapRef:
name: discourse
- secretRef:
name: discourse-email
ports:
- name: http
containerPort: 8080
protocol: TCP
livenessProbe:
tcpSocket:
port: http
initialDelaySeconds: 500
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
readinessProbe:
httpGet:
path: /srv/status
port: http
initialDelaySeconds: 100
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
resources:
limits:
cpu: "6.0"
ephemeral-storage: 2Gi
memory: 12288Mi
requests:
cpu: "1.0"
ephemeral-storage: 50Mi
memory: 3072Mi
volumeMounts:
- name: discourse-data
mountPath: /bitnami/discourse
subPath: discourse
- name: sidekiq
image: docker.io/bitnami/discourse:3.3.2-debian-12-r0
imagePullPolicy: "IfNotPresent"
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- CHOWN
- SYS_CHROOT
- FOWNER
- SETGID
- SETUID
- DAC_OVERRIDE
drop:
- ALL
privileged: false
readOnlyRootFilesystem: false
runAsGroup: 0
runAsNonRoot: false
runAsUser: 0
seLinuxOptions: {}
seccompProfile:
type: RuntimeDefault
command:
- /opt/bitnami/scripts/discourse/entrypoint.sh
args:
- /opt/bitnami/scripts/discourse-sidekiq/run.sh
env:
- name: BITNAMI_DEBUG
value: "true"
- name: DISCOURSE_USERNAME
valueFrom:
secretKeyRef:
name: discourse-password
key: username
- name: DISCOURSE_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-password
key: password
- name: DISCOURSE_DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-postgresql
key: password
- name: DISCOURSE_POSTGRESQL_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-postgres-superuser
key: password
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-redis
key: redis-password
envFrom:
- configMapRef:
name: discourse
- secretRef:
name: discourse-email
livenessProbe:
exec:
command: ["/bin/sh", "-c", "pgrep -f ^sidekiq"]
initialDelaySeconds: 500
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
readinessProbe:
exec:
command: ["/bin/sh", "-c", "pgrep -f ^sidekiq"]
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
resources:
limits:
cpu: 750m
ephemeral-storage: 2Gi
memory: 768Mi
requests:
cpu: 500m
ephemeral-storage: 50Mi
memory: 512Mi
volumeMounts:
- name: discourse-data
mountPath: /bitnami/discourse
subPath: discourse
volumes:
- name: discourse-data
persistentVolumeClaim:
claimName: discourse-data
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: discourse-data
namespace: discourse
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "3Gi"
storageClassName: "proxmox-nas"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: discourse
namespace: discourse
data:
DISCOURSE_HOST: "discourse.k-space.ee"
DISCOURSE_SKIP_INSTALL: "yes"
DISCOURSE_PRECOMPILE_ASSETS: "no"
DISCOURSE_SITE_NAME: "K-Space Discourse"
DISCOURSE_USERNAME: "k-space"
DISCOURSE_EMAIL: "dos4dev@k-space.ee"
DISCOURSE_REDIS_HOST: "discourse-redis"
DISCOURSE_REDIS_PORT_NUMBER: "6379"
DISCOURSE_DATABASE_HOST: "discourse-postgres-rw"
DISCOURSE_DATABASE_PORT_NUMBER: "5432"
DISCOURSE_DATABASE_NAME: "discourse"
DISCOURSE_DATABASE_USER: "discourse"
POSTGRESQL_CLIENT_DATABASE_HOST: "discourse-postgres-rw"
POSTGRESQL_CLIENT_DATABASE_PORT_NUMBER: "5432"
POSTGRESQL_CLIENT_POSTGRES_USER: "postgres"
POSTGRESQL_CLIENT_CREATE_DATABASE_NAME: "discourse"
POSTGRESQL_CLIENT_CREATE_DATABASE_EXTENSIONS: "hstore,pg_trgm"
---
apiVersion: codemowers.cloud/v1beta1
kind: OIDCClient
metadata:
name: discourse
namespace: discourse
spec:
displayName: Discourse
uri: https://discourse.k-space.ee
redirectUris:
- https://discourse.k-space.ee/auth/oidc/callback
allowedGroups:
- k-space:floor
- k-space:friends
grantTypes:
- authorization_code
- refresh_token
responseTypes:
- code
availableScopes:
- openid
- profile
pkce: false
---
apiVersion: codemowers.cloud/v1beta1
kind: SecretClaim
metadata:
name: discourse-redis
namespace: discourse
spec:
size: 32
mapping:
- key: redis-password
value: "%(plaintext)s"
- key: REDIS_URI
value: "redis://:%(plaintext)s@discourse-redis"
---
apiVersion: dragonflydb.io/v1alpha1
kind: Dragonfly
metadata:
name: discourse-redis
namespace: discourse
spec:
authentication:
passwordFromSecret:
key: redis-password
name: discourse-redis
replicas: 3
resources:
limits:
cpu: 1000m
memory: 1Gi
topologySpreadConstraints:
- maxSkew: 1
topologyKey: topology.kubernetes.io/zone
whenUnsatisfiable: DoNotSchedule
labelSelector:
matchLabels:
app: discourse-redis
app.kubernetes.io/part-of: dragonfly
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
name: discourse-postgres
namespace: discourse
spec:
instances: 1
enableSuperuserAccess: true
bootstrap:
initdb:
database: discourse
owner: discourse
secret:
name: discourse-postgresql
dataChecksums: true
encoding: 'UTF8'
storage:
size: 10Gi
storageClass: postgres

View File

@ -1,38 +0,0 @@
# Dragonfly Operator
Dragonfly operator is the preferred way to add Redis support to your application,
as it is a modern Go rewrite and supports high availability.
The following alternatives were considered, but are discouraged:
* Vanilla Redis without replication is unusable during pod reschedules or a Kubernetes worker outage
* Vanilla Redis' replication is clunky and there is no reliable Kubernetes operator
for running vanilla Redis
* KeyDB Cluster was unable to guarantee strong consistency
Note that vanilla Redis
[has changed its licensing policy](https://redis.io/blog/redis-adopts-dual-source-available-licensing/)
# For users
Refer to [official documentation on usage](https://www.dragonflydb.io/docs/getting-started/kubernetes-operator#create-a-dragonfly-instance-with-replicas)
For an example deployment see
[here](https://git.k-space.ee/k-space/kube/src/branch/master/passmower/dragonfly.yaml).
To find other instances in this repository use `grep -r "kind: Dragonfly"`.
Use storage class `redis` for persistent instances.
To achieve high availability use 2+ replicas with correctly configured
`topologySpreadConstraints`.
# For administrators
The operator was deployed with the following snippet:
```
kubectl apply -f https://raw.githubusercontent.com/dragonflydb/dragonfly-operator/v1.1.6/manifests/dragonfly-operator.yaml
```
To upgrade refer to
[github.com/dragonflydb/dragonfly-operator](https://github.com/dragonflydb/dragonfly-operator/releases),
bump the version and reapply.
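
A minimal sketch of an instance, mirroring the Discourse Redis deployment elsewhere in this repository; the name and namespace are placeholders:
```
---
apiVersion: dragonflydb.io/v1alpha1
kind: Dragonfly
metadata:
  name: example-redis  # placeholder
  namespace: example   # placeholder
spec:
  replicas: 2
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: example-redis
          app.kubernetes.io/part-of: dragonfly
```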

13
drone-execution/README.md Normal file
View File

@ -0,0 +1,13 @@
To deploy:
```
kubectl apply -n drone-execution -f application.yml
```
To bootstrap secrets:
```
kubectl create secret generic -n drone-execution application-secrets \
--from-literal=DRONE_RPC_SECRET=$(kubectl get secret -n drone application-secrets -o jsonpath="{.data.DRONE_RPC_SECRET}" | base64 -d) \
--from-literal=DRONE_SECRET_PLUGIN_TOKEN=$(cat /dev/urandom | base64 | head -c 30)
```
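
To verify the runner registered against the server after deploying, check its logs:
```
kubectl -n drone-execution logs deploy/drone-runner-kube
```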

View File

@ -0,0 +1,177 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: drone-runner-kube
---
apiVersion: v1
kind: ConfigMap
metadata:
name: application-config
data:
DRONE_DEBUG: "false"
DRONE_TRACE: "false"
DRONE_NAMESPACE_DEFAULT: "drone-execution"
DRONE_RPC_HOST: "drone.k-space.ee"
DRONE_RPC_PROTO: "https"
PLUGIN_MTU: "1300"
DRONE_SECRET_PLUGIN_ENDPOINT: "http://secrets:3000"
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: drone-runner-kube
namespace: "drone-execution"
labels:
app: drone-runner-kube
rules:
- apiGroups:
- ""
resources:
- secrets
verbs:
- create
- delete
- apiGroups:
- ""
resources:
- pods
- pods/log
verbs:
- get
- create
- delete
- list
- watch
- update
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: drone-runner-kube
namespace: drone-execution
labels:
app: drone-runner-kube
subjects:
- kind: ServiceAccount
name: drone-runner-kube
namespace: drone-execution
roleRef:
kind: Role
name: drone-runner-kube
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Service
metadata:
name: drone-runner-kube
labels:
app: drone-runner-kube
spec:
type: ClusterIP
ports:
- port: 3000
targetPort: http
protocol: TCP
name: http
selector:
app: drone-runner-kube
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: drone-runner-kube
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
keel.sh/pollSchedule: "@midnight"
spec:
replicas: 1
selector:
matchLabels:
app: drone-runner-kube
template:
metadata:
labels:
app: drone-runner-kube
spec:
serviceAccountName: drone-runner-kube
terminationGracePeriodSeconds: 3600
containers:
- name: server
securityContext:
{}
image: drone/drone-runner-kube
imagePullPolicy: Always
ports:
- name: http
containerPort: 3000
protocol: TCP
envFrom:
- configMapRef:
name: application-config
- secretRef:
name: application-secrets
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: drone-kubernetes-secrets
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
keel.sh/pollSchedule: "@midnight"
spec:
replicas: 1
selector:
matchLabels:
app: drone-kubernetes-secrets
template:
metadata:
labels:
app: drone-kubernetes-secrets
spec:
containers:
- name: secrets
image: drone/kubernetes-secrets
imagePullPolicy: Always
ports:
- containerPort: 3000
env:
- name: SECRET_KEY
valueFrom:
secretKeyRef:
name: application-secrets
key: DRONE_SECRET_PLUGIN_TOKEN
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: drone-kubernetes-secrets
spec:
podSelector:
matchLabels:
app: drone-kubernetes-secrets
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
app: drone-runner-kube
ports:
- port: 3000
---
# The following should block access to pods in other namespaces, but should permit
# Git checkout, pip install, talking to Traefik via its public IP etc.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: drone-runner-kube
spec:
podSelector: {}
policyTypes:
- Egress
egress:
- to:
- ipBlock:
cidr: 0.0.0.0/0

25
drone/.helmignore Normal file
View File

@ -0,0 +1,25 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
# Chart dirs/files
docs/
ci/

161
drone/README.md Normal file
View File

@ -0,0 +1,161 @@
# Deployment
To deploy:
```
kubectl apply -n drone -f application.yml
```
To bootstrap secrets:
```
kubectl create secret generic -n drone application-secrets \
--from-literal=DRONE_GITEA_CLIENT_ID=... \
--from-literal=DRONE_GITEA_CLIENT_SECRET=... \
--from-literal=DRONE_RPC_SECRET=$(cat /dev/urandom | base64 | head -c 30)
```
# Integrating with Docker registry
We use harbor.k-space.ee to host our own images.
Set up the robot account `robot$k-space+drone` in Harbor first.
In Drone associate the `docker_username` and `docker_password` secrets with the
`k-space` organization.
Instead of a click marathon you can also pull the CLI configuration for Drone
from https://drone.k-space.ee/account
```
drone orgsecret add k-space docker_username 'robot$k-space+drone'
drone orgsecret add k-space docker_password '...'
```
# Integrating with e-mail
To (re)set e-mail credentials:
```
drone orgsecret add k-space email_password '...'
```
To issue a build, hit the button in the Drone web interface or alternatively:
```
drone build create k-space/...
```
# Using templates
Templates unfortunately aren't pulled in from this Git repo.
The current `docker.yaml` template includes the following:
```
kind: pipeline
type: kubernetes
name: build-arm64
platform:
arch: arm64
os: linux
node_selector:
kubernetes.io/arch: arm64
tolerations:
- key: arch
operator: Equal
value: arm64
effect: NoSchedule
steps:
- name: submodules
image: alpine/git
commands:
- touch .gitmodules
- sed -i -e 's/git@git.k-space.ee:/https:\\/\\/git.k-space.ee\\//g' .gitmodules
- git submodule update --init --recursive
- echo "ENV GIT_COMMIT=$(git rev-parse HEAD)" >> Dockerfile
- echo "ENV GIT_COMMIT_TIMESTAMP=$(git log -1 --format=%cd --date=iso-strict)" >> Dockerfile
- cat Dockerfile
- name: docker
image: plugins/docker
settings:
repo: harbor.k-space.ee/${DRONE_REPO}
tags: latest-arm64
registry: harbor.k-space.ee
squash: true
experimental: true
mtu: 1300
username:
from_secret: docker_username
password:
from_secret: docker_password
---
kind: pipeline
type: kubernetes
name: build-amd64
platform:
arch: amd64
os: linux
node_selector:
kubernetes.io/arch: amd64
steps:
- name: submodules
image: alpine/git
commands:
- touch .gitmodules
- sed -i -e 's/git@git.k-space.ee:/https:\\/\\/git.k-space.ee\\//g' .gitmodules
- git submodule update --init --recursive
- echo "ENV GIT_COMMIT=$(git rev-parse HEAD)" >> Dockerfile
- echo "ENV GIT_COMMIT_TIMESTAMP=$(git log -1 --format=%cd --date=iso-strict)" >> Dockerfile
- cat Dockerfile
- name: docker
image: plugins/docker
settings:
repo: harbor.k-space.ee/${DRONE_REPO}
tags: latest-amd64
registry: harbor.k-space.ee
squash: true
experimental: true
mtu: 1300
storage_driver: vfs
username:
from_secret: docker_username
password:
from_secret: docker_password
---
kind: pipeline
type: kubernetes
name: manifest
steps:
- name: manifest
image: plugins/manifest
settings:
target: harbor.k-space.ee/${DRONE_REPO}:latest
template: harbor.k-space.ee/${DRONE_REPO}:latest-ARCH
platforms:
- linux/amd64
- linux/arm64
username:
from_secret: docker_username
password:
from_secret: docker_password
depends_on:
- build-amd64
- build-arm64
---
kind: pipeline
type: kubernetes
name: gitlint
steps:
- name: gitlint
image: harbor.k-space.ee/k-space/gitlint-bundle
# https://git.k-space.ee/k-space/gitlint-bundle
---
kind: pipeline
type: kubernetes
name: flake8
steps:
- name: flake8
image: harbor.k-space.ee/k-space/flake8-bundle
# https://git.k-space.ee/k-space/flake8-bundle
```
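
A repository would then reference the template from its `.drone.yml`; a sketch, assuming the template is registered on the Drone server under the name `docker.yaml`:
```
kind: template
load: docker.yaml
```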

106
drone/application.yml Normal file
View File

@ -0,0 +1,106 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: application-config
data:
DRONE_GITEA_SERVER: "https://git.k-space.ee"
DRONE_GIT_ALWAYS_AUTH: "false"
DRONE_PROMETHEUS_ANONYMOUS_ACCESS: "true"
DRONE_SERVER_HOST: "drone.k-space.ee"
DRONE_SERVER_PROTO: "https"
DRONE_USER_CREATE: "username:lauri,admin:true"
---
apiVersion: v1
kind: Service
metadata:
name: drone
spec:
type: ClusterIP
ports:
- port: 80
targetPort: http
protocol: TCP
name: http
selector:
app: drone
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: drone
annotations:
keel.sh/policy: minor
keel.sh/trigger: poll
keel.sh/pollSchedule: "@midnight"
spec:
serviceName: drone
replicas: 1
selector:
matchLabels:
app: drone
template:
metadata:
labels:
app: drone
spec:
automountServiceAccountToken: false
securityContext:
{}
containers:
- name: server
securityContext:
{}
image: drone/drone:2
ports:
- name: http
containerPort: 80
protocol: TCP
livenessProbe:
httpGet:
path: /
port: http
envFrom:
- secretRef:
name: application-secrets
- configMapRef:
name: application-config
volumeMounts:
- name: drone-data
mountPath: /data
volumeClaimTemplates:
- metadata:
name: drone-data
spec:
storageClassName: longhorn
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: drone
annotations:
cert-manager.io/cluster-issuer: default
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
spec:
tls:
- hosts:
- "drone.k-space.ee"
secretName: drone-tls
rules:
- host: "drone.k-space.ee"
http:
paths:
- pathType: Prefix
path: /
backend:
service:
name: drone
port:
number: 80

View File

@ -1,2 +0,0 @@
crds.yaml
operator.yaml

View File

@ -1,7 +1,7 @@
# elastic-operator
```
wget https://download.elastic.co/downloads/eck/2.13.0/crds.yaml
wget https://download.elastic.co/downloads/eck/2.13.0/operator.yaml
wget https://download.elastic.co/downloads/eck/2.4.0/crds.yaml
wget https://download.elastic.co/downloads/eck/2.4.0/operator.yaml
kubectl apply -n elastic-system -f application.yml -f crds.yaml -f operator.yaml
```
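
After applying, operator and stack health can be checked with (`elastic` is the CRD category covering Elasticsearch, Kibana, Beats etc):
```
kubectl -n elastic-system get pods
kubectl -n elastic-system get elastic
```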

View File

@ -5,9 +5,11 @@ metadata:
name: filebeat
spec:
type: filebeat
version: 8.14.3
version: 8.4.1
elasticsearchRef:
name: elasticsearch
kibanaRef:
name: kibana
config:
logging:
level: warning
@ -27,9 +29,6 @@ spec:
- /var/log/containers/*${data.kubernetes.container.id}.log
daemonSet:
podTemplate:
metadata:
annotations:
co.elastic.logs/enabled: 'false'
spec:
serviceAccountName: filebeat
automountServiceAccountToken: true
@ -86,9 +85,11 @@ metadata:
name: filebeat-syslog
spec:
type: filebeat
version: 8.4.3
version: 8.4.1
elasticsearchRef:
name: elasticsearch
kibanaRef:
name: kibana
config:
logging:
level: warning
@ -108,9 +109,6 @@ spec:
deployment:
replicas: 2
podTemplate:
metadata:
annotations:
co.elastic.logs/enabled: 'false'
spec:
terminationGracePeriodSeconds: 30
containers:
@ -150,7 +148,7 @@ metadata:
spec:
type: LoadBalancer
externalTrafficPolicy: Local
loadBalancerIP: 172.21.51.4
loadBalancerIP: 172.20.51.4
ports:
- name: filebeat-syslog
port: 514
@ -169,7 +167,7 @@ metadata:
spec:
type: LoadBalancer
externalTrafficPolicy: Local
loadBalancerIP: 172.21.51.4
loadBalancerIP: 172.20.51.4
ports:
- name: filebeat-syslog
port: 514
@ -218,12 +216,10 @@ kind: Elasticsearch
metadata:
name: elasticsearch
spec:
version: 8.14.3
version: 8.4.1
nodeSets:
- name: default
count: 2
config:
node.roles: [ "data_content", "data_hot", "ingest", "master", "remote_cluster_client", "data_cold", "remote_cluster_client" ]
count: 1
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
@ -234,13 +230,17 @@ spec:
requests:
storage: 5Gi
storageClassName: longhorn
http:
tls:
selfSignedCertificate:
disabled: true
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
name: kibana
spec:
version: 8.14.3
version: 8.4.1
count: 1
elasticsearchRef:
name: elasticsearch
@ -252,13 +252,20 @@ spec:
server.publicBaseUrl: https://kibana.k-space.ee
xpack.reporting.enabled: false
xpack.apm.ui.enabled: false
xpack.security.authc.providers:
anonymous.anonymous1:
order: 0
credentials:
username: "elastic"
secureSettings:
- secretName: elasticsearch-es-elastic-user
entries:
- key: elastic
path: xpack.security.authc.providers.anonymous.anonymous1.credentials.password
podTemplate:
metadata:
annotations:
co.elastic.logs/enabled: 'false'
spec:
containers:
- name: kibana
- name: kibana
readinessProbe:
httpGet:
path: /app/home
@ -276,6 +283,7 @@ metadata:
name: kibana
annotations:
kubernetes.io/ingress.class: traefik
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
@ -294,7 +302,8 @@ spec:
number: 5601
tls:
- hosts:
- "*.k-space.ee"
- kibana.k-space.ee
secretName: kibana-tls
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
@ -317,28 +326,3 @@ spec:
app.kubernetes.io/name: elasticsearch-exporter
podMetricsEndpoints:
- port: exporter
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: kibana
annotations:
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: 'true'
spec:
tls:
- hosts:
- '*.k-space.ee'
rules:
- host: kibana.k-space.ee
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: kibana-kb-http
port:
number: 5601

File diff suppressed because it is too large

View File

@ -9,13 +9,12 @@ metadata:
# Source: eck-operator/templates/service-account.yaml
apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: true
metadata:
name: elastic-operator
namespace: elastic-system
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.13.0"
app.kubernetes.io/version: "2.4.0"
---
# Source: eck-operator/templates/webhook.yaml
apiVersion: v1
@ -25,7 +24,7 @@ metadata:
namespace: elastic-system
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.13.0"
app.kubernetes.io/version: "2.4.0"
---
# Source: eck-operator/templates/configmap.yaml
apiVersion: v1
@ -35,7 +34,7 @@ metadata:
namespace: elastic-system
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.13.0"
app.kubernetes.io/version: "2.4.0"
data:
eck.yaml: |-
log-verbosity: 0
@ -46,7 +45,6 @@ data:
ca-cert-rotate-before: 24h
cert-validity: 8760h
cert-rotate-before: 24h
disable-config-watch: false
exposed-node-labels: [topology.kubernetes.io/.*,failure-domain.beta.kubernetes.io/.*]
set-default-security-context: auto-detect
kube-client-timeout: 60s
@ -56,11 +54,7 @@ data:
validate-storage-class: true
enable-webhook: true
webhook-name: elastic-webhook.k8s.elastic.co
webhook-port: 9443
operator-namespace: elastic-system
enable-leader-election: true
elasticsearch-observation-interval: 10s
ubi-only: false
---
# Source: eck-operator/templates/cluster-roles.yaml
apiVersion: rbac.authorization.k8s.io/v1
@ -69,7 +63,7 @@ metadata:
name: elastic-operator
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.13.0"
app.kubernetes.io/version: "2.4.0"
rules:
- apiGroups:
- "authorization.k8s.io"
@ -157,19 +151,6 @@ rules:
- create
- update
- patch
- apiGroups:
- autoscaling.k8s.elastic.co
resources:
- elasticsearchautoscalers
- elasticsearchautoscalers/status
- elasticsearchautoscalers/finalizers # needed for ownerReferences with blockOwnerDeletion on OCP
verbs:
- get
- list
- watch
- create
- update
- patch
- apiGroups:
- kibana.k8s.elastic.co
resources:
@ -248,32 +229,6 @@ rules:
- create
- update
- patch
- apiGroups:
- stackconfigpolicy.k8s.elastic.co
resources:
- stackconfigpolicies
- stackconfigpolicies/status
- stackconfigpolicies/finalizers # needed for ownerReferences with blockOwnerDeletion on OCP
verbs:
- get
- list
- watch
- create
- update
- patch
- apiGroups:
- logstash.k8s.elastic.co
resources:
- logstashes
- logstashes/status
- logstashes/finalizers # needed for ownerReferences with blockOwnerDeletion on OCP
verbs:
- get
- list
- watch
- create
- update
- patch
- apiGroups:
- storage.k8s.io
resources:
@ -313,14 +268,11 @@ metadata:
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-admin: "true"
control-plane: elastic-operator
app.kubernetes.io/version: "2.13.0"
app.kubernetes.io/version: "2.4.0"
rules:
- apiGroups: ["elasticsearch.k8s.elastic.co"]
resources: ["elasticsearches"]
verbs: ["get", "list", "watch"]
- apiGroups: ["autoscaling.k8s.elastic.co"]
resources: ["elasticsearchautoscalers"]
verbs: ["get", "list", "watch"]
- apiGroups: ["apm.k8s.elastic.co"]
resources: ["apmservers"]
verbs: ["get", "list", "watch"]
@ -339,12 +291,6 @@ rules:
- apiGroups: ["maps.k8s.elastic.co"]
resources: ["elasticmapsservers"]
verbs: ["get", "list", "watch"]
- apiGroups: ["stackconfigpolicy.k8s.elastic.co"]
resources: ["stackconfigpolicies"]
verbs: ["get", "list", "watch"]
- apiGroups: ["logstash.k8s.elastic.co"]
resources: ["logstashes"]
verbs: ["get", "list", "watch"]
---
# Source: eck-operator/templates/cluster-roles.yaml
apiVersion: rbac.authorization.k8s.io/v1
@ -355,14 +301,11 @@ metadata:
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-admin: "true"
control-plane: elastic-operator
app.kubernetes.io/version: "2.13.0"
app.kubernetes.io/version: "2.4.0"
rules:
- apiGroups: ["elasticsearch.k8s.elastic.co"]
resources: ["elasticsearches"]
verbs: ["create", "delete", "deletecollection", "patch", "update"]
- apiGroups: ["autoscaling.k8s.elastic.co"]
resources: ["elasticsearchautoscalers"]
verbs: ["create", "delete", "deletecollection", "patch", "update"]
- apiGroups: ["apm.k8s.elastic.co"]
resources: ["apmservers"]
verbs: ["create", "delete", "deletecollection", "patch", "update"]
@ -381,12 +324,6 @@ rules:
- apiGroups: ["maps.k8s.elastic.co"]
resources: ["elasticmapsservers"]
verbs: ["create", "delete", "deletecollection", "patch", "update"]
- apiGroups: ["stackconfigpolicy.k8s.elastic.co"]
resources: ["stackconfigpolicies"]
verbs: ["create", "delete", "deletecollection", "patch", "update"]
- apiGroups: ["logstash.k8s.elastic.co"]
resources: ["logstashes"]
verbs: ["create", "delete", "deletecollection", "patch", "update"]
---
# Source: eck-operator/templates/role-bindings.yaml
apiVersion: rbac.authorization.k8s.io/v1
@ -395,7 +332,7 @@ metadata:
name: elastic-operator
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.13.0"
app.kubernetes.io/version: "2.4.0"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
@ -413,7 +350,7 @@ metadata:
namespace: elastic-system
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.13.0"
app.kubernetes.io/version: "2.4.0"
spec:
ports:
- name: https
@ -430,7 +367,7 @@ metadata:
namespace: elastic-system
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.13.0"
app.kubernetes.io/version: "2.4.0"
spec:
selector:
matchLabels:
@ -443,29 +380,21 @@ spec:
# Rename the fields "error" to "error.message" and "source" to "event.source"
# This is to avoid a conflict with the ECS "error" and "source" documents.
"co.elastic.logs/raw": "[{\"type\":\"container\",\"json.keys_under_root\":true,\"paths\":[\"/var/log/containers/*${data.kubernetes.container.id}.log\"],\"processors\":[{\"convert\":{\"mode\":\"rename\",\"ignore_missing\":true,\"fields\":[{\"from\":\"error\",\"to\":\"_error\"}]}},{\"convert\":{\"mode\":\"rename\",\"ignore_missing\":true,\"fields\":[{\"from\":\"_error\",\"to\":\"error.message\"}]}},{\"convert\":{\"mode\":\"rename\",\"ignore_missing\":true,\"fields\":[{\"from\":\"source\",\"to\":\"_source\"}]}},{\"convert\":{\"mode\":\"rename\",\"ignore_missing\":true,\"fields\":[{\"from\":\"_source\",\"to\":\"event.source\"}]}}]}]"
"checksum/config": 8b10381ca4067cf2c56aecc94c799473b09486202e146d2d7e5d6714f4c2e533
"checksum/config": a99a5f63f628a1ca8df440c12506cdfbf17827a1175dc5765b05f22f92b12b95
labels:
control-plane: elastic-operator
spec:
terminationGracePeriodSeconds: 10
serviceAccountName: elastic-operator
automountServiceAccountToken: true
securityContext:
runAsNonRoot: true
containers:
- image: "docker.elastic.co/eck/eck-operator:2.13.0"
- image: "docker.elastic.co/eck/eck-operator:2.4.0"
imagePullPolicy: IfNotPresent
name: manager
args:
- "manager"
- "--config=/conf/eck.yaml"
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
env:
- name: OPERATOR_NAMESPACE
valueFrom:
@ -511,9 +440,10 @@ metadata:
name: elastic-webhook.k8s.elastic.co
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.13.0"
app.kubernetes.io/version: "2.4.0"
webhooks:
- clientConfig:
caBundle: Cg==
service:
name: elastic-webhook-server
namespace: elastic-system
@ -521,7 +451,7 @@ webhooks:
failurePolicy: Ignore
name: elastic-agent-validation-v1alpha1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1, v1beta1]
admissionReviewVersions: [v1beta1]
sideEffects: None
rules:
- apiGroups:
@ -534,6 +464,7 @@ webhooks:
resources:
- agents
- clientConfig:
caBundle: Cg==
service:
name: elastic-webhook-server
namespace: elastic-system
@ -541,7 +472,7 @@ webhooks:
failurePolicy: Ignore
name: elastic-apm-validation-v1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1, v1beta1]
admissionReviewVersions: [v1beta1]
sideEffects: None
rules:
- apiGroups:
@ -554,6 +485,7 @@ webhooks:
resources:
- apmservers
- clientConfig:
caBundle: Cg==
service:
name: elastic-webhook-server
namespace: elastic-system
@ -561,7 +493,7 @@ webhooks:
failurePolicy: Ignore
name: elastic-apm-validation-v1beta1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1, v1beta1]
admissionReviewVersions: [v1beta1]
sideEffects: None
rules:
- apiGroups:
@ -574,6 +506,7 @@ webhooks:
resources:
- apmservers
- clientConfig:
caBundle: Cg==
service:
name: elastic-webhook-server
namespace: elastic-system
@ -581,7 +514,7 @@ webhooks:
failurePolicy: Ignore
name: elastic-beat-validation-v1beta1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1, v1beta1]
admissionReviewVersions: [v1beta1]
sideEffects: None
rules:
- apiGroups:
@ -594,6 +527,7 @@ webhooks:
resources:
- beats
- clientConfig:
caBundle: Cg==
service:
name: elastic-webhook-server
namespace: elastic-system
@ -601,7 +535,7 @@ webhooks:
failurePolicy: Ignore
name: elastic-ent-validation-v1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1, v1beta1]
admissionReviewVersions: [v1beta1]
sideEffects: None
rules:
- apiGroups:
@ -614,6 +548,7 @@ webhooks:
resources:
- enterprisesearches
- clientConfig:
caBundle: Cg==
service:
name: elastic-webhook-server
namespace: elastic-system
@ -621,7 +556,7 @@ webhooks:
failurePolicy: Ignore
name: elastic-ent-validation-v1beta1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1, v1beta1]
admissionReviewVersions: [v1beta1]
sideEffects: None
rules:
- apiGroups:
@ -634,6 +569,7 @@ webhooks:
resources:
- enterprisesearches
- clientConfig:
caBundle: Cg==
service:
name: elastic-webhook-server
namespace: elastic-system
@ -641,7 +577,7 @@ webhooks:
failurePolicy: Ignore
name: elastic-es-validation-v1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1, v1beta1]
admissionReviewVersions: [v1beta1]
sideEffects: None
rules:
- apiGroups:
@ -654,6 +590,7 @@ webhooks:
resources:
- elasticsearches
- clientConfig:
caBundle: Cg==
service:
name: elastic-webhook-server
namespace: elastic-system
@ -661,7 +598,7 @@ webhooks:
failurePolicy: Ignore
name: elastic-es-validation-v1beta1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1, v1beta1]
admissionReviewVersions: [v1beta1]
sideEffects: None
rules:
- apiGroups:
@ -674,26 +611,7 @@ webhooks:
resources:
- elasticsearches
- clientConfig:
service:
name: elastic-webhook-server
namespace: elastic-system
path: /validate-ems-k8s-elastic-co-v1alpha1-mapsservers
failurePolicy: Ignore
name: elastic-ems-validation-v1alpha1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1, v1beta1]
sideEffects: None
rules:
- apiGroups:
- maps.k8s.elastic.co
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- mapsservers
- clientConfig:
caBundle: Cg==
service:
name: elastic-webhook-server
namespace: elastic-system
@ -701,7 +619,7 @@ webhooks:
failurePolicy: Ignore
name: elastic-kb-validation-v1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1, v1beta1]
admissionReviewVersions: [v1beta1]
sideEffects: None
rules:
- apiGroups:
@ -714,6 +632,7 @@ webhooks:
resources:
- kibanas
- clientConfig:
caBundle: Cg==
service:
name: elastic-webhook-server
namespace: elastic-system
@ -721,7 +640,7 @@ webhooks:
failurePolicy: Ignore
name: elastic-kb-validation-v1beta1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1, v1beta1]
admissionReviewVersions: [v1beta1]
sideEffects: None
rules:
- apiGroups:
@ -733,64 +652,4 @@ webhooks:
- UPDATE
resources:
- kibanas
- clientConfig:
service:
name: elastic-webhook-server
namespace: elastic-system
path: /validate-autoscaling-k8s-elastic-co-v1alpha1-elasticsearchautoscaler
failurePolicy: Ignore
name: elastic-esa-validation-v1alpha1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1, v1beta1]
sideEffects: None
rules:
- apiGroups:
- autoscaling.k8s.elastic.co
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- elasticsearchautoscalers
- clientConfig:
service:
name: elastic-webhook-server
namespace: elastic-system
path: /validate-scp-k8s-elastic-co-v1alpha1-stackconfigpolicies
failurePolicy: Ignore
name: elastic-scp-validation-v1alpha1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1, v1beta1]
sideEffects: None
rules:
- apiGroups:
- stackconfigpolicy.k8s.elastic.co
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- stackconfigpolicies
- clientConfig:
service:
name: elastic-webhook-server
namespace: elastic-system
path: /validate-logstash-k8s-elastic-co-v1alpha1-logstash
failurePolicy: Ignore
name: elastic-logstash-validation-v1alpha1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1, v1beta1]
sideEffects: None
rules:
- apiGroups:
- logstash.k8s.elastic.co
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- logstashes

View File

@ -1,16 +1,12 @@
# Etherpad namespace
# For users
Etherpad is a simple, publicly available application for taking notes,
running at [pad.k-space.ee](https://pad.k-space.ee/).
# For administrators
This application is managed by [ArgoCD](https://argocd.k-space.ee/applications/argocd/etherpad)
In case ArgoCD is broken you can manually deploy changes with:
To apply changes:
```
kubectl apply -n etherpad -f application.yml
kubectl apply -n etherpad -f application.yml -f networkpolicy-base.yml
```
Initialize MySQL secrets:
```
kubectl create secret generic -n etherpad mariadb-secrets \
--from-literal=MYSQL_ROOT_PASSWORD=$(cat /dev/urandom | base64 | head -c 30) \
  --from-literal=MYSQL_PASSWORD=$(cat /dev/urandom | base64 | head -c 30)
```
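To confirm the secret landed:
```
kubectl -n etherpad get secret mariadb-secrets
```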

View File

@ -1,22 +1,16 @@
---
apiVersion: codemowers.io/v1alpha1
kind: OIDCGWMiddlewareClient
metadata:
name: sso
namespace: etherpad
spec:
displayName: Etherpad
uri: 'https://pad.k-space.ee/'
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: etherpad
namespace: etherpad
annotations:
keel.sh/policy: minor
keel.sh/trigger: poll
keel.sh/pollSchedule: "@midnight"
spec:
# Etherpad does NOT support running multiple replicas due to
# in-application caching https://github.com/ether/etherpad-lite/issues/3680
revisionHistoryLimit: 0
replicas: 1
serviceName: etherpad
selector:
@ -29,7 +23,7 @@ spec:
spec:
containers:
- name: etherpad
image: etherpad/etherpad:2
image: etherpad/etherpad:1
securityContext:
# Etherpad writes session key during start
readOnlyRootFilesystem: false
@ -38,8 +32,6 @@ spec:
ports:
- containerPort: 9001
env:
- name: MINIFY
value: 'false'
- name: DB_TYPE
value: mysql
- name: DB_HOST
@ -77,8 +69,8 @@ spec:
selector:
app: etherpad
ports:
- protocol: TCP
port: 9001
- protocol: TCP
port: 9001
---
apiVersion: networking.k8s.io/v1
kind: Ingress
@ -87,24 +79,26 @@ metadata:
namespace: etherpad
annotations:
kubernetes.io/ingress.class: traefik
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
rules:
- host: pad.k-space.ee
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: etherpad
port:
number: 9001
- host: pad.k-space.ee
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: etherpad
port:
number: 9001
tls:
- hosts:
- "*.k-space.ee"
- hosts:
- pad.k-space.ee
secretName: pad-tls
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
@ -116,20 +110,97 @@ spec:
matchLabels:
app: etherpad
policyTypes:
- Ingress
- Egress
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
ports:
- port: 9001
protocol: TCP
- protocol: TCP
port: 9001
egress:
- ports:
- port: 3306
protocol: TCP
to:
- to:
- ipBlock:
cidr: 172.20.36.1/32
ports:
- protocol: TCP
port: 3306
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: mysql-operator
spec:
podSelector:
matchLabels:
app: etherpad
policyTypes:
- Ingress
- Egress
ingress:
- # TODO: Not sure why mysql-operator needs to be able to connect
from:
- namespaceSelector:
matchExpressions:
- key: kubernetes.io/metadata.name
operator: In
values:
- mysql-operator
ports:
- protocol: TCP
port: 3306
- # Allow connecting from other MySQL pods in same namespace
from:
- podSelector:
matchLabels:
app.kubernetes.io/managed-by: mysql-operator
ports:
- protocol: TCP
port: 3306
egress:
- # Allow connecting to other MySQL pods in same namespace
to:
- podSelector:
matchLabels:
app.kubernetes.io/managed-by: mysql-operator
ports:
- protocol: TCP
port: 3306
---
apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
name: mysql-cluster
spec:
secretName: mysql-secrets
instances: 3
router:
instances: 1
tlsUseSelfSigned: true
datadirVolumeClaimTemplate:
storageClassName: local-path
accessModes:
- ReadWriteOnce
resources:
requests:
storage: "10Gi"
podSpec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/managed-by
operator: In
values:
- mysql-operator
topologyKey: kubernetes.io/hostname
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule

15
external-dns/README.md Normal file
View File

@ -0,0 +1,15 @@
Before applying, replace the secret with the actual one.
For debugging, add `- --log-level=debug`:
```
kubectl apply -n external-dns -f external-dns.yml
```
Insert TSIG secret:
```
kubectl -n external-dns create secret generic tsig-secret \
--from-literal=TSIG_SECRET=<secret>
```
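
With `--source=ingress` any Ingress carrying a `k-space.ee` host gets a record automatically; a sketch of the pattern used throughout this repository, hostname and service name as placeholders:
```
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example  # placeholder
  annotations:
    external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
  rules:
    - host: example.k-space.ee  # placeholder
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example  # placeholder
                port:
                  number: 80
```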

View File

@ -0,0 +1,84 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: external-dns
namespace: external-dns
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- pods
- nodes
verbs:
- get
- watch
- list
- apiGroups:
- extensions
- networking.k8s.io
resources:
- ingresses
verbs:
- get
- list
- watch
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: external-dns
namespace: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: external-dns-viewer
namespace: external-dns
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: external-dns
subjects:
- kind: ServiceAccount
name: external-dns
namespace: external-dns
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: external-dns
namespace: external-dns
spec:
revisionHistoryLimit: 0
selector:
matchLabels:
app: external-dns
template:
metadata:
labels:
app: external-dns
spec:
serviceAccountName: external-dns
containers:
- name: external-dns
image: k8s.gcr.io/external-dns/external-dns:v0.10.2
envFrom:
- secretRef:
name: tsig-secret
args:
- --registry=txt
- --txt-prefix=external-dns-
- --txt-owner-id=k8s
- --provider=rfc2136
- --source=ingress
- --source=service
- --domain-filter=k-space.ee
- --rfc2136-host=193.40.103.2
- --rfc2136-port=53
- --rfc2136-zone=k-space.ee
- --rfc2136-tsig-keyname=acme
- --rfc2136-tsig-secret-alg=hmac-sha512
- --rfc2136-tsig-secret=$(TSIG_SECRET)
# https://github.com/kubernetes-sigs/external-dns/issues/2446

View File

@ -1,30 +0,0 @@
# Freescout
# For user
Freescout scrapes the `info@k-space.ee` and `accounting@k-space.ee` mailboxes
from Wildduck and builds an issue tracker on top of the mailboxes.
The Freescout user interface is accessible at
[freescout.k-space.ee](https://freescout.k-space.ee/)
Note that Freescout notifications are sent to `@k-space.ee` mailboxes.
Forwarding to a personal mailbox, e.g. `@gmail.com`, can be configured via
[Wildduck webmail](https://webmail.k-space.ee/account/profile).
# For administrator
This application is managed by [ArgoCD](https://argocd.k-space.ee/applications/argocd/freescout)
Should ArgoCD be down, the manifests here can be applied with:
```
kubectl apply -n freescout -f application.yaml
```
If the Kubernetes cronjob for picking up mail has not been working for more than
3 days, the mails will not get synced by default. To manually synchronize
Freescout, head to the [Freescout system tools](https://freescout.k-space.ee/system/tools)
page, increase `Days` to an appropriate number and hit the `Fetch Emails` button.
Select `All` if some mails have been opened via Wildduck Webmail during the debugging process.

Some files were not shown because too many files have changed in this diff