Initial commit
All checks were successful
continuous-integration/drone Build is passing

Lauri Võsandi 2022-08-16 12:40:54 +03:00
commit 7c5cad55e1
122 changed files with 51731 additions and 0 deletions

.drone.yml Normal file
@@ -0,0 +1,10 @@
---
kind: pipeline
type: kubernetes
name: gitleaks
steps:
- name: gitleaks
image: zricethezav/gitleaks
commands:
- gitleaks detect --source=/drone/src

.gitignore vendored Normal file
@@ -0,0 +1,5 @@
*secrets.yml
*secret.yml
*.swp
*.save
*.1

CONTRIBUTORS.md Normal file
@@ -0,0 +1,12 @@
# Kubernetes cluster configuration contributors
The following people have helped in one way or another to make
this Git repository happen:
* Lauri Võsandi <lauri@k-space.ee>
* Madis Mägi <madis@k-space.ee>
* Marvin Martinson <marvin@k-space.ee>
* Nejc Povsse <nejcp@k-space.ee>
* Song Meo <songmeo@k-space.ee>
* Rasmus Kallas <rasmus@k-space.ee>
* Kristjan Kuusk <kkuusk@k-space.ee>

LICENSE.md Normal file
@@ -0,0 +1,20 @@
Copyright (c) 2012-2022 Lauri Võsandi and others
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

README.md Normal file
@@ -0,0 +1,258 @@
# Kubernetes cluster manifests
## Introduction
These are the Kubernetes manifests of the services running on k-space.ee domains:
- [Authelia](https://auth.k-space.ee) for authentication
- [Drone.io](https://drone.k-space.ee) for building Docker images
- [Harbor](https://harbor.k-space.ee) for hosting Docker images
- [ArgoCD](https://argocd.k-space.ee) for deploying Kubernetes manifests and
Helm charts into the cluster
- [camtiler](https://cams.k-space.ee) for cameras
- [Longhorn Dashboard](https://longhorn.k-space.ee) for administering
Longhorn storage
- [Kubernetes Dashboard](https://kubernetes-dashboard.k-space.ee/) for read-only overview
of the Kubernetes cluster
- [Wildduck Webmail](https://webmail.k-space.ee/)
Most endpoints are protected by OIDC authentication or the Authelia SSO middleware.
## Cluster access
General discussion is happening in the `#kube` Slack channel.
For bootstrap access obtain `/etc/kubernetes/admin.conf` from one of the master
nodes and place it under `~/.kube/config` on your machine.
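For example (a sketch; substitute whichever master node you have access to):
```bash
mkdir -p ~/.kube
scp root@master1.kube.k-space.ee:/etc/kubernetes/admin.conf ~/.kube/config
```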
Once Authelia is working, OIDC access for others can be enabled by
running the following on the Kubernetes masters:
```bash
patch /etc/kubernetes/manifests/kube-apiserver.yaml - << EOF
@@ -23,6 +23,10 @@
- --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
- --etcd-servers=https://127.0.0.1:2379
+ - --oidc-issuer-url=https://auth.k-space.ee
+ - --oidc-client-id=kubelogin
+ - --oidc-username-claim=preferred_username
+ - --oidc-groups-claim=groups
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
- --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
EOF
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```
Afterwards, the following can be used to talk to the Kubernetes cluster using
OIDC credentials:
```bash
kubectl krew install oidc-login
mkdir -p ~/.kube
cat << EOF > ~/.kube/config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1EVXdNakEzTXpVMU1Wb1hEVE15TURReU9UQTNNelUxTVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS2J2CjY3UFlXVHJMc3ZCQTZuWHUvcm55SlVhNnppTnNWTVN6N2w4ekhxM2JuQnhqWVNPUDJhN1RXTnpUTmZDanZBWngKTmlNbXJya1hpb2dYQWpVVkhSUWZlYm81TFIrb0JBOTdLWlcrN01UMFVJRXBuWVVaaTdBRHlaS01vcEJFUXlMNwp1SlU5UDhnNUR1T29FRHZieGJSMXFuV1JZRXpteFNmSFpocllpMVA3bFd4emkxR243eGRETFZaMjZjNm0xR3Y1CnViRjZyaFBXK1JSVkhiQzFKakJGeTBwRXdhYlUvUTd0Z2dic0JQUjk5NVZvMktCeElBelRmbHhVanlYVkJ3MjEKU2d3ZGI1amlpemxEM0NSbVdZZ0ZrRzd0NTVZeGF3ZmpaQjh5bW4xYjhUVjkwN3dRcG8veU8zM3RaaEE3L3BFUwpBSDJYeDk5bkpMbFVGVUtSY1A4Q0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZKNnZKeVk1UlJ1aklQWGxIK2ZvU3g2QzFRT2RNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQ04zcGtCTVM3ekkrbUhvOWdTZQp6SzdXdjl3bXlCTVE5Q3crQXBSNnRBQXg2T1VIN0d1enc5TTV2bXNkYjkrYXBKMHBlZFB4SUg3YXZ1aG9SUXNMCkxqTzRSVm9BMG9aNDBZV3J3UStBR0dvdkZuaWNleXRNcFVSNEZjRXc0ZDRmcGl6V3d0TVNlRlRIUXR6WG84V2MKNFJGWC9xUXNVR1NWa01PaUcvcVVrSFpXQVgyckdhWXZ1Tkw2eHdSRnh5ZHpsRTFSUk56TkNvQzVpTXhjaVRNagpackEvK0pqVEFWU2FuNXZnODFOSmthZEphbmNPWmEwS3JEdkZzd1JJSG5CMGpMLzh3VmZXSTV6czZURU1VZUk1ClF6dU01QXUxUFZ4VXZJUGhlMHl6UXZjWDV5RlhnMkJGU3MzKzJBajlNcENWVTZNY2dSSTl5TTRicitFTUlHL0kKY0pjPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://master.kube.k-space.ee:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: oidc
name: default
current-context: default
kind: Config
preferences: {}
users:
- name: oidc
user:
exec:
apiVersion: client.authentication.k8s.io/v1beta1
args:
- oidc-login
- get-token
- --oidc-issuer-url=https://auth.k-space.ee
- --oidc-client-id=kubelogin
- --oidc-use-pkce
- --oidc-extra-scope=profile,email,groups
- --listen-address=127.0.0.1:27890
command: kubectl
env: null
provideClusterInfo: false
EOF
```
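To verify the OIDC flow, run any read-only command; the first invocation should
open a browser window for the Authelia login:
```bash
kubectl get nodes
```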
For access control mapping see [cluster-role-bindings.yml](cluster-role-bindings.yml).
## Technology mapping
Our self-hosted Kubernetes stack compared to AWS-based deployments:
| Hipster startup | Self-hosted hackerspace | Purpose |
|-----------------|-------------------------------------|---------------------------------------------------------------------|
| AWS EC2 | Proxmox | Virtualization layer |
| AWS EKS | kubeadm | Provision Kubernetes master nodes |
| AWS EBS | Longhorn | Block storage for arbitrary applications needing persistent storage |
| AWS NLB | MetalLB | L2/L3 level load balancing |
| AWS ALB | Traefik | Reverse proxy also known as ingress controller in Kubernetes jargon |
| AWS ECR | Harbor | Docker registry |
| AWS DocumentDB | MongoDB | NoSQL database |
| AWS S3 | Minio | Object storage |
| GitHub OAuth2 | Samba (Active Directory compatible) | Source of truth for authentication and authorization |
| Dex | Authelia | ACL mapping and OIDC provider which integrates with GitHub/Samba |
| GitHub | Gitea | Source code management, issue tracking |
| GitHub Actions | Drone | Build Docker images |
| Gmail | Wildduck | E-mail |
| AWS Route53 | Bind and RFC2136 | DNS records and Let's Encrypt DNS validation |
| AWS VPC | Calico | Overlay network |
External dependencies running as classic virtual machines:
- Samba as Authelia's source of truth
- Bind as DNS server
## Adding applications
Deploy applications via [ArgoCD](https://argocd.k-space.ee).
We use Traefik with Authelia for Ingress.
Where possible and applicable, applications should use `Remote-User`
authentication; this keeps the application from being exposed
unauthenticated on the public Internet.
Otherwise use OpenID Connect for authentication;
see ArgoCD itself as an example of how that is done.
See `kspace-camtiler/ingress.yml` for a commented Ingress example.
Note that we do not use IngressRoute objects because they don't
support `external-dns` out of the box.
Do NOT add nginx annotations; we use Traefik.
Do NOT manually add DNS records; they are added by `external-dns`.
Do NOT manually create Certificate objects;
these should be handled by the `tls:` section in Ingress, as in the sketch below.
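A minimal Ingress sketch following these conventions (the `example` host and
service are hypothetical; the annotations mirror the ones used elsewhere in
this repository):
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  annotations:
    kubernetes.io/ingress.class: traefik
    cert-manager.io/cluster-issuer: default
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
    external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
  rules:
    - host: example.k-space.ee
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example
                port:
                  number: 80
  tls:
    - hosts:
        - example.k-space.ee
      secretName: example-tls
```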
## Cluster formation
Create Ubuntu 20.04 VMs on Proxmox with local storage.
Once the machines have booted up and you can reach them via SSH:
```bash
# Enable required kernel modules
cat > /etc/modules << EOF
overlay
br_netfilter
EOF
xargs -L 1 -t modprobe < /etc/modules
# Finetune sysctl:
cat > /etc/sysctl.d/99-k8s.conf << EOF
net.ipv4.conf.all.accept_redirects = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system
# Disable Ubuntu caching DNS resolver
systemctl disable systemd-resolved.service
systemctl stop systemd-resolved
rm -fv /etc/resolv.conf
cat > /etc/resolv.conf << EOF
nameserver 1.1.1.1
nameserver 8.8.8.8
EOF
# Disable multipathd as Longhorn handles that itself
systemctl mask multipathd
systemctl disable multipathd
systemctl stop multipathd
# Disable Snapcraft
systemctl mask snapd
systemctl disable snapd
systemctl stop snapd
# Permit root login
sed -i -e 's/PermitRootLogin no/PermitRootLogin without-password/' /etc/ssh/sshd_config
systemctl reload ssh
cat << EOF > /root/.ssh/authorized_keys
sk-ecdsa-sha2-nistp256@openssh.com AAAAInNrLWVjZHNhLXNoYTItbmlzdHAyNTZAb3BlbnNzaC5jb20AAAAIbmlzdHAyNTYAAABBBD4/e9SWYWYoNZMkkF+NirhbmHuUgjoCap42kAq0pLIXFwIqgVTCre03VPoChIwBClc8RspLKqr5W3j0fG8QwnQAAAAEc3NoOg== lauri@lauri-x13
EOF
userdel -f ubuntu
apt-get remove -yq cloud-init
```
Install packages; for Raspbian set `OS=Debian_11`:
```bash
OS=xUbuntu_20.04
VERSION=1.23
cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /
EOF
cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /
EOF
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers-cri-o.gpg add -
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -yqq apt-transport-https curl cri-o cri-o-runc kubelet=1.23.5-00 kubectl=1.23.5-00 kubeadm=1.23.5-00
sudo systemctl daemon-reload
sudo systemctl enable crio --now
apt-mark hold kubelet kubeadm kubectl
sed -i -e 's/unqualified-search-registries = .*/unqualified-search-registries = ["docker.io"]/' /etc/containers/registries.conf
```
On master:
```bash
kubeadm init --token-ttl=120m --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint "master.kube.k-space.ee:6443" --upload-certs --apiserver-cert-extra-sans master.kube.k-space.ee --node-name master1.kube.k-space.ee
```
For the `kubeadm join` command specify FQDN via `--node-name $(hostname -f)`.
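The full join command is printed by `kubeadm init`; its shape is roughly the
following (token and CA hash are placeholders):
```bash
kubeadm join master.kube.k-space.ee:6443 \
    --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --node-name $(hostname -f)
```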
After forming the cluster, add node labels and taints:
```bash
for j in $(seq 1 9); do
kubectl label nodes worker${j}.kube.k-space.ee node-role.kubernetes.io/worker=''
done
for j in $(seq 1 3); do
kubectl taint nodes mon${j}.kube.k-space.ee dedicated=monitoring:NoSchedule
kubectl label nodes mon${j}.kube.k-space.ee dedicated=monitoring
done
for j in $(seq 1 4); do
kubectl taint nodes storage${j}.kube.k-space.ee dedicated=storage:NoSchedule
kubectl label nodes storage${j}.kube.k-space.ee dedicated=storage
done
```
On Raspberry Pi you need to take additional steps:
* Manually enable cgroups by appending
  `cgroup_memory=1 cgroup_enable=memory` to `/boot/cmdline.txt`
* Disable swap with `swapoff -a; apt-get purge -y dphys-swapfile`
* For mounting Longhorn volumes on Raspbian install `open-iscsi`
For `arm64` nodes add a suitable taint to prevent scheduling non-multiarch images onto them:
```bash
kubectl taint nodes worker9.kube.k-space.ee arch=arm64:NoSchedule
```
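Multi-arch workloads can then opt in to those nodes with a matching
toleration; a sketch of the relevant pod spec fragment:
```yaml
spec:
  tolerations:
    - key: arch
      operator: Equal
      value: arm64
      effect: NoSchedule
```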

argocd/.gitignore vendored Normal file
@@ -0,0 +1,4 @@
argocd.yml
repo-credentials.yml
id_*
ssh_known_hosts

argocd/README.md Normal file
@@ -0,0 +1,59 @@
# Workflow
Most applications in our Kubernetes cluster are managed by ArgoCD.
# Deployment
To deploy ArgoCD:
```bash
helm repo add argo-cd https://argoproj.github.io/argo-helm
kubectl create secret -n argocd generic argocd-secret # Initialize empty secret for sessions
helm template -n argocd --release-name k6 argo-cd/argo-cd --include-crds -f values.yaml > argocd.yml
kubectl apply -f argocd.yml -n argocd
kubectl -n argocd rollout restart deployment/k6-argocd-redis
kubectl -n argocd rollout restart deployment/k6-argocd-repo-server
kubectl -n argocd rollout restart deployment/k6-argocd-server
kubectl -n argocd rollout restart deployment/k6-argocd-notifications-controller
kubectl -n argocd rollout restart statefulset/k6-argocd-application-controller
```
Note: refer to the Authelia README for the OIDC secret setup.
# Setting up Git secrets
Generate SSH key to access Gitea:
```
ssh-keygen -t ecdsa -f id_ecdsa -C argocd.k-space.ee -P ''
kubectl -n argocd create secret generic gitea-kube \
--from-literal=type=git \
--from-literal=url=git@git.k-space.ee:k-space/kube \
--from-file=sshPrivateKey=id_ecdsa
kubectl -n argocd create secret generic gitea-kube-staging \
--from-literal=type=git \
--from-literal=url=git@git.k-space.ee:k-space/kube-staging \
--from-file=sshPrivateKey=id_ecdsa
kubectl label -n argocd secret gitea-kube argocd.argoproj.io/secret-type=repository
kubectl label -n argocd secret gitea-kube-staging argocd.argoproj.io/secret-type=repository
rm -fv id_ecdsa
```
Have a Gitea admin reset the password for the user `argocd` and log in with that account.
Add the SSH key from the file `id_ecdsa.pub` for the user `argocd`.
Delete any other SSH keys associated with the Gitea user `argocd`.
# Adding applications
To add an application, make sure its manifest is placed as `application.yml` in
the relevant namespace directory, then run:
```
./update.sh
kubectl apply -n argocd -f applications --recursive
```
Do not manually add manifests under `applications/`; they are generated from `application.tpl` by `update.sh`.

argocd/application.tpl Normal file
@@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: foobar
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: foobar
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: foobar
syncPolicy:
syncOptions:
- CreateNamespace=true

argocd/applications/authelia.yml Normal file
@@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: authelia
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: authelia
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: authelia
syncPolicy:
syncOptions:
- CreateNamespace=true

argocd/applications/camtiler.yml Normal file
@@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: camtiler
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: camtiler
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: camtiler
syncPolicy:
syncOptions:
- CreateNamespace=true

argocd/applications/drone-execution.yml Normal file
@@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: drone-execution
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: drone-execution
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: drone-execution
syncPolicy:
syncOptions:
- CreateNamespace=true

argocd/applications/drone.yml Normal file
@@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: drone
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: drone
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: drone
syncPolicy:
syncOptions:
- CreateNamespace=true

argocd/applications/etherpad.yml Normal file
@@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: etherpad
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: etherpad
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: etherpad
syncPolicy:
syncOptions:
- CreateNamespace=true

argocd/applications/external-dns.yml Normal file
@@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: external-dns
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: external-dns
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: external-dns
syncPolicy:
syncOptions:
- CreateNamespace=true

argocd/applications/harbor.yml Normal file
@@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: harbor
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: harbor
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: harbor
syncPolicy:
syncOptions:
- CreateNamespace=true

argocd/applications/keel.yml Normal file
@@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: keel
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: keel
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: keel
syncPolicy:
syncOptions:
- CreateNamespace=true

argocd/applications/kubernetes-dashboard.yml Normal file
@@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: kubernetes-dashboard
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: kubernetes-dashboard
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: kubernetes-dashboard
syncPolicy:
syncOptions:
- CreateNamespace=true

argocd/applications/logging.yml Normal file
@@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: logging
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: logging
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: logging
syncPolicy:
syncOptions:
- CreateNamespace=true

argocd/applications/metallb-system.yml Normal file
@@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: metallb-system
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: metallb-system
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: metallb-system
syncPolicy:
syncOptions:
- CreateNamespace=true

argocd/applications/monitoring.yml Normal file
@@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: monitoring
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: monitoring
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: monitoring
syncPolicy:
syncOptions:
- CreateNamespace=true

argocd/applications/mysql-operator.yml Normal file
@@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: mysql-operator
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: mysql-operator
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: mysql-operator
syncPolicy:
syncOptions:
- CreateNamespace=true

argocd/applications/phpmyadmin.yml Normal file
@@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: phpmyadmin
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: phpmyadmin
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: phpmyadmin
syncPolicy:
syncOptions:
- CreateNamespace=true

argocd/applications/reloader.yml Normal file
@@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: reloader
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: reloader
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: reloader
syncPolicy:
syncOptions:
- CreateNamespace=true

argocd/applications/rosdump.yml Normal file
@@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: rosdump
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: rosdump
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: rosdump
syncPolicy:
syncOptions:
- CreateNamespace=true

argocd/applications/wildduck.yml Normal file
@@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: wildduck
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: wildduck
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: wildduck
syncPolicy:
syncOptions:
- CreateNamespace=true

argocd/update.sh Executable file
@@ -0,0 +1,9 @@
#!/bin/bash
# Run from the argocd/ directory. Regenerates applications/<name>.yml from
# application.tpl for every top-level directory in this repository that has
# an application.yml but no Helm values file.
path=$(dirname "$(dirname "$(realpath "$0")")")
for j in "$path"/*/application.yml; do
    app=$(dirname "$j")
    test -f "$app/values.yml" && continue
    test -f "$app/values.yaml" && continue
    appname=$(basename "$app")
    sed -e "s/foobar/$appname/g" application.tpl > "applications/$appname.yml"
done

argocd/values.yaml Normal file
@@ -0,0 +1,122 @@
global:
logLevel: warn
# We use Authelia OIDC instead of Dex
dex:
enabled: false
# Maybe one day switch to Redis HA?
redis-ha:
enabled: false
server:
# HTTPS is implemented by Traefik
extraArgs:
- --insecure
ingress:
enabled: true
annotations:
cert-manager.io/cluster-issuer: default
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
hosts:
- argocd.k-space.ee
tls:
- hosts:
- argocd.k-space.ee
secretName: argocd-server-tls
configEnabled: true
config:
admin.enabled: "false"
url: https://argocd.k-space.ee
application.instanceLabelKey: argocd.argoproj.io/instance
oidc.config: |
name: Authelia
issuer: https://auth.k-space.ee
clientID: argocd
cliClientID: argocd
clientSecret: $oidc.config.clientSecret
requestedIDTokenClaims:
groups:
essential: true
requestedScopes:
- openid
- profile
- email
- groups
resource.customizations: |
# https://github.com/argoproj/argo-cd/issues/1704
networking.k8s.io/Ingress:
health.lua: |
hs = {}
hs.status = "Healthy"
return hs
# Members of ArgoCD Admins group in AD/Samba are allowed to administer Argo
rbacConfig:
policy.default: role:readonly
policy.csv: |
# Map AD groups to ArgoCD roles
g, Developers, role:developers
g, ArgoCD Admins, role:admin
# Allow developers to read objects
p, role:developers, applications, get, */*, allow
p, role:developers, certificates, get, *, allow
p, role:developers, clusters, get, *, allow
p, role:developers, repositories, get, *, allow
p, role:developers, projects, get, *, allow
p, role:developers, accounts, get, *, allow
p, role:developers, gpgkeys, get, *, allow
p, role:developers, logs, get, */*, allow
p, role:developers, applications, restart, default/camtiler, allow
p, role:developers, applications, override, default/camtiler, allow
p, role:developers, applications, action/apps/Deployment/restart, default/camtiler, allow
p, role:developers, applications, sync, default/camtiler, allow
p, role:developers, applications, update, default/camtiler, allow
metrics:
enabled: true
service:
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "8083"
# We don't use ApplicationSet CRD-s (yet)
applicationSet:
enabled: false
repoServer:
metrics:
enabled: true
service:
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "8084"
notifications:
metrics:
enabled: true
service:
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9001"
controller:
metrics:
enabled: true
service:
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "8082"
configs:
secret:
createSecret: false
knownHosts:
data:
ssh_known_hosts: |
# Copy-pasted from `ssh-keyscan git.k-space.ee`
git.k-space.ee ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCF1+/TDRXuGwsu4SZQQwQuJusb7W1OciGAQp/ZbTTvKD+0p7fV6dXyUlWjdFmITrFNYDreDnMiOS+FvE62d2Z0=
git.k-space.ee ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDsLyRuubdIUnTKEqOipu+9x+FforrC8+oxulVrl0ECgdIRBQnLQXIspTNwuC3MKJ4z+DPbndSt8zdN33xWys8UNEs3V5/W6zsaW20tKiaX75WK5eOL4lIDJi/+E97+c0aZBXamhxTrgkRVJ5fcAkY6C5cKEmVM5tlke3v3ihLq78/LpJYv+P947NdnthYE2oc+XGp/elZ0LNfWRPnd///+ykbwWirvQm+iiDz7PMVKkb+Q7l3vw4+zneKJWAyFNrm+aewyJV9lFZZJuHliwlHGTriSf6zhMAWyJzvYqDAN6iT5yi9KGKw60J6vj2GLuK4ULVblTyP9k9+3iELKSWW5
git.k-space.ee ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL1jaIn/5dZcqN+cwcs/c2xMVJH/ReA84v8Mm73jqDAG

authelia/.gitignore vendored Normal file
@@ -0,0 +1,2 @@
application-secrets.y*ml
oidc-secrets.y*ml

authelia/README.md Normal file
@@ -0,0 +1,173 @@
# Authelia
## Background
Authelia works in conjunction with Traefik to provide SSO with
credentials stored in a Samba (Active Directory compatible) directory tree.
Samba resides outside the Kubernetes cluster as it's difficult to containerize
while keeping it usable from outside the cluster due to Samba's networking.
The MariaDB instance is used to store MFA tokens.
Redis is used to store session info.
## Deployment
Inspect changes with `git diff` and proceed to deploy:
```
kubectl apply -n authelia -f application.yml -f keydb.yml -f mariadb.yml
kubectl create secret generic -n authelia mysql-secrets \
--from-literal=rootPassword=$(cat /dev/urandom | base64 | head -c 30)
kubectl create secret generic -n authelia mariadb-secrets \
--from-literal=MYSQL_ROOT_PASSWORD=$(cat /dev/urandom | base64 | head -c 30) \
--from-literal=MYSQL_PASSWORD=$(cat /dev/urandom | base64 | head -c 30)
kubectl create secret generic -n authelia redis-secrets \
--from-literal=REDIS_PASSWORD=$(cat /dev/urandom | base64 | head -c 30)
kubectl -n authelia rollout restart deployment/authelia
```
To change secrets, create `secret.yml`:
```
---
apiVersion: v1
kind: Secret
type: Opaque
metadata:
name: application-secrets
data:
  JWT_TOKEN: ...
  SESSION_ENCRYPTION_KEY: ...
  STORAGE_PASSWORD: ...
  STORAGE_ENCRYPTION_KEY: ...
  LDAP_PASSWORD: ...
  OIDC_PRIVATE_KEY: ...
  OIDC_HMAC_SECRET: ...
  SMTP_PASSWORD: ...
```
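Note that values under `data:` must be base64-encoded. A quick way to generate
one value, in the style used elsewhere in this repo:
```
cat /dev/urandom | base64 | head -c 30 | base64
```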
Apply with:
```
kubectl apply -n authelia -f application-secrets.yml
kubectl annotate -n authelia secret application-secrets reloader.stakater.com/match=true
```
## OIDC secrets
OIDC secrets are separated from the main configuration until
Authelia adds CRDs for these.
Generally speaking, for untrusted applications, that is anything running
outside the Kubernetes cluster, e.g. web browser based (JS) and
local command line clients, one should use `public: true` and
omit `secret: ...`, as in the sketch below.
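A sketch of such a public client entry (the `cli-app` id and redirect URI are hypothetical):
```
- id: cli-app
  description: Example public command line client
  public: true
  authorization_policy: one_factor
  redirect_uris:
    - http://localhost:8000
  scopes:
    - openid
    - profile
```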
Populate `oidc-secrets.yml` with approximately the following:
```
identity_providers:
oidc:
clients:
- id: kubelogin
description: Kubernetes cluster
secret: ...
authorization_policy: two_factor
redirect_uris:
- http://localhost:27890
scopes:
- openid
- groups
- email
- profile
- id: proxmox
description: Proxmox Virtual Environment
secret: ...
authorization_policy: two_factor
redirect_uris:
- https://pve.k-space.ee
scopes:
- openid
- groups
- email
- profile
- id: argocd
description: ArgoCD
secret: ...
authorization_policy: two_factor
redirect_uris:
- https://argocd.k-space.ee/auth/callback
scopes:
- openid
- groups
- email
- profile
- id: harbor
description: Harbor
secret: ...
authorization_policy: two_factor
redirect_uris:
- https://harbor.k-space.ee/c/oidc/callback
scopes:
- openid
- groups
- email
- profile
- id: gitea
description: Gitea
secret: ...
authorization_policy: one_factor
redirect_uris:
- https://git.k-space.ee/user/oauth2/authelia/callback
scopes:
- openid
- profile
- email
- groups
grant_types:
- refresh_token
- authorization_code
response_types:
- code
userinfo_signing_algorithm: none
- id: grafana
description: Grafana
secret: ...
authorization_policy: one_factor
redirect_uris:
- https://grafana.k-space.ee/login/generic_oauth
scopes:
- openid
- groups
- email
- profile
```
To upload the file to Kubernetes secrets:
```
kubectl -n authelia delete secret oidc-secrets
kubectl -n authelia create secret generic oidc-secrets \
--from-file=oidc-secrets.yml=oidc-secrets.yml
kubectl annotate -n authelia secret oidc-secrets reloader.stakater.com/match=true
kubectl -n authelia rollout restart deployment/authelia
```
Synchronize the OIDC secrets to the applications that consume them:
```
kubectl -n argocd delete secret argocd-secret
kubectl -n argocd create secret generic argocd-secret \
--from-literal=server.secretkey=$(cat /dev/urandom | base64 | head -c 30) \
--from-literal=oidc.config.clientSecret=$( \
kubectl get secret -n authelia oidc-secrets -o json \
| jq '.data."oidc-secrets.yml"' -r | base64 -d | yq -o json \
| jq '.identity_providers.oidc.clients[] | select(.id == "argocd") | .secret' -r)
kubectl -n monitoring delete secret oidc-secret
kubectl -n monitoring create secret generic oidc-secret \
--from-literal=GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET=$( \
kubectl get secret -n authelia oidc-secrets -o json \
| jq '.data."oidc-secrets.yml"' -r | base64 -d | yq -o json \
| jq '.identity_providers.oidc.clients[] | select(.id == "grafana") | .secret' -r)
```

authelia/application.yml Normal file
@@ -0,0 +1,409 @@
---
apiVersion: v1
kind: Secret
type: Opaque
metadata:
name: authelia-certificates
labels:
app.kubernetes.io/name: authelia
data:
ldaps.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZJekNDQXd1Z0F3SUJBZ0lVRzNaYnI0MGVVMlRHak1Lek5XaDhOTDJkRDRZd0RRWUpLb1pJaHZjTkFRRUwKQlFBd0lURWZNQjBHQTFVRUF3d1dVMkZ0WW1FZ1lYUWdZV1F1YXkxemNHRmpaUzVsWlRBZUZ3MHlNVEV5TVRRdwpOekk0TlRGYUZ3MHlOakV5TVRNd056STROVEZhTUNFeEh6QWRCZ05WQkFNTUZsTmhiV0poSUdGMElHRmtMbXN0CmMzQmhZMlV1WldVd2dnSWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUNEd0F3Z2dJS0FvSUNBUURub3hEZFlicjAKVFJHOEErdk0xT0I1Rzg4Z05YL1pXeFNLb0VaM2p0ekF0NEc3blV0aUhoVzI1cUhEeXZGVGEzTzJiMUFVSEhzbwpVQXpVTWNVV1FRb3J2RjF4L1VsYitpcnk0QkxFTklaVTdYMVpxb2ZKYXgwZTcrbit1YVM3R015dnB4VXliNGlYCkd3djdZZEh5SmM4WjZROHd2MTdNV2F2ejNaOE5CWFdoeG1xc3ljTlphVkl2S1lNRVpGazNUTnA3T20vSTFpdkYKWDJuNVNtb2d2NmdBVmpVODhSeWc2NlRFVStiaGY5QWdiU0VxWjhMaVd6c20xdHc0WnJXMDVVK25JVjRzTHdlaQp2SXppblFMYmFMTkc2ZUl0cUtQZGVsWWhRNHlCeHM3QXpTOCtieVVBZk9jRktzUTI5alFVdUxNbE1pUmt6MjV5Cnc5UUZxSGVuRjNIYXJqU1JTL3ZZV3J3K0RNbmo2Tit3QVdtd21SR3NzVmxPMjFSLzAzNThBK0h5VzhyLzlsTm8KV1FoMmt3VGRPdjdxMzFwRmZQQUhHUkFITnZUN0dRKzVCeFFjdG83cG1GQ2t2OTdpbmhiZG50d2ViSmM1VWI3NQpBeHNWVC9uNk9aTjJSU09NV0RKY1pjVkpXYjQxdTNTL2lBVHlvbDBuOEZMRlRRZm9xdXdvVkQ1UnpwU0NsVm50Cjd1eENyaGNsYXhTYnhUUDhqa29ERXQzc1NycWoySm5PNlhtQ3R2VlZkMmQvWVZQQ21qQm54TWc1bld1WEwwZTgKNkh3MTd5TGtYeFgzVERkdjF2VThvYTdpTmZyNmc3Vlcrd2ZsUkJoVW5WRUluNXZEdm80STVSdWRXaEJxcHN6VQo3bGQrUDVjZE5GWEdjUlRQdFFlbXkxUllKMG5ZejkybGtRSURBUUFCbzFNd1VUQWRCZ05WSFE0RUZnUVVjZ1JrCnZ4U3V1QnNFaktzbXQvN3dpRHIxbHVRd0h3WURWUjBqQkJnd0ZvQVVjZ1JrdnhTdXVCc0VqS3NtdC83d2lEcjEKbHVRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBTkJna3Foa2lHOXcwQkFRc0ZBQU9DQWdFQVNlNXM1aU04QjQ2agp6bXZMOUQ4dUJrQ0VIOW9mMnc1VFluL1NPZkFRVnhBOGxBYndORitlWmgyakdGSUN6citNYmlTMlhZdkxJNnVrClZ5cFJrN28vdExmdmY0alpqZnRpeEliWEM1MjYrUk1xOEcvV2xGbzJnWFZ0eW5BcXp5bXJVYjV1MVZJcG53QWYKNTBzNHlDOURFUXF1aGErYzJCWTBRQ3ZySnUvYy9KTUs3QTdYOFdRSzVDUy8wZkNPdzBPY2xkZzA0c3VWVlU2eQp0MEZmV0kvTlhURFFrU2JWVXN5OElmaXd4a0o5NmNsTjFNWVArQ015Mkh1eWF0aTZySnhVZFBEbS9tYzdRWXNPClNTSzQyNXJQOFFZMmduNlNXUXJXdUJic2dLSEpoVzRBYjdTTldkb0Q0QytwVDA2V1MzVXphMnhZd09TV1IvTWMKR1V5YXRwLzlxR05tOWM1d2RFQ3FtdkVQc2twQkp5ZWR6MUk2V2lxdjRuK0UvRk9qRGl0VVpFd3BFZXRUQktXZgoyRnZRa1pGRmpRU3VIdG5KT040cVRvWmlaNW4vbis4Z1k2Z1Y5Wnd0NHM5OGlpdnUwTFc4UlZGSTNkS0tiYm5lCkY1KzltNE9vMjF0SlU2QThXVGpqVXpLUnFKdEZSa1JpWGtOTGRoY2MrdTdMOFFlZTFOUjIyalg5N2NaVDNGUGoKYmpOUlpId3k5K1dhMG1zcC9EYUR5RnlaOStPUUhReUJJazdCSS9LdU0rT2dta3dlSHBNSE5CMUs1NHZQenZKawpHaFN1QUNIeTRybmdvQTBvMzNhZzJ6a3lEY3NocVRtK2Q3UXFWOWUzU2pONFpUUXlTeWNpa0I1bFJKVHAydVFkCk5jVjBtcG5nREl1aFVlSFRKWkJ0SVZCZnp4bHdHd2c9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
---
apiVersion: v1
kind: ConfigMap
metadata:
name: authelia-config
labels:
app.kubernetes.io/name: authelia
annotations:
reloader.stakater.com/match: "true"
data:
authelia-config.yml: |
---
log:
level: warn
certificates_directory: /certificates
theme: light
default_redirection_url: https://members.k-space.ee
totp:
issuer: K-SPACE
authentication_backend:
ldap:
implementation: activedirectory
url: ldaps://ad.k-space.ee
base_dn: dc=ad,dc=k-space,dc=ee
username_attribute: sAMAccountName
additional_users_dn: ou=Membership
users_filter: (&({username_attribute}={input})(objectCategory=person)(objectClass=user))
additional_groups_dn: cn=Users
groups_filter: (&(member={dn})(objectclass=group))
group_name_attribute: cn
mail_attribute: mail
display_name_attribute: displayName
user: cn=authelia,cn=Users,dc=ad,dc=k-space,dc=ee
session:
domain: k-space.ee
same_site: lax
expiration: 1M
inactivity: 120h
remember_me_duration: "0"
redis:
host: redis
port: 6379
regulation:
ban_time: 5m
find_time: 2m
max_retries: 3
storage:
mysql:
host: mariadb
database: authelia
username: authelia
notifier:
disable_startup_check: true
smtp:
host: mail.k-space.ee
port: 465
username: authelia
sender: authelia@k-space.ee
subject: "[Authelia] {title}"
startup_check_address: lauri@k-space.ee
access_control:
default_policy: deny
rules:
# Longhorn dashboard
- domain: longhorn.k-space.ee
policy: two_factor
subject: group:Longhorn Admins
- domain: longhorn.k-space.ee
policy: deny
# Members site
- domain: members.k-space.ee
policy: bypass
resources:
- ^/?$
- domain: members.k-space.ee
policy: two_factor
resources:
- ^/login/authelia/?$
- domain: members.k-space.ee
policy: bypass
# Webmail
- domain: webmail.k-space.ee
policy: two_factor
# Etherpad
- domain: pad.k-space.ee
policy: two_factor
resources:
- ^/p/board-
subject: group:Board Members
- domain: pad.k-space.ee
policy: deny
resources:
- ^/p/board-
- domain: pad.k-space.ee
policy: two_factor
resources:
- ^/p/members-
- domain: pad.k-space.ee
policy: deny
resources:
- ^/p/members-
- domain: pad.k-space.ee
policy: bypass
# phpMyAdmin
- domain: phpmyadmin.k-space.ee
policy: two_factor
# Require login for everything else protected by traefik-sso middleware
- domain: '*.k-space.ee'
policy: one_factor
...
---
apiVersion: v1
kind: Service
metadata:
name: authelia
labels:
app.kubernetes.io/name: authelia
spec:
type: ClusterIP
sessionAffinity: None
selector:
app.kubernetes.io/name: authelia
ports:
- name: http
protocol: TCP
port: 80
targetPort: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: authelia
labels:
app.kubernetes.io/name: authelia
annotations:
reloader.stakater.com/search: "true"
spec:
selector:
matchLabels:
app.kubernetes.io/name: authelia
replicas: 2
revisionHistoryLimit: 0
template:
metadata:
labels:
app.kubernetes.io/name: authelia
spec:
enableServiceLinks: false
containers:
- name: authelia
image: authelia/authelia:4
command:
- authelia
- --config=/config/authelia-config.yml
- --config=/config/oidc-secrets.yml
resources:
limits:
cpu: "4.00"
memory: 125Mi
requests:
cpu: "0.25"
memory: 50Mi
env:
- name: AUTHELIA_SERVER_DISABLE_HEALTHCHECK
value: "true"
- name: AUTHELIA_JWT_SECRET_FILE
value: /secrets/JWT_TOKEN
- name: AUTHELIA_SESSION_SECRET_FILE
value: /secrets/SESSION_ENCRYPTION_KEY
- name: AUTHELIA_AUTHENTICATION_BACKEND_LDAP_PASSWORD_FILE
value: /secrets/LDAP_PASSWORD
- name: AUTHELIA_SESSION_REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: redis-secrets
key: REDIS_PASSWORD
- name: AUTHELIA_STORAGE_ENCRYPTION_KEY_FILE
value: /secrets/STORAGE_ENCRYPTION_KEY
- name: AUTHELIA_STORAGE_MYSQL_PASSWORD_FILE
value: /mariadb-secrets/MYSQL_PASSWORD
- name: AUTHELIA_IDENTITY_PROVIDERS_OIDC_HMAC_SECRET_FILE
value: /secrets/OIDC_HMAC_SECRET
- name: AUTHELIA_IDENTITY_PROVIDERS_OIDC_ISSUER_PRIVATE_KEY_FILE
value: /secrets/OIDC_PRIVATE_KEY
- name: AUTHELIA_NOTIFIER_SMTP_PASSWORD_FILE
value: /secrets/SMTP_PASSWORD
- name: TZ
value: Europe/Tallinn
startupProbe:
failureThreshold: 6
httpGet:
path: /api/health
port: http
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 5
livenessProbe:
failureThreshold: 5
httpGet:
path: /api/health
port: http
scheme: HTTP
initialDelaySeconds: 0
periodSeconds: 30
successThreshold: 1
timeoutSeconds: 5
readinessProbe:
failureThreshold: 5
httpGet:
path: /api/health
port: http
scheme: HTTP
initialDelaySeconds: 0
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 5
ports:
- name: http
containerPort: 9091
protocol: TCP
volumeMounts:
- mountPath: /config/authelia-config.yml
name: authelia-config
readOnly: true
subPath: authelia-config.yml
- mountPath: /config/oidc-secrets.yml
name: oidc-secrets
readOnly: true
subPath: oidc-secrets.yml
- mountPath: /secrets
name: secrets
readOnly: true
- mountPath: /certificates
name: certificates
readOnly: true
- mountPath: /mariadb-secrets
name: mariadb-secrets
readOnly: true
volumes:
- name: authelia-config
configMap:
name: authelia-config
- name: secrets
secret:
secretName: application-secrets
items:
- key: JWT_TOKEN
path: JWT_TOKEN
- key: SESSION_ENCRYPTION_KEY
path: SESSION_ENCRYPTION_KEY
- key: STORAGE_ENCRYPTION_KEY
path: STORAGE_ENCRYPTION_KEY
- key: STORAGE_PASSWORD
path: STORAGE_PASSWORD
- key: LDAP_PASSWORD
path: LDAP_PASSWORD
- key: OIDC_PRIVATE_KEY
path: OIDC_PRIVATE_KEY
- key: OIDC_HMAC_SECRET
path: OIDC_HMAC_SECRET
- key: SMTP_PASSWORD
path: SMTP_PASSWORD
- name: certificates
secret:
secretName: authelia-certificates
- name: mariadb-secrets
secret:
secretName: mariadb-secrets
- name: redis-secrets
secret:
secretName: redis-secrets
- name: oidc-secrets
secret:
secretName: oidc-secrets
items:
- key: oidc-secrets.yml
path: oidc-secrets.yml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: authelia
labels:
app.kubernetes.io/name: authelia
annotations:
cert-manager.io/cluster-issuer: default
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
kubernetes.io/tls-acme: "true"
traefik.ingress.kubernetes.io/router.entryPoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: authelia-chain-k6-authelia@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
spec:
rules:
- host: auth.k-space.ee
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: authelia
port:
number: 80
tls:
- hosts:
- auth.k-space.ee
secretName: authelia-tls
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: forwardauth-k6-authelia
labels:
app.kubernetes.io/name: authelia
spec:
forwardAuth:
address: http://authelia.authelia.svc.cluster.local/api/verify?rd=https://auth.k-space.ee/
trustForwardHeader: true
authResponseHeaders:
- Remote-User
- Remote-Name
- Remote-Email
- Remote-Groups
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: headers-k6-authelia
labels:
app.kubernetes.io/name: authelia
spec:
headers:
browserXssFilter: true
customFrameOptionsValue: "SAMEORIGIN"
customResponseHeaders:
Cache-Control: "no-store"
Pragma: "no-cache"
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: chain-k6-authelia-auth
labels:
app.kubernetes.io/name: authelia
spec:
chain:
middlewares:
- name: forwardauth-k6-authelia
namespace: authelia
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: chain-k6-authelia
labels:
app.kubernetes.io/name: authelia
spec:
chain:
middlewares:
- name: headers-k6-authelia
namespace: authelia
---
apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
name: mysql-cluster
spec:
secretName: mysql-secrets
instances: 3
router:
instances: 2
tlsUseSelfSigned: true
datadirVolumeClaimTemplate:
storageClassName: local-path
accessModes:
- ReadWriteOnce
resources:
requests:
storage: "1Gi"
podSpec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/managed-by
operator: In
values:
- mysql-operator
topologyKey: kubernetes.io/hostname
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule

authelia/keydb.yml Symbolic link
@@ -0,0 +1 @@
../shared/keydb.yml

authelia/mariadb.yml Symbolic link
@@ -0,0 +1 @@
../shared/mariadb.yml

camtiler/.gitignore vendored Normal file
@@ -0,0 +1 @@
deployments/

camtiler/README.md Normal file
@@ -0,0 +1,29 @@
To apply changes:
```
kubectl apply -n camtiler -f application.yml -f mongoexpress.yml -f mongodb-support.yml -f networkpolicy-base.yml -f minio-support.yml
```
To deploy changes:
```
kubectl -n camtiler rollout restart deployment.apps/camtiler
```
To initialize secrets:
```
kubectl create secret generic -n camtiler mongodb-application-readwrite-password --from-literal="password=$(cat /dev/urandom | base64 | head -c 30)"
kubectl create secret generic -n camtiler mongodb-application-readonly-password --from-literal="password=$(cat /dev/urandom | base64 | head -c 30)"
kubectl create secret generic -n camtiler minio-secret \
--from-literal=accesskey=application \
--from-literal=secretkey=$(cat /dev/urandom | base64 | head -c 30)
kubectl create secret generic -n camtiler minio-env-configuration \
--from-literal="MINIO_BROWSER=off" \
--from-literal="MINIO_ROOT_USER=root" \
--from-literal="MINIO_ROOT_PASSWORD=$(cat /dev/urandom | base64 | head -c 30)" \
--from-literal="MINIO_STORAGE_CLASS_STANDARD=EC:4"
kubectl -n camtiler create secret generic camera-secrets \
--from-literal=username=... \
--from-literal=password=...
```

camtiler/application.yml Normal file
@@ -0,0 +1,474 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: camtiler
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
spec:
revisionHistoryLimit: 0
replicas: 1
selector:
matchLabels:
app: camtiler
template:
metadata:
labels:
app: camtiler
component: camtiler
spec:
serviceAccountName: camtiler
containers:
- name: camtiler
image: harbor.k-space.ee/k-space/camera-tiler:latest
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: log-viewer-frontend
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
spec:
revisionHistoryLimit: 0
replicas: 2
selector:
matchLabels:
app: log-viewer-frontend
template:
metadata:
labels:
app: log-viewer-frontend
spec:
containers:
- name: log-viewer-frontend
image: harbor.k-space.ee/k-space/log-viewer-frontend:latest
# securityContext:
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: log-viewer-backend
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
spec:
revisionHistoryLimit: 0
replicas: 3
selector:
matchLabels:
app: log-viewer-backend
template:
metadata:
labels:
app: log-viewer-backend
spec:
containers:
- name: log-backend-backend
image: harbor.k-space.ee/k-space/log-viewer:latest
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
env:
- name: MONGO_URI
valueFrom:
secretKeyRef:
name: mongodb-application-readwrite
key: connectionString.standard
- name: MINIO_BUCKET
value: application
- name: MINIO_HOSTNAME
value: cams-s3.k-space.ee
- name: MINIO_PORT
value: "443"
- name: MINIO_SCHEME
value: "https"
- name: MINIO_SECRET_KEY
valueFrom:
secretKeyRef:
name: minio-secret
key: secretkey
- name: MINIO_ACCESS_KEY
valueFrom:
secretKeyRef:
name: minio-secret
key: accesskey
---
apiVersion: v1
kind: Service
metadata:
name: log-viewer-frontend
spec:
type: ClusterIP
selector:
app: log-viewer-frontend
ports:
- protocol: TCP
port: 3003
---
apiVersion: v1
kind: Service
metadata:
name: log-viewer-backend
spec:
type: ClusterIP
selector:
app: log-viewer-backend
ports:
- protocol: TCP
port: 3002
---
apiVersion: v1
kind: Service
metadata:
name: camtiler
annotations:
prometheus.io/scrape: 'true'
labels:
component: camtiler
spec:
type: ClusterIP
selector:
app: camtiler
component: camtiler
ports:
- protocol: TCP
port: 5001
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: camtiler
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: camtiler
rules:
- apiGroups: [""]
resources: ["services"]
verbs: ["list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: camtiler
subjects:
- kind: ServiceAccount
name: camtiler
apiGroup: ""
roleRef:
kind: Role
name: camtiler
apiGroup: ""
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: camtiler
annotations:
kubernetes.io/ingress.class: traefik
# Following specifies the certificate issuer defined in
# ../cert-manager/issuer.yml
# This is where the HTTPS certificates for the
# `tls:` section below are obtained from
cert-manager.io/cluster-issuer: default
# This tells Traefik this Ingress object is associated with the
# https:// entrypoint
# Global http:// to https:// redirect is enabled in
# ../traefik/values.yml using `globalArguments`
traefik.ingress.kubernetes.io/router.entrypoints: websecure
# Following enables Authelia intercepting middleware
# which makes sure user is authenticated and then
# proceeds to inject Remote-User header for the application
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
# Following tells external-dns to add CNAME entry which makes
# cams.k-space.ee point to same IP address as traefik.k-space.ee
# The A record for traefik.k-space.ee is created via annotation
# added in ../traefik/ingress.yml
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
rules:
- host: cams.k-space.ee
http:
paths:
- pathType: Prefix
path: "/tiled"
backend:
service:
name: camtiler
port:
number: 5001
- pathType: Prefix
path: "/events"
backend:
service:
name: log-viewer-backend
port:
number: 3002
- pathType: Prefix
path: "/"
backend:
service:
name: log-viewer-frontend
port:
number: 3003
tls:
- hosts:
- cams.k-space.ee
secretName: camtiler-tls
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: camera-operator
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
spec:
revisionHistoryLimit: 0
replicas: 1
serviceName: camera-operator
selector:
matchLabels:
app: camera-operator
template:
metadata:
labels:
app: camera-operator
spec:
serviceAccount: camera-operator
containers:
- name: camera-operator
image: harbor.k-space.ee/k-space/camera-operator:latest
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
env:
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: camera-operator
rules:
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- create
- delete
- list
- update
- apiGroups:
- apps
resources:
- deployments
verbs:
- create
- delete
- list
- update
- apiGroups:
- k-space.ee
resources:
- cams
verbs:
- get
- list
- watch
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: camera-operator
subjects:
- kind: ServiceAccount
name: camera-operator
roleRef:
kind: Role
name: camera-operator
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: camera-operator
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: camera-motion-detect
spec:
podSelector:
matchLabels:
component: camdetect
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
component: camtiler
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
podSelector:
matchLabels:
app: prometheus
egress:
- to:
- ipBlock:
# Permit access to cameras outside the cluster
cidr: 100.102.0.0/16
- to:
- podSelector:
matchLabels:
app: mongodb-svc
ports:
- port: 27017
- to:
- podSelector:
matchLabels:
v1.min.io/tenant: minio
ports:
- port: 9000
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: camera-tiler
spec:
podSelector:
matchLabels:
component: camtiler
policyTypes:
- Ingress
- Egress
egress:
- to:
- podSelector:
matchLabels:
component: camdetect
ports:
- port: 5000
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
podSelector:
matchLabels:
app: prometheus
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: log-viewer-backend
spec:
podSelector:
matchLabels:
app: log-viewer-backend
policyTypes:
- Ingress
- Egress
egress:
- to:
- podSelector:
matchLabels:
app: mongodb-svc
- to:
- podSelector:
matchLabels:
v1.min.io/tenant: minio
ports:
- port: 9000
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: log-viewer-frontend
spec:
podSelector:
matchLabels:
app: log-viewer-frontend
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: minio
annotations:
kubernetes.io/ingress.class: traefik
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
rules:
- host: cams-s3.k-space.ee
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: minio
port:
number: 80
tls:
- hosts:
- cams-s3.k-space.ee
secretName: cams-s3-tls

camtiler/minio-support.yml Symbolic link
@@ -0,0 +1 @@
../shared/minio-support.yml

camtiler/mongodb-support.yml Symbolic link
@@ -0,0 +1 @@
../mongodb-operator/mongodb-support.yml

camtiler/mongoexpress.yml Symbolic link
@@ -0,0 +1 @@
../shared/mongoexpress.yml

camtiler/networkpolicy-base.yml Symbolic link
@@ -0,0 +1 @@
../shared/networkpolicy-base.yml

camtiler/persistence.yml Normal file
@@ -0,0 +1,126 @@
---
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
name: mongodb
spec:
members: 3
type: ReplicaSet
version: "5.0.9"
security:
authentication:
modes: ["SCRAM"]
users:
- name: readwrite
db: application
passwordSecretRef:
name: mongodb-application-readwrite-password
roles:
- name: readWrite
db: application
scramCredentialsSecretName: mongodb-application-readwrite
- name: readonly
db: application
passwordSecretRef:
name: mongodb-application-readonly-password
roles:
- name: readOnly
db: application
scramCredentialsSecretName: mongodb-application-readonly
statefulSet:
spec:
template:
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- mongodb-svc
topologyKey: kubernetes.io/hostname
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
volumeClaimTemplates:
- metadata:
name: logs-volume
spec:
storageClassName: local-path
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 512Mi
- metadata:
name: data-volume
spec:
storageClassName: local-path
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
---
apiVersion: minio.min.io/v2
kind: Tenant
metadata:
name: minio
annotations:
prometheus.io/path: /minio/prometheus/metrics
prometheus.io/port: "9000"
prometheus.io/scrape: "true"
spec:
credsSecret:
name: minio-secret
buckets:
- name: application
requestAutoCert: false
users:
- name: minio-user-0
pools:
- name: pool-0
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: v1.min.io/tenant
operator: In
values:
- minio
- key: v1.min.io/pool
operator: In
values:
- pool-0
topologyKey: kubernetes.io/hostname
resources:
requests:
cpu: '1'
memory: 512Mi
servers: 4
volumesPerServer: 1
volumeClaimTemplate:
metadata:
name: data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: '30Gi'
storageClassName: local-path
status: {}
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule

cert-manager/README.md Normal file
@@ -0,0 +1,26 @@
# cert-manager
`cert-manager` is used to obtain TLS certificates from Let's Encrypt.
The manifest was added with:
```
curl -L https://github.com/jetstack/cert-manager/releases/download/v1.6.1/cert-manager.yaml -O
```
To update the certificate issuer:
```
kubectl apply -f namespace.yml -f cert-manager.yaml
kubectl apply -f issuer.yml
kubectl -n cert-manager create secret generic tsig-secret \
--from-literal=TSIG_SECRET=<secret>
```
Workaround for the webhook timeout issue https://github.com/jetstack/cert-manager/issues/2602.
It's not very clear why this is happening and it deserves further investigation; presumably it is somehow Calico related:
```
kubectl delete mutatingwebhookconfiguration.admissionregistration.k8s.io cert-manager-webhook
kubectl delete validatingwebhookconfigurations.admissionregistration.k8s.io cert-manager-webhook
```

cert-manager/cert-manager.yaml Normal file
(File diff suppressed because it is too large: 17329 lines)

cert-manager/issuer.yml Normal file
@@ -0,0 +1,19 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: default
spec:
acme:
email: info@k-space.ee
server: https://acme-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: example-issuer-account-key
solvers:
- dns01:
rfc2136:
nameserver: 193.40.103.2
tsigKeyName: acme.
tsigAlgorithm: HMACSHA512
tsigSecretSecretRef:
name: tsig-secret
key: TSIG_SECRET

cluster-role-bindings.yml Normal file
@@ -0,0 +1,90 @@
---
# AD/Samba group "Kubernetes Admins" members have full access
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-admins
subjects:
- kind: Group
name: "Kubernetes Admins"
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
---
# AD/Samba group "Developers" members have view access for everything
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-developers
subjects:
- kind: Group
name: Developers
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: view
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: developers
namespace: camtiler
subjects:
- kind: Group
name: Developers
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: developers
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: developers
namespace: members-site
subjects:
- kind: Group
name: Developers
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: developers
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: developers
rules:
- verbs:
- create
- delete
- patch
- update
apiGroups:
- ''
resources:
- configmaps
- pods/attach
- pods/exec
- pods/portforward
- pods/proxy
- verbs:
- patch
apiGroups:
- apps
resources:
- deployments
- statefulsets
- deployments/scale
- statefulsets/scale
- verbs:
- delete
apiGroups:
- ''
resources:
- pods

drone-execution/README.md Normal file
@@ -0,0 +1,13 @@
To deploy:
```
kubectl apply -n drone-execution -f application.yml
```
To bootstrap secrets:
```
kubectl create secret generic -n drone-execution application-secrets \
--from-literal=DRONE_RPC_SECRET=$(kubectl get secret -n drone application-secrets -o jsonpath="{.data.DRONE_RPC_SECRET}" | base64 -d) \
--from-literal=DRONE_SECRET_PLUGIN_TOKEN=$(cat /dev/urandom | base64 | head -c 30)
```

drone-execution/application.yml Normal file
@@ -0,0 +1,177 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: drone-runner-kube
---
apiVersion: v1
kind: ConfigMap
metadata:
name: application-config
data:
DRONE_DEBUG: "false"
DRONE_TRACE: "false"
DRONE_NAMESPACE_DEFAULT: "drone-execution"
DRONE_RPC_HOST: "drone.k-space.ee"
DRONE_RPC_PROTO: "https"
PLUGIN_MTU: "1300"
DRONE_SECRET_PLUGIN_ENDPOINT: "http://secrets:3000"
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: drone-runner-kube
namespace: "drone-execution"
labels:
app: drone-runner-kube
rules:
- apiGroups:
- ""
resources:
- secrets
verbs:
- create
- delete
- apiGroups:
- ""
resources:
- pods
- pods/log
verbs:
- get
- create
- delete
- list
- watch
- update
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: drone-runner-kube
namespace: drone-execution
labels:
app: drone-runner-kube
subjects:
- kind: ServiceAccount
name: drone-runner-kube
namespace: drone-execution
roleRef:
kind: Role
name: drone-runner-kube
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Service
metadata:
name: drone-runner-kube
labels:
app: drone-runner-kube
spec:
type: ClusterIP
ports:
- port: 3000
targetPort: http
protocol: TCP
name: http
selector:
app: drone-runner-kube
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: drone-runner-kube
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
keel.sh/pollSchedule: "@midnight"
spec:
replicas: 1
selector:
matchLabels:
app: drone-runner-kube
template:
metadata:
labels:
app: drone-runner-kube
spec:
serviceAccountName: drone-runner-kube
terminationGracePeriodSeconds: 3600
containers:
- name: server
securityContext:
{}
image: drone/drone-runner-kube
imagePullPolicy: Always
ports:
- name: http
containerPort: 3000
protocol: TCP
envFrom:
- configMapRef:
name: application-config
- secretRef:
name: application-secrets
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: drone-kubernetes-secrets
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
keel.sh/pollSchedule: "@midnight"
spec:
replicas: 1
selector:
matchLabels:
app: drone-kubernetes-secrets
template:
metadata:
labels:
app: drone-kubernetes-secrets
spec:
containers:
- name: secrets
image: drone/kubernetes-secrets
imagePullPolicy: Always
ports:
- containerPort: 3000
env:
- name: SECRET_KEY
valueFrom:
secretKeyRef:
name: application-secrets
key: DRONE_SECRET_PLUGIN_TOKEN
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: drone-kubernetes-secrets
spec:
podSelector:
matchLabels:
app: drone-kubernetes-secrets
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
app: drone-runner-kube
ports:
- port: 3000
---
# Following should block access to pods in other namespaces, but should permit
# Git checkout, pip install, talking to Traefik via public IP etc
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: drone-runner-kube
spec:
podSelector: {}
policyTypes:
- Egress
egress:
- to:
- ipBlock:
cidr: 0.0.0.0/0

drone-execution/networkpolicy-base.yml Symbolic link

@ -0,0 +1 @@
../shared/networkpolicy-base.yml

25
drone/.helmignore Normal file

@ -0,0 +1,25 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
# Chart dirs/files
docs/
ci/

161
drone/README.md Normal file
View File

@ -0,0 +1,161 @@
# Deployment
To deploy:
```
kubectl apply -n drone -f application.yml
```
To bootstrap secrets:
```
kubectl create secret generic -n drone application-secrets \
--from-literal=DRONE_GITEA_CLIENT_ID=... \
--from-literal=DRONE_GITEA_CLIENT_SECRET=... \
--from-literal=DRONE_RPC_SECRET=$(cat /dev/urandom | base64 | head -c 30)
```
# Integrating with Docker registry
We use harbor.k-space.ee to host our own images.
Set up the robot account `robot$k-space+drone` in Harbor first.
In Drone associate the `docker_username` and `docker_password` secrets with the
`k-space` organization.
Instead of clicking through the web interface you can also pull the CLI
configuration for Drone from https://drone.k-space.ee/account and set the secrets from the command line:
```
drone orgsecret add k-space docker_username 'robot$k-space+drone'
drone orgsecret add k-space docker_password '...'
```
# Integrating with e-mail
To (re)set e-mail credentials:
```
drone orgsecret add k-space email_password '...'
```
To trigger a build, hit the button in the Drone web interface or alternatively:
```
drone build create k-space/...
```
# Using templates
Templates unfortunately aren't pulled in from this Git repo.
The current `docker.yaml` template includes the following:
```
kind: pipeline
type: kubernetes
name: build-arm64
platform:
arch: arm64
os: linux
node_selector:
kubernetes.io/arch: arm64
tolerations:
- key: arch
operator: Equal
value: arm64
effect: NoSchedule
steps:
- name: submodules
image: alpine/git
commands:
- touch .gitmodules
- sed -i -e 's/git@git.k-space.ee:/https:\\/\\/git.k-space.ee\\//g' .gitmodules
- git submodule update --init --recursive
- echo "ENV GIT_COMMIT=$(git rev-parse HEAD)" >> Dockerfile
- echo "ENV GIT_COMMIT_TIMESTAMP=$(git log -1 --format=%cd --date=iso-strict)" >> Dockerfile
- cat Dockerfile
- name: docker
image: plugins/docker
settings:
repo: harbor.k-space.ee/${DRONE_REPO}
tags: latest-arm64
registry: harbor.k-space.ee
squash: true
experimental: true
mtu: 1300
username:
from_secret: docker_username
password:
from_secret: docker_password
---
kind: pipeline
type: kubernetes
name: build-amd64
platform:
arch: amd64
os: linux
node_selector:
kubernetes.io/arch: amd64
steps:
- name: submodules
image: alpine/git
commands:
- touch .gitmodules
- sed -i -e 's/git@git.k-space.ee:/https:\\/\\/git.k-space.ee\\//g' .gitmodules
- git submodule update --init --recursive
- echo "ENV GIT_COMMIT=$(git rev-parse HEAD)" >> Dockerfile
- echo "ENV GIT_COMMIT_TIMESTAMP=$(git log -1 --format=%cd --date=iso-strict)" >> Dockerfile
- cat Dockerfile
- name: docker
image: plugins/docker
settings:
repo: harbor.k-space.ee/${DRONE_REPO}
tags: latest-amd64
registry: harbor.k-space.ee
squash: true
experimental: true
mtu: 1300
storage_driver: vfs
username:
from_secret: docker_username
password:
from_secret: docker_password
---
kind: pipeline
type: kubernetes
name: manifest
steps:
- name: manifest
image: plugins/manifest
settings:
target: harbor.k-space.ee/${DRONE_REPO}:latest
template: harbor.k-space.ee/${DRONE_REPO}:latest-ARCH
platforms:
- linux/amd64
- linux/arm64
username:
from_secret: docker_username
password:
from_secret: docker_password
depends_on:
- build-amd64
- build-arm64
---
kind: pipeline
type: kubernetes
name: gitlint
steps:
- name: gitlint
image: harbor.k-space.ee/k-space/gitlint-bundle
# https://git.k-space.ee/k-space/gitlint-bundle
---
kind: pipeline
type: kubernetes
name: flake8
steps:
- name: flake8
image: harbor.k-space.ee/k-space/flake8-bundle
# https://git.k-space.ee/k-space/flake8-bundle
```
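Repositories reference the registered template from their `.drone.yml`. A
minimal sketch, assuming the template is registered under the name
`docker.yaml` (templates are managed via the Drone web interface or CLI, not
via this repo):
```
kind: template
load: docker.yaml
data: {}
```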

109
drone/application.yml Normal file

@ -0,0 +1,109 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: application-config
data:
DRONE_GITEA_SERVER: "https://git.k-space.ee"
DRONE_GIT_ALWAYS_AUTH: "false"
DRONE_PROMETHEUS_ANONYMOUS_ACCESS: "true"
DRONE_SERVER_HOST: "drone.k-space.ee"
DRONE_SERVER_PROTO: "https"
DRONE_USER_CREATE: "username:lauri,admin:true"
---
apiVersion: v1
kind: Service
metadata:
name: drone
spec:
type: ClusterIP
ports:
- port: 80
targetPort: http
protocol: TCP
name: http
selector:
app: drone
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: drone
annotations:
keel.sh/policy: minor
keel.sh/trigger: poll
keel.sh/pollSchedule: "@midnight"
spec:
serviceName: drone
replicas: 1
selector:
matchLabels:
app: drone
template:
metadata:
labels:
app: drone
annotations:
prometheus.io/port: "80"
prometheus.io/scrape: "true"
spec:
automountServiceAccountToken: false
securityContext:
{}
containers:
- name: server
securityContext:
{}
image: drone/drone:2
ports:
- name: http
containerPort: 80
protocol: TCP
livenessProbe:
httpGet:
path: /
port: http
envFrom:
- secretRef:
name: application-secrets
- configMapRef:
name: application-config
volumeMounts:
- name: drone-data
mountPath: /data
volumeClaimTemplates:
- metadata:
name: drone-data
spec:
storageClassName: longhorn
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: drone
annotations:
cert-manager.io/cluster-issuer: default
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
spec:
tls:
- hosts:
- "drone.k-space.ee"
secretName: drone-tls
rules:
- host: "drone.k-space.ee"
http:
paths:
- pathType: Prefix
path: /
backend:
service:
name: drone
port:
number: 80

12
etherpad/README.md Normal file

@ -0,0 +1,12 @@
To apply changes:
```
kubectl apply -n etherpad -f application.yml -f networkpolicy-base.yml
```
Initialize MySQL secrets:
```
kubectl create secret generic -n etherpad mariadb-secrets \
--from-literal=MYSQL_ROOT_PASSWORD=$(cat /dev/urandom | base64 | head -c 30) \
  --from-literal=MYSQL_PASSWORD=$(cat /dev/urandom | base64 | head -c 30)
```
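The manifest below also expects an `application-secrets` secret with
`ADMIN_PASSWORD`; a sketch for bootstrapping it following the same pattern:
```
kubectl create secret generic -n etherpad application-secrets \
  --from-literal=ADMIN_PASSWORD=$(cat /dev/urandom | base64 | head -c 30)
```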
206
etherpad/application.yml Normal file

@ -0,0 +1,206 @@
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: etherpad
namespace: etherpad
annotations:
keel.sh/policy: minor
keel.sh/trigger: poll
keel.sh/pollSchedule: "@midnight"
spec:
# Etherpad does NOT support running multiple replicas due to
# in-application caching https://github.com/ether/etherpad-lite/issues/3680
replicas: 1
serviceName: etherpad
selector:
matchLabels:
app: etherpad
template:
metadata:
labels:
app: etherpad
spec:
containers:
- name: etherpad
image: etherpad/etherpad:1
securityContext:
# Etherpad writes session key during start
readOnlyRootFilesystem: false
runAsNonRoot: true
runAsUser: 5001
ports:
- containerPort: 9001
env:
- name: DB_TYPE
value: mysql
- name: DB_HOST
value: 172.20.36.1
- name: DB_NAME
value: kspace_etherpad
- name: DB_USER
value: kspace_etherpad
- name: PAD_OPTIONS_NO_COLORS
value: "true"
- name: PAD_OPTIONS_USE_MONOSPACE_FONT
value: "true"
- name: PAD_OPTIONS_SHOW_CHAT
value: "false"
- name: TRUST_PROXY
value: "true"
- name: ADMIN_PASSWORD
valueFrom:
secretKeyRef:
name: application-secrets
key: ADMIN_PASSWORD
- name: DB_PASS
valueFrom:
secretKeyRef:
name: mariadb-secrets
key: MYSQL_PASSWORD
---
apiVersion: v1
kind: Service
metadata:
name: etherpad
namespace: etherpad
spec:
type: ClusterIP
selector:
app: etherpad
ports:
- protocol: TCP
port: 9001
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: etherpad
namespace: etherpad
annotations:
kubernetes.io/ingress.class: traefik
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
rules:
- host: pad.k-space.ee
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: etherpad
port:
number: 9001
tls:
- hosts:
- pad.k-space.ee
secretName: pad-tls
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: etherpad
namespace: etherpad
spec:
podSelector:
matchLabels:
app: etherpad
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
ports:
- protocol: TCP
port: 9001
egress:
- to:
- ipBlock:
cidr: 172.20.36.1/32
ports:
- protocol: TCP
port: 3306
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: mysql-operator
spec:
podSelector:
matchLabels:
app: etherpad
policyTypes:
- Ingress
- Egress
ingress:
- # TODO: Not sure why mysql-operator needs to be able to connect
from:
- namespaceSelector:
matchExpressions:
- key: kubernetes.io/metadata.name
operator: In
values:
- mysql-operator
ports:
- protocol: TCP
port: 3306
- # Allow connecting from other MySQL pods in same namespace
from:
- podSelector:
matchLabels:
app.kubernetes.io/managed-by: mysql-operator
ports:
- protocol: TCP
port: 3306
egress:
- # Allow connecting to other MySQL pods in same namespace
to:
- podSelector:
matchLabels:
app.kubernetes.io/managed-by: mysql-operator
ports:
- protocol: TCP
port: 3306
---
apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
name: mysql-cluster
spec:
secretName: mysql-secrets
instances: 3
router:
instances: 1
tlsUseSelfSigned: true
datadirVolumeClaimTemplate:
storageClassName: local-path
accessModes:
- ReadWriteOnce
resources:
requests:
storage: "10Gi"
podSpec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/managed-by
operator: In
values:
- mysql-operator
topologyKey: kubernetes.io/hostname
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule

etherpad/networkpolicy-base.yml Symbolic link

@ -0,0 +1 @@
../shared/networkpolicy-base.yml

15
external-dns/README.md Normal file

@ -0,0 +1,15 @@
Before applying, replace the secret with the actual one.
For debugging, add `- --log-level=debug`:
```
kubectl apply -n external-dns -f external-dns.yml
```
Insert TSIG secret:
```
kubectl -n external-dns create secret generic tsig-secret \
--from-literal=TSIG_SECRET=<secret>
```
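external-dns then creates records for annotated objects. A hypothetical
Service exposing `example.k-space.ee` (the same hostname annotation is used by
the syslog services in this repo):
```
apiVersion: v1
kind: Service
metadata:
  name: example
  annotations:
    external-dns.alpha.kubernetes.io/hostname: example.k-space.ee
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
    - protocol: TCP
      port: 80
```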

external-dns/external-dns.yml Normal file

@ -0,0 +1,84 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: external-dns
namespace: external-dns
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- pods
- nodes
verbs:
- get
- watch
- list
- apiGroups:
- extensions
- networking.k8s.io
resources:
- ingresses
verbs:
- get
- list
- watch
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: external-dns
namespace: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: external-dns-viewer
namespace: external-dns
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: external-dns
subjects:
- kind: ServiceAccount
name: external-dns
namespace: external-dns
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: external-dns
namespace: external-dns
spec:
revisionHistoryLimit: 0
selector:
matchLabels:
app: external-dns
template:
metadata:
labels:
app: external-dns
spec:
serviceAccountName: external-dns
containers:
- name: external-dns
image: k8s.gcr.io/external-dns/external-dns:v0.10.2
envFrom:
- secretRef:
name: tsig-secret
args:
- --registry=txt
- --txt-prefix=external-dns-
- --txt-owner-id=k8s
- --provider=rfc2136
- --source=ingress
- --source=service
- --domain-filter=k-space.ee
- --rfc2136-host=193.40.103.2
- --rfc2136-port=53
- --rfc2136-zone=k-space.ee
- --rfc2136-tsig-keyname=acme
- --rfc2136-tsig-secret-alg=hmac-sha512
- --rfc2136-tsig-secret=$(TSIG_SECRET)
# https://github.com/kubernetes-sigs/external-dns/issues/2446

1
harbor/.gitignore vendored Normal file

@ -0,0 +1 @@
harbor.yml

10
harbor/README.md Normal file

@ -0,0 +1,10 @@
Deploy with:
```
kubectl create namespace harbor
kubectl apply -n harbor -f application.yml -f application-secrets.yml
```
After deployment, log in with the Harbor admin credentials and configure OIDC:
![OIDC configuration](harbor-oidc-config.png)

1078
harbor/application.yml Normal file

File diff suppressed because it is too large

Binary file not shown.


10
keel/README.md Normal file

@ -0,0 +1,10 @@
To generate secrets and to deploy:
```
kubectl create secret generic -n $(basename $(pwd)) application-secrets \
--from-literal=BASIC_AUTH_PASSWORD=$(cat /dev/urandom | base64 | head -c 30) \
--from-literal=MAIL_SMTP_PASS=... \
--from-literal=SLACK_TOKEN=...
kubectl apply -n keel -f application.yml
kubectl -n keel rollout restart deployment.apps/keel
```
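Workloads opt in to automatic updates via annotations; the pattern used
throughout this repo, e.g. on the Drone StatefulSet, looks like:
```
metadata:
  annotations:
    keel.sh/policy: minor
    keel.sh/trigger: poll
    keel.sh/pollSchedule: "@midnight"
```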

176
keel/application.yml Normal file

@ -0,0 +1,176 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: keel
namespace: keel
labels:
app: keel
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: keel
rules:
- apiGroups:
- ""
resources:
- namespaces
verbs:
- watch
- list
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- watch
- list
- apiGroups:
- ""
- extensions
- apps
- batch
resources:
- pods
- replicasets
- replicationcontrollers
- statefulsets
- deployments
- daemonsets
- jobs
- cronjobs
verbs:
- get
- delete # required to delete pods during force upgrade of the same tag
- watch
- list
- update
- apiGroups:
- ""
resources:
- configmaps
- pods/portforward
verbs:
- get
- create
- update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: keel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: keel
subjects:
- kind: ServiceAccount
name: keel
namespace: keel
---
apiVersion: v1
kind: Service
metadata:
name: keel
namespace: keel
labels:
app: keel
spec:
type: ClusterIP
ports:
- port: 9300
targetPort: 9300
protocol: TCP
name: keel
selector:
app: keel
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: keel
labels:
app: keel
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
keel.sh/pollSchedule: "@midnight"
spec:
replicas: 1
serviceName: keel
selector:
matchLabels:
app: keel
template:
metadata:
labels:
app: keel
spec:
serviceAccountName: keel
containers:
- name: keel
image: keelhq/keel:latest
imagePullPolicy: Always
command: ["/bin/keel"]
volumeMounts:
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POLL
value: "true"
- name: HELM_PROVIDER
value: "false"
- name: TILLER_NAMESPACE
value: "kube-system"
- name: TILLER_ADDRESS
value: "tiller-deploy:44134"
- name: NOTIFICATION_LEVEL
value: "info"
- name: BASIC_AUTH_USER
value: admin
- name: SLACK_CHANNELS
value: kube-prod
- name: SLACK_BOT_NAME
value: keel.k-space.ee
envFrom:
- secretRef:
name: application-secrets
ports:
- containerPort: 9300
livenessProbe:
httpGet:
path: /healthz
port: 9300
initialDelaySeconds: 30
timeoutSeconds: 10
readinessProbe:
httpGet:
path: /healthz
port: 9300
initialDelaySeconds: 30
timeoutSeconds: 10
resources:
limits:
cpu: 100m
memory: 128Mi
requests:
cpu: 50m
memory: 64Mi
volumeMounts:
- name: keel-data
mountPath: /data
volumeClaimTemplates:
- metadata:
name: keel-data
spec:
storageClassName: longhorn
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi

kubernetes-dashboard/README.md Normal file

@ -0,0 +1,18 @@
# Workflow
```bash
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm template --release-name k6 -n kube-dashboard kubernetes-dashboard/kubernetes-dashboard -f values.yaml > application.yml
```
# Apply the changes:
```bash
kubectl apply -f application.yml -n kubernetes-dashboard
```
# Get token
```
kubectl -n kubernetes-dashboard get secret $(kubectl -n kube-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
```
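The `admin-user` service account referenced above is not part of these
manifests; a sketch for creating it with cluster-admin rights (the namespace
matches the command above and is an assumption):
```
kubectl -n kube-dashboard create serviceaccount admin-user
kubectl create clusterrolebinding admin-user \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-dashboard:admin-user
```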

kubernetes-dashboard/application.yml Normal file

@ -0,0 +1,293 @@
apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/name: kubernetes-dashboard
name: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
labels:
app.kubernetes.io/name: kubernetes-dashboard
name: kubernetes-dashboard-certs
type: Opaque
---
# kubernetes-dashboard-csrf
apiVersion: v1
kind: Secret
metadata:
labels:
app.kubernetes.io/name: kubernetes-dashboard
name: kubernetes-dashboard-csrf
type: Opaque
---
# kubernetes-dashboard-key-holder
apiVersion: v1
kind: Secret
metadata:
labels:
app.kubernetes.io/name: kubernetes-dashboard
name: kubernetes-dashboard-key-holder
type: Opaque
---
apiVersion: v1
kind: ConfigMap
metadata:
labels:
app.kubernetes.io/name: kubernetes-dashboard
name: kubernetes-dashboard-settings
data:
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: "kubernetes-dashboard-metrics"
labels:
app.kubernetes.io/name: kubernetes-dashboard
rules:
# Allow Metrics Scraper to get metrics from the Metrics server
- apiGroups: ["metrics.k8s.io"]
resources: ["pods", "nodes"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: "kubernetes-dashboard-metrics"
labels:
app.kubernetes.io/name: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-dashboard-metrics
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: kubernetes-dashboard
labels:
app.kubernetes.io/name: kubernetes-dashboard
rules:
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster", "dashboard-metrics-scraper"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kubernetes-dashboard
labels:
app.kubernetes.io/name: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
app.kubernetes.io/name: kubernetes-dashboard
name: kubernetes-dashboard
rules:
# Other resources
- apiGroups: [""]
resources: ["nodes", "namespaces", "pods", "serviceaccounts", "services", "configmaps", "endpoints", "persistentvolumeclaims", "replicationcontrollers", "replicationcontrollers/scale", "persistentvolumeclaims", "persistentvolumes", "bindings", "events", "limitranges", "namespaces/status", "pods/log", "pods/status", "replicationcontrollers/status", "resourcequotas", "resourcequotas/status"]
verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
resources: ["daemonsets", "deployments", "deployments/scale", "replicasets", "replicasets/scale", "statefulsets"]
verbs: ["get", "list", "watch"]
- apiGroups: ["autoscaling"]
resources: ["horizontalpodautoscalers"]
verbs: ["get", "list", "watch"]
- apiGroups: ["batch"]
resources: ["cronjobs", "jobs"]
verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
resources: ["daemonsets", "deployments", "deployments/scale", "networkpolicies", "replicasets", "replicasets/scale", "replicationcontrollers/scale"]
verbs: ["get", "list", "watch"]
- apiGroups: ["networking.k8s.io"]
resources: ["ingresses", "networkpolicies"]
verbs: ["get", "list", "watch"]
- apiGroups: ["policy"]
resources: ["poddisruptionbudgets"]
verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses", "volumeattachments"]
verbs: ["get", "list", "watch"]
- apiGroups: ["rbac.authorization.k8s.io"]
resources: ["clusterrolebindings", "clusterroles", "roles", "rolebindings", ]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
labels:
app.kubernetes.io/name: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
apiVersion: v1
kind: Service
metadata:
name: kubernetes-dashboard
labels:
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/component: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
spec:
type: ClusterIP
ports:
- port: 80
targetPort: http
name: http
selector:
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/component: kubernetes-dashboard
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: kubernetes-dashboard
labels:
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/component: kubernetes-dashboard
spec:
replicas: 1
strategy:
rollingUpdate:
maxSurge: 0
maxUnavailable: 1
type: RollingUpdate
selector:
matchLabels:
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/component: kubernetes-dashboard
template:
metadata:
labels:
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/component: kubernetes-dashboard
spec:
serviceAccountName: kubernetes-dashboard
containers:
- name: kubernetes-dashboard
image: "kubernetesui/dashboard:v2.4.0"
imagePullPolicy: IfNotPresent
args:
- --namespace=kubernetes-dashboard
- --metrics-provider=none
- --enable-skip-login
- --disable-settings-authorizer
- --enable-insecure-login
- --system-banner="Just hit skip!"
ports:
- name: http
containerPort: 9090
protocol: TCP
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTP
path: /
port: 9090
initialDelaySeconds: 30
timeoutSeconds: 30
resources:
limits:
cpu: 2
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsGroup: 2001
runAsUser: 1001
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: kubernetes-dashboard
labels:
certManager: "true"
rewriteTarget: "true"
annotations:
cert-manager.io/cluster-issuer: default
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
spec:
rules:
- host: dashboard.k-space.ee
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: kubernetes-dashboard
port:
number: 80
tls:
- hosts:
- dashboard.k-space.ee
secretName: dashboard-tls

local-path-storage/README.md Normal file

@ -0,0 +1,21 @@
# Local path provisioner
Rancher's local-path provisioner with its `local-path` storage class enables
dynamic provisioning of persistent volumes on the Kubernetes workers.
```
curl https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml -O
kubectl apply -n local-path-storage -f local-path-storage.yaml
```
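A minimal PersistentVolumeClaim sketch using the resulting `local-path`
storage class:
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data
spec:
  storageClassName: local-path
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```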
# Known issues
* No volume stats exported via `kubelet_volume_stats_used_bytes` metric
* No capacity limit imposed. This is not possible with the ext4 filesystem;
  [with XFS it might be possible](https://github.com/rancher/local-path-provisioner/tree/master/examples/quota)
* No easy way to back up the volumes
Possible alternatives:
* Longhorn with no redundancy
* [metal-stack/csi-lvm](https://github.com/metal-stack/csi-lvm)

local-path-storage/local-path-storage.yaml Normal file

@ -0,0 +1,157 @@
apiVersion: v1
kind: Namespace
metadata:
name: local-path-storage
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: local-path-provisioner-service-account
namespace: local-path-storage
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: local-path-provisioner-role
rules:
- apiGroups: [ "" ]
resources: [ "nodes", "persistentvolumeclaims", "configmaps" ]
verbs: [ "get", "list", "watch" ]
- apiGroups: [ "" ]
resources: [ "endpoints", "persistentvolumes", "pods" ]
verbs: [ "*" ]
- apiGroups: [ "" ]
resources: [ "events" ]
verbs: [ "create", "patch" ]
- apiGroups: [ "storage.k8s.io" ]
resources: [ "storageclasses" ]
verbs: [ "get", "list", "watch" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: local-path-provisioner-bind
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: local-path-provisioner-role
subjects:
- kind: ServiceAccount
name: local-path-provisioner-service-account
namespace: local-path-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: local-path-provisioner
namespace: local-path-storage
spec:
replicas: 1
selector:
matchLabels:
app: local-path-provisioner
template:
metadata:
labels:
app: local-path-provisioner
spec:
serviceAccountName: local-path-provisioner-service-account
containers:
- name: local-path-provisioner
image: rancher/local-path-provisioner:v0.0.22
imagePullPolicy: IfNotPresent
command:
- local-path-provisioner
- --debug
- start
- --config
- /etc/config/config.json
volumeMounts:
- name: config-volume
mountPath: /etc/config/
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumes:
- name: config-volume
configMap:
name: local-path-config
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
kind: ConfigMap
apiVersion: v1
metadata:
name: local-path-config
namespace: local-path-storage
data:
config.json: |-
{
"nodePathMap":[
{
"node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
"paths":["/opt/local-path-provisioner"]
}
]
}
setup: |-
#!/bin/sh
while getopts "m:s:p:" opt
do
case $opt in
p)
absolutePath=$OPTARG
;;
s)
sizeInBytes=$OPTARG
;;
m)
volMode=$OPTARG
;;
esac
done
mkdir -m 0777 -p ${absolutePath}
teardown: |-
#!/bin/sh
while getopts "m:s:p:" opt
do
case $opt in
p)
absolutePath=$OPTARG
;;
s)
sizeInBytes=$OPTARG
;;
m)
volMode=$OPTARG
;;
esac
done
rm -rf ${absolutePath}
helperPod.yaml: |-
apiVersion: v1
kind: Pod
metadata:
name: helper-pod
spec:
containers:
- name: helper-pod
image: busybox
imagePullPolicy: IfNotPresent

1
logging/.gitignore vendored Normal file

@ -0,0 +1 @@
mongoexpress.yml

52
logging/README.md Normal file

@ -0,0 +1,52 @@
# Logging infrastructure
## Background
Fluent Bit picks up the logs from Kubernetes workers and sends them to Graylog
using GELF over TCP 12201.
Graylog ingests the logs and stores them in Elasticsearch.
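To smoke test the GELF input, a message can be sent by hand from a pod in the
`logging` namespace (a sketch; GELF TCP frames are null-terminated JSON):
```
printf '{"version":"1.1","host":"test","short_message":"Hello Graylog"}\0' | \
  nc graylog-gelf-tcp 12201
```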
## Deployment
To deploy:
```
kubectl create namespace logging
kubectl apply -n logging -f mongodb-support.yml -f application.yml -f networkpolicy-base.yml
kubectl rollout restart -n logging daemonset/fluent-bit
```
To set secrets:
```
GRAYLOG_ROOT_PASSWORD=$(cat /dev/urandom | base64 | head -c 30)
echo "Graylog admin password: $GRAYLOG_ROOT_PASSWORD"
kubectl create secret generic -n logging graylog-secrets \
--from-literal=GRAYLOG_ROOT_PASSWORD_SHA2=$(echo -en $GRAYLOG_ROOT_PASSWORD | sha256sum | cut -d" " -f1) \
--from-literal=GRAYLOG_PASSWORD_SECRET=$(cat /dev/urandom | base64 | head -c 30)
kubectl create secret generic -n logging mongodb-application-readwrite-password --from-literal="password=$(cat /dev/urandom | base64 | head -c 30)"
kubectl create secret generic -n logging mongodb-application-readonly-password --from-literal="password=$(cat /dev/urandom | base64 | head -c 30)"
```
## Graylog setup
Note that Graylog is running without a disk journal to
prevent SSD thrashing and to save some disk space.
This will be problematic when lots of logs are coming in and
ElasticSearch is unable to process the entries in a timely manner.
The ElasticSearch default index is tuned to match the persistent volume allocated
on Longhorn, to prevent running out of disk space on that PV.
After Graylog deployment the following steps were manually performed via the web interface:
* Add Syslog TCP input for external Linux hosts
* Add Syslog UDP input for Mikrotik networking gear
* Add GELF TCP input for Kubernetes workers
* Trusted header authentication was enabled and set to `Remote-User`
https://graylog.k-space.ee/system/authentication/authenticator/edit
Note that user accounts are not provisioned automatically.
Users need to be manually created in Graylog with matching `Username`.
Automatic user account provisioning is supported in the Graylog Enterprise version.

634
logging/application.yml Normal file

@ -0,0 +1,634 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluent-bit
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: fluent-bit-read
rules:
- apiGroups: [""]
resources:
- namespaces
- pods
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: fluent-bit-read
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: fluent-bit-read
subjects:
- kind: ServiceAccount
name: fluent-bit
namespace: logging
---
apiVersion: v1
kind: ConfigMap
metadata:
name: fluent-bit-config
namespace: logging
labels:
app: fluent-bit
annotations:
reloader.stakater.com/match: "true"
data:
fluent-bit.conf: |
[SERVICE]
Flush 1
Log_Level warn
Daemon off
Parsers_File parsers.conf
HTTP_Server On
HTTP_Listen 0.0.0.0
HTTP_Port 2020
@INCLUDE input-kubernetes.conf
@INCLUDE filter-kubernetes.conf
@INCLUDE output-graylog.conf
input-kubernetes.conf: |
# Following assembles the log fragments of the Kubernetes runtime
# https://github.com/fluent/fluent-bit/blob/d3c71f2ed4ff3625b85715aaefe6bc76b2ac3c2e/src/multiline/flb_ml_parser_docker.c#L57
[INPUT]
name tail
tag kube.*
path /var/log/containers/*.log
multiline.parser cri
db /var/log/flb_kube.db
mem_buf_limit 5MB
skip_long_lines on
refresh_interval 10
filter-kubernetes.conf: |
# Following reassembles stack traces
[FILTER]
name multiline
match *
multiline.key_content log
multiline.parser go,python,java
# Following annotates the Kubernetes logs using Kubernetes API-s
[FILTER]
Name kubernetes
Match kube.*
Kube_URL https://kubernetes.default.svc:443
Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token
Kube_Tag_Prefix kube.var.log.containers.
Merge_Log On
K8S-Logging.Parser On
K8S-Logging.Exclude Off
# Following unnests the kubernetes map
[FILTER]
Name nest
Match kube.*
Operation lift
Nested_under kubernetes
Add_prefix kubernetes_
output-graylog.conf: |
[OUTPUT]
Name gelf
Match *
Host graylog-gelf-tcp
Port 12201
Mode tcp
Gelf_Host_Key kubernetes_host
Gelf_Short_Message_Key log
Retry_Limit no_limits
parsers.conf: |
# http://rubular.com/r/tjUt3Awgg4
[PARSER]
Name cri
Format regex
Regex ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<message>.*)$
Time_Key time
Time_Format %Y-%m-%dT%H:%M:%S.%L%z
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluent-bit
namespace: logging
annotations:
keel.sh/policy: patch
keel.sh/trigger: poll
keel.sh/pollSchedule: "@midnight"
annotations:
reloader.stakater.com/search: "true"
spec:
revisionHistoryLimit: 0
selector:
matchLabels:
app: fluent-bit
template:
metadata:
labels:
app: fluent-bit
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "2020"
prometheus.io/path: /api/v1/metrics/prometheus
spec:
containers:
- name: fluent-bit
image: fluent/fluent-bit:1.9
imagePullPolicy: Always
ports:
- containerPort: 2020
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: fluent-bit-config
mountPath: /fluent-bit/etc/
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: fluent-bit-config
configMap:
name: fluent-bit-config
serviceAccountName: fluent-bit
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
- operator: "Exists"
effect: "NoExecute"
- operator: "Exists"
effect: "NoSchedule"
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: fluent-bit
spec:
podSelector:
matchLabels:
app: fluent-bit
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
podSelector:
matchLabels:
app: prometheus
ports:
- port: 2020
egress:
- to:
- podSelector:
matchLabels:
app: graylog
ports:
- protocol: TCP
port: 12201
- # Kubernetes API endpoint kubernetes.default.svc.cluster.local
# Determine IP-s and ports with: kubectl get ep -n default kubernetes
to:
- ipBlock:
cidr: 172.21.3.0/24
ports:
- port: 6443
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: elasticsearch
labels:
app: elasticsearch
spec:
serviceName: elasticsearch
revisionHistoryLimit: 0
replicas: 1
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
securityContext:
fsGroup: 1000
containers:
- name: elasticsearch
image: elasticsearch:7.17.3
securityContext:
runAsNonRoot: true
runAsUser: 1000
env:
- name: discovery.type
value: single-node
- name: xpack.security.enabled
value: "false"
ports:
- containerPort: 9200
readinessProbe:
httpGet:
path: /_cluster/health
port: 9200
initialDelaySeconds: 5
periodSeconds: 10
failureThreshold: 3
successThreshold: 1
timeoutSeconds: 5
resources:
limits:
memory: "2147483648"
volumeMounts:
- name: elasticsearch-data
mountPath: /usr/share/elasticsearch/data
- name: elasticsearch-tmp
mountPath: /tmp/
volumes:
- emptyDir: {}
name: elasticsearch-keystore
- emptyDir: {}
name: elasticsearch-tmp
- emptyDir: {}
name: elasticsearch-logs
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "10Gi"
storageClassName: longhorn
---
apiVersion: v1
kind: Service
metadata:
name: elasticsearch
labels:
app: elasticsearch
spec:
ports:
- name: api
port: 80
targetPort: 9200
selector:
app: elasticsearch
---
apiVersion: v1
kind: Service
metadata:
name: graylog-gelf-tcp
labels:
app: graylog
spec:
ports:
- name: graylog-gelf-tcp
port: 12201
protocol: TCP
targetPort: 12201
selector:
app: graylog
---
apiVersion: v1
kind: Service
metadata:
name: graylog-syslog-tcp
labels:
app: graylog
annotations:
external-dns.alpha.kubernetes.io/hostname: syslog.k-space.ee
metallb.universe.tf/allow-shared-ip: syslog.k-space.ee
spec:
type: LoadBalancer
externalTrafficPolicy: Local
loadBalancerIP: 172.20.51.4
ports:
- name: graylog-syslog
port: 514
protocol: TCP
selector:
app: graylog
---
apiVersion: v1
kind: Service
metadata:
name: graylog-syslog-udp
labels:
app: graylog
annotations:
external-dns.alpha.kubernetes.io/hostname: syslog.k-space.ee
metallb.universe.tf/allow-shared-ip: syslog.k-space.ee
spec:
type: LoadBalancer
externalTrafficPolicy: Local
loadBalancerIP: 172.20.51.4
ports:
- name: graylog-syslog
port: 514
protocol: UDP
selector:
app: graylog
---
apiVersion: v1
kind: Service
metadata:
name: graylog
labels:
app: graylog
spec:
ports:
- name: graylog
port: 9000
protocol: TCP
selector:
app: graylog
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: graylog
labels:
app: graylog
annotations:
keel.sh/policy: minor
keel.sh/trigger: poll
keel.sh/pollSchedule: "@midnight"
spec:
serviceName: graylog
revisionHistoryLimit: 0
replicas: 1
selector:
matchLabels:
app: graylog
template:
metadata:
labels:
app: graylog
annotations:
prometheus.io/port: "9833"
prometheus.io/scrape: "true"
spec:
securityContext:
fsGroup: 1100
volumes:
- name: graylog-config
downwardAPI:
items:
- path: id
fieldRef:
fieldPath: metadata.name
containers:
- name: graylog
image: graylog/graylog:4.3
env:
- name: GRAYLOG_MONGODB_URI
valueFrom:
secretKeyRef:
name: mongodb-application-readwrite
key: connectionString.standard
- name: GRAYLOG_PROMETHEUS_EXPORTER_ENABLED
value: "true"
- name: GRAYLOG_PROMETHEUS_EXPORTER_BIND_ADDRESS
value: "0.0.0.0:9833"
- name: GRAYLOG_NODE_ID_FILE
value: /config/id
- name: GRAYLOG_HTTP_EXTERNAL_URI
value: "https://graylog.k-space.ee/"
- name: GRAYLOG_TRUSTED_PROXIES
value: "0.0.0.0/0"
- name: GRAYLOG_ELASTICSEARCH_HOSTS
value: "http://elasticsearch"
- name: GRAYLOG_MESSAGE_JOURNAL_ENABLED
value: "false"
- name: GRAYLOG_ROTATION_STRATEGY
value: "size"
- name: GRAYLOG_ELASTICSEARCH_MAX_SIZE_PER_INDEX
value: "268435456"
- name: GRAYLOG_ELASTICSEARCH_MAX_NUMBER_OF_INDICES
value: "16"
envFrom:
- secretRef:
name: graylog-secrets
securityContext:
runAsNonRoot: true
runAsUser: 1100
ports:
- containerPort: 9000
name: graylog
- containerPort: 9833
name: graylog-metrics
livenessProbe:
httpGet:
path: /api/system/lbstatus
port: 9000
initialDelaySeconds: 5
periodSeconds: 30
failureThreshold: 3
successThreshold: 1
timeoutSeconds: 5
readinessProbe:
httpGet:
path: /api/system/lbstatus
port: 9000
initialDelaySeconds: 5
periodSeconds: 10
failureThreshold: 3
successThreshold: 1
timeoutSeconds: 5
volumeMounts:
- name: graylog-config
mountPath: /config
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: graylog
annotations:
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
spec:
rules:
- host: graylog.k-space.ee
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: graylog
port:
number: 9000
tls:
- hosts:
- graylog.k-space.ee
secretName: graylog-tls
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: graylog
spec:
podSelector:
matchLabels:
app: graylog
policyTypes:
- Ingress
- Egress
egress:
- to:
- podSelector:
matchLabels:
app: elasticsearch
ports:
- port: 9200
- to:
- podSelector:
matchLabels:
app: mongodb-svc
ports:
- port: 27017
ingress:
- from:
- ipBlock:
cidr: 172.23.0.0/16
- ipBlock:
cidr: 172.21.0.0/16
- ipBlock:
cidr: 100.102.0.0/16
ports:
- protocol: UDP
port: 514
- protocol: TCP
port: 514
- from:
- podSelector:
matchLabels:
app: fluent-bit
ports:
- protocol: TCP
port: 12201
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
podSelector:
matchLabels:
app: prometheus
ports:
- port: 9833
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
ports:
- protocol: TCP
port: 9000
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: elasticsearch
spec:
podSelector:
matchLabels:
app: elasticsearch
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
app: graylog
egress:
- to:
- ipBlock:
# geoip.elastic.co updates
cidr: 0.0.0.0/0
ports:
- port: 443
---
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
name: mongodb
spec:
members: 3
type: ReplicaSet
version: "5.0.9"
security:
authentication:
modes: ["SCRAM"]
users:
- name: readwrite
db: application
passwordSecretRef:
name: mongodb-application-readwrite-password
roles:
- name: readWrite
db: application
scramCredentialsSecretName: mongodb-application-readwrite
- name: readonly
db: application
passwordSecretRef:
name: mongodb-application-readonly-password
roles:
- name: readOnly
db: application
scramCredentialsSecretName: mongodb-application-readonly
statefulSet:
spec:
template:
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- mongodb-svc
topologyKey: kubernetes.io/hostname
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
volumeClaimTemplates:
- metadata:
name: logs-volume
spec:
storageClassName: local-path
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 512Mi
- metadata:
name: data-volume
spec:
storageClassName: local-path
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi

1
logging/mongodb-support.yml Symbolic link

@ -0,0 +1 @@
../mongodb-operator/mongodb-support.yml

logging/networkpolicy-base.yml Symbolic link

@ -0,0 +1 @@
../shared/networkpolicy-base.yml

20
longhorn-system/README.md Normal file

@ -0,0 +1,20 @@
# Longhorn distributed block storage system
The manifest was fetched from
https://raw.githubusercontent.com/longhorn/longhorn/v1.2.4/deploy/longhorn.yaml
and then heavily modified.
To deploy Longhorn use following:
```
kubectl -n longhorn-system apply -f longhorn.yaml -f ingress.yml
```
After deploying, specify `dedicated=storage:NoSchedule`
for `Kubernetes Taint Toleration` under `Setting -> General` on the
[Longhorn Dashboard](https://longhorn.k-space.ee/).
Proceed to tag suitable nodes with `storage` and disable Longhorn scheduling on others.
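Tagging a node for storage duty, as assumed by the `dedicated: storage` node
selectors and tolerations used elsewhere in this repo, would look roughly like:
```
kubectl label node <node> dedicated=storage
kubectl taint node <node> dedicated=storage:NoSchedule
```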
# Known issues
* Longhorn does not support [trim](https://github.com/longhorn/longhorn/issues/836)

longhorn-system/ingress.yml Normal file

@ -0,0 +1,28 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: longhorn-dashboard
namespace: longhorn-system
annotations:
kubernetes.io/ingress.class: traefik
cert-manager.io/cluster-issuer: default
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
spec:
rules:
- host: longhorn.k-space.ee
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: longhorn-frontend
port:
number: 80
tls:
- hosts:
- longhorn.k-space.ee
secretName: longhorn-tls

File diff suppressed because it is too large


@ -0,0 +1,27 @@
persistence:
defaultClassReplicaCount: 2
defaultSettings:
defaultDataLocality: best-effort
taintToleration: "dedicated=storage:NoSchedule"
systemManagedComponentsNodeSelector: "dedicated:storage"
longhornDriver:
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
longhornUI:
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
ingress:
enabled: true
host: longhorn.k-space.ee
tls: true
tlsSecret: longhorn-tls

29
metallb-system/README.md Normal file

@ -0,0 +1,29 @@
# MetalLB
## Background
MetalLB exposes services to the outside world.
## Deployment
To update manifests:
```
curl -O https://raw.githubusercontent.com/metallb/metallb-operator/v0.13.4/bin/metallb-operator.yaml
kubectl apply -f metallb-operator.yaml
kubectl apply -f application.yml
```
Set up BGP secrets:
```
kubectl delete secret -n metallb-system mikrotik-router
kubectl create secret -n metallb-system generic mikrotik-router --type=kubernetes.io/basic-auth --from-literal=password=...
```
Eventually the external IP should show up here:
```
kubectl get svc -n traefik
```
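To pin a Service to a specific pool the `metallb.universe.tf/address-pool`
annotation can be used (a hypothetical example; the services in this repo pin
`loadBalancerIP` directly instead):
```
apiVersion: v1
kind: Service
metadata:
  name: example
  annotations:
    metallb.universe.tf/address-pool: eenet
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
    - protocol: TCP
      port: 80
```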

metallb-system/application.yml Normal file

@ -0,0 +1,60 @@
---
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
name: metallb
namespace: metallb-system
spec:
nodeSelector:
node-role.kubernetes.io/worker: ""
---
# Slice of the private Zoo subnet using MetalLB L2 method
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: zoo
namespace: metallb-system
spec:
addresses:
- 172.20.51.0/24
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: zoo
namespace: metallb-system
spec:
ipAddressPools:
- zoo
---
# Slice of public EEnet subnet using MetalLB L3 method
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: eenet
namespace: metallb-system
spec:
addresses:
- 193.40.103.36/30
---
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
name: mikrotik-router
namespace: metallb-system
spec:
myASN: 65530
peerASN: 65530
peerAddress: 172.20.0.1
passwordSecret:
name: mikrotik-router
namespace: metallb-system
---
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
name: eenet
namespace: metallb-system
spec:
ipAddressPools:
- eenet

File diff suppressed because it is too large

1
mongodb-operator/.gitignore vendored Normal file

@ -0,0 +1 @@
application.yml

mongodb-operator/README.md Normal file

@ -0,0 +1,36 @@
# MongoDB Community Kubernetes Operator
To update the operator itself:
```
helm repo add mongodb https://mongodb.github.io/helm-charts
helm template -n mongodb-operator community-operator mongodb/community-operator -f values.yaml > application.yml
kubectl create namespace mongodb-operator
kubectl apply -f application.yml
```
To update RBAC rules:
```
curl https://raw.githubusercontent.com/mongodb/mongodb-kubernetes-operator/master/config/rbac/role.yaml > mongodb-support.yml
echo "---" >> mongodb-support.yml
curl https://raw.githubusercontent.com/mongodb/mongodb-kubernetes-operator/master/config/rbac/role_binding.yaml >> mongodb-support.yml
echo "---" >> mongodb-support.yml
curl https://raw.githubusercontent.com/mongodb/mongodb-kubernetes-operator/master/config/rbac/role_binding_database.yaml >> mongodb-support.yml
echo "---" >> mongodb-support.yml
curl https://raw.githubusercontent.com/mongodb/mongodb-kubernetes-operator/master/config/rbac/role_database.yaml >> mongodb-support.yml
echo "---" >> mongodb-support.yml
curl https://raw.githubusercontent.com/mongodb/mongodb-kubernetes-operator/master/config/rbac/service_account.yaml >> mongodb-support.yml
echo "---" >> mongodb-support.yml
curl https://raw.githubusercontent.com/mongodb/mongodb-kubernetes-operator/master/config/rbac/service_account_database.yaml >> mongodb-support.yml
```
# Instantiating databases
For each application:
```
ln -s ../mongodb-operator/mongodb-support.yml
kubectl apply -f mongodb-support.yml
kubectl create secret generic -n default mongodb-application-user-password --from-literal="password=$(cat /dev/urandom | base64 | head -c 30)"
```
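A minimal `MongoDBCommunity` resource referencing that secret might then look
like the following sketch (see `logging/application.yml` for a full in-repo
example):
```
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: mongodb
spec:
  members: 3
  type: ReplicaSet
  version: "5.0.9"
  security:
    authentication:
      modes: ["SCRAM"]
  users:
    - name: application
      db: application
      passwordSecretRef:
        name: mongodb-application-user-password
      roles:
        - name: readWrite
          db: application
      scramCredentialsSecretName: mongodb-application
```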

mongodb-operator/mongodb-support.yml Normal file

@ -0,0 +1,126 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: mongodb-kubernetes-operator
rules:
- apiGroups:
- ""
resources:
- pods
- services
- configmaps
- secrets
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- apps
resources:
- statefulsets
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- mongodbcommunity.mongodb.com
resources:
- mongodbcommunity
- mongodbcommunity/status
- mongodbcommunity/spec
- mongodbcommunity/finalizers
verbs:
- get
- patch
- list
- update
- watch
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: mongodb-kubernetes-operator
subjects:
- kind: ServiceAccount
name: mongodb-kubernetes-operator
roleRef:
kind: Role
name: mongodb-kubernetes-operator
apiGroup: rbac.authorization.k8s.io
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: mongodb-database
subjects:
- kind: ServiceAccount
name: mongodb-database
roleRef:
kind: Role
name: mongodb-database
apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: mongodb-database
rules:
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- apiGroups:
- ""
resources:
- pods
verbs:
- patch
- delete
- get
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: mongodb-kubernetes-operator
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: mongodb-database
---
# Allow any pod in this namespace to connect to MongoDB and
# allow cluster members to talk to each other
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: mongodb-operator
spec:
podSelector:
matchLabels:
app: mongodb-svc
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector: {}
ports:
- port: 27017
egress:
- to:
- podSelector:
matchLabels:
app: mongodb-svc
ports:
- port: 27017

mongodb-operator/values.yaml Normal file

@ -0,0 +1,2 @@
operator:
watchNamespace: '*'

3
mysql-operator/README.md Normal file

@ -0,0 +1,3 @@
To update the operator and deploy:
```
helm template mysql-operator mysql-operator/mysql-operator --namespace mysql-operator --include-crds > application.yml
kubectl apply -n mysql-operator -f application.yml -f application-extras.yml -f networkpolicy-base.yml
```
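An `InnoDBCluster` (e.g. the one in `etherpad/application.yml`) additionally
expects a secret with root credentials; a sketch following the upstream
operator's convention (the key names are the operator's documented defaults,
not something defined in this repo):
```
kubectl create secret generic -n etherpad mysql-secrets \
  --from-literal=rootUser=root \
  --from-literal=rootHost=% \
  --from-literal=rootPassword=$(cat /dev/urandom | base64 | head -c 30)
```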

mysql-operator/application-extras.yml Normal file

@ -0,0 +1,16 @@
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: mysql-operator
spec:
podSelector: {}
policyTypes:
- Egress
egress:
- # TODO: Not sure why mysql-operator needs to be able to connect
to:
- namespaceSelector: {}
ports:
- protocol: TCP
port: 3306

mysql-operator/application.yml Normal file

@ -0,0 +1,608 @@
---
# Source: crds/crd.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: innodbclusters.mysql.oracle.com
spec:
group: mysql.oracle.com
versions:
- name: v2
served: true
storage: true
schema:
openAPIV3Schema:
type: object
required: ["spec"]
properties:
metadata:
type: object
properties:
name:
type: string
maxLength: 40
spec:
type: object
required: ["secretName"]
properties:
secretName:
type: string
description: "Name of a generic type Secret containing root/default account password"
tlsCASecretName:
type: string
description: "Name of a generic type Secret containing CA (ca.pem) and optional CRL (crl.pem) for SSL"
tlsSecretName:
type: string
description: "Name of a TLS type Secret containing Server certificate and private key for SSL"
tlsUseSelfSigned:
type: boolean
default: false
description: "Enables use of self-signed TLS certificates, reducing or disabling TLS based security verifications"
version:
type: string
pattern: '^\d+\.\d+\.\d+(-.+)?'
description: "MySQL Server version"
edition:
type: string
pattern: "^(community|enterprise)$"
description: "MySQL Server Edition (community or enterprise)"
imageRepository:
type: string
description: "Repository from where images must be pulled from; defaults to mysql for community and container-registry.oracle.com/mysql for enterprise"
imagePullPolicy:
type: string
description: "Defaults to Always, but set to IfNotPresent in deploy-operator.yaml when deploying Operator"
imagePullSecrets:
type: array
items:
type: object
properties:
name:
type: string
serviceAccountName:
type: string
baseServerId:
type: integer
minimum: 0
maximum: 4294967195
default: 1000
description: "Base value for MySQL server_id for instances in the cluster"
datadirVolumeClaimTemplate:
type: object
x-kubernetes-preserve-unknown-fields: true
description: "Template for a PersistentVolumeClaim, to be used as datadir"
mycnf:
type: string
description: "Custom configuration additions for my.cnf"
instances:
type: integer
minimum: 1
maximum: 9
default: 1
description: "Number of MySQL replica instances for the cluster"
podSpec:
type: object
x-kubernetes-preserve-unknown-fields: true
initDB:
type: object
properties:
clone:
type: object
required: ["donorUrl", "secretKeyRef"]
properties:
donorUrl:
type: string
description: "URL of the cluster to clone from"
rootUser:
type: string
default: "root"
description: "User name used for cloning"
secretKeyRef:
type: object
required: ["name"]
properties:
name:
type: string
description: "Secret name with key 'rootPassword' storing the password for the user specified in rootUser"
dump:
type: object
required: ["storage"]
properties:
name:
type: string
description: "Name of the dump. Not used by the operator, but a descriptive hint for the cluster administrator"
path:
type: string
description: "Path to the dump in the PVC. Use when specifying persistentVolumeClaim. Omit for ociObjectStorage."
storage:
type: object
properties:
ociObjectStorage:
type: object
required: ["bucketName", "prefix", "credentials"]
properties:
bucketName:
type: string
description: "Name of the bucket where the dump is stored"
prefix:
type: string
description: "Path in the bucket where the dump files are stored"
credentials:
type: string
description: "Secret name with data for accessing the bucket"
persistentVolumeClaim:
type: object
description : "Specification of the PVC to be used. Used 'as is' in the cloning pod."
x-kubernetes-preserve-unknown-fields: true
x-kubernetes-preserve-unknown-fields: true
router:
type: object
description: "MySQL Router specification"
properties:
instances:
type: integer
minimum: 0
default: 1
description: "Number of MySQL Router instances to deploy"
tlsSecretName:
type: string
description: "Name of a TLS type Secret containing MySQL Router certificate and private key used for SSL"
version:
type: string
pattern: '^\d+\.\d+\.\d+(-.+)?'
description: "Override MySQL Router version"
podSpec:
type: object
x-kubernetes-preserve-unknown-fields: true
backupProfiles:
type: array
description: "Backup profile specifications for the cluster, which can be referenced from backup schedules and one-off backup jobs"
items:
type: object
required: ["name"]
properties:
name:
type: string
description: "Embedded backup profile, referenced as backupProfileName elsewhere"
dumpInstance:
type: object
properties:
dumpOptions:
type: object
description: "A dictionary of key-value pairs passed directly to MySQL Shell's DumpInstance()"
x-kubernetes-preserve-unknown-fields: true
storage:
type: object
properties:
ociObjectStorage:
type: object
required: ["bucketName", "prefix", "credentials"]
properties:
bucketName:
type: string
description: "Bucket name where backup is stored"
prefix:
type: string
description: "Path in bucket where backup is stored"
credentials:
type: string
description: "Secret name with data for accessing the bucket"
persistentVolumeClaim:
type: object
description : "Specification of the PVC to be used. Used 'as is' in pod executing the backup."
x-kubernetes-preserve-unknown-fields: true
snapshot:
type: object
properties:
storage:
type: object
properties:
ociObjectStorage:
type: object
required: ["bucketName", "prefix", "credentials"]
properties:
bucketName:
type: string
description: "Bucket name where backup is stored"
prefix:
type: string
description: "Path in bucket where backup is stored"
credentials:
type: string
description: "Secret name with data for accessing the bucket"
persistentVolumeClaim:
type: object
description : "Specification of the PVC to be used. Used 'as is' in pod executing the backup."
x-kubernetes-preserve-unknown-fields: true
x-kubernetes-preserve-unknown-fields: true
backupSchedules:
type: array
description: "Schedules for periodically executed backups"
items:
type: object
required: ["name", "schedule"]
x-kubernetes-preserve-unknown-fields: true
properties:
name:
type: string
description: "Name of the backup schedule"
schedule:
type: string
description: "The schedule of the job, syntax as a cron expression"
backupProfileName:
type: string
description: "Name of the backupProfile to be used"
backupProfile:
type: object
description: "backupProfile specification if backupProfileName is not specified"
x-kubernetes-preserve-unknown-fields: true
deleteBackupData:
type: boolean
default: false
description: "Whether to delete the backup data in case the MySQLBackup object created by the job is deleted"
enabled:
type: boolean
default: true
description: "Whether the schedule is enabled or not"
status:
type: object
x-kubernetes-preserve-unknown-fields: true
subresources:
status: {}
additionalPrinterColumns:
- name: Status
type: string
description: Status of the InnoDB Cluster
jsonPath: .status.cluster.status
- name: Online
type: integer
description: Number of ONLINE InnoDB Cluster instances
jsonPath: .status.cluster.onlineInstances
- name: Instances
type: integer
description: Number of InnoDB Cluster instances configured
jsonPath: .spec.instances
- name: Routers
type: integer
description: Number of Router instances configured for the InnoDB Cluster
jsonPath: .spec.router.instances
- name: Age
type: date
jsonPath: .metadata.creationTimestamp
scope: Namespaced
names:
kind: InnoDBCluster
listKind: InnoDBClusterList
singular: innodbcluster
plural: innodbclusters
shortNames:
- ic
- ics
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: mysqlbackups.mysql.oracle.com
spec:
group: mysql.oracle.com
scope: Namespaced
names:
kind: MySQLBackup
listKind: MySQLBackupList
singular: mysqlbackup
plural: mysqlbackups
shortNames:
- mbk
versions:
- name: v2
served: true
storage: true
schema:
openAPIV3Schema:
type: object
properties:
spec:
type: object
required: ["clusterName"]
properties:
clusterName:
type: string
backupProfileName:
type: string
backupProfile:
type: object
x-kubernetes-preserve-unknown-fields: true
addTimestampToBackupDirectory:
type: boolean
default: true
deleteBackupData:
type: boolean
default: false
status:
type: object
properties:
status:
type: string
startTime:
type: string
completionTime:
type: string
elapsedTime:
type: string
output:
type: string
method:
type: string
source:
type: string
bucket:
type: string
ociTenancy:
type: string
spaceAvailable:
type: string
size:
type: string
subresources:
status: {}
additionalPrinterColumns:
- name: Cluster
type: string
description: Name of the target cluster
jsonPath: .spec.clusterName
- name: Status
type: string
description: Status of the Backup
jsonPath: .status.status
- name: Output
type: string
description: Name of the produced file/directory
jsonPath: .status.output
- name: Age
type: date
jsonPath: .metadata.creationTimestamp
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: clusterkopfpeerings.zalando.org
spec:
scope: Cluster
group: zalando.org
names:
kind: ClusterKopfPeering
plural: clusterkopfpeerings
singular: clusterkopfpeering
versions:
- name: v1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
properties:
status:
type: object
x-kubernetes-preserve-unknown-fields: true
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: kopfpeerings.zalando.org
spec:
scope: Namespaced
group: zalando.org
names:
kind: KopfPeering
plural: kopfpeerings
singular: kopfpeering
versions:
- name: v1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
properties:
status:
type: object
x-kubernetes-preserve-unknown-fields: true
---
# Source: mysql-operator/templates/service_account_operator.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: mysql-operator-sa
namespace: mysql-operator
---
# Source: mysql-operator/templates/cluster_role_operator.yaml
# The main role for the operator
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: mysql-operator
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "watch", "patch"]
- apiGroups: [""]
resources: ["pods/status"]
verbs: ["get", "patch", "update", "watch"]
# Kopf needs patch on secrets or the sidecar will throw
# The operator needs this verb to be able to pass it to the sidecar
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "create", "list", "watch", "patch"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "create", "list", "watch", "patch"]
- apiGroups: [""]
resources: ["services"]
verbs: ["get", "create"]
- apiGroups: [""]
resources: ["serviceaccounts"]
verbs: ["get", "create"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "patch", "update"]
- apiGroups: ["rbac.authorization.k8s.io"]
resources: ["rolebindings"]
verbs: ["get", "create"]
- apiGroups: ["policy"]
resources: ["poddisruptionbudgets"]
verbs: ["get", "create"]
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create"]
- apiGroups: ["batch"]
resources: ["cronjobs"]
verbs: ["create", "update", "delete"]
- apiGroups: ["apps"]
resources: ["deployments", "statefulsets"]
verbs: ["get", "create", "patch", "watch", "delete"]
- apiGroups: ["mysql.oracle.com"]
resources: ["*"]
verbs: ["*"]
- apiGroups: ["zalando.org"]
resources: ["*"]
verbs: ["get", "patch", "list", "watch"]
# Kopf: runtime observation of namespaces & CRDs (addition/deletion).
- apiGroups: [apiextensions.k8s.io]
resources: [customresourcedefinitions]
verbs: [list, watch]
- apiGroups: [""]
resources: [namespaces]
verbs: [list, watch]
---
# Source: mysql-operator/templates/cluster_role_sidecar.yaml
# role for the server sidecar
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: mysql-sidecar
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "watch", "patch"]
- apiGroups: [""]
resources: ["pods/status"]
verbs: ["get", "patch", "update", "watch"]
# Kopf needs patch on secrets or the sidecar will throw
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "create", "list", "watch", "patch"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "create", "list", "watch", "patch"]
- apiGroups: [""]
resources: ["services"]
verbs: ["get", "create"]
- apiGroups: [""]
resources: ["serviceaccounts"]
verbs: ["get", "create"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "patch", "update"]
- apiGroups: ["apps"]
resources: ["deployments"]
verbs: ["get", "patch"]
- apiGroups: ["mysql.oracle.com"]
resources: ["innodbclusters"]
verbs: ["get", "watch", "list"]
- apiGroups: ["mysql.oracle.com"]
resources: ["mysqlbackups"]
verbs: ["create", "get", "list", "patch", "update", "watch", "delete"]
- apiGroups: ["mysql.oracle.com"]
resources: ["mysqlbackups/status"]
verbs: ["get", "patch", "update", "watch"]
---
# Source: mysql-operator/templates/cluster_role_binding_operator.yaml
# Give access to the operator
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: mysql-operator-rolebinding
subjects:
- kind: ServiceAccount
name: mysql-operator-sa
namespace: mysql-operator
# TODO The following entry is for dev purposes only and must be deleted
#- kind: Group
# name: system:serviceaccounts
# apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: mysql-operator
apiGroup: rbac.authorization.k8s.io
---
# Source: mysql-operator/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: mysql-operator
namespace: mysql-operator
labels:
name: mysql-operator
spec:
type: ClusterIP
ports:
- port: 9443
protocol: TCP
selector:
name: mysql-operator
---
# Source: mysql-operator/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql-operator
namespace: mysql-operator
labels:
version: "8.0.30-2.0.5"
app.kubernetes.io/name: mysql-operator
app.kubernetes.io/instance: mysql-operator
app.kubernetes.io/version: "8.0.30-2.0.5"
app.kubernetes.io/component: controller
app.kubernetes.io/managed-by: helm
app.kubernetes.io/created-by: helm
spec:
replicas: 1
selector:
matchLabels:
name: mysql-operator
template:
metadata:
labels:
name: mysql-operator
spec:
containers:
- name: mysql-operator
image: mysql/mysql-operator:8.0.30-2.0.5
imagePullPolicy: IfNotPresent
args: ["mysqlsh", "--log-level=@INFO", "--pym", "mysqloperator", "operator"]
env:
- name: MYSQLSH_USER_CONFIG_HOME
value: /mysqlsh
- name: MYSQL_OPERATOR_IMAGE_PULL_POLICY
value: IfNotPresent
volumeMounts:
- name: mysqlsh-home
mountPath: /mysqlsh
securityContext:
allowPrivilegeEscalation: false
privileged: false
readOnlyRootFilesystem: true
volumes:
- name: mysqlsh-home
emptyDir: {}
serviceAccountName: mysql-operator-sa
---
# Source: mysql-operator/templates/cluster_kopf_keepering.yaml
apiVersion: zalando.org/v1
kind: ClusterKopfPeering
metadata:
name: mysql-operator
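For reference, a minimal InnoDBCluster wiring together the backup schema described above might look like the following sketch. The resource name, the PVC and the password Secret are placeholders, and `secretName` comes from a part of the CRD schema above this excerpt:
```
apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mycluster
spec:
  secretName: mycluster-pwds       # Secret holding root credentials (placeholder)
  instances: 3                     # surfaces in the Instances printer column
  router:
    instances: 1
  backupProfiles:
    - name: daily-dump             # embedded profile, referenced by backupProfileName below
      dumpInstance:
        storage:
          persistentVolumeClaim:
            claimName: backup-pvc  # placeholder claim, used 'as is' by the backup pod
  backupSchedules:
    - name: daily
      schedule: "0 3 * * *"        # cron expression syntax
      backupProfileName: daily-dump
      enabled: true
```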

@ -0,0 +1 @@
../shared/networkpolicy-base.yml

5
phpmyadmin/README.md Normal file
@ -0,0 +1,5 @@
# phpMyAdmin
```
kubectl apply -n phpmyadmin -f application.yml
```
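The servers selectable on the login screen come from the `PMA_HOSTS` environment variable in `application.yml`. To expose an additional MySQL instance, append it to that list; a sketch where `mariadb.myapp` is a placeholder:
```
      env:
        - name: PMA_HOSTS
          value: mysql-cluster.etherpad.svc.cluster.local,mariadb.authelia,mariadb.nextcloud,mariadb.myapp,172.20.36.1
```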

108
phpmyadmin/application.yml Normal file
@ -0,0 +1,108 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: phpmyadmin
labels:
app: phpmyadmin
spec:
# phpMyAdmin session handling is not really compatible with more replicas
replicas: 1
selector:
matchLabels:
app: phpmyadmin
template:
metadata:
labels:
app: phpmyadmin
spec:
containers:
- name: phpmyadmin
image: phpmyadmin/phpmyadmin
ports:
- name: web
containerPort: 80
protocol: TCP
env:
- name: PMA_ARBITRARY
value: "1"
- name: PMA_HOSTS
value: mysql-cluster.etherpad.svc.cluster.local,mariadb.authelia,mariadb.nextcloud,172.20.36.1
- name: PMA_ABSOLUTE_URI
value: https://phpmyadmin.k-space.ee/
- name: UPLOAD_LIMIT
value: 10G
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: phpmyadmin
annotations:
kubernetes.io/ingress.class: traefik
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
rules:
- host: phpmyadmin.k-space.ee
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: phpmyadmin
port:
number: 80
tls:
- hosts:
- phpmyadmin.k-space.ee
secretName: phpmyadmin-tls
---
apiVersion: v1
kind: Service
metadata:
name: phpmyadmin
labels:
app: phpmyadmin
spec:
selector:
app: phpmyadmin
ports:
- protocol: TCP
port: 80
targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: phpmyadmin
spec:
podSelector:
matchLabels:
app: phpmyadmin
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
ports:
- protocol: TCP
port: 80
egress:
- # Allow connecting to MySQL instance in any namespace
to:
- namespaceSelector: {}
ports:
- port: 3306
- # Allow connecting to any MySQL instance outside the cluster
to:
- ipBlock:
cidr: 0.0.0.0/0
ports:
- protocol: TCP
port: 3306

@ -0,0 +1 @@
../shared/networkpolicy-base.yml

109
reloader/application.yml Normal file
@ -0,0 +1,109 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app: reloader
name: reloader
namespace: reloader
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app: reloader
name: reloader-role
namespace: reloader
rules:
- apiGroups:
- ""
resources:
- secrets
- configmaps
verbs:
- list
- get
- watch
- apiGroups:
- apps
resources:
- deployments
- daemonsets
- statefulsets
verbs:
- list
- get
- update
- patch
- apiGroups:
- "extensions"
resources:
- deployments
- daemonsets
verbs:
- list
- get
- update
- patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app: reloader
name: reloader-role-binding
namespace: reloader
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: reloader-role
subjects:
- kind: ServiceAccount
name: reloader
namespace: reloader
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: reloader
name: reloader
namespace: reloader
spec:
replicas: 1
revisionHistoryLimit: 0
selector:
matchLabels:
app: reloader
template:
metadata:
labels:
app: reloader
spec:
containers:
- image: "stakater/reloader:v0.0.118"
imagePullPolicy: Always
name: reloader
ports:
- name: http
containerPort: 9090
livenessProbe:
httpGet:
path: /metrics
port: http
timeoutSeconds: 5
failureThreshold: 5
periodSeconds: 10
successThreshold: 1
readinessProbe:
httpGet:
path: /metrics
port: http
timeoutSeconds: 5
failureThreshold: 5
periodSeconds: 10
successThreshold: 1
securityContext:
runAsNonRoot: true
runAsUser: 65534
serviceAccountName: reloader
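Reloader itself needs no per-app configuration: it watches Secrets and ConfigMaps cluster-wide (hence the ClusterRole above) and rolls workloads that opt in via annotation. A sketch of opting a Deployment in, with a hypothetical name:
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  annotations:
    reloader.stakater.com/auto: "true"  # restart pods when referenced ConfigMaps/Secrets change
```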

3
rosdump/.gitignore vendored Normal file
@ -0,0 +1,3 @@
rosdump
rosdump.pub
ssh_known_hosts

68
rosdump/README.md Normal file
@ -0,0 +1,68 @@
# Intro
This is how we back up Mikrotik device configurations using a Kubernetes
CronJob. It is easy to monitor with Prometheus and integrates well with the
rest of our monitoring system. Also, the script/manifest is less than 100 lines,
easy to follow and to fix.
Note that this has nothing to do with
[ecadlabs/rosdump](https://github.com/ecadlabs/rosdump),
which we used initially, but it just generated empty commits and
there was no easy way to monitor it.
We also considered [ytti/oxidized](https://github.com/ytti/oxidized),
but it does not export Prometheus metrics either.
# Deployment
To apply changes run in this directory:
```
kubectl apply -n rosdump -f cronjob.yaml
```
To trigger cronjob:
```
kubectl create job -n rosdump --from=cronjob/rosdump-cronjob rosdump-job-oneshot
```
For alerting:
```
absent(kube_cronjob_status_last_successful_time{cronjob="rosdump-cronjob"})
```
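If the Prometheus Operator is in use, the same expression can be shipped as a PrometheusRule; a sketch, with the alert name and `for` duration being assumptions:
```
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: rosdump
spec:
  groups:
    - name: rosdump
      rules:
        - alert: RosdumpBackupMissing
          expr: absent(kube_cronjob_status_last_successful_time{cronjob="rosdump-cronjob"})
          for: 2h
```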
# Updating SSH public keys
Whenever Mikrotik targets are added/removed or if their SSH keys change,
use the following to apply the changes:
```
(for j in $(kubectl get cm -n rosdump rosdump-config -o json | jq -r '.data.targets'); do ssh-keyscan -t rsa $j; done) > ssh_known_hosts
kubectl delete -n rosdump configmap rosdump-known-hosts
kubectl create -n rosdump configmap rosdump-known-hosts --from-file=ssh_known_hosts
```
Make sure strong crypto is enabled on Mikrotik side:
```
/ip ssh set strong-crypto=yes allow-none-crypto=no
```
# Replacing SSH private key
This affects access to both Gitea and Mikrotik targets.
Generate a new key and inject it into the Kubernetes cluster:
```
rm -fv rosdump
ssh-keygen -P '' -b 2048 -m PEM -t rsa -f rosdump -C rosdump
kubectl delete -n rosdump secret rosdump-secrets
kubectl create -n rosdump secret generic rosdump-secrets --from-file=ssh_identity=rosdump
```
Then replace the public key in Gitea with the one from `rosdump.pub`.

110
rosdump/application.yml Normal file
@ -0,0 +1,110 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: rosdump-config
data:
script.sh: |
#!/bin/bash
set -e
if [ -d rosdump ]; then
echo "Pulling Git repo"
cd rosdump
git pull
else
echo "Cloning Git repo"
git clone git@git.k-space.ee:k-space/rosdump.git
cd rosdump
fi
git rm *.k-space.ee
for target in $(cat /config/targets | grep -v '^#'); do
echo "Exporting configuration for $target"
ssh rosdump@$target '/export' | grep -v '^# serial number =' | grep -v '^#.* by RouterOS' > $target
git add $target
done
if [[ `git status --porcelain` ]]; then
echo "Attempting Git check in"
git commit -m "Update $(git ls-files -m) file(s)"
git push
else
echo "No changes to commit"
fi
targets: |
router.mgmt.k-space.ee
sw_core01.mgmt.k-space.ee
sw_core02.mgmt.k-space.ee
sw_mgmt.mgmt.k-space.ee
sw_poe.mgmt.k-space.ee
sw_ha.mgmt.k-space.ee
sw_cyber.mgmt.k-space.ee
sw_chaos.mgmt.k-space.ee
sw_asocial.mgmt.k-space.ee
sw_kitchen.mgmt.k-space.ee
---
apiVersion: batch/v1
kind: CronJob
metadata:
name: rosdump-cronjob
spec:
schedule: "0 * * * *"
concurrencyPolicy: Forbid
jobTemplate:
spec:
activeDeadlineSeconds: 300
template:
spec:
nodeSelector:
dedicated: monitoring
tolerations:
- key: dedicated
operator: Equal
value: monitoring
effect: NoSchedule
restartPolicy: OnFailure
containers:
- name: rosdump
image: harbor.k-space.ee/k-space/microscript-base
imagePullPolicy: Always
args:
- bash
- /config/script.sh
volumeMounts:
- name: config
mountPath: /config
volumes:
- name: config
projected:
sources:
- secret:
name: rosdump-secrets
items:
- key: ssh_identity
path: ssh_identity
mode: 0600
- configMap:
name: rosdump-known-hosts
items:
- key: ssh_known_hosts
path: ssh_known_hosts
- configMap:
name: rosdump-config
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: rosdump
spec:
podSelector: {}
policyTypes:
- Egress
egress:
- to:
- ipBlock:
cidr: 193.40.103.0/24
- ipBlock:
cidr: 172.23.0.0/24
- ipBlock:
cidr: 100.102.1.0/24
ports:
- protocol: TCP
port: 22

72
shared/README.md Normal file
@ -0,0 +1,72 @@
# KeyDB
KeyDB can be instantiated by symlinking the generated keydb.yml;
in the future this could be handled by an operator.
```
helm template keydb enapter/keydb --set persistentVolume.enabled=false > keydb.yml
```
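For example, to bring KeyDB up in a namespace (here the hypothetical `myapp`), symlink the shared manifest in and apply it; a `redis-secrets` Secret providing `REDIS_PASSWORD` must exist in that namespace, see keydb.yml:
```
ln -s ../shared/keydb.yml myapp/keydb.yml
kubectl apply -n myapp -f myapp/keydb.yml
```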
# To regenerate base network policies
It's quite odd there is no better way to generate these:
```
cat << EOF > networkpolicy-base.yml
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: kubedns
spec:
podSelector: {}
policyTypes:
- Egress
egress:
- to:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: kube-system
ports:
- protocol: UDP
port: 53
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: kubeprobe
spec:
podSelector: {}
policyTypes:
- Ingress
ingress:
EOF
for j in $(kubectl get nodes -o json | jq '.items[] | .spec.podCIDR' -r | cut -d "/" -f 1 | sed -e 's/\.0$/\.1\/32/' | xargs); do
cat << EOF >> networkpolicy-base.yml
- from:
- ipBlock:
cidr: $j
EOF
done
cat << EOF >> networkpolicy-base.yml
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: kubeapi
spec:
podSelector: {}
policyTypes:
- Egress
egress:
- ports:
- port: 6443
to:
EOF
for j in $(kubectl get ep -n default kubernetes -o json | jq '.subsets[].addresses[].ip' -r | xargs); do
cat << EOF >> networkpolicy-base.yml
- ipBlock:
cidr: $j/32
EOF
done
```

77
shared/backup-service.yml Normal file
@ -0,0 +1,77 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: backup-service
spec:
replicas: 1
selector:
matchLabels:
app: backup-service
template:
metadata:
labels:
app: backup-service
spec:
serviceAccount: backup-service
containers:
- name: backup-service
image: harbor.k-space.ee/k-space/backup-service
ports:
- name: backup-service
containerPort: 5000
env:
- name: TOKEN
value: CYdCDFIvGX
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: backup-service
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: backup-service
rules:
- apiGroups:
- ""
resources:
- namespaces
verbs:
- list
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- apiGroups:
- mongodbcommunity.mongodb.com
resources:
- mongodbcommunity
verbs:
- get
- list
- watch
- apiGroups:
- mysql.oracle.com
resources:
- innodbclusters
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: backup-service
namespace: shared
subjects:
- kind: ServiceAccount
name: backup-service
namespace: shared
roleRef:
kind: ClusterRole
name: backup-service
apiGroup: rbac.authorization.k8s.io

244
shared/keydb.yml Normal file
@ -0,0 +1,244 @@
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: redis
labels:
app.kubernetes.io/name: redis
spec:
maxUnavailable: 1
selector:
matchLabels:
app.kubernetes.io/name: redis
---
apiVersion: v1
kind: Secret
metadata:
name: redis-utils
labels:
app.kubernetes.io/name: redis
type: Opaque
stringData:
server.sh: |
#!/bin/bash
set -euxo pipefail
host="$(hostname)"
port="6379"
replicas=()
for node in {0..2}; do
if [ "${host}" != "redis-${node}" ]; then
replicas+=("--replicaof redis-${node}.redis-headless ${port}")
fi
done
exec keydb-server /etc/keydb/redis.conf \
--active-replica "yes" \
--multi-master "yes" \
--appendonly "no" \
--bind "0.0.0.0" \
--port "${port}" \
--protected-mode "no" \
--server-threads "2" \
--masterauth "${REDIS_PASSWORD}" \
--requirepass "${REDIS_PASSWORD}" \
"${replicas[@]}"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: redis-health
labels:
app.kubernetes.io/name: redis
data:
ping_readiness_local.sh: |-
#!/bin/bash
set -e
[[ -n "${REDIS_PASSWORD}" ]] && export REDISCLI_AUTH="${REDIS_PASSWORD}"
response="$(
timeout -s 3 "${1}" \
keydb-cli \
-h localhost \
-p 6379 \
ping
)"
if [ "${response}" != "PONG" ]; then
echo "${response}"
exit 1
fi
ping_liveness_local.sh: |-
#!/bin/bash
set -e
[[ -n "${REDIS_PASSWORD}" ]] && export REDISCLI_AUTH="${REDIS_PASSWORD}"
response="$(
timeout -s 3 "${1}" \
keydb-cli \
-h localhost \
-p 6379 \
ping
)"
if [ "${response}" != "PONG" ] && [[ ! "${response}" =~ ^.*LOADING.*$ ]]; then
echo "${response}"
exit 1
fi
cleanup_tempfiles.sh: |-
#!/bin/bash
set -e
find /data/ -type f \( -name "temp-*.aof" -o -name "temp-*.rdb" \) -mmin +60 -delete
---
apiVersion: v1
kind: Service
metadata:
name: redis-headless
labels:
app.kubernetes.io/name: redis
spec:
type: ClusterIP
clusterIP: None
ports:
- name: "server"
port: 6379
protocol: TCP
targetPort: redis
selector:
app.kubernetes.io/name: redis
---
apiVersion: v1
kind: Service
metadata:
name: redis
labels:
app.kubernetes.io/name: redis
annotations:
{}
spec:
type: ClusterIP
ports:
- name: "server"
port: 6379
protocol: TCP
targetPort: redis
- name: "redis-exporter"
port: 9121
protocol: TCP
targetPort: redis-exporter
selector:
app.kubernetes.io/name: redis
sessionAffinity: ClientIP
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: redis
labels:
app.kubernetes.io/name: redis
spec:
replicas: 3
serviceName: redis-headless
selector:
matchLabels:
app.kubernetes.io/name: redis
template:
metadata:
annotations:
prometheus.io/port: "8083"
prometheus.io/scrape: "true"
labels:
app.kubernetes.io/name: redis
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values:
- 'redis'
topologyKey: kubernetes.io/hostname
weight: 100
containers:
- name: redis
image: eqalpha/keydb:x86_64_v6.3.1
imagePullPolicy: Always
command:
- /utils/server.sh
ports:
- name: redis
containerPort: 6379
protocol: TCP
livenessProbe:
initialDelaySeconds: 20
periodSeconds: 5
# One second longer than command timeout should prevent generation of zombie processes.
timeoutSeconds: 6
successThreshold: 1
failureThreshold: 5
exec:
command:
- sh
- -c
- /health/ping_liveness_local.sh 5
readinessProbe:
initialDelaySeconds: 20
periodSeconds: 5
# One second longer than command timeout should prevent generation of zombie processes.
timeoutSeconds: 2
successThreshold: 1
failureThreshold: 5
exec:
command:
- sh
- -c
- /health/ping_readiness_local.sh 1
startupProbe:
periodSeconds: 5
# One second longer than command timeout should prevent generation of zombie processes.
timeoutSeconds: 2
failureThreshold: 24
exec:
command:
- sh
- -c
- /health/ping_readiness_local.sh 1
resources:
{}
securityContext:
{}
volumeMounts:
- name: health
mountPath: /health
- name: redis-data
mountPath: /data
- name: utils
mountPath: /utils
readOnly: true
envFrom:
- secretRef:
name: redis-secrets
- name: redis-exporter
image: quay.io/oliver006/redis_exporter
ports:
- name: metrics
containerPort: 9121
envFrom:
- secretRef:
name: redis-secrets
imagePullSecrets:
[]
securityContext:
{}
volumes:
- name: health
configMap:
name: redis-health
defaultMode: 0755
- name: utils
secret:
secretName: redis-utils
defaultMode: 0755
items:
- key: server.sh
path: server.sh
- name: redis-data
emptyDir: {}

104
shared/mariadb.yml Normal file
@ -0,0 +1,104 @@
# MariaDB 10.5 is supported until 2025
# Note that MariaDB 10.6 breaks with Nextcloud
# https://help.nextcloud.com/t/update-to-next-cloud-21-0-2-has-get-an-error/117028/7
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mariadb
annotations:
keel.sh/policy: patch
keel.sh/trigger: poll
keel.sh/pollSchedule: "@midnight"
spec:
revisionHistoryLimit: 0
serviceName: mariadb
selector:
matchLabels:
app: mariadb
replicas: 1
template:
metadata:
labels:
app: mariadb
annotations:
prometheus.io/port: '9104'
prometheus.io/scrape: 'true'
spec:
containers:
- name: exporter
image: prom/mysqld-exporter:latest
env:
- name: DATA_SOURCE_NAME
value: exporter@tcp(127.0.0.1)/
- name: mariadb
image: mariadb:10.5
imagePullPolicy: Always
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mariadb-secrets
key: MYSQL_ROOT_PASSWORD
- name: MYSQL_USER
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: MYSQL_DATABASE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: mariadb-secrets
key: MYSQL_PASSWORD
volumeMounts:
- name: mariadb-data
mountPath: /var/lib/mysql
- name: mariadb-init
mountPath: /docker-entrypoint-initdb.d
volumes:
- name: mariadb-init
configMap:
name: mariadb-init-config
# Make sure MariaDB instances run on storage{1..3} nodes, as close
# as possible to Longhorn instances
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
nodeSelector:
dedicated: storage
volumeClaimTemplates:
- metadata:
name: mariadb-data
spec:
storageClassName: longhorn
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
---
apiVersion: v1
kind: Service
metadata:
name: mariadb
spec:
ports:
- protocol: TCP
port: 3306
selector:
app: mariadb
---
apiVersion: v1
kind: ConfigMap
metadata:
name: mariadb-init-config
data:
initdb.sql: |
CREATE USER 'exporter'@'127.0.0.1' WITH MAX_USER_CONNECTIONS 3;
GRANT PROCESS, REPLICATION CLIENT, SLAVE MONITOR, SELECT ON *.* TO 'exporter'@'127.0.0.1';
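The StatefulSet above references a `mariadb-secrets` Secret for the root and application passwords. A sketch with placeholder values:
```
apiVersion: v1
kind: Secret
metadata:
  name: mariadb-secrets
stringData:
  MYSQL_ROOT_PASSWORD: changeme  # placeholder, generate a strong value
  MYSQL_PASSWORD: changeme       # placeholder
```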

81
shared/memcached.yml Normal file
@ -0,0 +1,81 @@
apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: false
metadata:
name: memcached
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: memcached
labels:
app: memcached
spec:
revisionHistoryLimit: 0
serviceName: memcached
selector:
matchLabels:
app: memcached
replicas: 1
template:
metadata:
labels:
app: memcached
spec:
securityContext:
fsGroup: 1001
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
app: memcached
topologyKey: kubernetes.io/hostname
weight: 1
serviceAccountName: memcached
containers:
- name: memcached
image: memcached:1-alpine
securityContext:
runAsUser: 1001
readOnlyRootFilesystem: true
runAsNonRoot: true
livenessProbe:
tcpSocket:
port: 11211
initialDelaySeconds: 30
timeoutSeconds: 5
failureThreshold: 6
readinessProbe:
tcpSocket:
port: 11211
initialDelaySeconds: 5
timeoutSeconds: 3
periodSeconds: 5
resources:
limits: {}
requests:
cpu: 250m
memory: 256Mi
volumeMounts:
- name: tmp
mountPath: /tmp
volumes:
- name: tmp
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: memcached
labels:
app: memcached
spec:
type: ClusterIP
ports:
- name: memcache
port: 11211
selector:
app: memcached

38
shared/minio-support.yml Normal file
@ -0,0 +1,38 @@
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: minio-operator
spec:
podSelector:
matchLabels:
v1.min.io/tenant: minio
policyTypes:
- Ingress
- Egress
egress:
- to:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: minio-operator
ports:
- protocol: TCP
port: 4222
- to:
- podSelector:
matchLabels:
v1.min.io/tenant: minio
ports:
- port: 9000
ingress:
- from:
- podSelector: {}
ports:
- port: 9000
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik

89
shared/minio.yml Normal file
@ -0,0 +1,89 @@
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: minio
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
keel.sh/pollSchedule: "@midnight"
spec:
revisionHistoryLimit: 0
serviceName: minio
selector:
matchLabels:
app: minio
replicas: 1
template:
metadata:
labels:
app: minio
spec:
containers:
- name: minio
image: minio/minio:latest
command: ["minio"]
ports:
- name: minio
containerPort: 9000
- name: minio-console
containerPort: 9001
args: ["server", "/data", "--console-address", ":9001"]
env:
- name: MINIO_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: minio-secrets
key: MINIO_ROOT_PASSWORD
- name: MINIO_ROOT_USER
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- mountPath: /data
name: minio-data
# Make sure Minio instances run on storage{1..3} nodes, as close
# as possible to Longhorn instances
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
nodeSelector:
dedicated: storage
volumeClaimTemplates:
- metadata:
name: minio-data
spec:
storageClassName: longhorn
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 50Gi
---
apiVersion: v1
kind: Service
metadata:
name: minio
annotations:
prometheus.io/scrape: 'true'
spec:
ports:
- protocol: TCP
port: 9000
selector:
app: minio
---
apiVersion: v1
kind: Service
metadata:
name: minio-console
spec:
ports:
- protocol: TCP
port: 9001
selector:
app: minio

Some files were not shown because too many files have changed in this diff.